It’s Not Rocket Science
As you may have suspected—and as UX professionals are fond of saying—the answer to this problem is not rocket science. It’s actually pretty simple. Organizations making technology investments just need to do a few things in addition to their typical processes for evaluating technology:
- Identify and describe the target user groups that currently perform the task or process the software will automate, so their characteristics, motivations, and appetite for change are well understood.
- Model and describe the current workflow the target users employ to accomplish the task or process, using simple methods like task analysis and time-on-task measurement, as sketched in the example following this list.
- Discover what the target users and other staff typically do before and after the task being automated, to understand whether, and if so how, you can automate the task’s precursors and follow-on activities or somehow include them in the potential solution.
- Finally—and only after doing all of the above—begin to assess the technology solutions in detail for their goodness-of-fit to the qualitative, real-world characteristics of the target users and the existing workflow.
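To make the second step above a bit more concrete: time-on-task measurement can be as simple as timing a handful of participants as they complete the same task in the current workflow and on the candidate system, then comparing summary statistics. The sketch below is a minimal, hypothetical example in Python; the condition names, participant counts, and timings are invented for illustration and are not data from this article.

```python
# Minimal sketch of time-on-task measurement (illustrative data only).
from statistics import mean, median

# Hypothetical observations: seconds each participant needed to complete
# the task, once in the existing workflow and once on the candidate system.
time_on_task = {
    "current workflow": [312, 287, 340, 295, 360],
    "candidate system": [410, 388, 455, 372, 430],
}

for condition, seconds in time_on_task.items():
    print(
        f"{condition}: n={len(seconds)}, "
        f"mean={mean(seconds):.0f}s, median={median(seconds):.0f}s"
    )
```

Even a rough comparison like this, made before the purchase decision, can reveal whether a candidate system actually slows the task down—exactly the kind of problem the stories later in this article illustrate.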
At this point in technology assessment, feature lists and demos matter a whole lot less than actually putting real target users on the system and having them perform their tasks. Does doing this consume more time and resources? Yes. Is it worth it? Absolutely! Skipping these steps increases the risk that your organization will suffer reduced productivity, decreased morale, and the other consequences of technology rejection that I described in Part 1. And, just in case you don’t really buy the examples I described there, let me relate two more stories of technology rejection that I recently encountered, this time in high-risk, mission-critical environments.
Stories of Technology Rejection
Here are those two stories.
Story of a Carrier Flight Deck Crew
Recently, I met someone who had been an aircraft carrier flight deck crewman. During his service on the carrier, the Navy had automated the deck crews’ process for preflight aircraft inspection. Before adopting the new process, the deck crew used a paper checklist on a clipboard—both as a memory aid and for data capture. They later logged the data into a database for reporting and safety analysis.
The crewman described the automated process the Navy had deployed to replace the paper-and-pencil inspections. It required the deck crew to use a handheld device for both data entry and scanning: they entered data manually at certain points and connected the device directly to the aircraft to capture instrumentation data at others. The crewman was adamant that the device had detracted from the deck crews’ ability to rely on their experience and exercise their judgment, because they now interacted primarily with the device rather than the aircraft itself.
Story of a Beat Cop
A usability-test participant I spoke with recently, a patrolman, shared this interesting anecdote: his municipality had recently “upgraded” the computer system in the cruisers, which patrolmen used to report and receive information in the field. He and his fellow officers had concluded that the new system, with its high-resolution graphics and touch-screen interface, actually slowed that reporting and receiving down. More critically, because the new computer demanded greater attention and more time, it had also reduced their situational awareness, increasing risk to both the officers and the citizens they served.