Increasingly, product teams want speedy UX design processes. Even though launching minimum viable products (MVPs) has led to some very public product-design failures, MVP launches are becoming an accepted norm, or even something to celebrate, within some organizations. However, when such failures occur in the marketplace, they can alienate a product’s users and damage business results.
While usability testing and ethnographic user research can prevent such failures, many product teams believe they take too much time. But it’s easy to conduct usability testing and user research rapidly within the context of a Lean or agile software-development process, enabling you to improve your UX designs and avoid wasted investment and embarrassment.
Failing Fast
Back in the 1950s, North American Aviation represented the definition of an innovative company. It went from success to success, creating record-breaking, effective aircraft, even during periods when government contracts had dried up. When company President James Howard “Dutch” Kindelberger was asked how they were able to do this, he said:
“Nobody ever pulled a rabbit out of a hat without carefully putting one there in the first place.”
The company invested in new technologies and sold off or canceled unsuccessful, money-losing projects. Projects involving futuristic technologies sometimes changed their goals midstream, when a better idea or trajectory emerged from the research. They initiated research and development (R&D) for such projects years or even decades before anyone needed a product that used the new technology.
Of course, they failed often, but only during the research phase, not during product development. They planned for failure and failed in relative privacy. Products that reached the marketplace always had the opportunity to be great.
Elon Musk is now big in aerospace, too. A few years ago, he said:
“There’s a silly notion that failure’s not an option at NASA. Failure is an option here. If things are not failing, you are not innovating enough.”
While Musk might seem to be saying that he doesn’t mind blowing up rockets, with actual payloads, in public, that would be taking the concept of failing faster to an illogical extreme. Of course, NASA says that failure is not an option when it’s about risking lives. They do know about risk, and they learn from their public failures. But carefully, privately, and without intentionally risking people’s lives, NASA always plans for failure on research projects. They embrace failure—in the same way as North American—because they are scientists.
Learning from failure is not something new or innovative, but it is a key part of the scientific and engineering methods we claim to practice every day.
Usability Testing
As you might expect, I create designs that fail all the time. But no one really minds because we plan for it.
At the beginning of a project, we try to gather as much information as we can. As I discussed in my last column, we conduct workshops of some sort to elicit information from the project team.
Ideally, we perform user-needs research to make sure the information we share during a workshop is accurate. But in my experience, there is rarely enough time or money for this. So we usually get to basic design by relying on patterns and heuristics. They give us a good base of knowledge that lets us create most of an app, Web site, or other digital product.
But then we do usability testing—preferably in the field.
Field Methods
I have done a lot of usability testing in my career. I think I’ve done every method you can do, really. I do occasionally use guerrilla methods such as intercepts, as well as remote methods, but I haven’t done a traditional lab study in around five years.
Instead, these days, I almost always use in-person, field methods. That means going to where people actually live or work and having them use a product in their natural environment.
Why? Because people don’t normally work in a usability lab. Plus, in an era when everyone uses mobile devices, it is very hard to emulate people’s environment or the way they actually work inside a lab.
Recently, I’ve performed tests inside stores, on street corners, in factories and warehouses, in moving cars, in parking lots, and even in cubicles and offices. Friends of mine have done tests on tugboats, in farm fields and on combines, on oil platforms, in aircraft cockpits, on Army firing ranges, and more.
Location-based services, the variability of networks, and testing IoT (Internet of Things) devices make field testing even more important. Simulations are getting increasingly unrealistic, and I see Quality Assurance and Development doing system testing in the wild more and more. Today, you simply have to see how people are working with products in their real environment.
Field-Test Setup
At a high level, field testing does not differ from lab testing. You have a prototype, create a test plan, and get participants to walk through a product, while gathering information about how they use it and react to it.
Logistics are where the differences arise. The prototype must work in the field. Even paper prototypes can work, but you have to make sure they’re portable. For digital products or prototypes, you must ensure there’s Internet access—or no need for it.
I like to bring my own test phones or tablets, so the prototype is already installed and we know it works. I also bring clipboards and spare pens because I currently take all of my notes on paper. I’ve tried using digital forms for note-taking, but there’s just too much free-form writing involved. Plus, in my experience, it’s a lot easier to scribble words on paper when standing than on any digital device.
Of course, my plan is not to scribble too much. I arrange the test plan to use as many check boxes and other selection mechanisms as possible. There is a lot of pre-testing and a few pilot studies to make sure the prototype works as expected and I’ll get all of the likely answers.
Run your pilot studies as though they’re real test sessions. You’ll probably be standing during field sessions, so practice taking notes standing up. I find my clipboard works better sideways, so I format my note-taking that way, and I’m very picky about what pens I use.
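To make those checked-off observations quick to tally after a day of sessions, it can help to mirror the paper form in a small script. Here’s a minimal sketch in Python, assuming hypothetical task names and observation codes; none of this comes from a real study.

```python
# Hypothetical checkbox-style observation sheet for a field usability test.
# Task names and observation codes are placeholders, not from a real study.
from collections import Counter

TASKS = ["find nearest store", "check mileage", "submit repair ticket"]
CODES = ["completed", "completed with help", "gave up", "wrong path", "asked question"]

def tally(sessions):
    """Count how often each observation code was checked for each task.

    `sessions` is a list of dicts mapping task name -> list of checked codes,
    one dict per participant, transcribed from the paper forms.
    """
    counts = {task: Counter() for task in TASKS}
    for session in sessions:
        for task, checked in session.items():
            counts[task].update(checked)
    return counts

# Example: two participants' forms transcribed after a day in the field.
sessions = [
    {"find nearest store": ["completed"], "check mileage": ["gave up", "asked question"]},
    {"find nearest store": ["completed with help"], "check mileage": ["wrong path", "completed"]},
]
for task, counter in tally(sessions).items():
    print(task, dict(counter))
```

Keeping the codes identical to the boxes on the paper form means the transcription step is just reading down the page, and the pilot sessions will tell you quickly whether your list of codes covers most of what participants actually do.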
When and Where to Test
Since you aren’t testing in a lab, where do you test? Well, aside from going where the users are, the usual considerations are time and cost. Both travel and travel time are expensive, and many researchers find travel inconvenient, so they try to find participants in their hometown.
I once planned some research in homes regarding the use of mobile Internet news and entertainment. I submitted a plan for it, rather than immediately carrying it out, both because the participants were in North Africa, so travel time and cost would be a problem, and because the participants would mostly be women at home, so we’d have to hire a service with a cadre of Muslim women researchers. While I could have physically gotten to the region and gone to their homes, as a white, English-speaking man, I absolutely would not have gotten the same responses. Selecting a moderator is critical and can provide some additional opportunities to obtain data that you could not get yourself.
Consider all possible issues when arranging visits. Where do most people use your product? How much will your visit interfere with their life? Are there safety, privacy, or security issues that would prevent you from getting in or taking photos? Would you need an escort or an introduction from someone else in the business?
Would you need to visit multiple locations? Regional differences might not matter if you’re really designing for everyone, but they can be important. For example, we’ve found that people in India and China tend to use their own phones at work, but in the US, more companies issue mobile devices to employees for work-specific purposes. This can result in huge differences in the way people perceive and use a product.
Other than these considerations, make sure you do everything just like you would for any other business event. Schedule test sessions well in advance, and try to be flexible to accommodate the participants’ schedules more than your own. Be sure to arrange plenty of spare time when traveling, so you aren’t late, stressed, or tired.
Field-Test Procedures
While I give participants a phone or tablet to use for the test, I also try to get to see their phone. The first thing in my test plan is a bunch of questions about the participants. Not demographics, so much as contextually interesting stuff such as:
Where do they normally do their work?
What phone and computer do they have?
How much connectivity is available where they normally work?
Just as in a usability lab, the moderator ideally just asks questions, someone else takes notes, and a third person records video as a backup. Get permission to take photos and record video. Think about the environment and recognize that, in some cases, participants will be shy or sites or products will be secret. Some places are too loud for recording audio or for an observer who is a few feet away to hear well.
Where should the observers be? Some places are too small or dangerous, or the participant moves about too much, for it to be convenient to have more than one person observing. While I’ve run field studies with assistants and half a dozen business observers, that’s rare. More often than not, I have to moderate, take notes, and take photos all by myself.
One trick that lets me do this is using video glasses. I know other researchers who use a GoPro that is clipped either to the participant or to themselves, but I like my modified spy glasses because they let me see what is on the screen and record the participant’s voice pretty clearly. They also reduce the amount of time I have to hover over the participant’s shoulder, letting me position myself at a comfortable distance in front of the participant and converse like a person normally would.
Interviewing, Observing, and Surveying
When you conduct a usability test, you are conjuring up a largely fake experience of users interacting with your system. There is no way to prevent testing from having some influence over the way participants act. The point of field research is to reduce that influence as much as possible.
When doing usability testing, you need to sit right next to participants, hover over their shoulder, or follow them around, but must still try not to get in their way.
Remember to follow good, basic test procedures. Don’t share too much information, and let participants find their own way. This ensures you are free to observe and take notes during any time gaps while participants think and try to work out the next step in a process.
During each test session, try to stick pretty closely to the test plan and take notes about any issues you didn’t expect to emerge. The first time you visit someone in their home or office, you’ll discover huge surprises. These are points of failure from which you can learn.
At the end of a test session, follow up with the participant on the problems the session has revealed. Ask what the participant would do to fix the issues. Answer any questions the participant might have about how to do something your product doesn’t yet support or that you hadn’t considered implementing.
I always like to finish each test session by having the participant fill in a System Usability Scale (SUS) questionnaire. It’s good to get data on participants’ preferences, as well as to observe their performance during test sessions. It’s always useful to have a single number that helps determine whether it’s okay to launch a product and that you can track over time. Plus, it’s helpful in transitioning participants out of the test session. They can change modes, and I can take a moment to pack up the phone or tablet, my paperwork, cameras, and other equipment that I used during the test session.
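For reference, SUS scoring is simple enough to automate: odd-numbered items contribute the response minus 1, even-numbered items contribute 5 minus the response, and the sum of the ten contributions is multiplied by 2.5 to give a score from 0 to 100. Here’s a minimal sketch in Python; the function name and the sample responses are mine, not part of the SUS instrument.

```python
def sus_score(responses):
    """Compute the System Usability Scale score from ten responses (1-5 each).

    Odd-numbered items (1, 3, 5, 7, 9) contribute (response - 1);
    even-numbered items (2, 4, 6, 8, 10) contribute (5 - response).
    The total is scaled by 2.5 to yield a score from 0 to 100.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses, each from 1 to 5")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # index 0 is item 1, an odd item
        for i, r in enumerate(responses)
    )
    return total * 2.5

# Example: one participant's responses to items 1 through 10.
print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # 80.0
```

A single participant’s score isn’t meaningful on its own, but tracked across sessions and releases it gives you the kind of trend line stakeholders can follow at a glance.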
Lean Ethnography
Now, let’s consider what can happen when a participant wants to explore a difficult part of your product, has problems you didn’t expect, or asks questions you don’t fully understand.
An Accidental Ethnographer
Some years ago, I was conducting a pilot test in a repair center for a local dairy. During a test session, a participant was very suspicious of the mileage data that we had displayed in the mobile app. So I made a note and, at the end of the session, asked the participant about it. I didn’t get much of an answer then, but the next participant had the same concern.
When all of the sessions were finished, I asked the shop supervisor about this issue, hoping he could elaborate further on it. After a brief attempt to answer my question, he took me to a back office where a pile of electronic boxes was gathering dust behind a desk. He explained that these were data loggers they had pulled off their trucks because of problems with their accuracy. This was super interesting and provided valuable context that I noted.
Even more interesting, the office looked like a mail room, with cubbyholes all around the walls. “Why?” I asked him. He told me this was for a related reason. With computer systems, they’d had no end of issues proving to the government that they keep their data. So they decided it would be best to keep everything on paper, as well as on computers.
Even though corporate sales and support people had often visited this place, and I’d talked to them about their process before, this information had simply never come up. They hadn’t thought it was worth mentioning when I interviewed them about their process. I just stumbled across it.
What I’d also stumbled upon was an entirely different field of study called ethnography. This is now one of my favorite big words to use in justifying that User Experience is a unique set of skills.
Incidental Ethnography
Ethnography is the study of how people act naturally and how they typically interact within their environment and work culture. As I said earlier, usability studies are a bit staged. Participants follow along with your script, interacting with your new product, whether it’s fake or real. Ethnography doesn’t do any of this. Instead, it lets you simply follow people around at work for a day or a week or observe them living in a village for a month. While traditional ethnography can be a great addition to user-needs research, it does take considerable time to do it in the usual way.
Now, since the ethnography I do is quick and dirty, I call it Lean Ethnography. I apply the following key ethnographic principles to each of my field usability studies:
Be prepared. Give yourself more time to take notes. When it’s permissible, keep your camera at hand. Always be ready to pop off in another direction or ask for details about something unexpected that happens.
Before each usability study, get a baseline. Almost everything I design is an improvement on something people already use. They might do something in a very different or non-digital way, but I ask them to demonstrate how they do it now. Aside from finding useful information, this is a good way to get participants comfortable with your observing them while they do their work.
Let users do whatever they want to do. When doing usability testing, we have to explain that it’s not a demo, a feedback session, or a focus group. But in Lean Ethnography, I just let people give me a tour of their home, office, or factory if they offer. This helps me to see how they talk about things and gives me opportunities to observe things I might not have asked about. It also lets me engage with participants and show my interest in them.
During usability-test sessions, note everything participants do. In addition to what they do with your app or Web site, note things like their movements; their use of paper; not knowing where to put the phone while messing with an IoT adapter; what the weather was like; and what they do if they have no coverage.
Most of all, be curious. On one workplace tour, I asked, “What are these bins on the floor for?” I learned they were a unique way of handling parts returns. Asking this question provided some interesting information about an ongoing problem. Especially when something is unique or is a good idea or you just seem interested, people will almost always go on and on about how things work.
Conclusion: Failure Is Flexibility
In this modern digital era, we often build complex systems, so we accept that user choice is more important than forcing people down our Web site’s or app’s happy path. We know it’s important to understand users.
As UX designers, we’ve learned to be flexible and resilient. We know how to live with change, uncertainty, and failure without becoming discouraged. However, all too often, we still assume that we’re all brilliant: that heuristics, turnkey systems, and gut instincts can get us close enough to an optimal solution. But think about the many bad products you encounter every day, products that would have benefited from usability testing and ethnography.
When I test early design concepts, I find more conceptual, structural, or architectural problems than I do problems with the details of a user interface. At this stage in the development process, it’s easy to reconsider, redesign, reconfirm, and move on—just as it would be for any R&D project or lab experiment.
When I test products late in the development phase or soon after launch, I find the same types of serious issues, but it’s almost always far too late to fix them. Launching an MVP with serious design issues may confuse your users so much that it turns out to be the only release of that product. Public failures destroy too many products and even companies.
Shortcutting your investigations into how products meet user needs is risky. Usability testing and Lean Ethnography let you get closer to an optimal product. So test your designs. Test them early and often and in the right way. Observe the way your users actually work, and, while you are at it, let them teach you a little about themselves and what they really need from you.