anything else that undermines the delivery of effective UX design
And although I’ve never before considered usability testing as something that falls into the large—and growing—list of things that undermine effective UX design work, I’ve recently had a number of conversations with designers that suggest their perception of usability testing is fundamentally wrong. I’ve heard both junior and senior designers express their perception of usability testing in different ways, but the core message is the same: They believe that nothing can be known about a design that a team is going to implement unless that design has been tested with the target audience. That no knowledge is possible and nothing can be said about a design with any degree of confidence, unless its usability has been validated for specific use cases, in specific circumstances, with a specific set of users, and for a specific combination of browser and device.
Bullshit.
Now, to clarify my position before the hate mail starts rolling in: I’m a big fan of usability testing. It’s a useful tool, and one that all UX designers should have in their toolbox. But we need to see usability testing in a broader context and consider its strengths and limitations.
The Role and Practicality of Usability Testing
While usability testing may take many different forms, I’m primarily going to consider the testing of a UX design solution with a representative sample of users. Of course, there are other forms of testing, but most UX professionals would consider doing perhaps two or three rounds of usability testing of a prototype in a realistic setting as meeting the need to test the design with users.
The first problem is that usability testing is expensive. If you have a testing budget, that’s great! But like any budget, you need to focus your resources where they will do the most good. Designing a really effective test that isolates the key variables that you’re evaluating, creating the prototypes necessary to test them, recruiting suitable people to participate in the testing, and analyzing the results of a study are all time-consuming tasks that require highly skilled people and typically necessitate UX designers’ taking time away from UX design tasks.
Yes, there are other types of testing available—for example, guerrilla testing and A/B testing. Guerrilla testing can be very resource efficient and offers a good balance between the time it takes and the quality of the results. A/B or multivariate testing can deliver a lot of value—if you have the development resources to create two or more versions of a feature and test them with users on a live site or application. But any form of testing takes skill and money to plan, create the necessary code, run the test, and analyze the data. When the pressure is on to deliver a product or User Experience has not yet become fully established within an organization, usability testing will more than likely take a backseat to other tasks. Of course, this is not ideal, but in reality, it happens; so it is important for UX designers to keep pushing to make usability testing part of the UX design process.
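To give a flavor of what that development work involves, here is a minimal sketch of the bucketing step: assigning each visitor deterministically to one of two design variants so that their behavior on the live site can be compared. The experiment name, user ID, and 50/50 split below are illustrative assumptions, not a prescribed approach.

```typescript
// A minimal, illustrative sketch of deterministic A/B bucketing.
// The experiment name, user ID, and 50/50 split are placeholders.

type Variant = 'A' | 'B';

// Simple FNV-1a string hash; any stable hash function would do.
function hashString(input: string): number {
  let hash = 2166136261;
  for (let i = 0; i < input.length; i++) {
    hash ^= input.charCodeAt(i);
    hash = Math.imul(hash, 16777619);
  }
  return hash >>> 0; // force an unsigned 32-bit result
}

// Assign each user to the same variant on every visit, so that
// behavioral metrics for the two designs can be compared over time.
function assignVariant(userId: string, experiment: string): Variant {
  return hashString(`${experiment}:${userId}`) % 2 === 0 ? 'A' : 'B';
}

// Example: route a visitor to one of two versions of a checkout page.
const variant = assignVariant('user-12345', 'checkout-redesign');
console.log(`Show design ${variant}`);
```

Hashing on a stable user ID keeps each person in the same variant across visits, which is what makes comparing behavioral metrics between the two designs meaningful.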
What We Know About UX Design
Now, let’s take a step back. Design is something that people have been doing for centuries, and there is a large body of relevant knowledge about UX design that stretches back for decades. I came to UX design from a human factors, or ergonomics, background—a discipline that really got started during World War II as a result of studying how people used tools to help them work and fight more effectively. The word ergonomics derives from the Greek words ergon, meaning work, and nomos, meaning laws. The term ergonomics is in wider use in Europe, while the term human factors tends to be more prevalent in the US—particularly in relation to information technology (IT). Ergonomics considers biology, psychology, the surrounding environment, and other disciplines as the foundation for design.
Coming from this background, rather than saying, “We know nothing about a design until we’ve tested it with the target user population”—to oversimplify massively!—I know that we have a solid understanding of people and how they perceive and interact with the tools they use to accomplish their work.
Let’s look at a trivial example: when designing a form, ergonomics would inform the design by considering the following:
the physical attributes of users—These would inform color choice and coding for accessibility by people who are physically impaired and the appropriate controls to use.
how people scan pages—We can draw on eyetracking research to inform the layout of forms.
how people complete online forms—This knowledge helps us to define the appropriate relationship between a label and the corresponding control on various devices.
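As a minimal sketch of that last point, and assuming a plain string-templating approach rather than any particular framework, the following keeps each label explicitly tied to its control so that the pairing holds up across devices and assistive technologies. The field names are hypothetical.

```typescript
// An illustrative sketch only; the field names and types are hypothetical.

interface FormField {
  id: string;     // becomes both the input's id and its name
  label: string;  // visible text for the label
  type: 'text' | 'email' | 'tel';
}

// Place the label immediately before its input and associate the two
// explicitly via for/id, so screen readers announce the label with the
// control and the relationship survives layout changes between devices.
function renderField(field: FormField): string {
  return [
    `<label for="${field.id}">${field.label}</label>`,
    `<input id="${field.id}" name="${field.id}" type="${field.type}" />`,
  ].join('\n');
}

console.log(renderField({ id: 'email', label: 'Email address', type: 'email' }));
```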
More specific knowledge—perhaps in the form of personas derived from research with the target audience—would help inform our choice of the optimal language to use in a user interface and what Help content to provide.
My perspective on usability testing isn’t unique to UX designers who have trained in human factors, or ergonomics. From speaking to designers who have trained in other disciplines—including product, vehicle, graphic, and digital design—I know that the same basic principle is fundamental to most UX design training: design with as much of this baseline knowledge as possible, together with an understanding of a specific audience of users, then use usability testing to fine-tune the design. Trained UX designers continually build on this fundamental knowledge, keeping up to speed on advances in UX design principles and patterns by reading articles on sites such as UXmatters, following designers on Twitter, and reading UX design blogs.
Since this is the case, why do so many UX designers play down the value of proven knowledge? It’s possible that one reason could be frustration on their part. UX designers may just be tired of arguing their case and unwilling to accept what the highest-paid person in the company is asking them to do, so may suggest deferring to the results of usability testing as a way to sidestep a potentially difficult challenge. On the other hand, there are situations when two competing design solutions may seem to have equal merit, and usability testing can be a good way to decide which one to choose.
Summary
User experience may, in some respects, be a victim of its own success. There are many articles about how to get started in UX—of course, not all are of great quality. Because everyone uses the Web, everyone has an opinion that they feel is valid. However, not everyone has learned to be objective and step back from their own experiences.
Even experienced UX designers encounter daunting challenges in managing stakeholders. They may be unwilling to defer to the highest-paid person’s opinion and hope to avoid conflict during the design process by suggesting, “We won’t really know unless we test it.” Doing this can sometimes make a designer’s life considerably easier. But the long-term cost of doing this is huge:
Designers may be condemning themselves to reinventing the wheel instead of innovating.
Users labor under working with user interfaces that fail to meet their needs.
Most insidious of all, stakeholders lack respect for UX designers who apparently know no more than anyone else on the team—if, after all, we don’t really know anything until we test it.
In an ideal world, yes, designers would draw on the sources that you mention. And I’m sure that some of them do. For example, I published “Best Practices for Buttons” right here on UXmatters, and I know that thousands of people have visited that article.
But lots of designers don’t seem to know these basics. Nothing I put in that article was even slightly novel or controversial, and yet in 2012, I was easily able to gather plentiful examples of violations of each best practice.
And if you do know the basics, you must have contact with users to apply some of them properly.
For example, Best Practice 6 in that article states: “If users don’t want to do something, don’t have a button for it.”
How do you know what users want to do if you don’t find out from them? How do you know whether users will still want to do it, based on the rest of your design?
You also claim that usability testing is expensive—well, it really isn’t. It’s one of the easiest things to learn to do adequately, and it’s not that hard to learn to do it really well. I’ve seen people do a usability test that obtained helpful insights after being given three minutes of instruction in how to do it during a training course on another topic.
I believe it’s crucial to iterate. You can’t create a great design merely by doing usability testing. You have to have good ideas, informed by what you learned in the tests. But it’s so much easier to create a great design if you’re willing to build the thing, do some testing, then learn from what you’ve found and improve the thing.
I’ve been saying for a long time that usability is subjective; usability testing, not so much.
We need to have boundaries established and a defined context when doing usability testing—otherwise we would never stop! These boundaries call on your past experience, your knowledge, and your research—to which you’ve referred.
We can’t always have the excuse: “Every user is different.”
Organisations deal with complex and often competing dynamics which place UX designers in an unenviable position. Designers need to cater for a complex range of elements including:
Meeting specific product/business goals
Satisfying user needs and desires
Reflecting best-practice UX design principles
Encapsulating overarching brand attributes and style guides
Presenting a seamless experience to users across all touch points—online and offline
The reality is that the design process often involves numerous stakeholders with competing interests. We have been working alongside UX designers in the enterprise space, channeling strategic research to combat subjective influence and the HiPPO effect. In doing so, we have collaborated to establish a framework which unearths innovative design concepts, then validates how these designs satisfy elements such as those listed above.
This often starts with exploratory, qualitative usability testing of comparative prototypes, then after whittling down the concepts—and incorporating best-performing elements—moves to a quantitative evaluation comprising a smaller set of prototypes.
This process takes stakeholders on a journey, which migrates discussion from subjective opinion to objective analysis. Focus is maintained on the issues that matter. I also feel the argument that testing is expensive is fundamentally flawed. What is the cost of missing the mark, or of stakeholders’ not believing in or owning the designs—so that they need to be scrapped and started over? Research and testing can provide UX designers with confidence in their execution and communication.
The research and testing process also provides a great vehicle for evolution and education for UX designers. Without objectively assessing and communicating what actually works and what doesn’t, how are designers to hone their craft or foster credibility?
Hi Caroline! You mention that “lots of designers don’t seem to know these basics.” Absolutely—so why are they designing? If testing is being used to substitute for basic design knowledge, something is fundamentally wrong.
The cost of testing—time, money, equipment, and participants—largely depends on the type of testing being done. Learning to test in a way that gets good data without unduly influencing the users takes time—again, depending on the type of testing. But testing is something that’s easy to take to stakeholders—regardless of the quality of the test—which stakeholders may not be in the best position to judge.
As I said, I’m not against testing or research. What I am against is designers’ ducking behind a cover of “no one knows anything unless we test it,” particularly where testing is being used as a substitute for basic design knowledge.
Hi Scott! Nothing wrong with heuristic evaluations—although Ari Weissman gives a good overview of their weaknesses. The key point is knowing the fundamentals of good design and using this knowledge before testing, whatever form of testing you use.
The following statement from your article ignores the digital divide: “Because everyone uses the Web, everyone has an opinion that they feel is valid.” Perhaps more accurate language would be to state “everyone who uses the Web has an opinion that they feel is valid.” According to a recent Washington Post article, an estimated 4.4 billion people worldwide do not have Internet access. Interesting to me would be comparing use and design opinions between active and novice Web interface users.
In my experience, usability testing doesn’t just test the visual design. It tests very specific scenarios, stories, and use cases. It tests comprehension, the interaction, the end-to-end flow, the user’s thoughts, the psychology behind it—in addition to gathering feedback on potential functionality that is missing or perhaps not required. No UX or visual designer can know all of this with 100% certainty. Usability testing is where I “go to school” as a UX designer. No matter how smart I think I am or how many times I know my design assumptions are right, I always want the users to train and teach me what’s on their minds throughout the experience. I learn something every time, and that’s very valuable to me. The learning never ends and only improves my heuristics knowledge. Each designer’s experience is different, just as each user’s experience is unique, so there are huge variables to take into consideration. And yes, I’ve absolutely used testing to help evangelize its value to executives who had a strong opinion about a design, and—knock on wood—it’s never failed. Good article to spark the conversation…thanks!
I’ve got no argument with the crux of your article: that design must be informed by assumptions, empathy, established knowledge, and a mastery of the laws of human-computer interaction. And that, in lieu of expensive testing, design can still be responsibly performed.
Amen.
So how does that thought lead to the title of your article? How, then, does usability testing undermine skillful UI design? I don’t see it.
There’s nothing to send you hate-mail about, because you’re not making any conclusion.
Or did I miss it? Maybe, though your article was written by a smart guy with the best intent, its value was completely lost on this reader. :P
In the fast-changing technological world we live in today, users develop new habits and reflexes that they use regardless of the platform—PCs, tablets, smartphones. You cannot predict these. They will show up in tests.
Knowing your basics and gaining experience will certainly lead to better designs, but you should still make sure that the words, graphical concepts, and overall logic of your interface are understood by the target users. It’s especially true if your design is innovative.
In my opinion, usability testing will always be a must!
Well, as a former board member of both the UXPA and the Information Architecture Institute, and as a findability and usability practitioner, what I found the most troubling about this article is how the author does not provide his definition of the user experience.
I conduct usability tests and studies. Some are more expensive than others. I learn something new and insightful with every study. People say they will or will not do things, and actually say and do the opposite far more often than they realize.
My overall reaction to this article is that the author seems to want to take usability out of UX. I do not recommend that at all.
I am a designer and a developer, too. I’ve seen too many circumstances to count in which we went with someone’s opinion—which clearly did not support user goals—rather than genuinely, sincerely building an interface for users.
Tried to even get the gist of this article.
FYI, most curricula for Web design do not even include usability and findability, which I think is fundamentally misguided.
Thanks, Shefik. Sounds like you may have an effective process. The point of the article wasn’t to manage the whole design process, and I’ll say it again: I’m not saying there isn’t a place for testing, and yes, designers learn from it. You say “Without objectively assessing and communicating what actually works and what doesn’t, how are designers to hone their craft or foster credibility?” If a designer is relying wholly on testing as the basis for their credibility, they shouldn’t be designing. Testing isn’t the only way to be credible. It’s not the only way to manage stakeholders and bring them along on a journey. Research and testing are tools that support the designer, not a substitute for having a broad and deep knowledge of design.
Dasbender—The title was—I’m ashamed to admit—a shameless way of encouraging people to read the article. Look out for my upcoming article on UX research, provisionally titled “The 27 Craziest Outfits Kim Kardashian Has Ever Designed! You Won’t Believe Number 7!” (There may well be more exclamation marks.)
Thanks for the article. You make some great points, and this critical self-reflection is something long overdue in our field.
Out of respect for your thoughtful arguments and powerful post, if I might make a request:
Please rewrite your Summary—seriously! It is rather terribly written compared to the rest of the article, ending the reading experience on a confused flat note, and gives me pause before sending this link to others. I could offer specific feedback, but I think the problems will become obvious on rereading. The first sentence and the last bullet point make sense, but everything in between is a muddle.
Hi Shari, author here. :) I don’t typically define what I mean by UX in my columns, though UXmatters does provide a definition of user experience that I’m comfortable with. Happy to discuss in detail what UX means to me and my background, but probably one to take offline. While I also—as I tried to make clear in my column—appreciate the value of testing, my concern is that people are either unaware of or are dismissing the wealth of design knowledge that we have in favour of this view that we can only know anything through specific testing of the application under consideration. As I mentioned, some designers may be doing this to manage stakeholders, but this is undermining the contribution that UX can make.
You mention that “most curricula for Web design do not even include usability and findability.” When I was studying human factors (circa 1993), findability in the same sense as we use it now wasn’t something that could be studied—though I remember reading up on information science / librarianship as part of my studies. I’m fairly sure usability wasn’t a commonly used term either. A background in human factors—and later, computer science—has given me a solid knowledge base and mental tools to use when designing, and keeping up with developments in the field has helped me to develop my skills in the intervening years. I don’t think that’s unusual among designers. The concern I have is of unqualified designers using usability testing to make every decision. Effective designers make a lot of decisions based on their knowledge and skills, then use testing to understand whether any of their assumptions or design decisions were not appropriate for a particular use context.
I am pretty comfortable with your point of view, Peter. Particularly with the part about how the need to test may become a way to devalue UX people’s work in front of stakeholders. Maybe the test turns out to be successful, just as we’re expecting, but even then, as you said, “if, after all, we don’t really know anything until we test it,” it’s a bad deal for us.
But I think that there is also a really important component of UX that is, in fact, a process—a perpetual work in progress—because we are designing propositions. For me, it is the core of UX—not the implementation. The implementation is just a possibility and, in fact, if we can, we should try to propose systems able to adapt and create autonomous implementations for every kind of user. Thereby, yes, usability is a relevant part of the equation, but it is far from being the equation itself.
We propose experiences and, to measure and analyze how successful the experience is and improve it, we need, I think, a true iterative process in real contexts. Because the interaction, the experience, is not constrained only to our solution; it has a context that we can anticipate—perhaps through personas—only barely or partially. But you never know. Users can be smart and powerful: give them time and tools to play with, experiment, and squeeze our beloved design, and we’ll get lessons, inspiration, and new challenges.
Your point seems to be, “If you have really, really talented designers, you don’t need to run many usability tests.”
We could argue that point—my view is that an iterative design approach will beat a really talented designer working in a vacuum any day—but for the moment, let’s assume it’s true.
The problem with your solution is that it doesn’t actually solve anything. This is because most firms don’t have really talented designers. Most designers, by definition, are average. And average designers need the regular course corrections you get through usability testing.
I’m working with an organisation right now that is running 1-day usability tests every two weeks. It’s costing them about £15k per year to do these tests. That sounds a lot cheaper to me than hiring a rock star designer. (Plus they have the satisfaction of knowing that their final design really works.)
Hi David! My argument is that some designers rely on testing as a way of finding out things that a properly trained designer would already know. It’s about all designers’ being properly trained rather than there being an elite few.
You talk about an iterative approach being superior to a really talented designer working in a vacuum, though trained designers don’t work in a vacuum anyway. But putting all of that to one side: I have previously observed a designer sending three designs for testing. All three designs were very similar, and all contained mistakes in the UX that a trained designer would simply not have made. Even though the testing process was conducted in the proper way and found one of the three designs to be marginally preferable, and the designer made some iterative changes, when the design went live, it failed to get the desired result.
The problem was that they started from a poor set of designs and found what I’d call a local maximum: the best of that poor set. No rock-star designer needed—just a good level of competence.
Jaume, you raise a really solid point—that designing experiences isn’t a one-shot-and-done affair; that the designer needs to be involved to iterate the design not only through to the solution delivery, but afterward to monitor and change the design in response to real-world use. When you mention users’ having tools to play with, I agree—and observing users using other designs helps designers to understand their problems and scope in advance of having to design with those tools. This is part of training as a designer.
Hi Peter, thanks for sharing your thoughts. I can see why a few people have taken umbrage based on the title, but from your responses and the points you make in the article, I can see that you’re not advocating for an end to usability testing!
From my POV as a UX/usability researcher, your article resonated with me. There have certainly been many times when I’ve been asked to conduct research by designers and other stakeholders that feels like a way of abdicating decision making to users rather than as a way to improve designs as part of an iterative process. Normally, the situation is something like this: “We have two alternative designs, and we can’t decide which to use. Can you get half a dozen users in and ask them which they prefer?”
Conducting testing in this context is often a comparatively more expensive choice and, more often than not, is an exercise in confirmation bias. Moreover, it’s often conducted under misguided assumptions about the sample size required for comparative testing, and the amount of insight that users can realistically provide after they’ve seen two designs, been asked to make a choice between them, and asked to provide rationale.
If I’ve interpreted your article correctly, what you are describing sounds as much a problem with process as it is with designers not trusting their knowledge. If there is a member of the team responsible for research and testing, they also need to advocate for more appropriate and robust application of usability testing as part of an iterative design process, not as a cookie-cutter technique for answering all design questions.
I believe that Rolf Molich, who co-invented heuristic evaluation with Nielsen, has argued that there’s little reason for ever doing a heuristic evaluation. Expert reviews, on average, identify as many usability issues as usability testing and are cheaper and quicker. His research shows that usability tests also uncover far fewer usability issues of a system than many people assume. The best thing you can do is iterate a design, making improvements and fixing as many problems as you can each iteration.
I wasn’t getting the feeling that Peter was downright condemning usability, but rather, encouraging UX people to stand firm with their experience and expertise. You can express your disagreement to your stakeholders and test them at the same time.
He seems to be frustrated with the costs and bandwidth of usability testing. I’ll assume his deadlines are not realistic, or maybe he’s inadvertently projecting himself succumbing to a “we won’t know until we test it” situation.
I like this article because I frequently find myself in these situations. I’m arguing with the stakeholder, the product owner, or the business analyst—or some combination of the three. What I do is back up my claims, no matter how many times I need to repeat them. I won’t say “I win and lose some battles,” but in the end, our users’ needs—not their desires—will help support the argument. To that degree, I’m fortunate that my team and I agree on that.
But who am I kidding? It feels awesome when you end up being right. :) But it’s also a humbling experience when the result doesn’t go in your favor.
It’s easier said than done, I’m sure, but don’t make the stakeholder or whomever your enemy. We’re all in this together, aren’t we? I think disagreeing with each other is a fantastic thing. The last thing I’d want is 100% agreement. How do you progress with that sorta shit?
Peter has been actively involved in Web design and development since 1993, working in the defense and telecommunications industries; designing a number of interactive, Web-based systems; and advising on usability. He has also worked in education, in both industry and academia, designing and delivering both classroom-based and online training. Peter is a Director at Edgerton Riley, which provides UX consultancy and research to technology firms. Peter has a PhD in software component reuse and a Bachelor’s degree in human factors, both from Loughborough University, in Leicestershire, UK. He has presented at international conferences and written about reuse, eLearning, and organizational design.