The idea of bringing in fewer research participants for more frequent research sessions is becoming more prevalent in the field of UX Research. Usability studies that follow each cycle of iterative design are typically referred to as iterative usability testing. The purpose of iterative testing is to refine the design of a user interface over time.
Generally, for iterative usability testing, UX researchers bring three or four research participants into a testing lab, each of whom takes part in a 30 to 60-minute research session. Based on participants’ feedback during testing, UX designers then make changes to the user-interface design. A researcher then tests the next iteration of the design—either with the remaining participants in the same study or during the next round of testing, perhaps a week or two later. Typically, UX researchers conduct iterative testing over the course of several weeks or even months, as designers continue to refine the design. The goal of iterative testing is to reduce the number of usability issues.
However, the fast pace of iterative usability testing and the frequency with which the studies occur can make iterative testing challenging to manage. In this article, I’ll provide some important tips that UX researchers should keep in mind when they are planning and executing iterative usability testing. These tips can help UX researchers to manage the rapid pace that iterative usability testing requires.
1. Clearly define exit criteria for each testing phase.
The pace of iterative testing can be difficult to maintain from month to month. Iterative testing requires significant resources, constant participant recruiting, money to compensate participants, and an available testing lab. UX researchers must work quickly to plan each study, execute test sessions, analyze the resulting data, share the findings, track all the issues they discover, and work with the product team to identify what issues to fix. Then, the cycle starts all over again.
To maintain this pace, it is important to have clear design objectives, define usability success metrics, and decide on a target date for each phase of iterative testing to end.
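To make exit criteria concrete, they can be expressed as a simple check that combines success metrics with the target date. The following sketch is purely illustrative; the metric names and the 90-percent threshold are hypothetical examples, not recommendations from this article, and your team should define its own:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class PhaseResults:
    task_success_rate: float        # fraction of participants who completed key tasks
    open_high_severity_issues: int  # unresolved issues the team rated high severity

def phase_complete(results: PhaseResults, target_date: date, today: date,
                   min_success_rate: float = 0.9) -> bool:
    """Exit the testing phase when the success metrics are met or the target date arrives."""
    metrics_met = (results.task_success_rate >= min_success_rate
                   and results.open_high_severity_issues == 0)
    return metrics_met or today >= target_date
```

Writing the criteria down this explicitly, even just in a planning document rather than code, forces the team to agree in advance on what "good enough to ship" means.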
Defining Clear Design Objectives
Work closely with UX designers and other members of the product team to define the design intent for the prototype you’ll be testing. This gives you a baseline against which to measure. For example, if you were testing an unboxing-and-setup experience, you would need to understand what the designers want customers to experience when they take the product out of the box and go through the setup process. Then, you can figure out how best to conduct a test to determine whether
the experience does, indeed, match the design intent or there are areas where it falls short
the design is getting easier for people to use with each iteration
Some important questions to answer when defining the design intent and objectives are as follows:
Why are we creating this new design?
How would this design solution fit into a user’s life?
Does the design solve a real user need? Would it enhance or add value to users’ experience? Would users be better off with this solution?
Are we asking users to change their behavior? If so, what is their return on investment for learning this new behavior?
Defining a Date on Which to Complete Testing
UX researchers and designers are always testing and iterating designs, but products have to ship. Our role is to help product teams make the right design tradeoffs. What user painpoints should you fix before launching the product? What lower-priority painpoints can you fix after launch?
As a UX researcher, when you’re planning cycles of iterative testing, you must take the timelines of UX designers, developers, Beta partners, and other key stakeholders into account. You’ll need to stay up to date on these timelines because they tend to change. The product team needs your help in understanding when the design is good enough to build and launch, so as the ship date approaches, you must have clearly defined criteria for answering that question. As I mentioned earlier, it is difficult to maintain testing at this fast pace for extended periods of time; researchers can quickly burn out. So define the exit criteria for your iterative-testing research phase, and plan to gather post-launch learnings as well.
2. Track identified usability issues.
It is helpful to maintain a master list of all the usability issues and painpoints you identify. Indicate which issues have been resolved and which remain open. Tracking issues in this way keeps you organized and prevents your becoming overwhelmed by all the data you collect over time. It also helps UX designers to know what issues they still need to fix.
I have found using a Kanban board such as Trello or Jira incredibly helpful in keeping track of the usability issues we’ve identified, indicating which issues have been resolved, and coordinating our efforts for each study. Place each newly identified issue in the leftmost column, then move issues through the columns to the right as they progress through the design, testing, and development process and the software finally ships.
Using a Kanban Board
In Trello, a Kanban board comprises the following columns:
Issues identified—The board begins with this column, which is a backlog of all the usability issues you’ve identified through usability testing. Indicate each issue by providing a clear, single-sentence summary of the issue that enables the product team to quickly understand it. In the body of the Trello card, include an image that depicts the issue, as well as more information that provides the context the design team needs to solve the problem. Review the issues in this column with the product team so you can collectively decide what issues you must fix now, then move their cards to the Being addressed column.
Being addressed—The product team assigns each issue in this column to a team member—usually a UX designer—who is to start working on solving the related usability issues. Ideally, the team should agree on a target date for the next round of testing so designers know when they should complete their designs. This provides some accountability to ensure the issue is addressed. As you plan the next study, you must keep up to date on the progress of the issues in this column, then once the design solutions are ready for testing, move their Trello cards into the Ready for usability testing column. (Note—Not all issues require usability testing, so some may jump directly from this column to the Ready for development column.)
Ready for usability testing—Once the designers have completed their designs, move the issue’s card from the Being addressed column to this column, so you know you’ll have what you need for the next study. Once you’ve completed the testing sessions, update the cards for the designs in this column with information about what you’ve learned during testing. Then, based on what you observed during testing, do the following:
If the design has adequately resolved the usability issue, move its card to the Tested well column.
If the design did not solve the problem, move its card back to the Being addressed column, indicating that the designers must create another iteration of the design to solve the problem.
Tested well—The design solutions in this column have already gone through usability testing, and you’ve validated that they solve the related problems. They are ready for a developer to begin implementation.
Ready for development—Move the validated design solutions that are ready for implementation from the Tested well column to this column. After testing, there might be some additional work that UX designers need to do—such as putting the design into a design specification—before a developer can start implementing the design. (Note—Not all issues require usability testing, so some may jump directly from the Being addressed column to this column.)
Implemented—Once the developers have implemented a design solution in production software or hardware, move its card to this column.
Not addressing—This column is for any issues that the team decides not to address. On the card for each usability issue in this column, include an explanation of why you won’t resolve the issue—for example, fixing the issue is not a priority for the next release or won’t be technically feasible in the near future.
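The column workflow described above behaves like a small state machine: each card may move only along certain transitions. The following sketch is a hypothetical illustration of those rules, not a feature of Trello or Jira; the column names and transitions follow the list above:

```python
# Allowed transitions between Kanban columns, per the workflow described above.
ALLOWED_MOVES = {
    "Issues identified": {"Being addressed", "Not addressing"},
    # Some issues skip testing and go straight to development.
    "Being addressed": {"Ready for usability testing", "Ready for development"},
    # A design that tested poorly goes back to Being addressed.
    "Ready for usability testing": {"Tested well", "Being addressed"},
    "Tested well": {"Ready for development"},
    "Ready for development": {"Implemented"},
    "Implemented": set(),
    "Not addressing": set(),
}

def move_card(card: dict, to_column: str) -> dict:
    """Move a card only along a transition the workflow allows."""
    if to_column not in ALLOWED_MOVES[card["column"]]:
        raise ValueError(f"Cannot move from {card['column']} to {to_column}")
    return {**card, "column": to_column}

# A newly identified issue enters the leftmost column, then moves right.
card = {"title": "Setup step unclear during unboxing", "column": "Issues identified"}
card = move_card(card, "Being addressed")
```

Even if you never automate the board, thinking of the columns this way helps the team agree on which moves are legitimate, such as sending a poorly tested design back to Being addressed rather than forward to development.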
3. Consider the big picture, not just quick fixes.
Because of the fast-paced nature of iterative usability testing, it can be all too easy for a product team to get caught up in thinking only about quick fixes for usability issues. Make sure that the team takes the time to think about the bigger picture. Clearly identify the root causes of users’ painpoints and seek holistic design solutions that fully address them. That’s the only way to ensure a design solution truly meets users’ needs.
4. Sustain your product team’s interest.
A product team may become overwhelmed by all the iterations of design and usability testing that are going on, their need to understand all the insights from research, and trying to prioritize the work the team needs to do to address the usability issues you’ve identified. How can you keep a product team interested and engaged in acting on your research findings?
Get to the point. Everyone on the product team is busy, and each discipline has its own responsibilities for getting the product out the door. So don’t rely on a traditional presentation of your full research findings, during which you’d spend an hour walking the entire product team through all the painpoints you’ve identified—hoping they’ll absorb everything and refer back to your presentation to figure out what they need to act on. Instead, tell them what they need to fix! Using a Kanban tool such as Trello or Jira is a fantastic way of doing this. This approach is simple and to the point, helps the team stay organized, and focuses everyone on the usability issues that remain outstanding. Besides, you probably won’t have time to write up and deliver a formal presentation for every cycle of iterative testing; a Kanban board lets you communicate your research findings to designers and other teammates in a more visual way.
Co-present with UX designers. People enjoy looking at design solutions much more than bullet points. When the UX designers are getting ready to share their next iteration of the design with the product team, work closely with them to embed your research insights into their presentation or demo. Indicate why the design needed to change based on your findings. This enables stakeholders to see the direct impacts of your research on the design process. Co-presenting with UX designers can be very compelling and gets the team thinking about where they’re going next—instead of where they’ve been.
Focus on what is important, not on small details. Keep surfacing the higher-priority usability issues to make sure they get addressed. Of course, you can also look for opportunities to address some lower-priority issues during the development process, ensuring that they’re at least on the product team’s radar. But recognize that, despite the product team’s intent to deliver the best user experience, it just won’t be feasible for you to address every single issue that you identify. Be smart about what solutions you push for.
5. Remain unbiased.
Assuming the design process is not broken, your research findings and insights will have an impact on design decisions. However, it is important that you stay objective and avoid becoming invested in specific design recommendations or directions for the next iteration. Focus on user motivations, goals, and painpoints. Try to discover the root causes of usability issues and provide insights that the team did not previously possess. Some tips to help you avoid bias are as follows:
Get other members of the product team—and possibly other UX researchers—to observe the test sessions. After the sessions, discuss your learnings with them. This helps keep everyone honest about what they are seeing and sets the right context. It also gives you different perspectives to consider.
Periodically, bring in an outside agency to conduct a study. A good research agency should be unbiased in its study design and execution. Its researchers won’t be attached to any specific design direction and can offer fresh perspectives.
6. Document each iteration of your prototypes.
With all the changes that a design goes through during iterative design and testing, it’s all too easy to lose track of what solutions you’ve already tested or even to forget some good ideas. For later reference, keep a record of each prototype you’ve tested. For software, capturing screenshots is more reliable than keeping each version of the prototype in the tool the designers used to create it—for example, Flinto or Framer. In my experience, relying on archiving each version of the prototype in a software tool only leads to confusion: UX designers often use the same prototype as the basis for the next design iteration, wiping out any prototypes you’ve previously tested. For hardware, videos and photos are a useful way of documenting the iterations of a prototype.
Conclusion
Iterative usability testing is a very useful method of surfacing the insights a product team needs to make optimal product and design decisions. However, it can be challenging to maintain the rapid pace that this type of research requires and still deliver high-quality research findings that help shape the user experience. The tips that I’ve provided in this article can help you to execute iterative usability testing successfully. Taking this approach can make you a valuable member of a product team by enabling you to contribute your learnings from UX research in a way that is easy for the team to understand and act on.
At Sonos, Paula is helping to manage the User Research team. Her research focus is on integrating voice technology into the software and hardware experience of smart speakers. Before joining Sonos, she spent time in Silicon Valley, conducting research for IBM, Intel, Microsoft, and Barnes & Noble. Paula is passionate about helping product teams design experiences based on users’ motivations, needs, and behaviors. She graduated from the Engineering Psychology program at New Mexico State University, receiving her MA in Psychology.