After trying several design directions, UX designers often choose their final design solution before their team conducts any usability testing. But in a world where user expectations are rising, competition is high, and design trends change frequently, it’s important to stay ahead of the curve. Testing multiple designs early during the design process can provide much more useful information than testing just a single design or choosing a solution based on personal preferences or gut instinct.
When you run a comparative usability test, participants get to experience multiple designs and, thus, can provide better feedback. Testing multiple design variations early on lets you gain insights into each element of the design, understand what works, identify problem areas, and optimize the designs by acting on users’ feedback.
In this article, I’ll discuss the value of combining qualitative research with a comparative usability study. Conducting comparative usability testing to elicit qualitative data early in the design process enables you to understand which design solutions resonate with users, which are more effective, and why one design direction solves customers’ pain points better than another.
What Is Qualitative UX Research?
Comparative usability testing is typically a process for benchmarking the usability of a user interface against either a competitor’s user interface or a new design. When you’re conducting this kind of comparative usability testing, you compare quantitative data such as task-completion rates, error counts, or other numerical measures for existing designs.
Although comparative usability testing can be effective in comparing live designs, quantitative data alone cannot explain why users succeed or struggle, nor can it point you toward the best direction in which to take your designs. In contrast, comparative usability testing that yields qualitative data can tell you why one design is better than another, identify usability issues, and help you understand user preferences.
Qualitative research refers to the process of gathering and interpreting data that is not numerical. You can use qualitative data to gain an in-depth understanding of users and their interactions with different designs. In comparative usability testing, qualitative data can take the form of written notes or transcripts, audio recordings, or video recordings. Qualitative data describes behaviors and reactions that numerical data alone cannot explain.
In qualitative, comparative usability testing, you test two or more low-fidelity design variations that have enough functionality for participants to experience their differences. Ask participants to think aloud and discuss their opinions of the different designs during each test session, or record their sessions so researchers can analyze them later. This kind of study gives researchers and UX designers an understanding of which design elements work well and meet users’ expectations versus which elements cause issues, before they commit to a direction for the entire design solution.
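To give a sense of what that later analysis might look like, here’s a minimal Python sketch that tallies coded think-aloud observations by design variant and theme. The variant names, themes, and comments are hypothetical, and this simple tagging scheme is only one of many ways to organize qualitative notes.

```python
from collections import Counter

# Hypothetical coded observations from think-aloud sessions:
# (design variant, theme, participant comment)
observations = [
    ("Design A", "navigation", "I expected the menu to stay visible while scrolling."),
    ("Design A", "terminology", "I'm not sure what 'Workspace' means here."),
    ("Design B", "navigation", "The tabs made it obvious where to go next."),
    ("Design B", "layout", "There's a lot of empty space before the main content."),
    ("Design A", "navigation", "I couldn't find my way back to the start."),
]

# Count how often each theme comes up for each design variant.
theme_counts = Counter((design, theme) for design, theme, _ in observations)

for (design, theme), count in sorted(theme_counts.items()):
    print(f"{design} | {theme}: {count} observation(s)")
```

Even a simple tally like this makes it easier to see which themes cluster around which variant before you read the underlying comments in detail.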
What Is Comparative Usability Testing?
Comparative usability testing is a method of research in which researchers present two or more designs to test participants, enabling them to compare the designs side by side. The purpose of this type of testing is to understand which design is more user friendly and effective and which design users prefer. Plus, comparative testing can identify elements that don’t work or that cause problems for users.
Use this type of usability testing to compare UX designs early during the design phase. The goal is to identify the best directions for your design solutions. Usually, you’ll conduct comparative testing with a small group of users whom you’ll ask to interact with the basic designs, think aloud while experiencing them, answer questions about their interactions, and provide their feedback and opinions. The results of this testing can help designers decide which prototype to move forward with, what elements to improve, and what elements to eliminate entirely.
Why Conduct Comparative Usability Testing to Gather Qualitative Data?
Now, let’s look at the benefits of conducting comparative usability testing and using qualitative data to validate a design direction early during design.
Enabling Designers to Identify the Best Design for Users
Designers naturally explore various design directions, but this can lead to their choosing a design solution based on personal preference rather than on what would work best for users. Knowing they’ll be testing multiple designs lets UX professionals be creative and keep their options open until after testing is complete.
Conducting qualitative, comparative usability testing is an effective way for designers to choose a design direction, giving due consideration to every variation. If designers decide on a single design before testing, they might never get the feedback that could have surfaced issues or pain points before they invested time and money in completing a design solution. The best-case scenario would be that they’d get that feedback after completing the design. However, at that point, it can be costly to go back and make changes.
By adopting a comparative usability-testing process early on, designers can spend less time building out high-fidelity prototypes. They can instead focus on the most important design elements and pages, avoiding lengthy design work that receives no user feedback.
By testing multiple designs early during the design process, UX designers can identify what design directions work best, helping to ensure that the final product is as user friendly and effective as possible.
Understanding Users’ Needs and Expectations
Through comparative usability testing and qualitative feedback, you can uncover user needs and expectations that you might not have identified before. With this deeper understanding of users, you can improve your designs, leading to solutions that increase traffic, leads, and conversions.
Test participants’ qualitative feedback on a user experience tells designers what users have come to expect from the business, brand, or organization, so UX designers can tailor their prototypes to those users. You can use this real-time feedback to improve your designs and iterate with confidence that you’re creating what users want.
Knowing your target audience helps you to reach the right people and create loyal customers. When you create design solutions, then gather qualitative feedback, you can increase user engagement, traffic, user satisfaction, and ultimately, profit.
Helping Participants Provide Better Feedback
When designers give participants the opportunity to experience multiple solutions and voice their opinions about them, participants provide actionable insights because they feel as though they’re part of the process. Experienced UX researchers notice the difference in the quality of feedback immediately when conducting comparative usability testing to gather qualitative data.
When you present participants with just one design solution, most of the feedback they provide is neutral, unless they encounter major issues or significant roadblocks. They have nothing to which they can compare their experiences. However, once a participant experiences another design and can compare their interactions with two different solutions, their feedback becomes much more actionable. Seeing multiple ways of accomplishing the same tasks makes it easier for participants to contrast them, identify issues, and discuss differences between the design directions.
Avoiding Design Mistakes and Issues
By testing multiple designs, you can avoid major issues and potential design pitfalls that often get overlooked when you test only one variation. Unless a team tests multiple designs, UX designers might create a complete design, then discover only later the issues that participants encounter when testing high-fidelity prototypes. If the team tests just one design solution, they could waste valuable time and money and potentially face a lengthy, costly redesign.
When researchers conduct comparative usability testing, they can validate design decisions, identify issues, and make changes to designs before those issues turn into major problems. Qualitative data tells you not only what issues to fix or changes to make but also why. The whys behind users’ pain points give designers enough information to improve their design solutions and create user interfaces with which users want to interact.
Letting Researchers Compare Qualitative and Quantitative Data
In addition to qualitative data, comparative usability testing can still provide quantitative feedback such as task-completion rates, error rates, and survey data. If you test only one high-fidelity design and gather data only on that design, you must set goals for your task-completion and error rates to determine whether the design has met your requirements.
In contrast, if you collect data by comparing multiple designs, you can compare their results to show the organization which design is better and which direction to move forward in. Because researchers and participants can differentiate the feedback and results for the different design variations, it becomes clear which design is more effective when you analyze the results.
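As a rough illustration of that side-by-side comparison, the sketch below computes completion and error rates for two design variants. The variant names and numbers are invented for the example rather than taken from any real study.

```python
# Hypothetical per-design results:
# (tasks attempted, tasks completed, errors observed)
results = {
    "Design A": (40, 29, 17),
    "Design B": (40, 35, 8),
}

for design, (attempted, completed, errors) in results.items():
    completion_rate = completed / attempted
    errors_per_task = errors / attempted
    print(f"{design}: {completion_rate:.0%} completion, "
          f"{errors_per_task:.2f} errors per task attempted")
```

Viewed side by side, the stronger variant is usually evident at a glance; with a single design, the same numbers would mean little without predefined targets.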
Enabling UX Designers to Test Their Own Prototypes
When UX designers test their own designs, the results can be biased. For example, a designer might become overly invested in one design direction over all others. When UX researchers test only one design, they usually set out either to validate or to reject that single design. Moreover, when teams invest in creating only a single design direction, they often disregard negative feedback because of their personal attachment to it.
However, when you conduct comparative usability testing, the process is more about finding the direction in which to take the designs and learning what elements have the greatest impact. This process helps eliminate bias and takes pressure off the designer. Early testing focuses more on comparing the pros and cons of different design solutions.
Conducting a Comparative Usability Test
There are a few ways in which you can run a comparative usability test. One is to use a within-subject test design, in which each participant interacts with all of the design variations. Another method uses a between-subject test design, in which each participant interacts with only one of the designs.
If you want to compare the functionality of multiple designs and the ways in which participants interact with a full prototype, conduct a between-subject comparative usability test. Each participant works through a single, complete design, which keeps sessions manageable, and you compare the groups’ feedback, the issues they encountered, and their results across the designs to understand the differences.
If you want to test only one element of the various designs, such as a menu or layout, a within-subject test lets participants compare just that element across the variations. This testing should focus on the most important, most impactful elements of the design. Participants can compare their interactions with each variation of the element and provide actionable feedback on each to inform your design decisions.
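For the session logistics, here is a minimal Python sketch of both approaches, using placeholder participant and design names: assigning each participant to a single design for a between-subject test, and rotating the order of designs for a within-subject test so that no single design always appears first.

```python
import random

designs = ["Design A", "Design B", "Design C"]       # placeholder variants
participants = [f"P{i}" for i in range(1, 7)]        # placeholder participants

random.seed(42)  # fixed seed so the plan is reproducible

# Between-subject: each participant interacts with only one design.
# Shuffling then cycling keeps the groups roughly equal in size.
shuffled = designs[:]
random.shuffle(shuffled)
between = {p: shuffled[i % len(shuffled)] for i, p in enumerate(participants)}

# Within-subject: each participant sees every design; rotating the
# starting design balances which variation participants encounter first.
within = {
    p: designs[i % len(designs):] + designs[:i % len(designs)]
    for i, p in enumerate(participants)
}

print("Between-subject assignments:", between)
print("Within-subject orders:")
for p, order in within.items():
    print(f"  {p}: {' -> '.join(order)}")
```

The rotation shown here is just one simple way to reduce order effects; in a real study, you would also decide how the task list maps onto each design in the plan.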
No matter which comparative usability-testing method you choose, you must have a clear understanding of your testing goals, what you are trying to compare, and the factors that lead to a successful or a failed test. Create a task list or scenario for the participants to follow. Once you’ve created your design variations and prepared your task list, you can start recruiting participants and conducting test sessions.
Testing multiple designs is an extremely effective way of obtaining valuable feedback, eliminating bias, and avoiding costly design issues. But when you conduct a comparative usability test to gather qualitative data, you should consider a few key factors that aren’t important when testing a single design.
Choose Designs to Test That Are Comparable
When you’re conducting a comparative usability test, it is essential that participants be able to compare the various designs. The design variations must be different enough to be distinguishable from one another. If there aren’t clear differences, the results could be insignificant, and participants might have a hard time giving high-quality feedback. For example, it would be difficult for participants to compare designs that are too similar or to form opinions about differences that aren’t clear or sufficiently pronounced. If a UX designer has created variations that are only subtly different, they should revise them or create new designs that are clearly different.
Create Objective Tests and Tasks
By nature, usability is a subjective topic. What is usable for one participant might not be usable for others. It is important to craft a comparative test that clearly defines what participants are supposed to accomplish in each task and in the test as a whole. Create tasks that anyone can accomplish so participants can compare their experiences with the designs.
When test participants complete specific tasks by interacting with the various designs, the UX researcher can gauge the effectiveness of the designs and test their most important elements. But the design of the test and the scenarios participants follow are almost as important as the feedback itself.
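To make this concrete, the sketch below shows one possible way to structure a task list, with hypothetical scenarios and success criteria for an imaginary checkout flow, so that every participant follows the same objective tasks regardless of which design they’re using.

```python
# A hypothetical task list for a comparative test of a checkout flow.
# Every participant follows the same scenarios, whichever design variant they see.
tasks = [
    {
        "id": 1,
        "scenario": "You want to buy a gift card for a friend.",
        "task": "Find a $25 gift card and add it to your cart.",
        "success": "The gift card appears in the cart at the correct amount.",
    },
    {
        "id": 2,
        "scenario": "You changed your mind about the delivery method.",
        "task": "Switch the order from standard to express shipping.",
        "success": "The order summary shows express shipping before checkout.",
    },
]

for t in tasks:
    print(f"Task {t['id']}: {t['task']} (success: {t['success']})")
```

Writing down a success criterion for each task ahead of time makes it easier to judge completion consistently across designs and sessions.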
Don’t Test Too Many Variations of the Designs
In comparative usability testing, testing two or three designs at once is ideal; the data is much more valuable than if you were to test every design variation you’ve created. When you’re testing only two or three designs, participants can easily compare them, understand the differences, and choose the design they prefer because they don’t have too many options to evaluate. When you test more than three variations, test sessions can become overly complex and overwhelming, and participants have a hard time following all the tasks and remembering each design. Plus, if you test more than three designs at once, you might not be able to include all the testing scenarios you want to study.
Test the Most Important Elements
Qualitative, comparative usability testing should not use complete, high-fidelity prototypes. The designs you’re testing should focus only on the most important user-interface elements. You need not include complex branding or graphics. Instead, make sure that you focus on the aspects of the user experience that make a difference to the functionality of the designs.
When creating your test scenarios, have participants complete tasks that elicit high-quality feedback and accomplish your testing goals. If you ask participants to complete too many tasks or test every element of a design, test sessions can become too lengthy, and participants might lose interest. For perspective, if you test three designs, asking participants to complete five tasks with each design, that adds up to 15 tasks. If you test more than three designs or try to analyze too many user-interface elements, you might be asking participants to complete 30 or more tasks.
Also, you need to allow enough time at the end of each test session for comparisons and discussion of the differences, issues, and elements the participant encountered. When UX designers are creating designs for comparative usability testing, they should carefully choose the most important elements to test and the tasks that have the biggest impact. UX researchers should ask the questions that elicit the most significant feedback. Rushing participants or overwhelming them with too many tasks or test elements produces less useful results.
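It can also help to run the session-length arithmetic before you finalize the plan. The sketch below uses purely hypothetical timings, four minutes per task and ten minutes of end-of-session discussion, as an illustration.

```python
designs = 3
tasks_per_design = 5
minutes_per_task = 4       # hypothetical average, including think-aloud commentary
discussion_minutes = 10    # end-of-session comparison and wrap-up questions

total_tasks = designs * tasks_per_design
session_minutes = total_tasks * minutes_per_task + discussion_minutes

print(f"{total_tasks} tasks per participant, "
      f"roughly {session_minutes} minutes per session")
# 15 tasks per participant, roughly 70 minutes per session
```

If the estimate creeps much past an hour, that’s usually a signal to cut designs, tasks, or both.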
Be Prepared to Make Changes to Designs
Once you’ve gathered all the actionable, qualitative feedback from your comparative usability study, the team should be prepared to make changes to the designs to better fit users’ expectations and needs. The test results identify which design direction works best, and the next step is to refine your designs in that direction. Participants will also have identified any issues they encountered when interacting with the various design solutions. If an issue occurs across multiple designs, you should fix it immediately, especially if it occurred in the design that participants identified as the best option. Participants might also tell you that some user-interface elements of one design are great, while different elements of another design work better. UX designers should use this feedback to create solutions that incorporate the best elements from each design.
While using the insights from comparative usability testing to inform design decisions is important, the purpose of this testing method is primarily to gather qualitative feedback, not to rely on participants to make every design decision. The goal is to let users interact with prototypes, helping the organization to optimize the designs before creating the final product. Your designers are the professionals. Therefore, before testing, you should decide which elements of the design are most important, which are vital to get right, and how much influence participants’ feedback should have on the final design.
Conclusion
The next time you begin creating new UX designs, plan on doing some early, comparative usability testing to analyze different design directions and solutions. If your team decides to do this from the beginning, it helps designers to keep an open mind, explore different design approaches, and avoid becoming too attached to one design direction. Qualitative, comparative usability testing can identify new ideas, find issues before they have major impacts, and help your team to create designs that drive business growth.
Owen is a digital marketer at Poll the People, an emerging user-research and usability-testing platform. His focus is on content creation and moving the company forward through all digital-marketing mediums. Owen is a graduate of Bryant University, with a concentration in marketing.