
UX Research: Facing Ambiguity Head On

Discovery

Insights from UX research

A column by Michael A. Morgan
May 28, 2018

When we do early-phase UX research, we dream of getting clear-cut results from the data we collect and hope that everything will come together neatly. But, often, our research findings end up being less obvious than we’d like them to be. This ambiguity makes deciphering our research findings and defining a product strategy challenging. As UX researchers, which results and recommendations should we present to stakeholders? Will they miss out on something important if we don’t share all of our findings with them? Which results should we deemphasize? How can we navigate the ambiguity that can result from formative UX research?

In this column, I’ll provide some key strategies for how to handle the ambiguity that comes with analyzing qualitative data from formative UX research. As much as we try to remain unbiased and rely on the actual evidence that we’ve gathered during our research sessions, we may sometimes talk ourselves into taking a side when presenting our findings and recommendations. How can we double-check our motives and ensure that we do right by our product stakeholders—even if that might mean delivering bad news to our product team?


Some key strategies for handling ambiguity in qualitative data include the following:

  • Don’t overlook facts for the sake of a coherent story.
  • Don’t overlook infrequent occurrences of issues, especially if they have significant design implications.
  • Provide a balanced view of the data.
  • Don’t be afraid to recommend doing more research.
  • Always review what research has already been done to avoid covering the same ground again.

Don’t Overlook Facts for the Sake of a Coherent Story

In Thinking, Fast and Slow, Nobel laureate Daniel Kahneman describes two distinct types of decision-making: System 1 and System 2. In System 1 decision-making, people make intuitive, on-the-fly decisions that are based on the information that is immediately available to them in their mind and environment. System 2 decision-making is more effortful, relying on logic and experience to piece together a story that provides the basis for a decision.

Kahneman describes the concept of associative coherence, which occurs when System 1 thinking takes over our decision-making process: we rely on whatever information is readily available, then fill in the gaps without doing any real System 2 thinking to verify our conclusions. This approach creates a false sense of confidence in the conclusions we draw. In System 1 decision-making, fiction suddenly becomes the theoretical glue as we piece together the readily available information we’ve gleaned, resulting in a motley crazy-quilt of unsubstantiated information.

When we analyze our research data, we tend to look for consistent themes and behaviors. If it becomes difficult to discern consistent themes, we may try to cherry-pick themes that resonate with our System 1 thinking. Suddenly, we begin to overweight certain facts over others. Once we’ve convinced ourselves that certain facts are more valuable than others, we try to string together a story that fits those overweighted facts. This is dangerous because it perpetuates stories that are based on shaky evidence. Once such stories become facts in our own minds and the minds of our stakeholders, they inevitably lead to tenuous predictions about what design solution would work. Kahneman, borrowing the term from Nassim Taleb, calls such fictions that shape how we perceive the future the narrative fallacy. What can you do to avoid these dangerous mirages?

Whenever you see a fact that doesn’t match the cohesive story you ultimately want to tell, you need to dig deeper into it. What was the intent behind what the user said or understood? Why doesn’t it match up with other information? Look at the participant’s profile. Did that participant really qualify for your study? Did you overlook certain qualifications, allowing unqualified participants to slip through the cracks?

For example, a recent study I conducted required participants to be small-business owners. One of the participants technically did not qualify because she worked for a government agency. This participant should have been disqualified, but was not. When analyzing the data and comparing results across participants, I realized that, while such a participant might have some insights to contribute to the overall study, I needed to give her data less weight than the data from participants who really did qualify. Taking a step back to ask whether it made sense to include that participant’s data in my study required System 2 thinking. Some of the stories she had told and data from her session were vastly different from the experiences of small-business owners. Ultimately, deemphasizing her data enabled me to tell a more cohesive story.

On the other hand, there might be cases in which a qualified participant’s story is very different from those of other participants. You need to examine such stories more thoroughly and give them the same weight as more typical cases. Consider what made the participant’s situation different from the others’. In a recent study I conducted, one participant was a recruiter by occupation. His clients were manufacturing companies. Most of the candidates he placed were blue-collar workers who had limited access to computers. This fact changed an entire line of questioning in my script, which involved understanding how employees use their smartphones and computers.

I incorporated the insights I had gleaned from this participant into my final report, and they served as the basis for understanding some of the potential limitations of the proposed product solution.

Don’t tell a cohesive story because it’s the easy way out or just for the sake of it. Identify the gaps in your data. Look for reconciling facts that explain the lack of cohesion. Another story may emerge that you can justify to stakeholders.

Don’t Overlook Infrequent Occurrences of Issues That May Have Significant Design Implications

It’s a mistake to dismiss issues just because they occur infrequently. When you’re conducting qualitative research with small sample sizes, the Law of Small Numbers dictates that small samples will produce more extreme outcomes than larger samples. A sole, infrequent occurrence might be an extreme, but useful result. If you resampled the user population, that result might be the more frequent occurrence in the next sample. Allow the Law of Small Numbers to work to your advantage. Think of it as an opportunity to examine the full range of outcomes you might observe if you were to do a study with a larger sample size. When you’re doing formative, qualitative studies with small sample sizes, there is very little justification for ignoring certain participants’ data. (Of course, if a participant did not actually qualify for the study, that would justify dismissing the corresponding set of results.) Whenever participants do qualify for a study, it is important to look at their results—especially when those results are telling you something completely different from what others are telling you.
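To get a feel for just how much small samples swing, consider the following rough simulation. It is purely illustrative, not drawn from any actual study; the 30% preference rate is an assumed figure chosen for demonstration:

```python
import random

# Purely illustrative: assume 30% of the true user population holds a
# minority preference, such as preferring the mobile Web to a native app.
# (The 30% figure is an assumption for demonstration, not a research finding.)
TRUE_RATE = 0.30

def simulate_observed_rates(sample_size: int, trials: int = 10_000) -> list[float]:
    """Run many simulated studies and return the observed rate in each."""
    return [
        sum(random.random() < TRUE_RATE for _ in range(sample_size)) / sample_size
        for _ in range(trials)
    ]

for n in (6, 60):
    rates = simulate_observed_rates(n)
    # Count studies whose observed rate lands far from the true 30%:
    # either no participant at all, or a clear majority, shows the preference.
    extreme = sum(r == 0.0 or r >= 0.6 for r in rates) / len(rates)
    print(f"n={n:>2}: {extreme:.1%} of simulated studies yield an extreme result")
```

With six participants, roughly one simulated study in five either misses the minority view entirely or shows it as a majority; with 60 participants, such extremes all but vanish. A lone voice in a sample of six may well represent a sizable slice of the real user population.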

A recent study I ran involved employee online-registration experiences. It revealed that one of the six participants I had interviewed felt that downloading an app was too much effort, so he preferred using the mobile Web for such activities. Even though this insight came from only one participant out of six, should I have ignored it? Might there be other employees who feel the same way? We didn’t know because we had spoken with only six people.

The next question you should ask yourself is whether the sentiment a participant has expressed makes sense. Is it possible that other people in similar circumstances might feel the same way, or is that less likely? We do know from personal experience that downloading apps can take some time, especially if they have very large file sizes or we’re on a network with a very slow Internet connection. It seems reasonable that there could be others who would agree with this participant’s point of view. Since this is a reasonable viewpoint and probably not out of the ordinary, we should likely include this insight in the research findings: some employees might not download an employee-portal app because of the high perceived effort of doing so. This insight could prove useful to the product team, who might want to consider building a leaner app that would download faster and wouldn’t take up so much room on an employee’s smartphone. A design challenge for the team might be determining how to reduce the perceived effort of downloading and installing an employee-portal app. Without digging more deeply into this insight, which came from just one participant, you might lose an opportunity to improve the product’s performance and user experience.

Provide a Balanced View of the Data

If, during your first review and analysis of the qualitative data from a user-research study, it’s not immediately clear what the key themes should be, comb through the data again and establish a range of themes that collectively tell the whole story. This is especially important when the results of your research cause you to question a proposed product idea. You have an opportunity to leave out the sugar-coating and tell it like it is—to share raw, unfiltered stories.

On a recent research project that I led, I wanted to understand whether HR practitioners and small-business owners would adopt a particular method of onboarding new employees. Stakeholders were excited about their product idea and anticipated that its adoption would be a sure thing. But, after we reviewed the data, it became very apparent that this was not the case. Not all small-business owners and HR practitioners would have adopted the proposed product. What came out of the user research was a set of potential reasons for their either adopting or not adopting the product in their organization. The insights we provided to stakeholders helped them make a better-informed decision about their product strategy. If I had considered the results from only those participants who said they would adopt the product and ignored those who said they would not, I would have led stakeholders astray and could have caused the company to lose customers and revenue, not to mention losing my own credibility as a UX researcher.

Telling the whole story and getting to the why in your research findings provides more useful, actionable information than simply taking a side and considering only the data that supports one viewpoint. Handing stakeholders such one-sided findings would encourage them to make reckless product decisions.

Don’t Be Afraid to Recommend Doing More Research

Recommending more research is not a cop-out by any means. It simply means you need to dig a little deeper and incorporate more of the assumptions and hypotheses that may have gotten left out of your earlier research to help a product team refine their ideas.

For one study, a product stakeholder wanted me to ask participants whether they would use a public kiosk at their employer’s site to register for their HR portal. Most participants in this small study, with a sample size of six, indicated they would not use it because they felt that using a smartphone to register would be more secure and convenient. It would have been easy for me to draw the simple conclusion that employees don’t like using public kiosks for employee registration, and the product team should not consider developing one. However, drawing that conclusion would have been premature. Because of the way we presented the question to participants, their evaluation of kiosks may have been influenced by their earlier seamless, convenient experience using a smartphone prototype.

In Thinking, Fast and Slow, Kahneman also talks about the concept of single and joint evaluation. The opinion we form when we’re evaluating only one option or possibility can differ from the opinion we form when we’re evaluating that same option alongside another.

Since our study’s participants had just experienced a smartphone registration–design concept before we asked them about the same experience on a public kiosk, their reference point for forming an opinion about the kiosk was the smartphone experience, which they had received well. That seamless, convenient mobile-registration experience was a tough reference point to beat. However, if my study had involved just the single evaluation of a public-kiosk experience, without our presenting any other options, participants might have been more open to its adoption. They would have been unaware of the relative advantages of the smartphone solution, leaving them free to reflect on whatever memories of registering on a kiosk were most readily available to them. Given the bias that is inherent in joint evaluation, my recommendation was to do more research on public kiosks to determine whether employees truly would or would not want to use one.

Recommending more research is not a cop-out—especially during formative research or when there has been very little prior research. It demonstrates that you take a careful, strategic approach to formulating your research recommendations. Plus, more research will give product stakeholders additional opportunities to better understand how prospective customers might receive their product.

Always Review What Research Has Already Been Done to Avoid Covering the Same Ground

Sometimes ambiguity exists around whether a product team has done sufficient research to feel confident about their decisions. Teams often revive old design concepts to see whether they are still viable for users. If such concepts were well researched in the past, they probably won’t require further research. But, for product teams trying to understand where things may have gone wrong, whether to pursue a design concept further may be a looming question. UX researchers navigating the ambiguous space between what has been done in the past and what could be done in the future should take the opportunity to look back so their product team can move forward with confidence in their decisions.

One way to understand the backstory and determine a way forward is by doing a meta-analysis. (This is a research project unto itself.) A meta-analysis provides an opportunity to gather and analyze all of the research that has been done on a particular topic. By analyzing past research, a team can appreciate the full scope of what has already been done, understand key findings they can use as a frame for the problem (what worked and what didn’t work for users), determine what is missing, and identify potential goals for new research efforts.

When doing a meta-analysis, I typically use a spreadsheet program and include the following information:

  • study project name
  • round(s) of research in which an insight appeared
  • researcher’s name
  • key findings
  • recommendations deriving from each insight

As you comb through past research reports, include only distinct insights. If you identify a similar insight in a report from another study, log the number of that study’s research round. For example, if you identified insight X in the first round of research, then identified a similar finding during the fourth round, add a 1 and a 4 to the row in your spreadsheet that corresponds to that insight. When multiple rounds of research uncover similar insights, that repetition adds strength and credibility to an insight and, ultimately, to your recommendation. Your case is much stronger when there are multiple instances of the same finding, especially when the insights are surprises your stakeholders would never have expected.
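If you’d rather script this log than maintain the spreadsheet by hand, here is a minimal sketch of one possible structure in Python. All field names and example data are invented for illustration; adapt them to the columns listed above:

```python
from dataclasses import dataclass, field

@dataclass
class Insight:
    """One distinct insight logged during a meta-analysis of past UX research."""
    project: str                # study project name
    researcher: str             # researcher's name
    finding: str                # key finding
    recommendation: str         # recommendation deriving from the insight
    rounds: list[int] = field(default_factory=list)  # rounds that surfaced it

    def log_round(self, round_number: int) -> None:
        """Record another round of research that surfaced a similar insight."""
        if round_number not in self.rounds:
            self.rounds.append(round_number)

    @property
    def strength(self) -> int:
        """More independent rounds lend more credibility to an insight."""
        return len(self.rounds)

# Hypothetical example: insight X appeared in rounds 1 and 4, as described above.
insight_x = Insight(
    project="Employee Onboarding",   # invented project name
    researcher="J. Doe",             # invented researcher name
    finding="Registering via kiosk feels less secure than via smartphone",
    recommendation="Evaluate the kiosk concept in a single-evaluation study",
)
insight_x.log_round(1)
insight_x.log_round(4)
print(f"Insight seen in rounds {insight_x.rounds}; strength = {insight_x.strength}")
```

Sorting your insights by strength then surfaces the findings that multiple rounds of research corroborate, which are exactly the ones that make the strongest case to stakeholders.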

Take an inventory of prior UX research to avoid doing research that would answer the same questions that have already been answered.

Face Ambiguity Head On

Ideally, when you are looking at data from your user-research studies, that data speaks to you clearly, in a language you understand—with themes and strong patterns that are obvious.

As you engage in effortful System 2 thinking, your System 1 intuition will periodically interrupt that process, and the themes you want to see might not be so obvious. So you’ll need to dig a little deeper and peel away the layers of gray that obscure potential understanding. Be willing to acknowledge what you don’t already know. Only in this way can you see things you haven’t yet seen. By facing ambiguity head on, you can truly navigate the unknown and discover what you need to understand to make decisive recommendations.

References

Kahneman, Daniel. Thinking, Fast and Slow. New York: Farrar, Straus and Giroux, 2011.

Senior UX Researcher at Bloomberg L.P.

New York, New York, USA

Michael A. Morgan has worked in the field of information technology (IT) for more than 20 years—as an engineer, a business analyst, and, for the last ten years, a UX researcher. He has written on UX topics such as research methodology, UX strategy, and innovation for industry publications that include UXmatters, UX Mastery, Boxes and Arrows, UX Planet, and UX Collective. In Discovery, his quarterly column on UXmatters, Michael writes about the insights that derive from formative UX-research studies. He has a B.A. in Creative Writing from Binghamton University, an M.B.A. in Finance and Strategy from NYU Stern, and an M.S. in Human-Computer Interaction from Iowa State University.
