UX strategists need to take charge of the metrics for online experiences. First, we’ll look at the current state of metrics in most organizations and some of the problems in defining metrics for user experience. Then, we’ll focus on three key types of metrics for user experience, how to track them, and how to integrate them into an organization’s measurement framework.
The Signal Problem
There is so much data available on sites and applications that it seems amazing insights would be sure to surface, yet that does not happen without smart decisions. The data that is available from off-the-shelf analytics, A/B tests, and even follow-up surveys does not always result in insights that inform the user experience.
- The signals that are easiest to track don’t show what is really important. Pageviews, for example, are easy to collect, but do not tell you much about the experience people have actually had on your site or when using your application. Your organization’s goal is probably not to make sure that every user views a certain number of pages. So, while pageviews may be a great metric for ads, they are not a good way to track or encourage engagement.
- Signals can be ambiguous. Think about time on site, which people often equate with engagement. More time could be positive, but it could also be negative—time that the user spent feeling confused, distracted, or frustrated. Even if you track engagement by looking at time spent on your site and pages per visit together, it’s still not clear how that equates to engagement.
- The signals don't always map to design. Maybe after the launch of a new feature, traffic starts going up. The product team might think this is because of the new feature, sales may link the increase in traffic to a new promotion, and the UX team might assume that it’s connected with their new design. But the increased traffic might not relate to any of these potential causes. A/B tests do connect data with design, but the data is granular. A metric like the number of clicks for image A versus image B can help you to make tactical decisions regarding how to implement a user-interface element, but it does not work well for bigger design decisions.
- There may be too many signals to act on. A lot of metrics get reported simply because they are flowing in from analytics tools that can track hundreds of metrics and are endlessly customizable. It is tempting to measure everything and hope that insights will emerge on their own, but usually, they won’t.
- The right signals might not get captured at all. Companies usually track metrics post launch. Metrics that you could use to inform design need to be captured while you’re developing new ideas, but often those metrics are neither quantified nor tracked.
So, it’s difficult to find the right signals for user experience amidst all the noise in your data. Further complicating matters, companies often identify and track the metrics that could help you to understand how you’re doing against your UX objectives—the key performance indicators (KPIs)—elsewhere in the organization and without involving the UX team.
The Early, Modern History of UX Metrics
Most metrics are marketing oriented, not experience oriented. Unique visitors can tell you whether your marketing campaign worked and social mentions can tell you whether you’ve got a great headline, but these metrics do not reveal much about the experience people have had using a site or application.
Table 1 does not provide an exhaustive list of metrics, but it illustrates some of the differences between what marketing may be tracking and the types of metrics UX teams currently track.
Marketing Metrics | User Experience Metrics |
---|---|
Conversion rate (site, campaign, social) | Task success rate |
Cost per conversion (CPC) | Perceived success |
Visits to purchase | Time on task |
Share of search | Use of search or navigation |
Net Promoter Score (NPS) | Ease-of-use rating, or SUS |
Pageviews | Data entry |
Bounce rate | Error rate |
Clicks | Back-button usage |
So far, user experience metrics focus on ease of use. Usability is familiar territory—and something that UX teams do well—so this makes sense as a starting point. The most commonly used metrics are performance measures such as time on task, success rate, or user errors. These are objective measurements that record what people actually do—though success rate can have an element of subjectivity, depending on how it’s measured.
Subjective measures like the System Usability Scale (SUS) or simple satisfaction or ease-of-use rating scales gauge how people perceive an experience after using it. Sometimes UX teams capture both objective and subjective metrics to get a better picture of a site’s or application’s usability.
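SUS is one of the few subjective measures with a standard scoring formula: each of the ten 1–5 responses is converted to a 0–4 contribution, and the sum is scaled to a 0–100 range. As a minimal sketch (the function name is mine, but the scoring rules follow the published scale):

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten responses on a 1-5 scale.

    Odd-numbered items are positively worded, so they contribute (response - 1);
    even-numbered items are negatively worded, so they contribute (5 - response).
    The summed contributions (0-40) are multiplied by 2.5 to give a 0-100 score.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses, each on a 1-5 scale")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based, so even i = odd-numbered item
        for i, r in enumerate(responses)
    )
    return total * 2.5

# A respondent who strongly agrees with every positive item (5) and strongly
# disagrees with every negative item (1) gets the maximum score:
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```

Note that a SUS score is not a percentage; a score of 68 is generally considered average, which is one reason teams pair it with objective measures rather than reading it in isolation.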
All of the UX metrics that I’ve listed in Table 1 reflect what UX teams would typically capture from a usability study rather than from analytics. This is another big difference between marketing metrics and user experience metrics. However, many organizations either do not quantify study data at all or track it inconsistently, usually because studies relate to a particular problem or are just exploratory and the study size is small. So they typically capture usability metrics like these only once in a while, if at all.
User experience is about more than just ease of use, of course. It is about motivations, attitudes, expectations, behavioral patterns, and constraints. It is about the types of interactions people have, how they feel about an experience, and what actions they expect to take. User experience also encompasses more than just the few moments of a single site visit or one-time use of an application; it is about the cross-channel user journey, too. This is new territory for UX metrics.
The (U)X Factor
Even though most metrics measure what people have done or said, they seem a little abstract. Marketing metrics focus on customer acquisition, so their emphasis is on getting attention and closing the deal. The language of conversion funnels and landing-page optimization plays down the human factor. Plus, these metrics miss all of the messy, in-between stuff that makes up the user’s actual experience with a site or application. Hesitation over a navigation category, using search because it’s too hard to deal with the site navigation, frustration over losing one’s place in the feed—all of these sorts of details are critical to understanding the user experience. This is also where UX teams excel.
The goal of UX metrics should be to bring the people—and maybe a little of the messiness of the actual experience—back into the mix. Contexts and connections are the missing links.
- contexts—UX teams can already supply context for the numbers from analytics, filling in the how and the why behind what has happened through UX research, personas, and journey maps. If you start tracking things like the relationships between interactions, or the different goals and behaviors of users who arrive from different channels, you can also provide metrics on these contexts.
- connections—In addition to filling in the blanks between what has happened and why, UX metrics can bridge the gap between the insights that emerged during the development process and what you track once a site has launched. More organizations are moving toward connecting and sharing data from quantitative and qualitative studies, customer service, and site analytics. But without metrics, it’s difficult to understand changes over time or to assign priorities to issues.
UX professionals can learn some things from marketing metrics, of course. Marketing teams have spent more time aligning KPIs with data collection and metrics. Assigning value to metrics is a given for marketing, but atypical for UX metrics—at least for now.