This is a sample chapter from Tomer Sharon’s new book Validating Product Ideas Through Lean User Research. 2016 Rosenfeld Media.
Chapter 5: Do People Want the Product?
“Mmm…” I thought to myself as I was reading Nate Bolt’s Facebook post about the Automatic app (see Figure 5.1). “A smart driving assistant? One that hooks up to my car’s computer and sends data to an iPhone app that will help me save energy and money? I want that!” (See Figure 5.2.)
I ordered an Automatic two minutes after I saw that post. It cost me $70. At the time, the product wasn’t shipping yet, and I was paying to participate in a beta that was going to start in a few months. Usually, I’m extremely skeptical about such things. But this was different. I really wanted that thing. I thought the idea was brilliant, and I was 100% positive that I would use and love it. The beautiful, smooth Automatic Web site and purchasing workflow reassured me that I could trust my instincts. When the Automatic package arrived at my doorstep a few months later, I was happy. Unboxing it was very “Apple-like,” and onboarding was great. I hooked the Automatic car adapter to my car (somewhere under the steering wheel where I was able to find the data port quickly), installed the app, and made sure it worked when I drove the car.
I didn’t really understand what the app was for and what information it presented. “Give it some time,” I thought to myself. “Maybe it’s learning the car and my driving and will soon present useful information.” The app was very well designed. It was beautiful. However, at some point I wasn’t sure I was doing everything right, or perhaps I just needed to invest a little more time in trying to figure out what was going on. I didn’t get it.
A few days after I installed it, my wife went with the kids to one of their baseball league games 45 minutes away from our home on a weekday evening. It was a cold fall night, and when the game was over, she found herself in the car with three kids, including a screaming, tired, four-month-old baby, and a car that wouldn’t start. The car was relatively new, with no history of issues and no mechanical failures at all. “Take it out,” I suggested. Immediately after my wife pulled the Automatic car adapter out of its data port, the ignition started. That ended up being our Automatic’s swan song. I didn’t want it anymore.
Maybe that’s a big coincidence, maybe it’s me, or maybe the car breaking down had nothing to do with the Automatic. I have no idea. And I’m not mad at Automatic. I’m sure they have a great product. The point is that when I bought it, I wanted it without any clue whether I needed it or not.
One of the most interesting questions that product development practitioners, entrepreneurs, and investors ask themselves is “Do people want the product?” In other words, once people read about, hear about, talk about, or interact with the product, would they want to buy and use it? This question is interesting because, while it can be perceived as a critical one to answer, it is not really a question about the design and user experience of products, but rather one that concerns marketing them.
User researchers are sometimes uncomfortable answering this question with methods such as focus groups, opinion polls, and the Net Promoter Score (NPS) because these methods focus on what people think rather than what they actually do. However, the Lean Startup management approach has brought to life several lightweight, nimble, and non-wasteful research techniques. These techniques force research participants to demonstrate a behavior that indicates what they want. In doing so, they help generate useful results that answer the wants question.
This chapter will walk you through two fun, effective research methods that provide an answer to the question “Do people want the product?” without writing one line of code. Actually, with one of the methods you will need to write two to three lines of code, but no more than that, I promise.
Why Is This Question Important?
The question “Do people want the product?” is important for understanding and learning about the state of mind of your target audience after it is exposed to the product or some kind of communication about it. Answering this question is key to making you more aware of your audience’s current pain points. When people express a wish by demonstrating a certain behavior, they imply there’s something wrong in the world and that they care about it. This is exactly what you look for when you are on a quest to validate your key product and user assumptions.
Why the Want Question Is Different
What people want is a question that can be asked and answered before a specific product or service even exists. It is a question that affects product marketing and communication more than its design and features. Yes, when you ask people what they want, their answer includes products, features, and services. Yet they have no idea what they are talking about. They sound believable, but they’re not. They’re not bad people, and they are not liars. Basically, they have no clue, but they think they do and want to be helpful. That’s human nature. In order for people to want a product or perceive it as something they need, three things must happen:
They must know about the product. Your marketing and public relations channels must meet your audience.
They must understand the product’s value. Words, images, demos, and videos must communicate the value of the product and make potential customers feel it solves a problem or meets a need they have. The exception is that sometimes, when unimportant purchasing decisions are made, people tend to fudge their understanding of the value.
They must agree to the product’s cost. Potential customers must accept the price point and be willing to pay what you ask for the product.
Note that all of the above has nothing to do with product design, unlike the rest of the questions discussed in this book.
When Should You Ask It?
You should ask yourself the question “Do people want my product?” all the time—right when you have an idea, when you make a lot of progress with building and developing the product, and definitely after you launch it. Keep doing that. By asking the question before you actually build the product, feature, or service, you are reducing waste—time, resources, and energy (Figure 5.3). The more you learn about what people want before you build anything, the less time and effort you will spend on redundant code, hundreds of hours of irrelevant meetings, and negative emotions of team members when they realize they wasted their blood, sweat, and tears on something nobody wanted.
The research techniques covered in this chapter involve some manner of pretending you have a product or service, and therefore require you to create a manual service, prototype, or page that is a key component used during research. When you have such a manual service, prototype, or page, it is a great time to ask the “wants” question.
Answering the Question with a Concierge MVP and Fake Doors Experiment
To answer the question “Do people want the product?” you must first understand what an MVP is and what it is not. An MVP (Minimum Viable Product) is the process of creating “a version of a new product that allows the team to collect the maximum amount of validated learning about customers with the least amount of effort.” [1] In other words, an MVP is a way to quickly validate, or most likely invalidate, an assumption.
An MVP is not version 1 of the product. As a matter of fact, some MVPs are not even products. For example, an MVP could be a contract you try to persuade potential customers to sign so that you can learn whether they show enough interest. Or it could be a prototype with minimum functionality that allows its creators to test it with a subset of potential users to avoid building something people do not want. An MVP is not a cheaper product, nor is it a minimal version of a product with the smallest possible feature set. Think of an MVP as a series of experiments and research activities with the sole goal of helping you learn. Table 5.1 summarizes what an MVP is and is not.
Table 5.1—Defining an MVP
An MVP Is:
A process that allows its creators to validate or invalidate assumptions quickly with a subset of potential users
A prototype with minimum functionality that facilitates learning
An experiment to learn about potential users

An MVP Is Not:
A cheaper product
A minimal version of a product with the smallest possible feature set
Version 1 of a product
Concierge MVP
Both the Concierge MVP and Fake Doors are minimum viable products. The Concierge MVP is an MVP where you manually provide the functionality of the product to the customer. You guide your user through the solution to a problem. For example, Open Snow is a startup from Boulder, Colorado. It’s a team of meteorologists who specialize in (and are passionate about) weather forecasts for skiing resorts and destinations. They solve the problem of the lack of specific, detailed weather forecasts for snow sports. Skiers invest a lot of time, money, and effort in planning ski trips. These trips might be canceled due to inaccurate (or too general) weather reports for the area, or even worse, skiers can go ahead with a trip only to find out that the actual weather does not permit any sports activity. Open Snow solves all of that.
One way of providing this service to skiers is developing an app or a Web site that can gather a person’s skiing plans and push snow sports weather reports in a timely and effective manner. The Concierge MVP approach is much simpler, less wasteful, and more effective for learning what skiers want. Rather than investing their time and money in building even a primitive version of an app or Web site, Open Snow can visit ski resorts, approach potential customers in person, and offer them the service they envision the app or Web site will eventually deliver. When they find someone interested in the service (for free at first), they will continue to provide value to the customer via email. They might ask interested customers to shoot them an email when they need a weather forecast for a ski resort and then respond with a full forecast to the customer’s inbox. Eventually, they should ask customers to pay for the service. A customer’s choice to pay for the service validates Open Snow’s assumption about what people want.
Another great example of an MVP is how the founders of Get Maid chose to validate their idea. Get Maid is an app for booking a home cleaning service. The founders first created a front-end app that would send them a text message. They would call their network of maids to see who was available and then text the customer that the appointment was confirmed once they found a maid. This is an example of a higher-fidelity approach to an MVP, yet still one that does not involve fully developing the product.
Fake Doors
A Fake Doors experiment is a minimum viable product where you pretend to provide a product, feature, or service to Web page or app visitors. Without developing anything just yet, you communicate to visitors that the thing exists and ask them to act on it. If they do, you know they want it, and it’s time for you to start working on developing it. For example, imagine a grocery store Web site. If the store is thinking about developing a grocery shopping app and wants to know whether customers are interested, a call-to-action button could be added to the Web site. The button might be labeled “Download our shopping app.” The store would have a powerful decision-making tool at hand if it tracked the ratio of people who clicked the button to people who were exposed to it.
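To give a sense of how little code a fake door requires, here is a minimal sketch in TypeScript for the grocery store example. It is only an illustration under assumptions: the element ID, event names, and /events endpoint are hypothetical placeholders for whatever analytics setup you already have.

// Record that a visitor was exposed to the fake door, and record clicks on it.
// The event names, element ID, and /events endpoint are placeholders.
function trackEvent(name: string): void {
  // Replace this with a call to your own analytics tool.
  navigator.sendBeacon("/events", JSON.stringify({ name, at: Date.now() }));
}

trackEvent("shopping-app-button-shown");

document.getElementById("download-shopping-app")?.addEventListener("click", () => {
  trackEvent("shopping-app-button-clicked");
  // Be honest with visitors: the app doesn't exist yet.
  alert("Our shopping app is coming soon. Thanks for letting us know you're interested!");
});

The ratio of clicked events to shown events is the number you will compare against a threshold in Step 6.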
Why Concierge MVP and Fake Doors Experiments Work
Concierge MVP and Fake Doors are effective and efficient lean research techniques with the following benefits:
They are great methods for finding out how potential customers perceive the value of an offering.
They are good for evaluating anything from single, very small features, through very specific services, to entire product suites.
These techniques will reduce the risk of wasting time on expensive product development.
They’ll keep you from delivering features, products, and services your customers don’t really want.
They will force you to start speaking the language that resonates with customers, and practice and perfect it.
Other Questions Concierge MVP and Fake Doors Experiments Help Answer
Other than the “Do people want the product?” question, Concierge MVP and Fake Doors experiments are great methods for answering the following questions as well.
Which words should I use to describe my idea to people?
What will persuade people to try my product?
What are people’s responses when they first hear about my product?
Would people pay for my product or service?
How much would people pay for my product or service?
Do people perceive my product as something that solves a problem they care about?
Who is the audience of my product?
How to Answer the Question
The following is a how-to guide that takes you step-by-step through the process of using a Concierge MVP and Fake Doors experiment to answer the question “Do people want the product?”
STEP 1: Choose an experiment type.
Technology is awesome. It really is. It helps humans communicate, find old friends, work more effectively, have fun, find places, and do oh-so-many other great things. In many cases, technology is also hard, time consuming, and expensive to develop. In this step, you will find a way to solve the problem you want to solve, with or without technology. Manual ways of solving problems are, without a doubt, inefficient, yet they will teach you a lot about what people want without actually developing any technology.
Choose between a Concierge MVP and a Fake Doors experiment:
Choose a Concierge MVP when you are in exploration mode, when you don’t have a product yet, when development hasn’t even started, when you don’t know a lot about how to solve the problem, or when you are very unsure about your idea.
Choose a Fake Doors experiment when you want to learn about people’s honest reactions to an idea of a product or feature and collect data about their interest level.
Steps 2 to 4 will guide you through Concierge MVP research. If you chose a Fake Doors experiment, jump to Steps 5–6.
The Contract MVP
Another type of MVP experiment is the Contract MVP. A Contract MVP is when you learn whether potential customers want your product (which doesn’t exist yet) by asking them to sign a contract for using it. In no way are you pretending that the product exists. Actually, you are very open about the fact that you don’t have it yet. You ask potential customers to sign a contract confirming that they will pay you for your service once it is available. If it never becomes available, or if they are no longer interested, you (and they) can take it all back without any consequences.
Ordr.in is a platform for online food ordering. Among other features, Ordr.in’s APIs allow businesses to add a widget to their intranets and have employees order lunch for their office from nearby restaurants. To validate that businesses wanted this type of service, Ordr.in’s founders crafted a contract MVP and walked into various businesses pitching their idea and asking them to sign the contract, while making potential clients fully aware that the product did not exist yet. The founders decided in advance that if 500 businesses signed their contract, they would start developing a product. The rest is history. Ordr.in was developed, launched, and is now a great success among businesses, restaurants, and developers. Ordr.in revolutionized the restaurant industry.
MVPs are not always successful and don’t always validate ideas. Many things can go wrong. Here are some examples:
Measuring the wrong metric
Low budget that prevents good distribution
Customer disbelief
Difficulties in crafting a good prototype or execution issues
Wrong customers
Using surveys or using poorly designed surveys
STEP 2: Design a Concierge MVP.
A Concierge MVP manually provides the functionality of the product to the customer. Without developing any expensive technology and without writing one line of code, a concierge MVP helps you figure out if people are interested in your idea. If they are, it means they perceive it as valuable. It means they want it.
The two real strengths of a Concierge MVP are discovering other ideas that might be better perceived by your audience and discovering new audiences. While you are going through the process of creating a Concierge MVP and improving it, pay extra attention to new insights that present themselves about other product or service ideas or completely different audiences. These might just become the most important learning opportunities of your Concierge MVP.
To design a Concierge MVP experiment, first think of the value your idea offers to your audience. Ask yourself what core benefit your product, feature, or service brings to its users. Why would they use it? Why would they need it? A great way of coming up with a Concierge MVP is stating your hypothesis. For example, “We believe that a tablet app that offers recipes for dishes based on what people currently have in their fridge will engage people with limited cooking creativity and skills to come up with tasty dishes that impress their family and friends. We will know this is true when we see our customers use the app at least twice a week for a period of four weeks.”
A manual way to create the same value for customers would be to ask people to send a photo of the inside of their fridge and have an expert chef quickly send back matching recipes via email. The chef is acting as a human concierge who delivers a personal service. At the core of designing a successful Concierge MVP experiment is the human expertise that allows you to provide the service, combined with existing technology (such as email) that stands in for the digital way of providing it.
Beware, though. The quality of the personalized product you come up with at the end of the process must match the manual curation in your Concierge MVP. It must approximate the real thing, or the results of the Concierge MVP could be skewed and misleading. The danger is that product developers who believe passionately in their product often overestimate the quality the eventual product will achieve compared to the experiment. It’s much easier to be precise when a human is thinking about everything manually.
To help you plan a Concierge MVP study and track its results, use the Concierge MVP board (see Figure 5.4), which is available here.
The following are additional examples of Concierge MVP experiments:
1. Assisting lost shoppers:
Problem: Men find it hard to find their way around a grocery store when they shop for groceries. They waste a lot of time trying to find certain items, while being unwilling to admit they can’t find them and ask grocery personnel for help.
The big idea: An iPad mounted on a grocery shopping cart with an app that allows shoppers to take a photo of their shopping list. The app then provides the shortest, most effective route to shop for groceries on the list.
Manual solution (the Concierge MVP): A personal assistant (not wearing a store uniform so that others don’t know the customer is getting help) walks with the customer in the store and shows him where groceries are located. The assistant guides the male shopper through a preferred route, which saves the customer time. Notice that this specific Concierge MVP might be misleading since customers might appreciate having the company of a person while shopping, which you cannot replicate in a digital solution later on. Be sure to pay attention to needs and wants while excluding the noise involved in having a person there with the customer.
2. Matching colors:
Problem: People who want to paint a room have no idea which wall color goes well with existing room furniture and floor color. They want to be creative and cool, but don’t know how.
The big idea: A Web site allows people to upload room photos. As users select walls they want to paint, they are presented with suggestions for matching colors based on the color palette of the furniture in the room.
Manual solution (the Concierge MVP): An interior designer who specializes in color theory and practice sends color suggestions accompanied by rationale to customers who sent room photos via email. The designer also offers a 15-minute phone consultation to answer customers’ questions. This consultation must also be replicated in the final product; otherwise, the experiment results will be misleading.
3. Finding customers:
Problem: Independent hardware producers find it extremely challenging to find and reach out to potential customers.
The big idea: A Web site that features (and sells) five weekly hardware items by independent hardware creators.
Manual solution (the Concierge MVP): The people behind the Web site curate a collection of indie-produced hardware items and send a weekly newsletter to interested customers. The email includes contact details of indie creators in case customers want to purchase any of the items directly.
4. Managing enterprise mobile security:
Problem: Enterprises need to deal with multiple unusable applications for managing mobile security and privacy, specifically for mobile messaging and biometrics.
The big idea: A software product that hooks into various mobile security and networking services while providing a dashboard for evaluating security threats, as well as actionable recommendations for dealing with them.
Manual solution (the Concierge MVP): An enterprise network security expert who is well informed and experienced provides an in-house review and consultation to enterprise data security departments. The expert’s output is a detailed report about potential security breaches and action remedies. Make sure that it’s not the concierge that people want, but rather the provided service. Otherwise, again, the experiment results will mislead you.
STEP 3: Find customers and pitch Concierge MVP.
An important aspect of a Concierge MVP experiment is the pitch to potential customers. It’s important because this is when you first meet potential customers and understand their perception of your product’s value. As soon as you have completed Step 2 and you are ready to provide your service manually, tailor your pitch to prospects and get out of the building to find new customers. Write down your pitch on the Concierge MVP Board (see Figure 5.4).
If you are located close to your audience, identify places where they linger and pitch your product. For example, if your idea solves a problem for teachers, approach them when they leave school or at teacher conferences. If you target people who love to cook, find them at specialty cooking equipment stores. If your audience is music lovers, find them at concerts. People who love to go to New York City for vacation? Go to Times Square. Grocery shoppers? Go to store parking lots. Enterprise security officers? There are dozens of annual conferences on that topic. Use the Concierge MVP Board to write down places where you assume your audience lingers and then physically go there.
If you are located far away from your audience, pitch your product over social media. (Learn more about finding your audience on social media in Chapter 9.)
Post on Facebook groups and pages relevant to your product domain.
Tweet a short pitch over Twitter and use hashtags.
Post on Google Plus communities and pages relevant to your product domain.
Post on LinkedIn groups relevant to your product domain.
Post on any other relevant social media. For example, if your audience is primarily German-speaking business people, post your pitch on Xing (the local version of LinkedIn).
Use the Concierge MVP Board to collect links to social media groups, communities, pages, and hashtags your audience spends time with.
As soon as people agree to participate, ask for their email or phone number (whichever makes more sense) and contact them with the next steps. Make sure that you recruit enough customers to learn from, but no more than you can handle. If you plan on recommending dishes to people based on their preferences, 500 people is probably too many to put on your plate, literally. As a rule of thumb, five people should be enough for exploring a Concierge MVP. Once people agree to participate, set expectations and let them know exactly what is going to happen.
If not enough people agree, change your pitch. If that doesn’t help, have someone else pitch it. If that doesn’t help, try more locations (both physical and virtual). If that doesn’t help and people are just not interested no matter what you say, who says it, or where, maybe you should consider changing your idea.
STEP 4: Serve the Concierge MVP to customers.
Without writing one line of code, serve your MVP to customers over phone, email, SMS, IM, or in person. Among these options, give extra consideration to SMS: it is the lowest-fidelity way of prototyping, since it strips away all user interface and interaction design, and it’s almost universally accessible across ages and geographies. As you serve your MVP, make sure you do the following:
Keep interaction with customers to a minimum. Don’t communicate more than what you set expectations for. You want customers to react to your product’s added value, not the noise you might create around it.
Track key events. Several key events will take place as your first customers experience your service. These might be understanding the instructions you sent, the first impression of your service, the first interaction, the first problem your customers have, requests they make and complaints they submit, as well as their reaction to your request for payment. Note down every such event per participant in the Concierge MVP board.
Proactively seek feedback. In addition to understanding what happens when your first customers consume your service, try to understand why. Ask them to share their feedback, perspectives, and thoughts each time something meaningful happens (for example, the first time their expectations are not met) or once every day or week. Again, don’t nag too much. Otherwise, you’ll lose them for the wrong reason. Pay extra attention and put more weight on how customers behave rather than on what they think and say. Always keep in mind that rationalization might occur in these feedback-requesting situations (read more about rationalization in Chapter 2, Step 5).
Make changes. Once you realize something is not working well for your customers, take action and change it. There are no rules in terms of how many people need to report or experience the same thing for you to decide to change something. This is a qualitative evaluation, not a quantitative one. You will need to make a judgment call and decide to change something only after you are sure it’s creating a challenge for your participants. Alternatively, you might find yourself in front of a “head-banger.” A “head-banger” is a problem you obviously need to solve yet you never realized it before someone experienced it. It is so obvious that you bang your head against a wall, not understanding how you could have missed it. A head-banging issue is one that happens once, and that one time is enough for you to realize you should change something.
Ask for payment. At a certain point in time, when you feel things are running smoothly, you should pop the question. Ask your customers to pay for the manual service. Give them notice in advance that the service will become a premium one starting at an upcoming date. Offer them a good deal since they were kind enough to help you learn. That said, the price point should be one that shows their commitment. If customers are willing to pay, excellent! It signals they perceive the service as something they need. That’s a very good sign you struck a chord with your first customers. If they are not willing to pay, it’s a learning opportunity for you. Try to get down to the root cause of their resistance. After you understand why, make the necessary changes. A word of warning: do not confuse asking for payment as described above with just asking “Would you pay for this?” Asking this question is not a substitute or shortcut for asking for payment. In fact, it has nothing to do with this technique at all. If you ask for payment, you gather behavioral data, while if you ask the “would you” question, you are collecting attitudinal data that will mislead you since people have no idea what they would do in the future. Also, people in these situations will want to be nice and helpful and will tell you they would pay for it. Don’t fall into this trap.
Track your customers’ behavior, reactions, and feedback, as well as lessons you learned, in the Concierge MVP board (see Figure 5.5). Concierge MVP exploration is an iterative process. There are no time limitations to running it and no rules in terms of how many times you can or should use it. If it makes sense for you to run three rounds of Concierge MVP experiments, go for it. Just make sure that you document the lessons you learned and make necessary changes between each round. The Concierge MVP board can support such an iterative process. Just add a sheet per MVP iteration, and track from round to round whether specific issues disappear once addressed, whether new issues come up, whether overall satisfaction increases, and whether propensity to pay goes up as you iterate.
STEP 5: Design a Fake Doors experiment.
Many people with a new product idea have two ways of finding out if potential customers want it:
Launch a landing page with key benefits and a screenshot and collect email addresses of people who are interested. If the conversion rate (the number of people giving their email address divided by landing-page impressions) is high enough, a decision is made to develop the product.
Ask potential customers if they want it. What’s considered a healthy process in many organizations is sending a team with a new product idea to the organization’s top customers. The team then passionately describes the idea and asks for feedback. Would you use it? Would you pay for it? How much? What features do you want? If 10–15 customers show they are interested, the organization goes ahead and develops the product.
These two activities are seductive to startups and huge corporations alike. They feel science-y and data-ish. An entrepreneur having a customer tell her he wants her product is inherently validating during a time when the entrepreneur is probably vastly insecure about what she’s building and is desperate for someone to compliment the product. It’s innately human. Don’t be tricked. This kind of research will mislead you and waste your time, as it’s profoundly wrong, unreliable, and invalid. For example, startups tend to launch a landing page, thinking it’s the right way to learn if people need their product. The problem is that the only question landing pages answer is “Are people interested enough to give us their email address?” They learn nothing about what people want or need. Humans have no idea what they need and will almost always be nice to people who ask them. It doesn’t cost them much to be nice and say it’s a great idea.
That said, not everything is black and white when you ask “Will you use this?” Some people do actually know—for example, specialists (like doctors) in fields with atrocious user experiences where there are obvious design opportunities.
The Fake Doors technique is a powerful, quick, waste-reducing way to find out if people want a product, feature, or service. There are three ways to design a Fake Doors experiment:
Landing or crowdfunding page: Launch a landing page that attempts to elicit some kind of commitment from its visitors. This commitment could be asking them to pay for a product that doesn’t exist yet. Starting an IndieGoGo or Kickstarter project is a variation on evaluating such a commitment. Be aware, though, that crowdfunding attracts very specific types of audiences that might not overlap with yours.
The button to nowhere: When you want to evaluate if people need a certain feature within an existing product, add a button or link or tap target to your product indicating that a certain capability or feature exists behind it. When users press, click, or tap it, show an indication that the feature doesn’t exist yet—a “coming soon” note or an “in progress” banner. Obviously, this technique requires you to have a product and enough visitor traffic.
404 testing: Launch an advertising campaign, for example, with Google AdWords or Facebook Ads. (A word of warning: advertising involves brand, imagery, tone of the messaging, targeting, and more. If you are a beginner in online advertising, be aware that, if done poorly, it can be a huge barrier to attracting people who would really benefit from the experience.)
Ads included in the campaign lead to a 404 error page. You don’t need to develop anything. Your only goal is evaluating if people are interested in the product based on the ads. Personally, I wouldn’t recommend using a 404 page. I think it’s too nasty. You can decide for yourself.
To prevent visitors from getting angry with you and feeling tricked, be completely honest with them. In your “coming soon” message, be sure to thank them for helping you learn about their needs. If you have the budget for it, apologize and consider compensating them with a small token of appreciation, such as a $5 gift certificate on Amazon.
Would Google Do That?
I get asked a lot whether Google does any of this: Fake Doors, buttons to nowhere, or 404 testing. To the best of my knowledge, the answer is no. For a good reason, in my opinion. In a world where thousands of news articles and social media posts burst into the air worldwide after Google moves one letter in its logo one pixel to the right, you can only imagine what would happen if Google implemented a 404 test.
My point is simple: if you work for or founded an organization that is willing to experiment and does not have half the world watching its every step, go for it and use Fake Doors studies. Just do yourself a favor and don’t be nasty. Never intentionally lead people to a 404 page only because you want to learn. Have the courtesy to admit it and apologize for not having the product available. Be open about it. Thank the people who help you learn, and if you can, give them a small gift as a gesture.
STEP 6: Determine a Fake Doors threshold.
There is one piece of data coming out of a Fake Doors experiment that helps you answer the “Do people want my product?” question: the ratio of people who showed interest in the product or feature to people who were exposed to the message about it. “Showed interest” means they either paid to buy the product, funded it, clicked the button to nowhere, or clicked through an ad.
When you decide in advance what ratio (or dollar value) will make you want to develop the product or feature, you have a powerful, decision-driving research tool at hand.
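To make the decision rule concrete, here is a minimal TypeScript sketch with made-up numbers. The counts, the threshold, and the variable names are all hypothetical; in practice, you would pull the real counts from whatever analytics you used in the Fake Doors experiment.

// A minimal sketch of the threshold decision rule, with made-up numbers.
const exposed = 2400;    // people who saw the fake door, landing page, or ad
const interested = 168;  // people who clicked, paid, or funded
const threshold = 0.05;  // the ratio you committed to in advance (5%)

const interestRatio = interested / exposed; // 0.07, or 7%

if (interestRatio >= threshold) {
  console.log("Threshold crossed: consider developing the product or feature.");
} else {
  console.log("Threshold not crossed: learn why and iterate on the idea.");
}

The important part is not the arithmetic but the commitment: decide on the threshold before you see the data.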
STEP 7: Make a decision and move on.
Jeff Bezos, Amazon’s founder and CEO, says, “Experiments are by their very nature prone to failure.”
After you have collected data through either a Concierge MVP or a Fake Doors experiment, it’s time to evaluate, make an informed decision, iterate, and move on. When the threshold you’ve set in advance is crossed, or when participants are so enthusiastic about your offering that they’re willing to pay for it even in its manual version, you have great signals that potential customers recognize the value of your product, feature, or service and that they want it. This serves as validation, and you can go ahead and make progress with developing a product prototype.
In most cases, though, you will find that your assumptions are invalidated. You learn that your idea has failed. Potential customers don’t provide any clear signal they want your product. This is where a lot of entrepreneurs, product managers, and startup founders make bad decisions. In my research for this book, I discovered that many of them decide to ignore what they learned and still chase their passion for making a product out of their idea.
Don’t get me wrong. I am not suggesting that after you learn people don’t want your product you should stop chasing your dream and vision. Not at all. I am calling on you to pivot: to make informed decisions that will help you change your idea a little bit so that it appeals to your intended audience. Research is there to inform your intuition. Sometimes, it’s the audience that you need to pivot, not the product. In any case, be sure to make a decision based on the data you collect, then implement it, and experiment again. A lot of people use terms such as UX, design thinking, and innovation without truly understanding what they mean. Iterating, pivoting, and evaluating is exactly what they mean. When you get frustrated that people don’t want your product, then change and test it—that’s innovation, design thinking, and user experience.
Other Methods to Answer the Question
While a Concierge MVP and a Fake Doors experiment are fast, effective ways of answering the “Do people want the product?” question, here are some additional MVP techniques for answering it.
Interview (in person or phone call)
Paper prototype
Pre-order page
Blog
Online ad campaign
Crowdfunding campaign
Mechanical Turk
Contract
Video
Software or hardware prototype
Wizard of Oz
Single-feature product
The product itself
Note—Concierge MVP and Fake Doors Experiment Resources
Access the online resource page for Concierge MVP and Fake Doors experiments on the book’s companion Web site at leanresearch.co. You’ll find templates, checklists, videos, slide decks, articles, and book recommendations.
Concierge MVP and Fake Doors Experiments Checklist
Choose an experiment type.
Design a Concierge MVP.
Find customers and pitch Concierge MVP.
Serve the Concierge MVP to customers.
Design a Fake Doors experiment.
Determine a Fake Doors threshold.
Make a decision and move on.
“After you guys gave me the go-ahead, we launched a User Testing study with six participants,” said Jennifer, queuing up videos on a laptop in their conference room. “They were asked to complete three basic tasks with note.io: sign up, create a list, and share a list with a friend. There were a few other things they could do, but those three were the most important.”
Dana sat at the table next to Will, who was still irritated, she noticed.
“Now, I want you to write down three big things you learn as you watch the videos,” said Jennifer.
Dana paid attention and hoped Will was as well. She also hoped it wouldn’t take too long. Fortunately, most of the videos were short—only about three minutes; the longest, maybe fifteen. Still, she could feel Will getting more and more angry as the videos went on. Finally, it was over, and Jennifer looked over at them.
“So, Will. What did you think?” Jennifer asked, her expression curious but otherwise neutral.
“I can’t believe we didn’t do anything like this before,” he said, in a low voice.
“It was kind of painful to watch,” admitted Dana. Why had they thought friends and family would be good enough for this kind of research?
“It felt like a slap in the face,” Will continued. “Most of the time, I felt like either holding my head or banging it on the table!”
Dana felt some tension ease in her chest.
She was afraid Will would just flat out refuse to work with Jennifer, but based on his comments, at least it sounded like he was seeing just how badly they needed what she offered.
“Still,” said Will, “it was just six people. Can we really make a big deal about this data?”
“You’d be surprised,” replied Jennifer. “What did you learn?”
Will looked down at the notes he’d scrawled hastily. “The onboarding funnel is buggy. It doesn’t explain what note.io is in terms people understand, and it’s too long, which makes people doubt they’d use it.”
“Those are easy to fix,” Jennifer assured him. “Anything else?”
Dana cleared her throat. “This is probably stupid, but did you notice that four of the women mentioned they created grocery shopping lists for their husbands, so they knew what to buy?”
Will snorted with contempt, and Dana shrank back a bit, embarrassed. “I noticed that, too,” Jennifer mused. “Maybe we should look into it.”
Discount for UXmatters Readers—Buy Validating Product Ideas Through Lean User Research from Rosenfeld Media, using the discount code UXMATTERS, and save 20% off the retail price.
Endnote
[1] Ries, Eric. The Lean Startup: How Today’s Entrepreneurs Use Continuous Innovation to Create Radically Successful Businesses. New York: Crown Business, 2011.
A user experience researcher at Google New York since 2008, Tomer is currently doing user research for Google Search. Previously, he led the UX research effort for Google’s online advertising management platform, DFP (Doubleclick for Publishers). Prior to working for Google, he was a user researcher at Check Point Software Technologies in Israel, where he led the research effort for dozens of networking and Internet security products on various platforms. As founder and president of UPA Israel, Tomer led the chapter to many achievements, including raising awareness of the need for easy-to-use, efficient, and fun technology products, as well as growing and nurturing a professional community of 1,000 practitioners. He speaks at conferences and professional events, is a published author of articles and papers, and a past editorial board member for UX Magazine. Tomer is also the author of the Morgan Kaufmann book It’s Our Research: Getting Stakeholder Buy-in for User Experience Research Projects (2012). He holds a master’s degree in human factors in information design from Bentley University.