Experimentation and Prototyping

A Guide for Product Leaders

Incumbent companies across every industry have been put on notice: technology upstarts are taking over the Fortune 500 list at an accelerating pace. No one wants to be the next Blockbuster. But the ability to avoid that fate won’t come from open workspaces, a relaxed dress code, and a bring-your-own-device policy. It requires teams to gain insights directly from consumers, align to the actual wants and needs of the market, and accelerate time-to-market. 

Look no further than Amazon. In a 2017 letter to shareholders, CEO Jeff Bezos credited a decision-making model for the company’s ability to deliver value to the market faster:

“Most decisions should probably be made with somewhere around 70% of the information you wish you had. If you wait for 90%, in most cases, you’re probably being slow. Plus, either way, you need to be good at quickly recognizing and correcting bad decisions. If you’re good at course correcting, being wrong may be less costly than you think, whereas being slow is going to be expensive for sure.”

So how does Amazon quickly generate 70% of the information it needs to make a decision? “Amazon has a point of view that’s deeply embedded in the company,” Investor’s Business Daily explains. “It will win with the customer by doing a series of small bets that give it insight on how to build that long-term customer relationship.”

Combining experimentation with market research

While many organizations have deep and meaningful expertise in large-scale market research, their findings are running into critical limitations.

Product teams responsible for introducing new solutions to the market are embracing agile workflows and need higher-velocity decision-making to inform recurring code sprints. They can’t wait for large-scale studies to be completed, and so are left making decisions based on non-contextualized data from past reports at best, or gut feelings at worst. In an effort to get closer to Amazon’s benchmark of having 70% of the information needed, some product managers even go to a nearby Starbucks to gather feedback from random patrons. It’s not a rigorous practice by any means.

In short, market research is extraordinarily effective for sensing broad marketplace changes and informing the organization about what markets to enter and what go-to-market strategies to deploy. But it simply can’t inform the day-to-day decisions that product teams need to make.

Small bets, via experimentation, introduce data into these daily decisions. Designed specifically for continuous iteration, experimentation gives product teams directional insight into distinct user preferences and behaviors.

Whereas market research is suited to sophisticated initiatives like market segmentation and conjoint analysis studies, experimentation covers more tactical roadmap-prioritization activities like concept split tests and usability assessments. Integrated, market research and experimentation enable organizations to allocate resources intelligently and efficiently.

When adding experimentation to market research, the biggest mistake you can make is failing to play to each method’s strengths and weaknesses. Misusing data can be more dangerous than operating without data. Remember that experimentation is an iterative testing methodology that provides directional insight, whereas market research consists of study-based methodologies that provide statistically significant results.

The quality of the insights generated is directly correlated with the quality and rigor of your methodology. With regard to experimentation, you should ensure that:

  • You are eliminating bias from psychological distance by generating revealed preferences (i.e. authentic reactions) to stimuli rather than stated preferences (i.e. zero-risk answers) to survey questions
  • You are running split tests so that your insights are comparative (e.g. Feature A vs. Feature B) rather than independent (e.g. reactions to just Feature A), especially with smaller samples (see the sketch after this list)
  • You are sourcing an audience based on behaviors and problems, and not on preconceived notions of how certain demographics map to preferences
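
To make the comparative point concrete, here is a minimal sketch (Python, standard library only) of how a team might compare two split-test variants. The counts and the two_proportion_z_test helper are hypothetical illustrations, not a prescribed method, and with small samples the output should be read as directional evidence rather than proof.

```python
# Minimal sketch: comparing two split-test variants with a two-proportion
# z-test, using only the Python standard library. All counts below are
# hypothetical placeholders.
from math import sqrt, erfc

def two_proportion_z_test(successes_a, n_a, successes_b, n_b):
    """Return (z, two-sided p-value) for H0: rate A == rate B."""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    # Pooled success rate under the null hypothesis of no difference.
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided
    return z, p_value

# Hypothetical results: 200 testers saw each concept, and we count how many
# said they would sign up. Comparative framing: A vs. B, same audience, same ask.
z, p = two_proportion_z_test(successes_a=68, n_a=200, successes_b=44, n_b=200)
print(f"z = {z:.2f}, p = {p:.3f}")  # treat the result as directional, not final
```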

Done right, you can combine experimentation with market research into a powerful, complementary whole.

Experimentation best practices

You don’t have to wait until you have a fully engineered product in the market to get feedback from customers for the first time. Split testing, or “A/B testing,” is a hugely popular marketing tactic for optimizing collateral and improving conversion funnels early in a campaign, before making major investments in any particular direction. Similarly, product teams can run tests using alternative prototypes or mockups to save time and money while also unearthing potential for innovation – all before writing a single line of code!

Follow this 5-step split testing process and accompanying example to generate qualitative and quantitative data with prototypes:

  1. Formulate a hypothesis: Begin with a decision that needs to be made, and work backward toward the assumptions or questions that need to be tested or answered in order to be as informed as possible. You have limited resources and stakeholders simultaneously clamoring for a number of features. You can’t do everything. You have to choose among alternatives, each of which has its own merits. Frame the assumptions and success criteria that will ultimately drive the decision you need to make.
  2. Create alternative prototypes or mockups: Once you have a clearly defined hypothesis, you have to create stimuli that will generate the user feedback you need to validate or invalidate your hypothesis. If you are prioritizing features on a roadmap, create a prototype that highlights the feature along with a prototype that doesn’t include the feature. If you are determining which possible implementation of a feature would perform best, create a prototype of each.
  3. Source your target audience: Once your prototypes are ready, you’ll need access to your target audience to generate feedback. You’ll need a small sample for qualitative testing and a larger sample for quantitative testing. Depending on what you’re looking to learn, you may want to stagger and reorder qualitative and quantitative testing to maximize insights.
  4. Generate authentic feedback: Alpha’s on-demand insights platform has best practices built into testing, but the rule of thumb is to do your best to simulate what would be an authentic shopping or evaluation experience. In the real world, customers usually have multiple options when making a purchasing decision. They’re usually in a specific mindset when shopping for certain types of offerings. Your testing methodologies should simulate these experiences and mental models, and include hard-hitting questions that generate authentic behaviors and responses.
  5. Objectively evaluate results: Evaluating test results is often just as difficult as, if not more difficult than, generating the data in the first place. I recommend two techniques to mitigate bias (see the sketch after this list). First, never rely on a single data point to make a decision. Iteration, robustness, and replicability are the most powerful weapons in your arsenal. Second, try to separate the ideation process from the evaluation process. Don’t let individuals who are emotionally invested in ideas also be the ones to evaluate the test results.
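
As a rough illustration of steps 1 and 5, the sketch below (Python) writes down the hypothesis and success criteria before the test runs, then evaluates replicated results against that pre-committed bar. The SplitTestPlan structure, its field names, and the thresholds are all hypothetical assumptions, not a prescribed format.

```python
# Minimal sketch of steps 1 and 5 as a decision record: frame the hypothesis
# and success criteria up front, then evaluate results against them so the
# decision isn't made by whoever is most invested in the idea. All names and
# thresholds here are hypothetical.
from dataclasses import dataclass

@dataclass
class SplitTestPlan:
    decision: str           # the decision this test informs
    hypothesis: str         # what we expect, stated before testing
    metric: str             # the single success metric
    min_lift: float         # pre-committed threshold for "ship it"
    replications: int = 2   # never rely on a single data point

def evaluate(plan: SplitTestPlan, lifts: list[float]) -> str:
    """Compare observed lifts (one per replication) against the pre-set bar."""
    if len(lifts) < plan.replications:
        return "inconclusive: run more replications"
    if all(l >= plan.min_lift for l in lifts):
        return f"supported: ship ({plan.metric} lift >= {plan.min_lift:.0%} in each run)"
    return "not supported: iterate on the concept"

plan = SplitTestPlan(
    decision="Prioritize in-app chat on the Q3 roadmap",
    hypothesis="A prototype with in-app chat lifts sign-up intent",
    metric="sign-up intent",
    min_lift=0.10,
)
print(evaluate(plan, lifts=[0.14, 0.11]))
```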

Prototyping best practices

Testing your product ideas in prototype form is a critical part of the product life cycle. While surveys are powerful for learning about user preferences, testing interactive prototypes is needed to generate specific, insightful feedback about how your users understand and interact with your product.

But there are a few things you should know first.

Don’t test the one prototype to rule them all. It’s very common for product teams to create large prototypes that serve multiple goals: presenting to major stakeholders for approval on the design and user experience, or showing developers the build expectations. While these heavy prototypes may be a necessary part of your product process, they are not ideal for focused user testing. An unfocused prototype means that testers can meander down any possible pathway in your product and may never see the areas you actually need feedback on.

Instead, break up these large prototypes into shorter, smaller flows with fewer screens, so that each prototype is focused on an area you want to test. Smaller prototypes take less time to create, review and edit, and naturally shorten the production time.

To decide what to include in the prototype, ask yourself:

  1. What is the most important thing we want to learn from this test?
  2. What screens and/or interactions does the tester need to experience in order to answer our questions?
  3. What does the user not need to see for this test?

Keep each prototype to two flows maximum, and decide which areas need interaction/clickability. To simplify the prototype, omit any interactions that are not related to the most important thing you want to learn.
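
One way to enforce that discipline is to write the test scope down before building anything. Below is a minimal sketch, assuming a simple in-house format, that captures the three questions above and the two-flow cap; PrototypeTest, PrototypeFlow, and every field name are hypothetical, not part of any particular tool.

```python
# Minimal sketch of a prototype test scope: one learning goal, only the
# screens the tester must reach, and a hard cap of two flows. Every name
# below is hypothetical.
from dataclasses import dataclass, field

MAX_FLOWS = 2  # keep each prototype focused

@dataclass
class PrototypeFlow:
    name: str
    screens: list[str]          # only screens needed to answer the question
    clickable: list[str] = field(default_factory=list)

@dataclass
class PrototypeTest:
    learning_goal: str          # the most important thing we want to learn
    flows: list[PrototypeFlow]

    def validate(self) -> None:
        if len(self.flows) > MAX_FLOWS:
            raise ValueError(f"Split this test up: {len(self.flows)} flows > {MAX_FLOWS}")

test = PrototypeTest(
    learning_goal="Can first-time users find and complete checkout?",
    flows=[PrototypeFlow("checkout", screens=["cart", "payment", "confirm"],
                         clickable=["cart", "payment"])],
)
test.validate()  # raises if the prototype has grown beyond a focused test
```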

You must also be aware of misaligned objectives among different stakeholders in the process. While user research and testing have become popular buzzwords, carving out the time to execute short testing cycles (and iterate based on the feedback) within a waterfall framework is very difficult. Many large companies (and agencies working with large companies) are required to get approvals before moving to the testing phase, and these approval processes can take weeks, if not months. In addition, most agencies create budgets with milestones and deliverable dates they need to hit, and may not factor in time for multiple cycles of user research and testing due to client-dictated timelines.

To address this issue, encourage budget creators to allow for these multiple test cycles up front, and communicate to the client why it is important to include this time. Testing shorter prototypes (as mentioned above) will help alleviate the stress of creating prototypes for testing and client approval, as well as provide data points to back up all design decisions.

Better workflows, smarter product decisions

As with most things in business and life, there’s no one-size-fits-all approach. Three general models work for adding experimentation to your company’s arsenal of customer-focused initiatives:

  • Product teams have direct access to frameworks, tools, and platforms to run experiments. This works really well when your teams are trained to understand data, empowered to make decisions, and need to move quickly in an emerging or fast-paced market.
  • Research teams incorporate their principles and best practices into the frameworks, tools, and platforms that product teams use. This works really well when research teams communicate the strengths and weaknesses of different methodologies and are aligned with product teams’ objectives and workflows.
  • Research teams act as a hub for running experiments, using their own frameworks, tools, and platforms entirely. This works really well if research teams have the resources and bandwidth to handle and prioritize all the inbound questions and hypotheses.
