How do decision makers think and experiment iteratively?

by Elizabeth Quispe
December 12, 2017

In today’s rapidly changing world, every business and product leader wants to make decisions quickly and with as much confidence as possible. The challenge is that speed and certainty are often at odds. That’s why the best decision-makers embrace methodologies and tools for rapid experimentation and iterative optimization. In other words, that’s why we built Alpha.

Historically, managers would commission large-scale research upfront, before investing in an initiative or project. They would design a study to evaluate a business opportunity, and it would take months to get data from a massive sample size. In simpler times, that may have worked, but consider what happens when companies do that today.

In mid-2017, a multinational bank used Alpha to test facial recognition for mobile app logins. The results were abysmal – users didn’t trust that facial recognition was secure, consistent, or convenient. So the bank rightfully scrapped the idea.

Just a few months later, Apple announced “Face ID,” a new technology coming to their flagship iPhone X device that would enable users to unlock their phones after sensors scanned their face. Of course, many people were and continue to be skeptical, but Apple is the most successful technology company in the world when it comes to driving behavior change. Needless to say, Face ID will soon become the new standard for securely logging into consumer devices.

A completely unpredictable and external factor made an idea suddenly great when, just months before, the same idea was terrible.

In one version of the world, the aforementioned bank exclusively conducts research before a project and makes a permanent decision about what to invest in afterward. In a more forward-thinking reality, the bank continually tests assumptions – even the same assumptions – to track consumer preferences and make informed decisions throughout the product lifecycle based on the most relevant data. A recent article in Harvard Business Review summarizes the necessity of the latter approach:

An alternative perspective on strategy and execution — one that we argue is more in tune with the nature of value creation in a world marked by volatility, uncertainty, complexity, and ambiguity (VUCA) — conceives of strategy as a hypothesis rather than a plan. Like all hypotheses, it starts with situation assessment and analysis — strategy’s classic tools. Also like all hypotheses, it must be tested through action. With this lens, encounters with customers provide data that is of ongoing interest to senior executives — vital inputs to dynamic strategy formulation. We call this approach “strategy as learning,” which contrasts sharply with the view of strategy as a stable, analytically rigorous plan for execution in the market.

It may not be a choice after all. “Strategy as learning” is perhaps the only option in today’s world. There are, however, some very real challenges involved when adopting this new workflow.

Shifting to iterative experimentation

Few organizations are built for iterative decision-making. Plans are due ahead of time. Stakeholders have set expectations. Even within a ‘test and learn’ environment, hypotheses can be difficult to articulate and validate.

You first have to educate key stakeholders on the value of experimentation. By having empathy for their objectives and priorities, you can frame and align experimentation with their needs, rather than telling them to have different needs. We wrote a guide for communicating the value of experimentation to common personas, and outlined a calculation method for measuring its ROI throughout the product lifecycle.
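The guide’s own calculation method isn’t reproduced here, but one common way to frame the ROI of experimentation is expected build cost avoided versus money spent on tests. The function name and every figure below are hypothetical, invented purely for illustration:

```python
def experimentation_roi(build_cost, failure_rate, num_tests, cost_per_test):
    """Rough ROI framing: expected waste avoided by killing bad ideas
    early, net of test spend, divided by test spend.
    All inputs here are hypothetical illustration values."""
    avoided_waste = build_cost * failure_rate  # expected cost of building the wrong thing
    test_spend = num_tests * cost_per_test
    return (avoided_waste - test_spend) / test_spend

# e.g. a $200k build, a 40% chance the concept fails in market,
# de-risked with 8 small tests at $1,500 each
roi = experimentation_roi(200_000, 0.40, 8, 1_500)
```

With those invented numbers, roughly $80k of expected waste is avoided for $12k of testing – the kind of arithmetic that makes the case to stakeholders concrete.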

In order to actually make the change from upfront research to iterative experimentation, you need to understand the differences and how to use each.

Market research solves an acute pain point, but it was not designed for the needs of product managers and decision makers. Product research, on the other hand, involves testing ideas with small samples of potential users and iterating on each idea based on the insights from every test.

A sample size of ten may not seem large enough to provide meaningful insight, but small samples are a feature of product research, not a limitation. They enable product teams to run more tests, which mitigates bias that might be present in any single test, and they surface glaring risks sooner, before more cost is sunk into research or development. Remember that research is a broad category that includes far more than statistics and statistical significance, and leading product teams master other forms of learning that are better suited to their constraints.

In these situations, no single data point is a definitive “green light” or “red light” decision. Each test will teach you something that you want to follow up on. The value of testing is in a collection of tests, not just a single test. It takes continuous iteration to develop a good idea into a product or feature set that will be successful in the market.

Experimentation with Alpha

Alpha’s platform is designed to drive a workflow around iterative experimentation. It’s there for when you need data that you can’t find on Google, and don’t have the time to craft and run a large-scale study.

Thousands of assumptions go into every decision, and thousands of decisions are made throughout the product lifecycle. Alpha helps you put data behind as many decisions as possible. The result is less risk and a better chance of achieving your goals.

When using Alpha, decision makers have two primary responsibilities: asking the right questions, which get turned into experiments, and making decisions with the resulting data. The latter involves stakeholders and frameworks unique to your business, so we focus on best practices for the former.

Framing an assumption

Alpha offers six convenient templates for inputting assumptions into the platform. Each corresponds to a different phase of the product lifecycle, and can be configured to source your target audience, use pre-designed assets, and capture requested information.

Building a culture of experimentation is key to competing in today’s fast-changing world. Alpha is here to help any organization test and substantiate assumptions rapidly. If you have further questions about our platform, check out our Getting Started page. Of course, you can reach out to your customer success manager at any time for help!

Elizabeth Quispe

Elizabeth Quispe is the Director of Customer Success at Alpha. She helps clients learn best practices for experimentation and data-driven decision-making.