Experimentation best practices
You don’t have to wait until you have a fully engineered product in the market to get feedback from customers for the first time. Split testing, or “A/B testing,” is a hugely popular marketing tactic for optimizing collateral and improving conversion funnels early in a campaign, before making major investments in any particular direction. Similarly, product teams can run tests using alternative prototypes or mockups to save time and money while also unearthing potential for innovation – all before writing a single line of code!
Follow this five-step split-testing process to generate qualitative and quantitative data with prototypes:
- Formulate a hypothesis: Begin with a decision that needs to be made, and work backward toward the assumptions or questions that need to be tested or answered in order to be as informed as possible. You have limited resources and stakeholders simultaneously clamoring for a number of features. You can’t do everything. You have to choose among alternatives, each of which has its own merits. Frame the assumptions and success criteria that will ultimately drive the decision you need to make.
- Create alternative prototypes or mockups: Once you have a clearly defined hypothesis, you have to create stimuli that will generate the user feedback you need to validate or invalidate your hypothesis. If you are prioritizing features on a roadmap, create a prototype that highlights the feature along with a prototype that doesn’t include the feature. If you are determining which possible implementation of a feature would perform best, create a prototype of each.
- Source your target audience: Once your prototypes are ready, you’ll need access to your target audience to generate feedback. You’ll need a small sample for qualitative testing and a larger sample for quantitative testing. Depending on what you’re looking to learn, you may want to stagger and reorder qualitative and quantitative testing to maximize insights.
- Generate authentic feedback: Alpha’s on-demand insights platform has best practices built into testing, but the rule of thumb is to do your best to simulate what would be an authentic shopping or evaluation experience. In the real world, customers usually have multiple options when making a purchasing decision. They’re usually in a specific mindset when shopping for certain types of offerings. Your testing methodologies should simulate these experiences and mental models, and include hard-hitting questions that generate authentic behaviors and responses.
- Objectively evaluate results: Evaluating test results is often just as difficult as generating the data in the first place, if not more so. I recommend two techniques to mitigate bias. First, never rely on a single data point to make a decision. Iteration, robustness, and replicability are the most powerful weapons in your arsenal. Second, try to separate the ideation process from the evaluation process. Don’t let individuals who are emotionally invested in ideas also be the ones to evaluate the test results.
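The sourcing and evaluation steps above lend themselves to a quick back-of-the-envelope check. Here is a minimal sketch using the standard two-proportion formulas — it is not the methodology of any particular platform, and the conversion rates, sample sizes, and function names are hypothetical:

```python
import math

# Illustrative sketch only. Fixed z-scores assume a two-sided 95%
# confidence level and 80% statistical power.
Z_ALPHA = 1.96
Z_BETA = 0.84

def sample_size_per_variant(baseline, expected):
    """Approximate testers needed per prototype to reliably detect a
    lift in conversion (or preference) from `baseline` to `expected`."""
    pooled = (baseline + expected) / 2
    effect = abs(expected - baseline)
    n = ((Z_ALPHA * math.sqrt(2 * pooled * (1 - pooled))
          + Z_BETA * math.sqrt(baseline * (1 - baseline)
                               + expected * (1 - expected))) ** 2) / effect ** 2
    return math.ceil(n)

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value comparing conversion counts for prototypes A and B,
    using a pooled two-proportion z-test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return math.erfc(abs(z) / math.sqrt(2))  # normal-tail p-value

# Hypothetical example: how many testers per prototype to detect a
# lift from a 10% to a 15% conversion rate?
print(sample_size_per_variant(0.10, 0.15))

# Hypothetical example: 48 of 300 testers converted on prototype A,
# 75 of 300 on prototype B. Is the difference likely real?
print(two_proportion_p_value(48, 300, 75, 300))
```

Numbers like these are a guardrail, not a verdict: a small p-value tells you the difference between prototypes probably isn’t noise, but the iteration and separation-of-roles practices above still apply.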
Prototyping best practices
Testing your product ideas in prototype form is a critical part of the product life cycle. While surveys are powerful for learning about user preferences, testing interactive prototypes is needed to generate specific, insightful feedback about how your users understand and interact with your product.
But there are a few things you should know first.
Don’t test the one prototype to rule them all. It’s very common for product teams to create large prototypes that achieve multiple goals: to present to major stakeholders for approval on the design and user experience, or to show to developers so they can better understand the build expectations. While these heavy prototypes may be a necessary part of your product process, they are not ideal for focused user testing. An unfocused prototype means that your testers can meander down any possible pathway in your product, and may never see the areas you actually need feedback on.
Instead, break up these large prototypes into shorter, smaller flows with fewer screens, so that each prototype is focused on an area you want to test. Smaller prototypes take less time to create, review and edit, and naturally shorten the production time.
To decide what to include in the prototype, ask yourself:
- What is the most important thing we want to learn from this test?
- What screens and / or interactions does the tester need to experience, in order to answer our questions?
- What does the user not need to see for this test?
Keep each prototype to 2 flows maximum, and decide which areas are necessary for interaction / clickability. To simplify the prototype, omit any interactions that are not related to the most important thing you want to learn.
You must also be aware of misaligned objectives from different stakeholders in the process. While user research and testing have become popular buzzwords, allowing the time to execute short testing cycles (and iterate based on testing feedback) within a waterfall framework is very, very difficult. Many large companies (and agencies working with large companies) are required to get approvals before moving to the testing phase. These approval processes can take weeks, if not months. In addition, most agencies create budgets with milestones and deliverable dates they need to hit, and may not factor in the time for multiple cycles of user research and testing, due to client-dictated timelines.
To address this issue, encourage budget creators to allow for these multiple test cycles up front, and communicate to the client why it is important to include this time. Testing shorter prototypes (as mentioned above) will help alleviate the stress of creating prototypes for testing and client approval, as well as provide data points to back up all design decisions.