I’ll always remember watching the near-collapse of the world financial system in 2008 and thinking, “But things were supposed to be going so well!” And they were. The economy wasn’t just up; it was, by most measures, the best it had ever been.
“How could this happen?” I and many others wondered. I believe we fell victim to one of the oldest blunders in business: We were focusing on the wrong metrics of progress. Investment was up, mortgages were selling like hotcakes and the big banks had never failed to turn money into more money. Those, we were told, were the things that mattered.
Had we known which metrics of market health we should have been watching, perhaps the rickety nature of the system would have become obvious.
I see this problem, in one form or another, all over the business world. Even as I’ve built my company, Widerfunnel, based on a reputation for growth-driving experimentation, I’ve never focused on performing more experiments for their own sake. I’ve found that focusing on the quality of insights and proven business value is more beneficial than quantity or production volume alone.
Focusing on the wrong measure of your program’s success creates a misleading sense of accomplishment. Several companies have recently come to us assuming they should measure their experimentation program’s success solely by how many experiments they run, their “experiment velocity.” The implicit logic is that running twice as many experiments will reliably produce twice as much business growth. But that holds only if every experiment has equal value, which couldn’t be more wrong.
Consider two scenarios: In a given span of time, we could conduct either 20 experiments (scenario A) or 10 (scenario B). With the same resources, conducting only 10 experiments allows us to put more effort into qualitative studies, such as customer interviews and user testing, and into expert analysis and behavior-pattern research, which leads to new, data-informed hypotheses. If the 20 scenario A experiments produce an average of 1% revenue improvement each, and the 10 scenario B experiments (founded on stronger insights) produce an average of 4% each, then doing half as many experiments yields twice the total revenue improvement.
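The arithmetic behind those two scenarios can be sketched in a few lines. This is a simplified model that assumes individual lifts are small enough to sum additively (the percentages and scenario names are illustrative, taken from the example above):

```python
# Compare total revenue lift from two hypothetical experiment programs,
# assuming small per-experiment lifts that can be summed additively.

def total_lift(num_experiments: int, avg_lift_per_experiment: float) -> float:
    """Total revenue improvement as a fraction (e.g. 0.20 = 20%)."""
    return num_experiments * avg_lift_per_experiment

scenario_a = total_lift(20, 0.01)  # 20 quick experiments at 1% each
scenario_b = total_lift(10, 0.04)  # 10 insight-led experiments at 4% each

print(f"Scenario A: {scenario_a:.0%}")              # 20%
print(f"Scenario B: {scenario_b:.0%}")              # 40%
print(f"B over A:   {scenario_b / scenario_a:.1f}x")  # 2.0x
```

In a real program, large lifts compound rather than add, which only widens the gap in scenario B's favor.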
That’s why I care less about experiment velocity than metrics that reveal growth and insight velocity. What matters is how quickly your company can improve its performance metrics and customer understanding, not how quickly it can simply produce experiment designs and production code.
That’s what Dan Toma, coauthor of The Corporate Startup, found when he compared two teams’ raw experiment counts against how many of those experiments were actually useful. The team performing fewer experiments was producing far more actionable results per experiment, turning his faith in the power of experiment velocity on its head.
Let’s be clear, though: Doing half as many experiments should not mean doing half as much work. If anything, doing fewer, more meticulous studies may be harder than a rapid-fire approach — but it’s also far more rewarding.
In my experience, taking greater care in the design of experiments produces dramatic improvement in returns versus more frequent and poorly designed studies. Generating a sustainable increase in business improvement (or as my e-commerce clients call it, “uncovered revenue”) takes a wider and more nuanced understanding of your core customer, not just throwing spaghetti against the wall in the hope that something sticks.
Focusing on experiment velocity can lead to predictable problems, like a tendency to test only small, simple ideas that can be addressed quickly. This leaves the meatier problems, which can offer greater potential improvement, sitting ignored by a team that’s incentivized by the wrong program metric.
Part of the problem stems from preconceptions about what sorts of experiments can have business value. We’ve found that quantitative A/B test experiments need to be paired with studies using qualitative methods. In this mixed-methods approach, qualitative analysis that generates interesting questions is alternated with quantitative experiments to gain confidence in proven insights.
For example, analysis of session recordings and web analytics might show that your user onboarding funnel has a conversion problem at one point. Qualitative feedback from a follow-up user testing study reveals that negative emotions are triggered in the process. By taking the time to home in on the cause of the barrier, we can now design insight-led experiments to test various solutions.
Always remember that experiments are a means to an end, not an end in themselves. Even though my company specializes in designing and performing experiments, we know that the purpose of the experiment is to produce real-world growth and insights, not just experiment production volume.
If the entire U.S. economy was able to push itself to the precipice of doom while many of us missed the warning signs, then let’s face it: Your company can, too.
The best way to see the warning signs is to use a blend of program-level metrics that balance each other. Experiment velocity should be combined with some indicator of experiment quality and business value.
Depending on your program structure, you might track quality metrics like your actionable rate (the proportion of experiments that produce actionable results) or your hard code rate (the percentage of positive conclusive experiments that end up as deployed code). Combine these metrics with an indicator of business impact using your uncovered revenue ratio or untested visitor rate (the percentage of visitors who are not included in a test, which you’d want to minimize).
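As a rough sketch, the blended metrics named above can be computed from a handful of program-level counts. The field names and sample figures here are illustrative assumptions, not from any particular tool:

```python
# Hypothetical sketch of blended experimentation-program metrics.
# All names and numbers are illustrative.

from dataclasses import dataclass

@dataclass
class ProgramStats:
    experiments_run: int       # total experiments in the period
    actionable_results: int    # experiments that produced an actionable insight
    positive_conclusive: int   # experiments with a positive, conclusive result
    deployed_as_code: int      # winners that actually shipped to production
    total_visitors: int        # all site visitors in the period
    visitors_in_tests: int     # visitors included in at least one test

def actionable_rate(s: ProgramStats) -> float:
    """Proportion of experiments that produce actionable results."""
    return s.actionable_results / s.experiments_run

def hard_code_rate(s: ProgramStats) -> float:
    """Share of positive conclusive experiments that end up as deployed code."""
    return s.deployed_as_code / s.positive_conclusive

def untested_visitor_rate(s: ProgramStats) -> float:
    """Share of visitors not included in any test (to be minimized)."""
    return 1 - s.visitors_in_tests / s.total_visitors

stats = ProgramStats(
    experiments_run=10, actionable_results=7,
    positive_conclusive=4, deployed_as_code=3,
    total_visitors=100_000, visitors_in_tests=80_000,
)
print(f"actionable rate:       {actionable_rate(stats):.0%}")        # 70%
print(f"hard code rate:        {hard_code_rate(stats):.0%}")         # 75%
print(f"untested visitor rate: {untested_visitor_rate(stats):.0%}")  # 20%
```

Tracking a quality rate and a coverage rate alongside raw velocity keeps any one number from painting a falsely rosy picture.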
This type of metrics blending allows you to see accurate indicators of your program’s value. Getting the program metrics right at the beginning will increase your confidence that you’re continuously providing more tangible customer insights and driving business growth.
Author: Chris Goward, Forbes Councils Member