In the world of conversion optimization, experimentation is king. Test content and messaging against different audiences to understand what really engages and converts your visitors.
We’ve discussed the limitations and flaws of A/B testing in this article, and today we’ll look at a few results that we usually accept when we should be pushing back and questioning them!
TL;DR: Status quo in testing can be challenged in many ways.
- Demand results in days and weeks, not months
- Conversion loss while testing is not required! Intelligent allocation can prevent it!
- Experiments should give you more than one winner. Your audience is full of nuances; cater to all of them!
- You don’t know what tomorrow will bring. Assume the same for your testing engine, and let’s stop distributing content uniformly.
Months-long testing? Demand days or weeks instead!
We have access to tremendous computational resources and fast algorithms that can learn from their own experiments. A successful testing campaign should run for weeks at most.
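The post doesn’t name a specific algorithm, but Thompson sampling is one common way a test can "learn from its own experiment": each variant keeps a Beta posterior over its conversion rate, and traffic flows toward variants whose sampled rates look best. The variant names, conversion rates, and traffic volume below are made up for the simulation.

```python
import random

random.seed(42)

# Each variant tracks Beta(alpha, beta) evidence: alpha ~ conversions, beta ~ misses.
variants = {"headline_a": [1, 1], "headline_b": [1, 1]}

def pick_variant():
    # Sample a plausible conversion rate from each posterior,
    # then show the variant with the highest sampled rate.
    draws = {v: random.betavariate(a, b) for v, (a, b) in variants.items()}
    return max(draws, key=draws.get)

def record(v, converted):
    # Fold the observed outcome back into the posterior.
    variants[v][0 if converted else 1] += 1

# Simulated traffic: "headline_b" truly converts better (15% vs 5%).
true_rate = {"headline_a": 0.05, "headline_b": 0.15}
shown = {"headline_a": 0, "headline_b": 0}
for _ in range(2000):
    v = pick_variant()
    shown[v] += 1
    record(v, random.random() < true_rate[v])
```

Within a couple thousand impressions the sampler concentrates traffic on the stronger variant, instead of holding a fixed 50/50 split for months while waiting for significance.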
Keep in mind though: successful optimization is a continuous journey, always on. But that doesn't mean the testing part of it should last months!
An overall decrease in conversions during your testing? It doesn’t have to happen!
Testing randomly for months exposes you to a significantly lower overall conversion rate. Indeed: what if most of your content variants underperform and no dynamic, intelligent adjustment is made? Even if your baseline performs better, it is unlikely to offset the loss.
1 winner, really? Does your *entire* audience uniformly agree?
It is very unlikely that your audience as a whole reacts uniformly to your content. Looking for a single winner is not the way to approach successful experimentation. Sensitivity- and intent-driven winners should be the goal.
Instead, look for many winners and understand the subtle differences in expectations across your many segments of visitors.
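In practice, finding many winners can be as simple as grouping results by segment before declaring anything. The segments, variant names, and rates below are hypothetical:

```python
# Hypothetical per-segment results: (segment, variant, observed conversion rate).
# Note the "winner" flips between new and returning visitors.
results = [
    ("returning", "short_copy", 0.09),
    ("returning", "long_copy", 0.05),
    ("new", "short_copy", 0.03),
    ("new", "long_copy", 0.07),
]

def winners_by_segment(rows):
    # Keep the best-converting variant seen so far for each segment.
    best = {}
    for segment, variant, rate in rows:
        if segment not in best or rate > best[segment][1]:
            best[segment] = (variant, rate)
    return {segment: variant for segment, (variant, _) in best.items()}
```

A single pooled winner here would pick whichever variant dominates the larger segment, silently underserving the other one.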
Same impression distribution…every day?
Your audience changes daily: reports on variant performance keep coming in, and new outside campaigns launch and influence your visitors.
Pick a solution intelligent enough to distribute the content accordingly and dynamically.
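One way such a solution could re-weight traffic, sketched here under assumptions of our own (the decay factor, smoothing, and all data are illustrative): recompute the split each day from incoming reports, discounting older days so the allocation tracks a shifting audience.

```python
DECAY = 0.7  # yesterday's evidence counts 70% as much as today's (assumed)

def daily_weights(history):
    """history: list of days, oldest first; each day maps variant -> (conversions, impressions)."""
    totals = {}
    for age, day in enumerate(reversed(history)):  # age 0 = most recent day
        w = DECAY ** age
        for variant, (conv, imp) in day.items():
            c, i = totals.get(variant, (0.0, 0.0))
            totals[variant] = (c + w * conv, i + w * imp)
    # Smoothed conversion rates, normalized into traffic shares.
    rates = {v: (c + 1) / (i + 2) for v, (c, i) in totals.items()}
    total = sum(rates.values())
    return {v: r / total for v, r in rates.items()}

# Variant A converted well last week, but B is winning today: the weights follow.
history = [
    {"A": (50, 500), "B": (10, 500)},  # older day
    {"A": (5, 500), "B": (60, 500)},   # today
]
weights = daily_weights(history)
```

Because recent days dominate, the split shifts toward B even though A's lifetime totals still look respectable; a uniform, frozen distribution would keep rewarding yesterday's winner.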
Now let’s look at a good example of experiment results.
Hopefully, this gives you a few questions to ask your conversion optimization and testing team (or yourself!) if you see these results.
(Or ask us; we’re happy to help at Cauzal AI!)