How long do most split tests run?

Post #1 in 'What No One Tells You About Split Testing' series. Subscribe to receive articles in this series.

It’s never been easier to run experiments online. Thanks to Optimizely, VWO, and more, you can design and execute split tests faster than ever. The more tests you run, the faster you learn.

But there’s a catch. Launching is only half the battle. Once you publish an A/B test to your website, you have to wait for the results. This takes time.

How long does it take to run a split test?

At DoWhatWorks, we have amazing access to thousands of split tests each year. We recently analyzed over two thousand of them to see how long they typically last. The answer: most split tests take 4+ weeks to run.

At that rate, you are limited to roughly a dozen A/B tests per year. That’s not a lot of attempts to move a metric.

The good news is that you are not alone. Everyone faces this challenge, and there are ways to get results faster.

How do you get results faster?

  1. Reduce the number of variants – If your traffic is limited, the fewer variants you run, the sooner each one reaches the sample size it needs (see the sketch after this list).
  2. Get learning from high volume experiences – High-volume experiences reach critical mass faster (and the wins there are more consequential). Use that learning to inform bets on lower-volume experiences.
  3. Get your learning elsewhere – Learn from others' tests to avoid losers. Each time you avoid a losing experiment, you save weeks that could instead go toward a test with better odds of winning. Try not to waste cycles on lessons you can learn elsewhere.
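Why does the variant count matter so much? A rough way to see it is to estimate the sample size each variant needs and then split your weekly traffic across the variants. Here's a minimal Python sketch using a standard two-proportion z-test approximation; the traffic, baseline conversion rate, and detectable lift below are illustrative assumptions, not numbers from our analysis.

```python
# Back-of-the-envelope estimate of how variant count and traffic affect test duration.
# All inputs (25,000 weekly visitors, 3% baseline conversion, 10% relative lift,
# alpha=0.05, power=0.8) are illustrative assumptions.
from statistics import NormalDist


def sample_size_per_variant(baseline_rate: float, relative_lift: float,
                            alpha: float = 0.05, power: float = 0.80) -> float:
    """Approximate visitors needed per variant for a two-proportion z-test."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_power = NormalDist().inv_cdf(power)
    variance_sum = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_power) ** 2 * variance_sum / (p2 - p1) ** 2


def weeks_to_run(weekly_visitors: int, num_variants: int,
                 baseline_rate: float = 0.03, relative_lift: float = 0.10) -> float:
    """Weeks until every variant (control included) reaches the required sample."""
    needed = sample_size_per_variant(baseline_rate, relative_lift)
    per_variant_per_week = weekly_visitors / num_variants
    return needed / per_variant_per_week


if __name__ == "__main__":
    for variants in (2, 3, 4):
        print(f"{variants} variants: ~{weeks_to_run(25_000, variants):.1f} weeks")
```

Under these assumptions, a two-variant test wraps up in a little over four weeks, while a four-variant test on the same traffic needs roughly twice as long. That is why trimming variants, or borrowing learnings from higher-volume pages, pays off.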

This is easier said than done. You want to run many variants because you’re dying to know what will move the needle. And some of your highest-value experiences (like pricing pages) may not get a ton of traffic. By default, these kinds of experiences take more time to evaluate. How can you know upfront what is likely to work?

Our own team faced this harsh reality, and our experience inspired us to start DoWhatWorks. We want you to get results faster by leveraging what everyone else is testing. We want you to focus on the things that are most likely to work and avoid wasting time on the things that are unlikely to work.

We can’t increase your traffic, but we can give you access to tons of results so you can make the most of the traffic you have.

To learn from everyone at scale, request access to our private beta.

Is there something you'd like to see covered in this series? Let us know.