
A Revolution in Website Testing
Web testing and optimization are going through a major shift, and it all comes down to leveraging data to make better decisions.
Let’s start with how the industry has operated for over a decade.
The Old Way
Tell me if this rings a bell.
You need to redesign your site. You hire an agency to do some research. You get your design team and marketing team making mock-ups. You look at the top incumbents in the space and use them as examples.
You change a few ideas because an executive says round buttons look “weak” and “video is the future”.
You spend four months on the redesign.
Then it launches, and after the first few weeks…
Nothing changes. In fact, conversions are down a bit.
The team scrambles. Everyone is highly incentivized to say this $140,000 experiment is working.
You tweak the report. “Well, yes, our primary markets in North America and Europe are slightly down, but did you see we have a new paid conversion from Djibouti? That’s a first! Our page resonates with new demographics!”
But the reality sinks in. It was a bust.
We see this repeatedly in businesses of all sizes. Many teams try to make up for the mistakes by increasing the volume of tests.
However, there are several fundamental problems with the conventional approach.
#1 Brands are copying the losing versions of their competitors' websites.
Brands set out to do a website redesign, or make some improvements, and the team goes out and finds examples from bigger incumbent competitors.
This acts as a foundation for best practices.
The logic goes: Competitor X is big. If they do their hero this way or set up their pricing plans like this, it must be working.
But here is the problem.
Most of those big brands are constantly testing.
You might have shown up on a Wednesday when they were running some off-the-wall test.
The normal control version converts at 4.4%, but you got the split test (shown to only 10% of their traffic, lucky you) that converts at 1%. They are going to ditch it in another week.
But you don’t know that.
So you take it back to the team and spend a quarter on a redesign with a disastrous comp as a reference point.
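Why would you keep seeing the same losing variant? Most testing tools assign visitors to buckets deterministically, so the same visitor sees the same version for the life of the test. Here is a minimal sketch of that common pattern; the function names, the 10% split, and the hashing details are illustrative assumptions, not any specific vendor’s implementation:

```ts
import { createHash } from "node:crypto";

// Minimal sketch of deterministic, hash-based traffic bucketing, the
// pattern most split-testing tools use (details vary by vendor).
function assignVariant(
  visitorId: string,
  experimentId: string
): "control" | "variant" {
  const hash = createHash("md5").update(`${experimentId}:${visitorId}`).digest();
  const bucket = hash.readUInt32BE(0) % 100; // stable 0-99 bucket per visitor

  // Only 10% of traffic sees the challenger, and a given visitor always
  // lands in the same bucket: if you landed in the losing 10% on
  // Wednesday, you will keep seeing that losing variant.
  return bucket < 10 ? "variant" : "control";
}

console.log(assignVariant("visitor-123", "hero-redesign")); // same result every call
```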
I saw a competitor of Monday.com copy the losing side of an experiment. They likely went to the site, saw the variant, and assumed it worked for Monday.com, so they did it too. Unfortunately, they copied the wrong thing.
This is the blind spot of using competitors as references without knowing which versions of their tests are winning.
#2 Teams are copying what’s common versus what’s winning
There is a herd mentality when it comes to certain aspects of web design. But oftentimes, when tested head to head, these “best practices” lose.
False Premise #1: Competitor comparison grids help conversions

Square ran a bold split test where they added a “see how Square compares to others” grid to their pricing page.
They wanted to see if competitor comparisons on their pricing page would help move buyers over the line.
This is an emerging trend I have tracked across dozens of SaaS brands in 2025… Zendesk, Front, Zoho.
Brands are trying to paint a clear “why us,” but competitor content isn’t landing.
The plain version without the competitor section won out.
The problem is… Nobody trusts you when you talk about competitors.
The grids where you do everything and your competitors miss all the key features are fading because they don’t drive conversions.
So how do you establish a “why us”?
1) Be strategic on placement. Pricing pages do not seem to be a good place to mention competitors.
2) Focus on affinity instead of competitors. Klaviyo is a great example of this. They drill down on being the email solution for eCommerce AND the only CRM designed for B2C. They don’t need to talk about Drip or Mailchimp or Hubspot.
3) Focus on “defensible features”. If you have a feature that no competitor has, emphasize that differentiating feature in your copy: “We are the only payment processing platform that can…”
4) If you are going to mention competitors, try a more deeply researched, honest take on how you stack up. Take a look at the article Avoma did comparing Zoom IQ vs. Avoma. They explain their methodology, they actually show the value of all the tools discussed, and they work hard to make it a more objective take.
False Premise #2: Video performs well in the hero on homepages

This is a controversial one.
Brands invest a lot of resources into high production videos, so there is a strong vested interest to have these assets win.
Yet despite that, when dozens of top brands test them head to head, we repeatedly find videos lose out to static images or GIFs.
Part of the challenge with video is skimmability, and not knowing whether the video is a marketing promo, a product walk-through, etc.
So, the recommendation would be to lean into a graphic/GIF for the homepage visual. But if you have to use a video, one way that seems to get better engagement is to put it behind a button that says “Watch Demo”. This provides a clear expectation-to-reality match.
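If you do go the “Watch Demo” route, the mechanics are simple: keep a static visual by default and only load the video when the visitor explicitly asks for it. A rough sketch, with the element ids and video path as made-up placeholders:

```ts
// Rough sketch: hero video gated behind an explicit "Watch Demo" button.
// The ids ("watch-demo", "hero-media") and the video path are
// hypothetical placeholders, not any specific brand's markup.
const button = document.querySelector<HTMLButtonElement>("#watch-demo");
const hero = document.querySelector<HTMLElement>("#hero-media");

button?.addEventListener("click", () => {
  if (!hero) return;
  const video = document.createElement("video");
  video.src = "/media/demo.mp4"; // placeholder path
  video.controls = true;
  video.autoplay = true;
  hero.replaceChildren(video); // swap the static image/GIF for the video
});
```

The button copy sets the expectation (“this is a demo”) and the visitor opts in, which is the expectation-to-reality match described above.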

False Premise #3: Simple backgrounds convert better on checkout or registration pages

In B2B SaaS, we find the opposite is often true. Above is a split test from Canva, where the illustrated background was the clear winner.
We see plain backgrounds lose out for dozens of other SaaS companies, from Zendesk to Gorgias. A common SaaS strategy is also to use a semi-opaque background showing the product.

Tom Orbach, head of growth marketing at Wiz, did a detailed analysis on this very thing (plain background vs. “product preview” background) and found huge lifts for the non-plain variants: a 25% conversion lift for MineOS and a 94% lift for MyCase.
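For clarity on what those numbers mean: lift is usually reported as relative improvement over the control. A quick illustration (the rates below are made up to show the math, not the actual MineOS or MyCase numbers):

```ts
// Relative conversion lift: (variant - control) / control.
function relativeLift(controlRate: number, variantRate: number): number {
  return (variantRate - controlRate) / controlRate;
}

// Illustrative only: a control converting at 4% and a variant at 5%
// is a 25% relative lift, even though the absolute gain is 1 point.
console.log(relativeLift(0.04, 0.05)); // 0.25
```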
#3 Teams are leveraging anecdotal experience that is outdated.
A few years back (check this in the Wayback Machine), most SaaS brands defaulted to monthly pricing.
Then someone realized they could charge annually by default but display the price as monthly, and now a large portion of B2B SaaS brands do this.
Between 2023 and 2024, nearly 30% of the top 100 B2B SaaS brands switched to annual-by-default pricing (displayed monthly).
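The mechanic itself is trivial: bill once per year, but divide by 12 on the pricing page. A minimal sketch (the formatting and numbers are illustrative):

```ts
// The "annual by default, displayed monthly" pattern: the customer is
// billed once per year, but the page shows the per-month equivalent.
function displayedMonthly(annualPrice: number): string {
  const perMonth = annualPrice / 12;
  return `$${perMonth.toFixed(0)}/mo, billed annually ($${annualPrice}/yr)`;
}

console.log(displayedMonthly(288)); // "$24/mo, billed annually ($288/yr)"
```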

If you are taking logic and experience from 2020 and applying it to 2025, you are in for a slog.
The New Way
Around 2018, in the paid advertising space, we saw a substantial shift: more and more brands stopped using manual targeting (and rough guessing) and switched to the powerful algorithms of Facebook, Google, LinkedIn, etc. to optimize for them.
They would simply upload a list of their customers and prospects and tell the ad engines to find a “lookalike audience”.
Online split testing is going through a similar evolution.
Today, we see the world's top brands using industry data, provided by companies like DoWhatWorks, to better run their web optimizations and testing programs.
DoWhatWorks has patented technology that detects split tests across the web and analyzes what is being tested and which versions the brand favors; a research team then verifies, monitors, and aggregates these tests into the largest A/B test database in the world.
Now, brands can take their highest-traffic pages, systematically work through the major parts of the experience (value prop, hero, image, etc.), and optimize based on what the data shows is effective.
They can see how often certain elements win and what winning variants have in common, and prioritize the changes that will have the biggest impact.
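As a rough illustration of that workflow, here is a sketch of computing win rates per page element from an aggregated set of test outcomes. The record shape is an assumption for illustration; it is not DoWhatWorks’ actual schema or API:

```ts
// Hypothetical aggregated test record; not DoWhatWorks' real schema.
interface TestRecord {
  element: string;     // e.g. "hero", "pricing", "value prop"
  variantWon: boolean; // did the challenger beat the control?
}

// Win rate per element, ranked highest first, so a team can see which
// parts of the page are most worth changing.
function winRates(tests: TestRecord[]): [string, number][] {
  const totals = new Map<string, { wins: number; total: number }>();
  for (const t of tests) {
    const entry = totals.get(t.element) ?? { wins: 0, total: 0 };
    entry.wins += t.variantWon ? 1 : 0;
    entry.total += 1;
    totals.set(t.element, entry);
  }
  return [...totals]
    .map(([el, { wins, total }]) => [el, wins / total] as [string, number])
    .sort((a, b) => b[1] - a[1]);
}
```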
We envision a future where brands run fewer, more educated split tests and save immense amounts of time and resources in the process.