What your A/B digital tests don’t tell you

Katie Sweet

Can you imagine not pretesting a TV ad? Just throwing something together, shipping it, and hoping for the best? It’s a scary thought.

But digital ads are a completely different story. The digital world moves fast. It’s easy to track performance in-flight, swap out creative, or end a campaign early if it’s not going well. As a result, pretesting isn’t always a given like it is for TV advertising.

Instead of testing an ad before it launches, digital marketers rely on A/B testing once the ad is already live. Your digital marketing team may say they have it covered and don’t need your insights expertise for this.

But while A/B testing is definitely useful, it leaves a few gaps. If you’re not usually involved in digital, it may seem easier to stay out of it. But your expertise is still needed to fill some of these gaps.

In this blog post, let’s talk about what A/B testing can and can’t do.

What A/B testing is good for

First, a quick primer on A/B testing. When you A/B test, you have two versions of a digital ad running simultaneously. Which viewer sees which ad is randomly determined by your ad buying platform. That system tracks behavioral metrics for each of the ads (clicks, conversions, video ad skips, etc.). Over time, it is able to statistically determine whether one of the versions “won” over the other based on these metrics.
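To make that mechanism concrete, here is a minimal sketch in Python of the kind of significance check a platform could run on click-through rates to call a winner. The impression and click counts are invented for illustration, and real ad platforms use their own (often more sophisticated) methods, so treat this purely as a picture of the underlying idea.

```python
# Minimal sketch of the statistics behind an A/B test "winner" call.
# The numbers below are hypothetical; real platforms run equivalent
# (often more sophisticated) tests automatically on live traffic.
from math import sqrt
from scipy.stats import norm

# Hypothetical results for each ad version
impressions_a, clicks_a = 50_000, 410   # version A: people using the product
impressions_b, clicks_b = 50_000, 505   # version B: product only

rate_a = clicks_a / impressions_a
rate_b = clicks_b / impressions_b

# Two-proportion z-test on click-through rate
pooled = (clicks_a + clicks_b) / (impressions_a + impressions_b)
se = sqrt(pooled * (1 - pooled) * (1 / impressions_a + 1 / impressions_b))
z = (rate_b - rate_a) / se
p_value = 2 * norm.sf(abs(z))  # two-sided p-value

print(f"CTR A: {rate_a:.3%}, CTR B: {rate_b:.3%}")
print(f"z = {z:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("The difference is statistically significant; B 'wins' on clicks.")
else:
    print("No significant difference yet; keep the test running.")
```

Note what the test does and doesn’t say: it tells you whether the gap in clicks is likely real rather than noise, and nothing about why the gap exists.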

A/B testing is great at telling you which of your two ad versions performed best on performance metrics. It has been used for a very long time to do just that.

What A/B testing doesn’t do

While A/B testing is certainly useful, it can’t do everything.

Let’s say you have two versions of a static banner ad.

The creative for version A shows smiling people using your product, while version B shows just the product itself. Over time, the A/B test indicates that version B was more successful at driving clicks than version A. That’s useful information.

But what don’t you know?

A/B tests don’t tell you if there was a better version you didn’t test

In this example, you can conclude that B was the better of the two versions. But was B the best possible version of the ad to start with? Was there anything you should have changed before you even launched the ad to improve it?

For example, you don’t know if the message resonated with consumers. You don’t know if there was anything they found confusing — or if they even took away the right message. You don’t know if any of the other creative options you considered would have been stronger than the two versions you went with.

In other words, an A/B test can’t tell you whether there was anything you could have done differently going into the test; it can only tell you what performed well among the options you thought to include in the ad.

You’ll never know if you could have improved A or B from the start.

A/B tests don’t tell you the why

You know that version B won. But do you know why it won?

Your team may hypothesize that simple product images work better than images of people. You may incorporate that into digital creative going forward.

But do you know for sure that’s what happened? Maybe consumers thought the people in this particular ad looked unnatural. Maybe the slightly different placement of text meant that viewers missed the message in version A and therefore weren’t compelled to click on it.

You may draw the wrong conclusions about what works if you rely on A/B testing alone. And that can hurt your ads going forward if you base decisions on why you think consumers reacted a certain way rather than why they actually did.

A/B tests don’t tell you the impact on your brand

A/B testing will only tell you which version of the ad scored the highest in performance metrics. But there’s so much you don’t know about the impact of both versions of the ad.

Did consumers recognize the ad was for your brand?

Did consumers think the ad fit your brand?

Was there anything in the ad that offended them?

If consumers don’t associate the ad with your brand, then you have essentially wasted your ad dollars.

And any negative experience can sour someone’s impression of your brand.

Since A/B tests run only on live ads and don’t capture brand metrics, you can’t get ahead of these issues.


How do you fill these A/B testing gaps?

A/B testing has its place in digital advertising. But it’s not a silver bullet for all your advertising needs.

In addition to A/B testing your ad once it’s live, get feedback from real consumers on your ad before it goes live — just like you do for your TV ads. Ask them the same types of questions you ask for TV ad pretesting to understand the strength of your creative, the potential impact on your brand, the resonance of the message and more.

And don’t forget to look at verbatims to really dig into the WHY behind the metrics. Find out why consumers like or don’t like an ad and spot the areas for improvement. That way, you’ll always put out the strongest version of your ad.

This may feel obvious to you and your insights team who have been working to improve creative pre-launch throughout your careers, but it can be a big change for your digital marketing teams. Take the time to educate your digital marketers on what they’re missing with their A/B tests, and explain how you’re able to improve TV or other ads pre-launch and what metrics you look at.
