Let’s start with a little experiment: Here’s a picture, take a look.
What do you see? A duck? Or a rabbit? Well, chances are this depends on when you’re seeing it.
This image was part of an experiment run by Peter Brugger of the University of Zurich in 1993. He showed the drawing to participants and asked what they saw. Some people took part around March - close to Easter - and others in October.
The results were clear.
Far more people seeing the image at Easter saw a bunny. And in October, the duck came to the fore.
This is a strong demonstration that we genuinely experience things differently in different contexts.
There’s further evidence that is arguably more relevant to marketing.
In a 2005 study, neurologist Michael Deppe and his colleagues at the University of Münster set out to quantify the importance of media context.
They showed 30 news headlines to a small group of participants. Respondents were asked to rate the believability of the headlines on a scale of 1 to 7, with 1 being the most credible and 7 the least.
The headlines appeared to come from one of four news magazines. Headlines were rotated, so that all participants saw every headline in every magazine. This allowed the researchers to assess the effect of the context on the credibility of the headlines.
Results showed that believability was significantly influenced by publication.
Headlines in the most respected magazine scored on average 1.9, compared to 5.5 in the least regarded magazine.
This demonstrates how significantly we can be swayed by contextual cues.
And, as Jeremy Bullmore — ex-Creative Director and Chairman at JWT — notes, this affects ads as well as headlines. He said:
A small ad reading “Ex-governess seeks occasional evening work” would go largely unremarked in the chaste personal columns of The Lady. Exactly the same words in the window of a King’s Cross newsagent would prompt different expectations.
Not only are we easily swayed by context, but we also have a tendency to underestimate its effect.
An eye-opening study from 1998 illustrates this lack of self-awareness.
Daniel Read and Barbara van Leeuwen at Leeds University looked at the effect of timing on future food preferences.
Two snacks were on offer to 200 participants: an apple or a bar of chocolate. Participants were assigned to one of two groups.
Future group: Participants were told they’d get the snack in a week’s time, and asked to select what they’d like.
Present group: Participants were shown the snacks right away and asked to choose.
The results were powerful.
With the snack arriving in a week’s time, around 75% of participants opted for the apple. But when the nibble was for right now, 70% went for the chocolate.
This research, and other studies like it, suggests that we are very poor at predicting what our future selves will want. Even though we actually love a chocolate freebie, we somehow guess that in a week’s time we’d want something healthy.
For researchers, this means it’s tricky to ask people to predict how they might behave in a context different from their current one. And the fact that media context has such sway suggests that the typical research setting could throw off results too.
It’s pretty hard to get around the channel impact unless you’re carrying out real-world research and testing your ad in different settings.
But one way to mitigate some of the context influence is to set up test conditions that are at least a step closer to a real-world scenario than the typical research set-up.
Zappi’s approach to ad development research is called the Amplify ad system. Amplify uses a ‘layered’ approach to test advertising effectiveness. Most research firms start and end with focused, forced exposure. But that doesn’t paint the full picture. For Zappi, the in-context viewing experience is an influential factor in building true creative effectiveness.
When Zappi tests a TV ad, viewers first watch a snippet of real TV, which is compulsory to view. This puts them into a watching mindset. They then see 3 ads in random order: the one the brand wants to learn about, plus 2 ‘distractor’ ads taken from real ads currently in market. As in the real world, respondents have the option to ‘skip’ any of these ads by pressing the spacebar or tapping the screen.
After seeing the 3 ads, they return to a snippet of real TV before going into the survey.
This brings as much of a normal viewing setting as possible to the research, removing some of the distortion that an artificial test environment can introduce.
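To make the mechanics of that flow concrete, here is a minimal sketch of how such a randomized, skippable exposure sequence could be assembled. The function name, step labels and parameters are illustrative assumptions for this sketch, not Zappi’s actual implementation.

```python
import random

def build_exposure_flow(test_ad, market_ads, n_distractors=2):
    """Illustrative sketch only: assemble a viewing sequence of
    TV snippet -> randomized, skippable ads -> TV snippet -> survey."""
    # Pick real in-market ads to act as 'distractions' (assumed helper logic)
    distractors = random.sample(market_ads, n_distractors)

    # The test ad and the distractors are shown in random order
    ad_block = [test_ad] + distractors
    random.shuffle(ad_block)

    flow = [
        {"step": "tv_snippet", "skippable": False},                            # compulsory snippet of real TV
        *[{"step": "ad", "ad": ad, "skippable": True} for ad in ad_block],      # viewers may skip any ad
        {"step": "tv_snippet", "skippable": False},                            # return to real TV
        {"step": "survey", "skippable": False},                                # then the survey
    ]
    return flow

# Example usage with placeholder ad names
flow = build_exposure_flow("new_brand_ad", ["market_ad_a", "market_ad_b", "market_ad_c"])
for step in flow:
    print(step)
```

The design choice this mirrors is that only the ads are skippable, while the TV snippets and survey are fixed, so every respondent gets the same ‘watching’ frame around a randomized ad block.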
Of course, they can’t stop participants spotting Easter bunnies in April - but an awareness of these quirky human biases can bring deeper insights into research analysis.
Learn more about the Zappi Ad System and how it can help you not only create ads people love, but develop learnings over time to help your brand grow.
____
To learn more about understanding the effectiveness of your advertising in context, talk to the Zappi team.
Richard Shotton specialises in applying behavioural science to marketing. He is the author of two books on the topic, The Choice Factory and The Illusion of Choice, and tweets under the handle @rshotton.