Why your data needs to account for cultural response bias

Jack Millership, Mollie Ayris

When conducting global research, you may have found yourself asking: How do cultural differences affect the way people answer surveys? When are differences statistically significant and when are they the result of cultural response bias?

Cultural response bias is the tendency of respondents from different cultures to answer quantitative survey questions in systematically different ways, which can cause problems for market research analysis if it's not accounted for.

We see many examples of this. If it's common in one culture to respond more neutrally to questions, a hard "no" will be rare when respondents from that culture are asked whether they'd use a specific product or enjoy a specific flavor. Without answers that provide true insight into a consumer's wants and needs, brands will be hard-pressed to deliver a solution.

At Zappi, we've collected data for tens of thousands of concept and creative tests across many different markets, resulting in a vast lake of data for meta-analyses that help us better understand the cultural differences and cultural response biases that can occur.

In this article, we’ll dive into the types of cultural bias that can be found in quantitative surveys as well as share our insights on cultural response bias, backed by our own data.

Types of bias

There are at least three types of cultural bias in the ways respondents answer quantitative questions in surveys. These are:

  1. Acquiescence: A tendency to agree with what is being asked in the survey

  2. Middling: A tendency towards neutrality

  3. Nay-saying: A tendency to disagree with what is being asked in the survey

Experienced researchers hold several truisms about cultural biases. Some examples from Tellis and Chandrasekaran's Extent and Impact of Response Biases in Cross-National Survey Research (2010) include:

  • North American respondents react more positively to stimuli than British respondents do

  • Japanese respondents typically show a nay-saying bias

  • Many Indian consumers show an acquiescence bias when they are presented with a statement and asked if they agree or disagree

Now that you’re more familiar with the types of cultural response bias you could potentially see from survey respondents, let’s dive into what our data could tell us.

The methodology

Traditional agencies with large databases typically find it complex and time-consuming to aggregate data from many discrete concept and creative tests, but the meta-analysis capabilities on our platform allow us to do this at the click of a button.

Zappi's meta tab is our tool for aggregating survey results. It allows us to analyze, cut, group and filter the results from the entire library of tests that our users have run on our platform.

We have used the meta tab to explore general cultural variances in survey responses.

Our findings

The table below was created using our meta tab, drawing on data from Zappi Concept Test.

Each row shows a market's average score for the Overall Appeal measure (how much respondents indicated they like the product), split by market and category (industry vertical). Only combinations of category and market with a minimum of 30 concepts in the base are shown, with the markets color-coded by continent.

While the data above may carry some biases due to varying stimulus quality, the brands tested, or the screening conditions and quotas used, it does allow us to look for patterns using much larger samples than would traditionally be available for this kind of research.

When grouping the data by continent, respondents in Asian markets such as the Philippines, China, Vietnam, and India tend to score concepts higher than most other regions, with Japan as a notable exception. Likewise, Latin American respondents appear to give fairly extreme scores: Mexico and Brazil score concepts highly, while Argentina sits much lower on the spectrum. In contrast, European countries cluster towards the middle, perhaps displaying a tendency towards neutrality.

Interestingly, when the North American markets are split, US respondents appear to score concepts slightly higher than Canadian respondents, who appear to score concepts more in line with European respondents, as do Australian respondents.

This data reinforces the cultural tendencies identified in Tellis and Chandrasekaran's research, which suggests that little has changed in cultural response styles since 2010 and that their findings remain accurate.

Also worth noting, these trends are not only present within innovation research. We've seen them within advertising research as well, as illustrated above by the aggregated scores for Overall Appeal across ads tested on Zappi Creative Video. These cultural response biases also appear to hold across categories, with little difference between market rankings from one category to the next.

Accounting for cultural response bias

Ultimately, it is essential to account for these biased responses when analyzing the results of a study. The extent of the differences can be considerable and therefore critical to your business decisions.

For example, if you directly compared findings from a study containing Japanese and Indian respondents, there could be a difference as high as 2 points on average Overall Appeal purely due to cultural response bias — which could greatly skew the findings if unaccounted for and lead to a faulty interpretation.

Being aware of this type of bias beforehand will also allow you to put scores into perspective and avoid shock at particularly high scores or particularly low scores for a concept or ad you’re testing.

For more content on the importance of data quality, listen to our podcast episode on how to tackle the data quality crisis in the insights industry.

What should you be looking at?

Norms

When testing, the safest way to handle cross-country analysis and avoid cultural response bias is to analyze within countries first, utilizing the country and category norms available on the Zappi platform, and compare the eventual insights afterwards.

In other words, it is best to compare the scores for stimuli to the norms of the countries those stimuli are tested in, and then compare these differences across markets when making cross-market comparisons.

It's important to note that country appears to be far more influential than category in the data above, so it's almost always better to use a country norm spanning many categories than a category norm spanning many countries.
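To make this workflow concrete, here's a minimal sketch in Python. The norm values and the score_vs_norm helper are hypothetical, invented purely for illustration (they are not Zappi platform figures or APIs); the point is simply that the same raw score reads very differently once each market's norm is subtracted.

```python
# A minimal sketch, not the Zappi platform API: the norm values and
# helper below are hypothetical, for illustration only.

COUNTRY_NORMS = {  # hypothetical average Overall Appeal per market
    "Japan": 6.2,
    "India": 8.4,
    "UK": 7.1,
}

def score_vs_norm(score: float, market: str) -> float:
    """How far a concept's score sits above or below its market norm."""
    return score - COUNTRY_NORMS[market]

# A concept scoring 7.0 everywhere looks identical on raw scores, but
# relative to each market's norm it over-performs in Japan (+0.8) and
# under-performs in India (-1.4).
for market in ("Japan", "India", "UK"):
    print(market, round(score_vs_norm(7.0, market), 2))
```

It is these norm-relative differences, not the raw scores, that are safe to compare across markets.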

Grouping markets

It may also be worth grouping markets depending on the type of bias they show. For example, respondents in China, Vietnam, India and the Philippines all tend to give high scores, whereas scores in the UK, France, Italy and Spain are more middling.

While this lends itself to a theory that shared culture or geographical proximity is a good way to group markets, caution should be exercised: in the first table we shared, we see very low scores for Argentina and Chile, but very high scores for Mexico and Brazil. Clearly, the nuances of cultural response styles are more complex, so grouping by observed tendency rather than geography alone may serve better, as sketched below.
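As a rough illustration of what grouping by observed tendency might look like, here's a minimal Python sketch. The market averages and band thresholds are hypothetical, chosen only to echo the patterns described above, and are not Zappi data.

```python
# A minimal sketch with hypothetical market-level averages; the
# thresholds are illustrative, not Zappi's.

MARKET_MEANS = {  # hypothetical average Overall Appeal per market
    "China": 8.5, "Vietnam": 8.4, "India": 8.3, "Philippines": 8.6,
    "UK": 7.0, "France": 7.1, "Italy": 7.2, "Spain": 7.1,
    "Mexico": 8.2, "Brazil": 8.1, "Argentina": 6.4, "Chile": 6.3,
}

def bias_band(mean_score: float) -> str:
    """Bucket a market by its observed response tendency."""
    if mean_score >= 8.0:
        return "high-scoring (acquiescent)"
    if mean_score <= 6.5:
        return "low-scoring (nay-saying)"
    return "middling (neutral)"

groups: dict[str, list[str]] = {}
for market, mean_score in MARKET_MEANS.items():
    groups.setdefault(bias_band(mean_score), []).append(market)

# Note how Mexico and Brazil land in the high-scoring band while
# Argentina and Chile do not, despite geographical proximity.
for band, markets in groups.items():
    print(band, "->", sorted(markets))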

If you’d like to learn more about what we’re doing to maintain the quality of our data, check out our article 4 types of bad verbatim responses: Why they 'Shall Not Pass' at Zappi.

Looking forward

The more our users are able to build up their own country- and category-specific norms, the more accurate and useful our data will be for distinguishing between the cultural response biases of different markets.

As it stands now, our users can not only group their stimuli by market, but also group them in any way they choose, create norms, and perform meta-analyses using tags of their own creation (all of which helps us build better data!).

These meta tags are a core element of our reporting platform, allowing users to group survey scores according to the tags that the users have applied.

While a tag can represent almost anything, at Zappi we encourage our users to use them to codify the content of their stimuli. This could be as simple as marking the length of an ad, or whether a creative ends in a 'chug shot' of a consumer drinking a carbonated soft drink.

As our users’ libraries of tested stimuli grow, they are able to unlock additional value and insight from previously conducted research. This is achieved by both conducting retrospective analysis on the content of their previously tested stimuli (like: Do ‘chug shots’ improve the effectiveness of my creative?) and establishing more relevant norms and benchmarks for future testing.
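To illustrate the kind of retrospective question tags enable, here's a minimal Python sketch. The ads, tags and scores are made up, and the mean_appeal_by_tag helper is hypothetical rather than a Zappi platform function.

```python
# A minimal sketch of tag-based retrospective analysis; the ads, tags
# and scores are invented for illustration and are not real Zappi data.
from statistics import mean

ADS = [  # hypothetical library of previously tested ads
    {"name": "Ad A", "tags": {"chug shot", "30s"}, "overall_appeal": 7.8},
    {"name": "Ad B", "tags": {"30s"}, "overall_appeal": 7.1},
    {"name": "Ad C", "tags": {"chug shot", "15s"}, "overall_appeal": 7.6},
    {"name": "Ad D", "tags": {"15s"}, "overall_appeal": 6.9},
]

def mean_appeal_by_tag(tag: str) -> tuple[float, float]:
    """Average Overall Appeal for ads with vs without a given tag."""
    with_tag = [ad["overall_appeal"] for ad in ADS if tag in ad["tags"]]
    without_tag = [ad["overall_appeal"] for ad in ADS if tag not in ad["tags"]]
    return mean(with_tag), mean(without_tag)

# Do 'chug shots' improve the effectiveness of my creative?
with_chug, without_chug = mean_appeal_by_tag("chug shot")
print(f"chug shot: {with_chug:.2f} vs no chug shot: {without_chug:.2f}")
```

In practice the same comparison would of course be run within a single market, or against market norms, to keep cultural response bias out of the tag-level read.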

Final thoughts

There are a lot of factors that can skew your data, but cultural response bias is a significant one to consider (and be more aware of).

To make sure you’re accounting for cultural response bias when conducting your own research, we recommend making it part of your process to first analyze your data by country, using established norms, then to compare these differences to norms across markets for cross-market comparison.

If you’d like more tips on how to ensure the quality of your own data, check out our article on the 4 things you should be doing with your data.
