Survey bias: What it is and how to beat it

Kirsten Lamb

Coke called it the most memorable marketing blunder ever.

New Coke.

Launched in 1985, New Coke was rolled out by the brand following a series of “successful taste tests,” in which consumers seemed to like the new, sweeter Coke flavor. But once it was released, New Coke failed to sell: consumers hated its saccharine taste.

Taste test participants weren’t given the context: that New Coke would take classic Coke’s place. They weren’t surveyed on what New Coke vs old Coke meant to them. And they weren’t questioned on the packaging, emotional resonance or social weight of New Coke vs old.

But Coke’s research team felt their isolated taste-test survey backed their original hypothesis: that the new, carefully designed Coke would outsell its aging predecessor.

Only it didn’t. 

And herein lies the problem of inaccurate, unreliable data created from “flawed” research methods.  

Biased survey design and researcher biases can lead you towards the worst possible decisions for your brand. 

In this post, I'll cover the impact of a range of the most common survey biases on survey outcomes, run through survey bias examples and show you how to beat them. 

Impact of bias on survey outcomes

Survey bias occurs when surveys are constructed in a way that, intentionally or not, influences respondents’ answers.

As a result, participants’ answers don’t reflect their genuine thoughts, feelings or perceptions — undermining the accuracy and reliability of your data. And if your data isn’t accurate and reliable, it’s not a good foundation for understanding your brand and consumers and making the right business decisions.

The questions you ask. The samples you choose. The way you structure your survey. These can all throw off the way respondents answer your questions. 

Say you use leading questions to quiz customers about the draw of a new ad. 

If you ask them questions like:

  1. Did you like the song used in the ad? 

  2. Was the tagline funny? 

  3. Did you like the people featured in the ad? 

Then you may get a false sense of confidence in how effective your ad is and roll it out to unimpressed audiences. 

To get high-quality data, you need to create the right foundation and that starts with great survey design — free from bias.

Types of biases in surveys: Sampling bias vs response bias

Sampling bias is one of the main types of survey bias. It occurs when certain members of a population are systematically, disproportionately selected for a sample — meaning the sample no longer represents the target population.

Researcher R.H. Riffenburgh says:

“The bias may exist in the demographic character or in the nature of the subject being questioned, such as knowledge, belief, or attitude.”

Let’s break this down. Demographic character refers to the characteristics of a population, like age, gender, race, ethnicity or socioeconomic status. Say a company surveys more younger people than older people when researching the public’s opinion on the healthcare system — this would bias the results by failing to accurately reflect the opinions of both older and younger generations.

In comparison, the “nature of the subject being questioned” refers to the topic the researchers are studying. Researchers’ choice of topic can have a huge impact on who agrees to take part in the research and how they choose to respond.

Take a survey on sensitive or taboo topics like sexual behavior or substance use — certain people will be more comfortable being surveyed on these topics than others. This can lead to underrepresentation in the sample. 

Notice that sampling bias takes place during the recruitment phase of the research process. Response bias, on the other hand, takes place during survey taking. When people respond inaccurately to survey questions, whether intentionally or unintentionally, you get response bias. If we go back to our taboo topic example: certain people will be more comfortable providing honest answers than others, and this will impact the reliability of the data.

Selection bias

While sometimes used interchangeably with sampling bias, selection bias happens when researchers fail to choose survey participants at random. This form of bias covers both the initial selection process and who actually ends up remaining in the sample after potential respondents have been chosen.

For example, during the initial selection process, researchers may choose participants who don’t broadly represent their audience — such as choosing more respondents from a higher socioeconomic class that doesn’t accurately represent the different backgrounds of the people who shop with them. After their selection, more men than women in this sample may then drop out — further undermining the validity and generalizability of researchers’ data. 

Nonresponse bias

You’ve got your sample. But that doesn’t mean everyone in it is going to respond to your survey. You’re likely to come up against several people who won’t want to, or aren’t able to, engage with your survey — giving you unrepresentative data.

Researcher Martin Prince talks about how this form of bias can play out in medical research:

“In simple descriptive epidemiology, for example, the prevalence of depression in a community may be underestimated if those with depression are less likely to participate in the cross-sectional survey than those without depression. 

An association between lack of social support and depression may be overestimated either if those with good social support are less likely to take part if they are depressed or if those with poor social support are less likely to take part if they are not depressed. Again, note that when an association between an exposure and a disease is being estimated, bias will only occur if the error operates differentially with respect to both.”

Acquiescence bias 

Let’s review the definition of acquiescence bias.

Acquiescence bias (or agreement bias) refers to respondents’ inclination to agree with a research question or statement even if they don’t really think or feel that way. This bias taps into many people’s desire to be agreeable. For example, they might say that they like a product’s new packaging because they believe that’s what researchers want to hear.

Social desirability bias

Social desirability bias refers to participants’ tendency to alter their answers to come across in a more socially acceptable way. People want to appear to hold socially acceptable views and opinions and to engage in socially desirable behavior. If we circle back to the taboo topics example: some people may feel less comfortable being open about casual sexual experiences and may choose not to report them to researchers.

5 strategies to avoid bias in survey research

Let’s jump into the main strategies you can use to help avoid bias in your survey research.

1. Use a vetted partner or tool

Dedicated, pre-vetted research tools or partners can help you avoid bias and protect the integrity of your data. 

External research partners can help you bring more transparency and accountability to your research process and spot blind spots or potential for bias more easily as they have a degree of separation from the research and less stake in its outcome. 

Research tools like AI-based software such as Zappi can help you create user-friendly surveys with less bias in their questions and sequencing. You can use Zappi’s AI analytics features to automatically analyze your data and gain deeper insights — with less risk of researchers’ personal biases, such as confirmation bias, creeping into the analysis.

2. Create neutral, clear survey questions  

“For many years, surveyors approached questionnaire design as an art, but substantial research over the past forty years has demonstrated that there is a lot of science involved in crafting a good survey questionnaire.” - Pew Research

When writing your survey questions, the most important thing is that they’re clear and easy to understand. Follow this list of best practices to avoid bias in your survey questions:

1. Write questions that are short, easy to understand and clear. Aim for zero ambiguity. 

2. Ask one question at a time.

3. Use common, easy-to-interpret words. Take into account respondents’ level of expertise, whether they may need a technical or “higher-level” of understanding to interpret the question, their education level, whether English is their first language and their cultural background. 

(Image: list of common and uncommon words. Source: PMC PubMed Central)

4. Avoid loaded, biased, or potentially-offensive language. Pew Research Center shares: 

“Similarly, it is important to consider whether certain words may be viewed as biased or potentially offensive to some respondents, as well as the emotional reaction that some words may provoke. For example, in a 2005 Pew Research Center survey, 51% of respondents said they favored “making it legal for doctors to give terminally ill patients the means to end their lives,” but only 44% said they favored “making it legal for doctors to assist terminally ill patients in committing suicide.” 

5. Ditch vague words — clearly define what you mean. Here’s an example of concrete vs vague word choices:

(Image: vague words used to ask how often people exercise. Source: PMC PubMed Central)

3. Randomize question and response orders

Randomize questions and response orders to help make sure that different respondents get a different sequence of questions. 

This helps cut down on the impact of response-order bias by distributing any order effects evenly across your survey. By randomizing, you’ll reduce biases like the recency and primacy (first-shown) effects — protecting the quality of your data.
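As an illustration, here’s a minimal Python sketch of per-respondent randomization. The survey structure and field names are hypothetical — most survey platforms handle this for you with a “randomize order” setting:

```python
import random

def randomize_survey(questions, seed=None):
    """Return a per-respondent copy of the survey with question order
    and answer-choice order shuffled independently."""
    rng = random.Random(seed)
    shuffled = []
    for q in questions:
        choices = q["choices"][:]        # copy so the master survey is untouched
        rng.shuffle(choices)             # randomize response order within the question
        shuffled.append({"text": q["text"], "choices": choices})
    rng.shuffle(shuffled)                # randomize question order across the survey
    return shuffled

# Hypothetical two-question survey
survey = [
    {"text": "Which flavor do you prefer?", "choices": ["Classic", "New", "No preference"]},
    {"text": "How often do you buy soda?", "choices": ["Weekly", "Monthly", "Rarely"]},
]

# Each respondent sees an independently shuffled version
respondent_a = randomize_survey(survey, seed=1)
respondent_b = randomize_survey(survey, seed=2)
```

Because every respondent gets a different ordering, no single question or answer choice benefits consistently from appearing first or last.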

4. Make sure you have a representative sample

A representative sample is a sample that successfully reflects the characteristics of your study population. Nobody is missing. And all perspectives are accounted for. 

To get a representative sample, you’ll need to use representative sampling methods. 

To give everyone in your research population a chance of being selected for your survey, use simple random sampling to choose from them at random. You can also use stratified random sampling to break down your overall population into groups and randomly select people for your survey from each group — this helps make sure that specific subgroups of your research population are represented in your research.
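To make the two methods concrete, here’s a hedged Python sketch of simple random vs stratified sampling. The population data is made up for illustration:

```python
import random

def simple_random_sample(population, n, seed=None):
    """Every member of the population has an equal chance of selection."""
    return random.Random(seed).sample(population, n)

def stratified_sample(population, key, per_group, seed=None):
    """Split the population into strata by `key` (e.g. age band),
    then sample randomly within each stratum so every group is represented."""
    rng = random.Random(seed)
    groups = {}
    for person in population:
        groups.setdefault(key(person), []).append(person)
    sample = []
    for members in groups.values():
        sample.extend(rng.sample(members, min(per_group, len(members))))
    return sample

# Hypothetical population of (id, age_band) records: 20 in "18-34", 10 in "55+"
population = [("p%d" % i, "18-34" if i % 3 else "55+") for i in range(30)]

srs = simple_random_sample(population, 10, seed=42)
strat = stratified_sample(population, key=lambda p: p[1], per_group=5, seed=42)
```

Note the trade-off: a simple random sample of 10 could, by chance, contain very few “55+” respondents, while the stratified sample guarantees five from each age band.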

5. Pre-test surveys to identify potential biases

Run a pilot survey to uncover any biases. Choose a small group that’s representative of your chosen research participants and try out different survey structures to see whether certain biases come up, such as wording and response-order biases. Use the list above to guide the creation of your survey, and factor in each bias when reviewing your data.

Avoid bias, protect your data quality

From question wording to the researchers’ personal biases, bias is everywhere in survey research. Being aware of these biases is the first step to uncovering them and putting strategies in place to reduce their impact. 

Designing a survey that’s easy-to-understand, well structured, and as free from bias as possible is essential to getting high-quality data that can help you make the best decisions for your brand and campaigns.

🍟 Webinar: McDonald’s secret sauce

Watch our webinar to learn how McDonald’s creates its newest product innovations through real consumer feedback.

Ready to create ads or innovations that win with consumers?