Coke called it the most memorable marketing blunder ever.
New Coke.
Launched in 1985, New Coke was rolled out by the brand following a series of "successful taste tests," in which consumers seemed to like the new, sweeter Coke flavor. But once it was released, New Coke failed to sell, with consumers hating its saccharine taste.
Taste test participants weren't given the context: that New Coke would take classic Coke's place. They weren't surveyed on what New Coke vs. old Coke meant to them. And they weren't questioned on the packaging, emotional resonance or social weight of New Coke vs. the old.
But Coke's research team felt their isolated taste-test survey backed their original hypothesis: that the new, carefully designed Coke would outsell its aged-out predecessor.
Only it didn't.
And herein lies the problem with inaccurate, unreliable data created by flawed research methods.
Biased survey design and researcher biases can lead you toward the worst possible decisions for your brand.
In this post, I'll cover how the most common survey biases affect survey outcomes, run through survey bias examples and show you how to beat them.
Survey bias occurs when surveys are constructed in such a way, intentionally or not, that they influence respondents' answers.
As a result, participants' answers don't reflect their genuine thoughts, feelings or perceptions, undermining the accuracy and reliability of your data. And if your data isn't accurate and reliable, it's not a good foundation for understanding your brand and consumers and making the right business decisions.
The questions you ask. The samples you choose. The way you structure your survey. These can all throw off the way respondents answer your questions.
Say you use leading questions to quiz customers about the draw of a new ad.
If you ask them questions like:
Did you like the song used in the ad?
Was the tagline funny?
Did you like the people featured in the ad?
Then you may get a false sense of confidence in how effective your ad is and roll it out to unimpressed audiences.
To get high-quality data, you need to create the right foundation, and that starts with great survey design that's free from bias.
Sampling bias, one of the main types of survey bias, occurs when certain members of a population are systematically and disproportionately selected in a sample, meaning the sample no longer represents the target population.
Researcher R.H. Riffenburgh says:
"The bias may exist in the demographic character or in the nature of the subject being questioned, such as knowledge, belief, or attitude."
Let's break this down. Demographic character refers to the characteristics of a population, like age, gender, race, ethnicity or socioeconomic status. Say a company surveys more younger people than older people when researching the public's opinion on the healthcare system: this would bias the results by failing to accurately reflect the opinions of both older and younger generations.
In comparison, "nature of the subject being questioned" refers to the topic the researchers are studying. Researchers' topic of choice can have a huge impact on who agrees to participate in the research and how they choose to respond.
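To make the arithmetic behind that healthcare example concrete, here's a minimal sketch with made-up approval rates (the 80%/40% figures and the sample split are illustrative assumptions, not real data) showing how oversampling one age group skews an estimate:

```python
# Hypothetical figures: 80% of younger people and 40% of older people
# approve of the healthcare system, and each group is half the population.
approval = {"younger": 0.8, "older": 0.4}

# True population value: each group weighted by its real share (50/50).
true_rate = 0.5 * approval["younger"] + 0.5 * approval["older"]

# Biased sample: younger people make up 80% of respondents.
biased_rate = 0.8 * approval["younger"] + 0.2 * approval["older"]

print(f"True approval:   {true_rate:.0%}")   # 60%
print(f"Biased estimate: {biased_rate:.0%}")  # 72%
```

The skewed sample overstates approval by 12 percentage points, purely because of who was recruited, not what anyone believes.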
Take a survey on sensitive or taboo topics like sexual behavior or substance use: certain people will be more comfortable being surveyed on these topics than others. This can lead to underrepresentation in the sample.
Notice that sampling bias takes place during the recruitment phase of the research process. Response bias, on the other hand, takes place during survey taking. When people respond inaccurately to survey questions, whether intentionally or unintentionally, you get response bias. If we go back to our taboo topic example, certain people will be more comfortable providing honest answers than others, and this will impact the reliability of the data.
While sometimes used interchangeably with sampling bias, selection bias happens when researchers fail to choose survey participants at random. This form of bias covers both the selection process and who researchers end up choosing for their survey after they've selected potential respondents.
For example, during the initial selection process, researchers may choose participants who don't broadly represent their audience, such as picking more respondents from a higher socioeconomic class than accurately reflects the different backgrounds of the people who shop with them. After selection, more men than women in this sample may then drop out, further undermining the validity and generalizability of the researchers' data.
You've got your sample. But that doesn't mean everyone in it is going to respond to your survey. You're likely to come up against several people who won't want, or aren't able, to engage with your survey, giving you unrepresentative data.
Researcher Martin Prince talks about how this form of bias can play out in medical research:
"In simple descriptive epidemiology, for example, the prevalence of depression in a community may be underestimated if those with depression are less likely to participate in the cross-sectional survey than those without depression.
An association between lack of social support and depression may be overestimated either if those with good social support are less likely to take part if they are depressed or if those with poor social support are less likely to take part if they are not depressed. Again, note that when an association between an exposure and a disease is being estimated, bias will only occur if the error operates differentially with respect to both."
Let's review the definition of acquiescence bias.
Acquiescence bias (or agreement bias) refers to respondents' inclination to agree with a research question or statement even if they don't really think or feel that way. This bias taps into many people's desire to be agreeable. For example, respondents might say they like a product's new packaging because they believe that's what researchers want to hear.
Social desirability bias refers to participants altering their answers to come across in a more socially acceptable way: people want to appear to hold socially acceptable views and to engage in socially desirable behavior. If we circle back to the taboo topics example, some people may feel less comfortable being open about casual sexual experiences and may choose not to report them to researchers.
Let's jump into the main strategies you can use to help avoid bias in your survey research.
Dedicated, pre-vetted research tools or partners can help you avoid bias and protect the integrity of your data.
External research partners can bring more transparency and accountability to your research process and identify blind spots or potential for bias more easily, as they have a degree of separation from the research and less stake in its outcome.
Research tools like AI-based software such as Zappi can help you create user-friendly surveys with less bias in their questions and their sequencing. You can use Zappi's AI analytics features to automatically analyze and gain deeper insights into your data, with less of the burden of researchers' personal biases, such as confirmation bias, creeping into data analysis.
"For many years, surveyors approached questionnaire design as an art, but substantial research over the past forty years has demonstrated that there is a lot of science involved in crafting a good survey questionnaire." - Pew Research
When writing your survey questions, the most important thing is that your questions are clear and easy to understand. Follow our list of best practices to avoid bias in your survey questions:
1. Write questions that are short, clear and easy to understand. Aim for zero ambiguity.
2. Ask one question at a time.
3. Use common, easy-to-interpret words. Take into account respondents' level of expertise, whether they may need a technical or "higher-level" understanding to interpret the question, their education level, whether English is their first language and their cultural background.
4. Avoid loaded, biased or potentially offensive language. Pew Research Center shares:
"Similarly, it is important to consider whether certain words may be viewed as biased or potentially offensive to some respondents, as well as the emotional reaction that some words may provoke. For example, in a 2005 Pew Research Center survey, 51% of respondents said they favored 'making it legal for doctors to give terminally ill patients the means to end their lives,' but only 44% said they favored 'making it legal for doctors to assist terminally ill patients in committing suicide.'"
5. Ditch vague words: clearly define what you mean.
Randomize question and response orders to help make sure that different respondents get a different sequence of questions.
This helps cut down on the impact of response-order bias by spreading any order effects evenly across your survey. By randomizing you'll avoid biases like the primacy (first-shown) and recency effects, protecting the quality of your data.
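In practice, per-respondent randomization can be as simple as shuffling a copy of the question list for each person. Here's a minimal sketch (the questions and the helper function are hypothetical, not from any particular survey tool):

```python
import random

# Hypothetical question bank; each respondent sees a different order.
questions = [
    "How likely are you to recommend us?",
    "How satisfied are you with the product?",
    "How would you rate our customer support?",
]

def questions_for_respondent(questions, seed=None):
    """Return a per-respondent shuffled copy so that no single question
    always benefits from primacy or recency effects."""
    rng = random.Random(seed)
    order = questions[:]  # shuffle a copy; leave the master list intact
    rng.shuffle(order)
    return order

# Two respondents typically get different sequences.
print(questions_for_respondent(questions, seed=1))
print(questions_for_respondent(questions, seed=2))
```

Seeding per respondent (e.g. from a respondent ID) also makes each person's order reproducible, which helps when you later analyze answers by position.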
A representative sample is one that successfully reflects the characteristics of your study population. Nobody is missing, and all perspectives are accounted for.
To get a representative sample, you'll need to use representative sampling methods.
To give everyone in your research population a chance of being selected for your survey, use simple random sampling to choose from them at random. You can also use stratified random sampling to break down your overall population into groups and randomly select people for your survey from each group; this helps make sure that specific subgroups of your research population are represented in your research.
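Both methods above can be sketched in a few lines. This is an illustrative toy example (the population, age-group labels and proportional-allocation rule are assumptions for the demo, and simple rounding can leave a stratified sample slightly over or under the target size in general):

```python
import random

# Hypothetical customer list tagged with an age-group stratum (60/30/10 split).
population = (
    [{"id": i, "group": "18-34"} for i in range(60)]
    + [{"id": i, "group": "35-54"} for i in range(60, 90)]
    + [{"id": i, "group": "55+"} for i in range(90, 100)]
)

rng = random.Random(42)

# Simple random sampling: every person has an equal chance of selection.
simple_sample = rng.sample(population, k=20)

# Stratified random sampling: draw from each group in proportion to its
# share of the population (here 60/30/10 -> 12/6/2 out of 20).
def stratified_sample(population, k, rng):
    groups = {}
    for person in population:
        groups.setdefault(person["group"], []).append(person)
    sample = []
    for members in groups.values():
        share = round(k * len(members) / len(population))
        sample.extend(rng.sample(members, share))
    return sample

strat_sample = stratified_sample(population, 20, rng)
print({g: sum(p["group"] == g for p in strat_sample)
       for g in ("18-34", "35-54", "55+")})
# -> {'18-34': 12, '35-54': 6, '55+': 2}
```

The stratified draw guarantees the small "55+" group appears in the sample, whereas a simple random draw of 20 could miss it entirely.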
Run a pilot survey to uncover any biases. Choose a small group that's representative of your chosen research participants and try out different survey structures to see whether certain biases come up. Use our list above to guide the creation of your survey, and factor in each bias, such as wording and response-order biases, when reviewing your data.
From question wording to researchers' personal biases, bias is everywhere in survey research. Being aware of these biases is the first step to uncovering them and putting strategies in place to reduce their impact.
Designing a survey that's easy to understand, well structured and as free from bias as possible is essential to getting high-quality data that can help you make the best decisions for your brand and campaigns.
Watch our webinar to learn how McDonald's creates its newest product innovations through real consumer feedback.