
You can’t always eliminate bias, but you can be aware of it 

By Alli Milne, Manager of Digital Learning Content at Alchemer 

Research and sampling are about drawing conclusions from a subset of a group. In order to collect valid data, researchers need to ensure that everybody in that group has an equal opportunity to participate. The biggest challenge is that researchers might not be aware that they’ve created a biased survey until after they collect the results. Survey builders need to be vigilant about their own biases as well as the biases of the people they’re surveying. 

There are more than a dozen different types of bias that can come into play, either in creating the survey or in taking it. And there are ways to minimize the possibility and impact of bias on both sides.

Types of Survey Bias 

The hardest one to spot is Hidden bias. Hidden bias happens in surveys when the words used in the questions or answer options unintentionally (or intentionally) influence the respondent’s choice. Adjectives are the biggest culprit because they are descriptive. Even when asking for adjectives that describe a subject, your choice and order of adjectives can dramatically impact results. Read more about hidden bias here.

For example, in a list of adjectives, extremes like “evil” and “angelic” can influence the choice of other words that might be more neutral. Similarly, selecting from a color list that includes British Racing Green (as opposed to dark green) could get people to choose it because it sounds faster or cooler. 
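One common mitigation is to randomize the order in which answer options appear for each respondent. The sketch below is an illustration of that general technique, not Alchemer functionality; the adjective list and respondent IDs are made up. Randomization does not fix loaded wording, but it keeps any single ordering from nudging every respondent the same way.

```python
import random

# Hypothetical adjective list for illustration only.
ADJECTIVES = ["reliable", "innovative", "affordable", "ordinary", "premium"]

def options_for_respondent(options, respondent_id):
    """Return a copy of the answer options, shuffled deterministically per respondent."""
    rng = random.Random(respondent_id)  # seeding by respondent ID keeps each order reproducible
    return rng.sample(options, k=len(options))

if __name__ == "__main__":
    # Each (made-up) respondent sees the same options in a different order.
    for respondent_id in ("r-001", "r-002", "r-003"):
        print(respondent_id, options_for_respondent(ADJECTIVES, respondent_id))
```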

Confirmation bias is when the survey builder phrases questions in a one-sided way, subconsciously looking for results that support a hypothesis or confirm a belief. Rather than asking, “Do you prefer Product A or Product B?” they phrase the question as “Is Product A better than Product B?” This predisposes the respondent to choose Product A over Product B, confirming what the question states – that Product A is better than Product B. You can read more here.

Irrational Escalation is often the result of a survey laden with confirmation bias that doesn’t deliver the expected results. Here the stakeholders believe the research is wrong because it doesn’t support a sunk cost or investment. This is how companies miss major market shifts (such as the move from DVD rentals to streaming). Read more here.

Gender bias tends to err on the side of being binary – male or female. At the other extreme, listing the 25 gender options distributed by the Department of Education can lead to rushed answers and survey fatigue. The Center for Diversity and Inclusion at American University differentiates Sex (chromosomes, hormones, and physical characteristics), Gender (internal sense of self), and Sexual Orientation (emotional, physical, and sexual attraction). Decide which of these is most important to know (if any) before building a survey. You can read more here.

Sampling bias occurs when a survey doesn’t reach every part of the target population equally. This can happen when a survey is only provided via a QR code for mobile devices. There may be people in the market who do not have a mobile device with them to take the survey via the QR code but would be willing to take it in a different way. Limiting the distribution method samples only the portion of the population who have a mobile device with them and neglects the rest. At Alchemer, we work with our Panels team to make sure we reach the people we need to survey evenly. Often the Alchemer Panels team parses out the survey to make sure that each group gets equal representation. You can read more here.
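The idea of parsing out a survey so each group is equally represented can be illustrated with a simple quota-sampling sketch. The respondent pool, group labels, and quota below are invented for the example; this is not how Alchemer’s Panels team works internally, just the general technique of drawing the same number of respondents from every group.

```python
import random
from collections import defaultdict

# Made-up respondent pool: each record carries a "group" label describing how
# the person can be reached (the labels and proportions are illustrative only).
POOL = [
    {"id": i, "group": group}
    for i, group in enumerate(
        ["mobile"] * 70 + ["desktop"] * 20 + ["in-store kiosk"] * 10
    )
]

def quota_sample(pool, per_group, seed=42):
    """Draw the same number of respondents from every group in the pool."""
    rng = random.Random(seed)
    by_group = defaultdict(list)
    for person in pool:
        by_group[person["group"]].append(person)
    sample = []
    for group, members in by_group.items():
        if len(members) < per_group:
            raise ValueError(f"Not enough respondents in group '{group}'")
        sample.extend(rng.sample(members, per_group))
    return sample

if __name__ == "__main__":
    balanced = quota_sample(POOL, per_group=10)
    counts = defaultdict(int)
    for person in balanced:
        counts[person["group"]] += 1
    print(dict(counts))  # every group contributes the same number of responses
```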

Cultural bias assumes that most people think the same way. This happens when people from the same relative backgrounds miss a cultural preference. Using the word Coke, soda, or pop to indicate soft drinks is one example. Others might include crisps, chips, and biscuits for chips, fries, and cookies. But more than that, researchers might miss an obvious category in a region outside their own. Adding an Other option with a text box can help overcome this. You can learn more here.

Question-Order bias happens when a survey influences answers due to an earlier question putting an idea in the respondent’s mind. For example, asking about elephants just before asking people to name animals that are gray will influence people to respond with elephants. Read more here.  
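A common mitigation is counterbalancing: rotating which question appears first so any priming effect is spread evenly across respondents rather than baked into every response. Below is a minimal sketch of that idea, using the elephant example from the text; the question wording and respondent IDs are made up for illustration.

```python
import itertools

# Hypothetical questions: one could prime answers to the other.
QUESTION_PRIMER = "How do you feel about elephants?"
QUESTION_TARGET = "Name three animals that are gray."

# Alternate which question comes first across respondents so any priming
# effect averages out instead of applying to every response.
ORDERINGS = itertools.cycle([
    (QUESTION_PRIMER, QUESTION_TARGET),
    (QUESTION_TARGET, QUESTION_PRIMER),
])

def assign_order(respondent_ids):
    """Pair each respondent with the next question ordering in the rotation."""
    return {rid: next(ORDERINGS) for rid in respondent_ids}

if __name__ == "__main__":
    for rid, order in assign_order(["r-001", "r-002", "r-003", "r-004"]).items():
        print(rid, "-> first question:", order[0])
```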

Recency bias (and its opposite number, Nostalgia bias) shows up in rankings of the greatest of all time. Respondents tend to go with people or events that come to mind easily or that come from a time they look back on fondly. For example, a list of the greatest songs often includes those that take the respondent back to a happy time. You can read more here.

Extreme or Neutral Response bias can skew results because some people are more likely to pick an extreme, while others tend to pick neutral responses. This bias is well documented in Eastern Europe, for example, where people are much less likely to say service was excellent. It is rare to get a 9 or 10 on an NPS survey in Eastern Europe, so 7s and 8s are not truly passives there. You can read more here.
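One way analysts account for this kind of response-style difference is to interpret scores relative to each region’s own baseline rather than against a single global cutoff. The sketch below uses made-up 0–10 scores and simple standardization; it illustrates the idea and is not a prescribed NPS adjustment.

```python
from statistics import mean, stdev

# Invented 0-10 scores for two regions, for illustration only.
SCORES = {
    "Eastern Europe": [6, 7, 7, 8, 7, 6, 8],
    "North America": [9, 10, 8, 9, 10, 9, 8],
}

def standardized(scores_by_region):
    """Express each score relative to its own region's mean and spread (z-scores)."""
    result = {}
    for region, scores in scores_by_region.items():
        mu, sigma = mean(scores), stdev(scores)
        result[region] = [round((s - mu) / sigma, 2) for s in scores]
    return result

if __name__ == "__main__":
    # A restrained "8" in one region can now be compared with an enthusiastic
    # "10" in another on the same footing.
    for region, z_scores in standardized(SCORES).items():
        print(region, z_scores)
```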

Desirability bias (claiming behaviors that impress the poll taker), Response bias (changing answers to be more acceptable), and Acquiescence bias (being more agreeable than one would otherwise be) are common when the respondent is answering questions from an in-person survey taker. These are fairly easy to overcome by offering anonymous surveys online. You can read more here.

Test First to Minimize Bias 

The challenge with creating a survey without bias is that most of these biases work on a subconscious level – most people are not aware of their cultural biases, recency or nostalgia biases, or confirmation bias. Testing your survey with people in different regions and demographics can help you identify biases before you roll it out to thousands of people and make business decisions based on flawed data.
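One way to make that pilot test concrete is to compare how different pilot groups answer the same question. The sketch below assumes SciPy is available and uses invented counts for two hypothetical pilot groups; a chi-square test of independence flags whether the groups’ answer distributions differ enough to warrant a second look at the question wording.

```python
# Requires SciPy (pip install scipy). All counts below are invented for illustration.
from scipy.stats import chi2_contingency

# Rows: two pilot groups (e.g., different regions or demographics);
# columns: counts for answer options A, B, and "Other".
observed = [
    [34, 10, 6],   # pilot group 1
    [18, 25, 7],   # pilot group 2
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2={chi2:.2f}, p={p_value:.4f}, dof={dof}")
if p_value < 0.05:
    print("The groups answer differently; review the question wording before fielding.")
else:
    print("No strong evidence of a difference between the pilot groups.")
```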

Alchemer offers self-service panels that allow market researchers to test a survey with a small but diverse audience for just a couple hundred dollars. This spares researchers the financial impact of fielding a flawed survey, which includes not only the cost of the panel and results but, more importantly, the cost of making a potentially erroneous business decision.

For more information on reducing bias in your surveys, read Leading Practices: Understanding and Reducing Bias in Your Surveys. 

