Bogus respondents, bots and bad data: how to work with market research panel providers to avoid potential pitfalls
The importance of accurate surveys from market research panels can’t be overstated. Critical business decisions are made based on the data gleaned from online surveys, and if that data isn’t high quality, the decisions won’t be, either.
According to a recent article in Quirk’s, almost half of respondents in a typical online study using market research panels were found to be bogus. That’s right. With a simple, 10-minute, easy-to-complete questionnaire, they deemed 46% of their responses unacceptable. Two glaring examples of how they knew:
- Valid respondents reported an average monthly medical care spend of $568, while eliminated respondents reported an average of $9,069, roughly 16 times as much.
- Valid respondents spent an average of 80 seconds reading a 200-word concept statement, while eliminated respondents spent an average of only 11 seconds (about 18 words per second). In other words, 40% of their respondents weren’t truly reading the statement before answering questions about it.
If that’s not troubling enough, the concern for data integrity is greater if your target audience is more niche than the general population. Market research panel companies have a more difficult time sourcing hard-to-reach respondents, which increases the likelihood of problem participants and therefore, bad data.
Types of Bogus Research Participants
The truth is that conducting research using market research panels may open clients up to a variety of problem participants. But if partners know what to look for, these bad seeds can be weeded out:
- Outliers. These are respondents whose performance or behavior is different from the rest of the population. They may not be part of your target user group or could be exceptional in some other way.
- Speed readers. A tell-tale sign of a bogus participant, these respondents’ speed at answering a question indicates that they couldn’t have been reading with real comprehension.
- Cheaters. Typically, these participants are only looking to get paid and make no genuine attempt at the tasks. An easy way to spot them is in their random answers, such as stating “12345” when asked how much they spent on your product last year.
- Straightliners. These respondents take the path of least resistance, often marking “agree strongly” to every statement or checking every brand as a “familiar” one.
- Professionals. These respondents don’t represent “regular” users simply because they participate in too many studies, too often. Because they have been exposed to so many surveys, they are attuned to researchers’ goals and may skew answers accordingly.
- Bots. Not even real people, survey bots are computer algorithms designed to complete online surveys automatically in order to gain payment.
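Two of the patterns above lend themselves to simple automated flags. As a rough illustration only (a Python sketch with made-up function names and thresholds that would need tuning for any real study), a straightliner can be flagged by a lack of variation in ratings, and a speed reader by an implausible reading rate:

```python
def flag_straightliner(likert_answers, min_distinct=2):
    """Flag a respondent who gives essentially the same rating to every statement."""
    return len(set(likert_answers)) < min_distinct

def flag_speeder(seconds_on_statement, word_count, max_wps=10):
    """Flag a respondent whose implied reading rate (words per second) is implausible."""
    if seconds_on_statement <= 0:
        return True  # zero or negative time on a reading task is itself a red flag
    return word_count / seconds_on_statement > max_wps
```

For example, the 11-second read of a 200-word concept statement cited above works out to about 18 words per second and would be flagged, while the 80-second read would not. The 10-words-per-second cutoff is an assumption, not an industry standard.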
According to a 2020 Pew Research Center study, checks for speeding and attention alone fail to catch most bogus respondents. In fact, a whopping 84% of their bogus respondents passed a trap question, and 87% passed a check for responding too quickly.
With stats like these, the need to build a variety of techniques and key quality measures into every online survey is plain. The integrity of research is the direct result of questionnaire design, continual monitoring of data while it’s in field, thorough final data checks and the willingness to expel any bogus participants that are discovered. In addition, and perhaps more importantly, it’s imperative that market research panel companies and their clients work hand-in-hand to detect fraudsters.
Tips on Questionnaire Design
Include multiple traps in the questionnaire to weed out bogus respondents. To help, enlist panel suppliers early in the process to review the questionnaire wording.
- Include CAPTCHAs to eliminate bots
- Ask the same demographic question at the beginning and the end of your questionnaire for comparison purposes
- Include fake brands in brand awareness and usage questions
Use both numerical and open-ended text questions. Participants who have to formulate answers in their own words will be more identifiable as legitimate – or not – upon review.
Spot potential “professionals” by including screener questions that identify how recently the respondent participated in a study. Anyone marking 0-3 months or 0-6 months should be eliminated.
Ensure that the overall completion length of the survey is not too taxing, which helps control for respondent burnout.
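Two of the traps above, the repeated demographic question and the fake brands, reduce to simple comparisons at review time. A minimal sketch (in Python, with hypothetical function names; real trap logic would be tailored to the questionnaire):

```python
def fails_demographic_check(first_answer, second_answer):
    """The same demographic question asked at the start and end should match."""
    return first_answer.strip().lower() != second_answer.strip().lower()

def fails_fake_brand_trap(brands_selected, fake_brands):
    """Marking a nonexistent brand as 'familiar' means the respondent hit a trap."""
    return bool(set(brands_selected) & set(fake_brands))
```

A respondent who reports “35-44” at the start and “25-34” at the end, or who claims familiarity with a planted fake brand, would be flagged for closer review rather than automatically trusted.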
Tips on Monitoring Field Collection
A field partner should continually monitor the raw survey data while a study is in field to identify problem participants. In addition to reviewing open-ended responses, they can track and review time spent on individual questions to eliminate speeders. This protects quota integrity and supports the establishment of separate cells with equal demographic distribution for monadic testing.
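One common way to operationalize in-field speeder checks is to compare each respondent’s time on a key question against the group median. The sketch below is an assumption about how such a check might look in Python (the 30%-of-median cutoff is illustrative, not a standard):

```python
from statistics import median

def flag_speeders(question_times, fraction=0.3):
    """
    question_times: {respondent_id: seconds spent on a key question}
    Flags any respondent faster than `fraction` of the median time.
    """
    med = median(question_times.values())
    cutoff = med * fraction
    return {rid for rid, t in question_times.items() if t < cutoff}
```

Because the threshold is relative to the median rather than a fixed number of seconds, it adapts to questions of different lengths without per-question tuning.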
Tips on Final Data Review
A final review of the data is a critical step in the process. This step will help identify bogus respondents and make sure that any additional responses that may have come in after the last in-field check are accounted for.
Keep an eye out for responses outside the normal range of the data and for respondents with unusual metrics, such as time on task (moving much more quickly or slowly than other respondents) or task success (low success rates combined with fast task times). A respondent with multiple outlying metrics is especially suspect.
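Out-of-range responses like the $9,069 medical spend cited earlier can be surfaced with a robust outlier check. One possible approach (a Python sketch using the median absolute deviation, which, unlike a plain z-score, is not inflated by the outlier itself; the cutoff of 5 is an assumption):

```python
from statistics import median

def flag_outliers(values, cutoff=5.0):
    """Flag indices of values far from the median, in units of the
    median absolute deviation (MAD)."""
    med = median(values)
    mad = median(abs(v - med) for v in values)
    if mad == 0:
        return set()  # no spread to measure against
    return {i for i, v in enumerate(values) if abs(v - med) / mad > cutoff}
```

Running this over reported monthly spends such as [500, 600, 550, 580, 9069] flags only the last value, which sits hundreds of MADs from the median, while leaving ordinary variation alone.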
Bots can be particularly difficult to identify but can be found by looking at behaviors like:
- Impossible timestamps
- Not answering required questions or requests
- Identical open-ended responses from different “participants”
- Inconsistent responses to identical questions
- Impossible data values
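The identical-open-ends signal in particular is easy to automate. As a minimal sketch (Python, with light normalization so trivially disguised duplicates like extra spaces or different casing are still caught):

```python
from collections import Counter

def flag_duplicate_open_ends(responses):
    """
    responses: {respondent_id: open-ended text}
    Returns ids whose normalized text matches another respondent's.
    """
    norm = {rid: " ".join(text.lower().split()) for rid, text in responses.items()}
    counts = Counter(norm.values())
    return {rid for rid, text in norm.items() if counts[text] > 1}
```

Flagged respondents should be reviewed by a human rather than dropped automatically; short, generic answers (“yes”, “n/a”) can legitimately collide across real participants.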
The Worldwide Market Research team knows that if it’s worth doing, it’s worth doing right. Bringing us in early will ensure the right plan to mitigate the danger of bogus research participants and bad data. The end result will be a smoother-run study with valid, reliable and actionable data.
Send us a line at email@example.com. We can’t wait to collaborate on your next study.