Top 5 Tips for Ensuring Research Data Quality
By Tim McCarthy, Imperium General Manager
High-quality data – data that’s accurate, valid, reliable and relevant – is the ultimate goal for market researchers and brands alike.
While well-constructed engagements return critical insights on timely topics, their poorly planned or poorly designed counterparts produce flawed results: invalid, unreliable or irreproducible data that can lead to the wrong conclusions and inform bad decisions further down the line.
Taking the time to explore and improve data quality delivers tangible benefits for everyone: it increases reliability, reduces the costs and aggravation associated with refielding, and saves time and resources all round. Luckily, there are multiple ways of fine-tuning survey processes to minimize problems and optimize data quality.
Quality data begins with selecting the right participants – respondents who are well suited to the aims and objectives of the study. But researchers also need to be cognizant of dozens of individual survey elements that have the potential to impact data quality – and understand how to manage and mitigate these threats while maintaining process integrity.
Technology can help. Greater automation helps select the best participants at the outset. By reducing subjectivity and lowering bias, automation also achieves a more balanced and consistent view of what constitutes “good” and “bad” respondents. A smoothly automated system reduces friction, boosting project speed while scaling back the cost and duration of manual checks.
Here are my top 5 tips for ensuring data quality in research:
1. Plan properly
Great results start with careful planning. Everyone involved should thoroughly understand the aims and objectives of the research before it leaves the starting blocks. Early actions include identifying the ideal target audience for the study, then building out an accurate sample plan based on both audience make-up and research objectives. Once this stage is agreed, it’s time to create a thorough screener that only allows appropriate respondents to proceed to the main survey.
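As a rough illustration, here is a minimal sketch (in Python, with entirely hypothetical quota cells and screener fields) of how a sample plan and screener can be encoded so that only respondents who match the target audience, and whose quota cell is still open, proceed to the main survey.

```python
# Hypothetical sample plan: quota cells derived from audience make-up
# and research objectives. Cell names and targets are illustrative only.
SAMPLE_PLAN = {
    ("18-34", "female"): 150,
    ("18-34", "male"): 150,
    ("35-54", "female"): 100,
    ("35-54", "male"): 100,
}

completed = {cell: 0 for cell in SAMPLE_PLAN}  # running completes per cell

def screen(respondent: dict) -> bool:
    """Return True if the respondent may proceed to the main survey."""
    # 1. Screener: must match the study's target audience (example criteria).
    if respondent.get("country") != "US" or not respondent.get("buys_category"):
        return False
    # 2. Sample plan: their quota cell must exist and still be open.
    cell = (respondent.get("age_band"), respondent.get("gender"))
    if cell not in SAMPLE_PLAN or completed[cell] >= SAMPLE_PLAN[cell]:
        return False
    completed[cell] += 1
    return True
```

In practice the screener and quota logic live in your survey platform; the point of the sketch is simply that the audience definition and the sample plan are agreed and written down before fieldwork begins.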
2. Set candidates up for success
A successful survey isn’t one that’s predicated on tripping up respondents. It’s in everyone’s interests to build an engaging survey that is relevant to the audience and – crucially – not over-long. Surveys should be mobile-friendly and shouldn’t include numerous trap or trick questions that may confuse even the most genuine of participants. Trick questions can backfire if respondents get frustrated and abandon the survey or intentionally answer incorrectly just to “see what happens”. Likewise, if you think “insider jargon” could be a problem, consider first researching how your audience talks about the topic, so you can communicate with them in terms that make sense and are more likely to return relevant responses.
3. Include an appropriate mouse trap
You will need to incorporate a range of question types designed to identify poor respondents, but make sure they’re targeted and fit for purpose. Employ a variety of open-end, grid, low-incidence and red-herring questions, along with checks for conflicting answers, to weed out the weakest candidates. But don’t overdo it by adding multiple trap questions that are unrelated to the survey; instead, use actual survey questions and flag anomalous or inappropriate behaviors. Also, be sure not to throw out respondents at the first sign of concern; look for secondary evidence to confirm your suspicions of poor quality. Screening out acceptable respondents because they’ve triggered a single flag depletes the potential respondent pool and risks biasing data at a time when we need more diverse voices.
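That “confirm before you remove” principle can be expressed as a simple scoring rule: accumulate independent quality flags and only remove respondents who trip more than one. The sketch below is a minimal Python illustration; the individual checks and thresholds are assumptions, not recommendations.

```python
def quality_flags(resp: dict) -> list[str]:
    """Collect independent quality flags for one completed interview.
    Every check and field name here is illustrative only."""
    flags = []
    if resp["duration_seconds"] < 0.4 * resp["median_duration"]:
        flags.append("speeding")
    if resp["grid_straightline_share"] > 0.9:
        flags.append("straight-lining on grids")
    if len(resp["open_end"].strip()) < 5:
        flags.append("empty or gibberish open-end")
    if resp["claims_low_incidence_behavior"] and resp["conflicting_answer"]:
        flags.append("conflicting answers")
    return flags

def should_remove(resp: dict, threshold: int = 2) -> bool:
    # Remove only when two or more independent flags agree, so a single
    # trigger alone never disqualifies an otherwise acceptable respondent.
    return len(quality_flags(resp)) >= threshold
```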
4. Ditch the fraudsters and dupes!
Using the right tech at the right time will save you time and money: by utilizing survey data quality solutions like RelevantID® at the outset, you’ll be able to build up a detailed picture of each respondent’s fraud potential. RelevantID maps a participant’s ID against dozens of data points (including geo-location, language and IP address) to weed out obvious fraud and dupes before they enter the survey. Not only does this mean fewer respondents need to be manually reviewed or removed after survey data is collected; it also prevents large-scale attacks from bots and click farms.
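RelevantID’s own scoring is proprietary, but the general pattern of pre-survey deduplication can be sketched generically: combine several device and network signals into a fingerprint and reject entries that repeat it, so obvious duplicates and scripted traffic never reach the questionnaire. Everything below (field names, signals, hashing approach) is a hypothetical illustration, not a description of RelevantID.

```python
import hashlib

seen_fingerprints: set[str] = set()

def fingerprint(entry: dict) -> str:
    """Hash a handful of device/network signals into a single key.
    The signals shown are illustrative; real systems use many more."""
    raw = "|".join([
        entry.get("ip_address", ""),
        entry.get("user_agent", ""),
        entry.get("browser_language", ""),
        entry.get("geo_region", ""),
    ])
    return hashlib.sha256(raw.encode("utf-8")).hexdigest()

def admit(entry: dict) -> bool:
    """Block survey entries whose fingerprint has already been seen."""
    fp = fingerprint(entry)
    if fp in seen_fingerprints:
        return False  # likely a duplicate or automated re-entry
    seen_fingerprints.add(fp)
    return True
```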
5. Develop consistent, efficient and accurate data quality checks
Design an effective, efficient plan for removing bad data that is applied consistently to all respondents and run frequently, so you don’t face a large batch of removals at quota completion. Automating in-survey data reviews is one of the best ways to safely streamline the quality process. By utilizing solutions like QualityScore™, you can ensure your data checks are consistent and run in real time, while reducing the time and resources needed to conduct effective manual reviews. Our data shows that by using QualityScore, clients save about 85 percent of the time they would otherwise spend checking survey results to identify bad respondents.
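QualityScore itself is a commercial product, so the following is only a generic sketch of the underlying practice: apply one rule set to every interview as it completes, rather than in a single batch at quota completion, so removals and replacements stay small and continuous. The `should_remove` check and both callback arguments are hypothetical hooks, not part of any real platform’s API.

```python
def review_in_real_time(completed_interviews, should_remove, replace_respondent):
    """Apply one consistent rule set to every interview as it completes.

    completed_interviews: iterable yielding interview records as they finish
                          (a hypothetical hook into your survey platform).
    should_remove:        the flag-based check sketched under tip 3.
    replace_respondent:   callback that releases the quota slot so a
                          replacement can be fielded immediately.
    """
    removed = 0
    for interview in completed_interviews:
        if should_remove(interview):       # same checks for every respondent
            removed += 1
            replace_respondent(interview)  # refield while quotas are still open
    return removed
```

Reviewing continuously like this keeps the quota counts honest throughout fieldwork instead of revealing a shortfall only after the survey closes.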
Tim McCarthy is General Manager at Imperium. He has over 15 years of experience managing market research and data-collection services and is an expert in survey programming software, data analysis and data quality. Imperium is the foremost provider of technology services and customized solutions to panel and survey organizations, verifying personal information and restricting fraudulent online activities.