2023 will be remembered as the year when generative artificial intelligence (AI) first stepped out of science fiction into reality. In just a few months, AI tools have become increasingly sophisticated and prevalent. They can now generate realistic and convincing text, images, and videos, mimicking human behaviour and communication. While this has many positive applications, such as enhancing creativity, productivity, and accessibility, it also poses significant challenges and risks, especially for the field of market research.

As researchers, we rely on the quality and authenticity of the responses we collect from our respondents. However, AI and bots can potentially compromise the integrity of our data by providing false or misleading information, either intentionally or unintentionally. For example, some respondents may use AI tools to generate responses for surveys or interviews, whether to save time, avoid effort, or earn incentives.

These scenarios can have serious consequences for our research outcomes, as they can introduce bias, error, and noise into our data, affecting the validity and reliability of our findings. It is therefore crucial to have effective and robust defence mechanisms in place to prevent, detect, and eliminate AI- and bot-generated responses from our research. Here, we will discuss some of the key measures that HRW and our partners use to ensure that we are collecting genuine and accurate responses from real human respondents.

Panel Provider Vigilance and Expertise
The first and foremost line of defence against AI and bot-generated responses is the quality and professionalism of our healthcare panel providers. We work with trusted and reputable panel providers who have extensive experience and expertise in recruiting and managing respondents for market research. They employ rigorous screening and verification processes to ensure that they are inviting only qualified and engaged respondents to participate in their panels.

However, oversight does not cease once a respondent joins a panel. Our partners employ a continuous data validation process to ensure that panellist information remains up to date. This includes periodic checks and verifications of contact details, employment status, and other relevant information. They also utilize software to monitor the behaviour and response patterns of panellists, which allows us to identify any irregularities or suspicious activities that may indicate fraudulent responses.

The efforts of the healthcare panel providers to verify, check and manage their respondent databases are never-ending. We rely on their vigilance and commitment to quality to filter out fraudulent respondents before they reach fieldwork.

Specialized Software Integration
The second key aspect of our defence strategy involves the integration of specialized software that can identify and filter out bot- or AI-generated responses. This space is continually evolving as new technologies emerge, and we are always adding to our toolbox of technical solutions. These solutions can monitor the actions of respondents within the survey, as well as collect device and location data to screen for any external influences on the survey.

Some of the features and functionalities of this software include:​

  • De-duplication on survey entry: digital fingerprinting and watermarking techniques prevent respondents from taking the same survey multiple times from the same device.
  • Speed bumps: to discourage respondents from rushing through the survey without reading the questions carefully, the time spent on each question is monitored and a warning message is displayed if the respondent answers too quickly.
  • Re-verification on survey entry: GeoIP and FraudScore™ tools confirm that respondents are in the intended country and are unlikely to engage in fraudulent activity.
  • Anti-bot measures: RelevantID, reCAPTCHA v2, and BotDetect services block automated programs, or bots, from accessing the survey.
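To give a flavour of how a "speed bump" works, the sketch below flags answers submitted faster than a plausible reading speed. This is purely illustrative: it is not HRW's or any vendor's actual implementation, and the reading-speed constants are assumptions chosen for the example.

```python
# Hypothetical sketch of a "speed bump" timing check: flag answers
# submitted faster than a careful reader could plausibly manage.
# The constants below are illustrative assumptions, not real thresholds.

READING_SPEED_WPM = 250   # generous words-per-minute for careful reading
MIN_SECONDS_FLOOR = 2.0   # never expect less than this per question

def min_plausible_seconds(question_text: str) -> float:
    """Estimate the minimum time a human needs to read a question."""
    words = len(question_text.split())
    return max(MIN_SECONDS_FLOOR, words / (READING_SPEED_WPM / 60))

def flag_speeders(timings):
    """timings: list of (question_text, seconds_taken) for one respondent.
    Returns the indices of answers given implausibly fast, so the survey
    platform can show a warning or flag the respondent for review."""
    return [i for i, (question, seconds) in enumerate(timings)
            if seconds < min_plausible_seconds(question)]
```

In practice, a flagged answer would trigger the on-screen warning described above rather than an immediate rejection, since fast answers to short, familiar questions are often legitimate.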

By integrating this software into our research platforms and processes, we can provide an extra layer of protection and assurance against undesirable AI or bot-generated responses. The software can automatically flag or remove any suspicious or questionable responses, and alert us to any potential issues or risks. ​

Robust Data Checking
The third and final component of our defence mechanism is the robust data checking that we conduct to verify the accuracy and legitimacy of the responses we collect. We implement stringent data validation procedures to ensure that the responses meet the quality and consistency standards that we expect from our research. ​

Data checking is a specialized skillset. It involves identifying and removing any invalid or suspicious data: responses are examined for signs of flatlining, inconsistent answers, or poor or invalid open-ended answers. Cleaning the data improves its quality and usability, and reduces any noise or error that may affect the analysis and interpretation of the results.
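As a simplified illustration of one such check, flatlining (also called straight-lining) in a rating grid can be detected by measuring how often consecutive answers repeat. This is a hypothetical sketch, not the actual procedure, and the flagging threshold is an assumption; real data checking combines many signals and human judgement.

```python
# Hypothetical sketch of a flatlining check: a respondent who gives the
# same rating to (nearly) every item in a grid question is flagged for
# review. The 0.9 threshold is an illustrative assumption.

def flatline_score(ratings):
    """Fraction of consecutive grid answers that repeat the previous one.
    ratings: list of numeric scale answers for one grid question."""
    if len(ratings) < 2:
        return 0.0
    repeats = sum(1 for a, b in zip(ratings, ratings[1:]) if a == b)
    return repeats / (len(ratings) - 1)

def flag_flatliners(grids, threshold=0.9):
    """grids: {respondent_id: list of ratings}. Returns the respondents
    whose answers are (near-)identical across the whole grid."""
    return [rid for rid, ratings in grids.items()
            if flatline_score(ratings) >= threshold]
```

A flagged respondent would not be removed automatically on this signal alone; a varied-but-honest respondent can occasionally repeat answers, so such flags are weighed alongside timing, open-end quality, and consistency checks.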

In summary, our defence against AI and bot-generated responses hinges on a multifaceted approach that leverages existing protective measures. The first critical aspect involves the vigilance and expertise of our panel providers to ensure they are inviting verified and engaged respondents to take part in our research. ​

Additionally, the integration of specialized software is pivotal in identifying and filtering out bot responses. This software increasingly employs algorithms and machine learning techniques to analyse text and behaviour, effectively distinguishing between genuine human input and automated, potentially malicious content.

Lastly, robust data checking is essential to verify the accuracy and legitimacy of information being provided. By implementing stringent data validation procedures, we can mitigate the risks of fraudulent responses making it into our reporting and contaminating findings. ​

In combination, these measures form a comprehensive defence strategy, strengthening our ability to combat AI- and bot-generated responses. But this isn't the end of the story: the speed of these developments means that we need to stay alert to new technologies and how they can impact our industry. With our partners, HRW are continuously monitoring the evolving landscape of emerging threats. Understanding the threats and countermeasures is one way to ensure that we are always one step ahead.

Should you wish to speak to the team to discuss this further, please reach out at innovation@hrwhealthcare.com

By Darren Vircavs and Amy Russell

