The explosion in AI tools over the last two years has brought numerous new capabilities and efficiencies to market research. However, these advancements carry significant risks that must be carefully considered: the unchecked use of AI in market research can lead to problems with accuracy, data privacy, and overreliance.

Accuracy and AI Hallucinations

One of the primary concerns when using AI in market research is accuracy. Large Language Models (essentially the new generation of AI chatbots) such as ChatGPT can sometimes generate incorrect information. This happens either because the model is drawing on training data that is outdated or wrong, or because the AI simply invents information, a phenomenon known as "hallucination." Although the accuracy of these models is improving, there is still a real risk in blindly accepting and using their outputs. Take, for example, the lawyer who relied on ChatGPT to prepare for a court case, only to discover after the hearing that the cases it cited were hallucinations and had never happened. To mitigate this risk, if you plan to use LLMs for anything factual and high stakes (e.g. a report you're preparing for stakeholders), it is crucial to double-check the accuracy of the outputs. You can do this by checking sources (if the LLM has referenced anything online), or by running your own web search to verify the information.

Fraudulent Responses

The rise of LLM-generated survey responses also poses significant challenges to data integrity in market research. Because these fraudulent responses can sound entirely plausible, they can slip through and compromise the validity of research findings. Combating them requires a layered approach. Firstly, working with trusted panel providers who use rigorous screening and verification processes greatly reduces the risk of fraudulent respondents participating.
Secondly, specialized software helps identify and filter out bot- or AI-generated responses by monitoring respondent behavior; techniques such as digital fingerprinting and reCAPTCHA enhance this process. Lastly, once responses have been collected, robust data checking is crucial for verifying their legitimacy and removing any that are not legitimate. If you are interested in finding out more, we have written a full blog post about how we protect against AI-generated fraudulent responses in our own research – read here.

Data Security Concerns

Data security is another critical area of concern when using AI. The risk here is that business-sensitive or personally identifiable information could be shared with third parties (potentially even competitors). Understanding whether an AI tool is a closed or an open system is vital to reducing this risk. Closed systems provide greater security by keeping your data within a secure environment. Open systems may expose the data you input to third parties or use it to train the AI model, which could result in that data appearing in responses generated for other users in the future. Use closed systems where possible, and avoid inputting sensitive data (such as personally identifiable information or confidential business information) into open systems. If in doubt, follow the advice of your IT security team and don't input sensitive data into any AI platform unless you're sure it is a closed system that does not train on your data.

Overreliance

Using AI models as a replacement for a large amount of your thinking, writing or editing means you never get to know the material very well yourself. That might be fine for the occasional internal email, but if you need to explain the information, use it to form an argument in a conversation, or answer questions about it, you won't have thought about it deeply enough for it to stick.
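Returning to the data-security point above, one practical safeguard is to redact obvious personally identifiable information before any text leaves your environment. Below is a minimal sketch, assuming simple regex patterns for emails and phone numbers; real-world redaction needs far broader coverage (names, addresses, account numbers, and so on), and dedicated tools exist for this.

```python
import re

# Hypothetical, minimal PII patterns -- illustration only.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace matched PII with placeholder tokens before the text
    is pasted into any external (open) AI tool."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

raw = "Contact Jane at jane.doe@example.com or +44 20 7946 0958 about the survey."
print(redact(raw))
# → Contact Jane at [EMAIL REDACTED] or [PHONE REDACTED] about the survey.
```

Even a crude first pass like this reduces the chance of sensitive details being retained or used for training by a third-party system.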
As such, it's best to use these tools collaboratively: think through the problem yourself first, and make sure you have dissected and truly understood any outputs you are likely to share with others.

Conclusion

In conclusion, harnessing AI in market research offers vast potential but also presents significant challenges around accuracy, data integrity, and security. Additionally, while AI can enhance efficiency, relying on it too heavily can lead to a superficial understanding of crucial information. To address these issues, it is crucial to verify AI-generated information and to use closed systems when sharing sensitive data. Blending AI capabilities with your own judgement – collaborating with the tools, scrutinising their outputs, and ensuring you have genuinely grappled with them – leads to more accurate, safer, and ultimately higher-quality outcomes.
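As a closing practical illustration of the data-checking step described under Fraudulent Responses, a first-pass screen often flags "speeders" (implausibly fast completions) and duplicated open-ended answers. A toy sketch, assuming hypothetical field names (`duration_sec`, `open_end`) – real pipelines combine many more signals, including fingerprinting and trap questions:

```python
from collections import Counter

def flag_suspect_responses(responses, min_duration_sec=60):
    """Return (id, reasons) for responses that fail basic quality checks."""
    open_ends = Counter(r["open_end"].strip().lower() for r in responses)
    flagged = []
    for r in responses:
        reasons = []
        if r["duration_sec"] < min_duration_sec:
            reasons.append("speeder")          # finished implausibly fast
        if open_ends[r["open_end"].strip().lower()] > 1:
            reasons.append("duplicate_text")   # identical open-end elsewhere
        if reasons:
            flagged.append((r["id"], reasons))
    return flagged

sample = [
    {"id": 1, "duration_sec": 45,  "open_end": "Great product overall."},
    {"id": 2, "duration_sec": 300, "open_end": "Great product overall."},
    {"id": 3, "duration_sec": 240, "open_end": "Delivery was slow."},
]
print(flag_suspect_responses(sample))
# → [(1, ['speeder', 'duplicate_text']), (2, ['duplicate_text'])]
```

Flagged responses would then go to manual review rather than being deleted automatically, since legitimate respondents can occasionally trip a single check.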