
Among the many use cases proposed at conferences and in articles by industry professionals over the past year, one idea has proven particularly disruptive – that we will be able to use ChatGPT and other openly available AI platforms as a source of data collection, rather than just as tools to aid in the collection process.

In other words, that ChatGPT will soon serve as a well-informed, fully-qualified market research respondent, able to talk to our moderators about an endless list of research topics.

So, how realistic is this proposition? Does it apply to certain types of market research respondents and not others?

AI-Generated Healthcare Information: Still Uncomfortable (but Getting More Comfortable)

There currently exists a topic “ceiling”, beyond which many of us are no longer comfortable relying on the advice of a generative AI platform.

Try asking yourself the question – what wouldn’t I take ChatGPT’s sole word on?

That answer often includes examples related to the health and well-being of ourselves and those we love. For these decisions, we usually want to consult with a fellow human, preferably one with training and experience in the relevant field of expertise.

It’s likely that, as more studies are conducted and their results publicized, trust in AI as a source of the healthcare information we would normally get from our doctor will increase, and our aforementioned AI “ceiling” of comfort will rise.

Yet, for now, many patients feel that the healthcare information provided by AI platforms does not meet the standard of reliability necessary in real clinical practice.

According to a recent study funded by the National Institutes of Health and conducted by the University of California San Francisco, respondents were almost evenly split between choosing a human doctor (52.9%) and an AI clinic (47.1%) to provide them with a diagnosis based on their presented symptoms.

It’s this same sentiment, then, which makes healthcare market research a particularly difficult nut for the robots to crack. Our business objectives often deal with serious conditions, which can limit the degree to which we are comfortable relying on AI as a source for insights.

Medical Expertise 

In addition to the gravity of subject matter, healthcare is a nuanced and complex field which continues to change and evolve every day.

Overall, ChatGPT has shown immense strides in its ability to talk about medical-related topics.

A promising study in the February 2023 issue of PLOS Digital Health reported that physicians associated with Beth Israel Deaconess Hospital were able to use ChatGPT to perform at or near the passing threshold on all three Steps of the United States Medical Licensing Exam (USMLE), without the platform undergoing any additional training or reinforcement on the contents of the exams. The USMLE is a set of three standardized tests of expert-level knowledge that is currently required for medical licensure in the United States. This performance marks a significant milestone in the progress of AI and large language models, and speaks to the potential of this technology moving forward.

As of the writing of this article, however, there are still limitations built into the ChatGPT platform that restrict its access to information made available after December 31, 2021. OpenAI, the company behind ChatGPT, has addressed this limitation for premium subscribers by integrating Microsoft’s Bing search engine – but generally speaking, this restriction on the processing of recent information means that a response from ChatGPT cannot be considered a fully contemporary understanding, despite its success in general physician qualification testing.

ChatGPT and similar platforms also work only with publicly available data, which does not include individual health records. For now, ChatGPT has access only to medical texts, clinical trials, research papers, and other public sources of medical information.

As such, there’s really no “primary source” information – i.e., patient records – for ChatGPT and other AI platforms to learn from. These platforms operate on information in aggregate, rather than learning on an individual, patient-by-patient basis. As any doctor would be happy to tell us, every patient is different, and there are certain “intangibles” that a doctor must account for, which often come through years of real-world experience. That experience is difficult to replicate.

By dealing only in aggregated sources, we also inevitably lose the nuance and individuality of healthcare providers themselves, which can be reflected in topics like treatment approach, for example. This is to say nothing of the differences across practice settings or geographic locations that sample-based research methods make clear.

The healthcare industry recognizes the potential inherent in these tools and acknowledges the ways in which access to clinical health records could aid AI’s ability to provide reliable and accurate medical advice. For this to happen, serious problem-solving is underway to answer the question of how to maintain patients’ privacy and confidentiality while still evolving and improving the information these tools can provide. Ultimately, the vast wealth of individual patient data collected every day by doctors, hospitals, apps, phones, and so on could, if made available to these platforms, empower the models to deliver more precise responses to clinical questions.

Applications in Patient Research 

For now, it seems that we’re much more likely to get useful AI-generated insights in patient research, where emotion and experience can be modeled with some degree of success, without also requiring an understanding of medical science or the experience of medical practice.

It’s already been shown that AI tools are often able to mirror the behavior of a human being in a given situation. A basic example is the way most of us choose to engage with an AI platform like ChatGPT – we ask the chatbot a question, and the chatbot responds to the best of its ability, just as two people would in conversation.
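To make that exchange concrete, here’s a minimal sketch of the question-and-answer pattern using OpenAI’s Python client (v1.x) – the model name and the question are illustrative assumptions, not a recommendation of any particular setup.

    # A single conversational turn: we ask a question, the model answers.
    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    response = client.chat.completions.create(
        model="gpt-4",  # assumed model name; substitute whichever is available
        messages=[
            {"role": "user", "content": "What factors do patients weigh when choosing between treatments?"},
        ],
    )

    print(response.choices[0].message.content)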

The potential for AI to accurately reflect human behavior extends beyond this back-and-forth chat exchange. Another example is the way in which we visually engage with an advertisement – specifically, the areas of an ad that are looked at first, the time spent looking at each element, and the direction of eye movement from one place to another across the page.

At our research agency, we have adopted an AI-powered visual prediction tool, informed by hundreds of previously conducted eye-tracking studies and validated against many of those projects. Using this tool, we are able to understand the visual engagement of various communication concepts without any actual human participation in eye-tracking exercises.
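Our algorithm itself is proprietary, but the general idea of predicting where eyes land can be illustrated with an off-the-shelf baseline. The sketch below uses OpenCV’s spectral-residual saliency detector (available in the opencv-contrib-python package) to estimate which regions of a hypothetical ad image are likely to attract attention first – it is not our eye-tracking-trained model, just a simple stand-in for the concept.

    # A generic visual-saliency baseline, NOT the proprietary tool described above.
    import cv2

    # Load a hypothetical ad concept image (the path is illustrative).
    image = cv2.imread("ad_concept.png")

    # Spectral-residual static saliency: a rough "where will eyes land?" estimate.
    saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
    success, saliency_map = saliency.computeSaliency(image)

    if success:
        # Values near 1.0 mark regions predicted to draw attention first.
        heatmap = (saliency_map * 255).astype("uint8")
        cv2.imwrite("ad_concept_saliency.png", heatmap)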

Beyond the more concrete realm of behavior, we’ve also seen AI comment on human experience in fairly sophisticated ways, specifically focusing on the patient experience across many different conditions. While this is a topic for another day, our initial experiments are positive!
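As a flavor of what those experiments look like, the sketch below conditions the model with a system prompt so that it answers as a synthetic patient. The persona, condition, and question are invented purely for illustration and don’t reflect any real respondent or study of ours.

    # Persona conditioning: a system prompt steers the model to answer
    # as a synthetic patient. All details here are invented for illustration.
    from openai import OpenAI

    client = OpenAI()

    persona = (
        "You are a 58-year-old patient who has managed type 2 diabetes for ten years. "
        "Answer interview questions in the first person, describing the everyday "
        "frustrations and trade-offs such a patient might experience."
    )

    response = client.chat.completions.create(
        model="gpt-4",  # assumed model name
        messages=[
            {"role": "system", "content": persona},
            {"role": "user", "content": "How would you feel about switching to a new medication?"},
        ],
    )

    print(response.choices[0].message.content)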

Conclusion

It’s important to note that while ChatGPT can be a valuable tool in healthcare market research, for the reasons listed above it cannot serve as a stand-in for a human sample.

In addition to an up-to-date understanding of their medical specialty, healthcare professionals possess an emotional and social understanding of their patients that can be equally important to our market research outputs.

ChatGPT also provides what can be considered a single viewpoint, developed only from aggregated sources – yet it is often in the conflicting viewpoints of physicians that we find our strongest insights.

Researchers should always validate the outputs generated by ChatGPT and consider its limitations.

Please reach out to HRW Innovation at innovation@hrwhealthcare.com if you have any questions or would like to hear more about the use of AI in the healthcare market research space.

 

Citations

1. Robertson C, Woods A, Bergstrand K, Findley J, Balser C, Slepian MJ (2023). Diverse patients’ attitudes towards Artificial Intelligence (AI) in diagnosis. PLOS Digital Health 2(5): e0000237. https://doi.org/10.1371/journal.pdig.0000237

2. Kung TH, Cheatham M, Medenilla A, et al. (2023). Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models. PLOS Digital Health 2(2): e0000198. https://doi.org/10.1371/journal.pdig.0000198

 
