ChatGPT’s answers to healthcare-related queries on par with humans: study

ChatGPT’s responses to people’s healthcare-related queries are nearly indistinguishable from those provided by humans, a new study reveals, suggesting that chatbots could be effective allies in healthcare providers’ communication with patients.

In the study, researchers from New York University presented 392 people aged 18 and above with 10 patient questions and responses, with half of the responses generated by a human healthcare provider and the other half by OpenAI’s chatbot ChatGPT.

Participants were asked to identify the source of each response and rate their trust in the ChatGPT responses using a 5-point scale from completely untrustworthy to completely trustworthy.

The study, published in JMIR Medical Education, found that people had limited ability to distinguish between chatbot-generated and human-generated responses.

On average, participants correctly identified chatbot responses 65.5% of the time and provider responses 65.1% of the time, with ranges of 49.0% to 85.7% for different questions.

Results remained consistent across respondents’ demographic categories.

The study also found that participants mildly trusted chatbots’ responses overall (average score of 3.4), with trust declining as the health-related complexity of the question increased. Logistical questions (e.g. scheduling appointments, insurance queries) received the highest trust rating (average 3.94), followed by preventative care (e.g. vaccines, cancer screenings; average 3.52).

Diagnostic and treatment advice had the lowest trust ratings (scores 2.90 and 2.89, respectively).

According to the researchers, the study highlights the possibility that chatbots can assist in patient-provider communication, particularly for administrative tasks and common chronic disease management. Further research is needed before chatbots take on more clinical roles, said the researchers from NYU Tandon School of Engineering and Grossman School of Medicine.

However, providers should remain cautious and exercise critical judgement when curating chatbot-generated advice due to the limitations and potential biases of AI models, they noted.
