ChatGPT overprescribed unneeded X-rays and antibiotics in emergency care: Study


New Delhi, Oct 8 (IANS) While ChatGPT has shown its potential in interacting with patients and acing medical examinations, the popular generative artificial intelligence (AI) platform by OpenAI can overprescribe unneeded X-rays and antibiotics in emergency care, a study released on Tuesday found.

The study, led by researchers from the University of California-San Francisco (UCSF), showed that ChatGPT even recommended admitting patients who didn’t require hospital treatment.

In the paper published in the journal Nature Communications, the researchers said that, while the model could be prompted in ways that make its responses more accurate, it’s still no match for the clinical judgement of a human doctor.

“This is a valuable message to clinicians not to blindly trust these models,” said lead author Chris Williams, a postdoctoral scholar at UCSF.

“ChatGPT can answer medical exam questions and help draft clinical notes, but it’s not currently designed for situations that call for multiple considerations, like the situations in an emergency department,” he added.

A recent study by Williams showed that ChatGPT, a large language model (LLM), was slightly better than humans at determining which of two emergency patients was most acutely unwell — a straightforward choice between patient A and patient B.

In the current study, he challenged the AI model to perform a more complex task: providing the recommendations a physician makes after initially examining a patient in the emergency department — whether to admit the patient, get X-rays or other scans, or prescribe antibiotics.

For each of the three decisions, the team compiled a set of 1,000 emergency visits to analyse from an archive of more than 251,000 visits.

The sets had the same ratio of “yes” to “no” responses for decisions on admission, radiology, and antibiotics.

The team entered doctors’ notes on each patient’s symptoms and examination findings into ChatGPT-3.5 and ChatGPT-4. The accuracy of each set was then tested with increasingly detailed prompts.

The results showed the AI models recommended services more often than was needed. While ChatGPT-4 was 8 per cent less accurate than resident physicians, ChatGPT-3.5 was 24 per cent less accurate.

“AIs tend to overprescribe because these models are trained on the internet. To date, legitimate medical advice sites that can answer emergency medical questions have not been designed,” Williams said.

–IANS



