Experts sound alarm after ChatGPT Health fails to recognise medical...

Study finds ChatGPT Health did not recommend a hospital visit when medically necessary in more than half of cases

ChatGPT Health regularly misses the need for urgent medical care and frequently fails to detect suicidal ideation, a study of the AI platform has found, failings that experts worry could “feasibly lead to unnecessary harm and death”.

OpenAI launched ChatGPT’s “Health” feature to limited audiences in January, promoting it as a way for users to “securely connect medical records and wellness apps” to generate health advice and responses. More than 40 million people reportedly ask ChatGPT for health-related advice every day.

The first independent safety evaluation of ChatGPT Health, published in the February edition of the journal Nature Medicine, found it under-triaged more than half of the cases presented to it.

The lead author of the study, Dr Ashwin Ramaswamy, said: “We wanted to answer the most basic safety question: if someone is having a real medical emergency and asks ChatGPT Health what to do, will it tell them to go to the emergency department?”

Ramaswamy and his colleagues created 60 realistic patient scenarios covering health conditions from mild illnesses to emergencies. Three independent doctors reviewed each scenario and agreed on the level of care needed, based on clinical guidelines.

The team then asked ChatGPT Health for advice on each case under varying conditions (changing the patient’s gender, adding test results, or including comments from family members), generating nearly 1,000 responses.

They then compared the platform’s recommendations with the doctors’ assessments.
