Ask Dr Chatbot? AI is giving us unsafe health advice, study shows



BERLIN: Artificial intelligence chatbots cannot be relied on to give accurate, safe or even clear advice about medication, according to a team of Belgian and German researchers.

"Chatbot answers were largely difficult to read and answers repeatedly lacked information or showed inaccuracies, possibly threatening patient and medication safety," say the authors of findings published by BMJ Quality & Safety, a British Medical Journal publication.

Around a third of the replies could have harmed patients who followed the bot's medication advice, the team warned.

Despite being "trained" on data taken from across the Internet, bots are nonetheless prone to generating "disinformation and nonsensical or harmful content," the researchers warned, in an apparent reference to so-called AI "hallucinations" – industry jargon for when chatbots churn out gibberish.

The team from the University of Erlangen-Nuremberg and pharmaceutical giant GSK put 10 questions about each of the 50 most frequently prescribed drugs in the US to Microsoft's Copilot bot, assessing its answers for readability, completeness and accuracy.

The team found that a college-level education would be required to understand the chatbot's answers. Previous research has shown similar levels of incorrect and harmful answers from OpenAI's ChatGPT, the main AI chatbot service and largest rival to Microsoft's Copilot, the researchers noted.

"Healthcare professionals should be cautious in recommending AI-powered search engines until more precise and reliable alternatives are available," the researchers said. – dpa
