GENEVA: Research has found that artificial intelligence (AI) is already better than humans at selecting donor organs for transplants and at answering health-related questions from patients.
And yet experts at the World Health Organization (WHO) say that introducing AI into health care procedures brings the risk of "completely incorrect" information.
The WHO issued a call on Tuesday for caution in the use of large language models (LLMs) in health care, warning that their use could lead to health care errors and erode trust in AI.
The organization said it was enthusiastic about the appropriate use of technology, including LLMs, to support health care, but that there was concern that normal standards of caution were not being applied with LLMs.
Proponents believe doctors will soon use medical AI chat systems to answer patients' questions about their health more quickly, and that AI may also be used to support some patient diagnoses.
However, the two AI chatbots being pushed by Microsoft and Google, ChatGPT and Bard, have both shown that their responses are not fully reliable when it comes to facts. That these responses nonetheless look reliable is precisely what concerns the WHO.
"LLMs generate responses that can appear authoritative and plausible to an end user; however, these responses may be completely incorrect or contain serious errors, especially for health-related responses," the WHO says.
Among the WHO's concerns are that the data used to train AI may be biased, producing misleading information that poses risks to health, equity and inclusiveness, and that LLMs could be misused to generate highly convincing disinformation that is difficult for the public to distinguish from reliable health content.
The WHO proposed that these concerns be addressed, and clear evidence of benefit demonstrated, before LLMs are used widely in health care. – dpa