When AI chatbots show signs of potential dementia


By AGENCY

Human doctors need not fear that AI-powered chatbots will take over their jobs, with a study finding that most of the latter show signs of mild cognitive impairment. — Freepik

Almost all leading large language models (LLMs) or “chatbots” show signs of mild cognitive impairment in tests widely used to spot early signs of dementia, finds a study in the Christmas issue* of The BMJ medical journal.

The results also show that “older” versions of chatbots, like older patients, tend to perform worse on the tests.

The authors say these findings “challenge the assumption that artificial intelligence will soon replace human doctors”.

Huge advances in the field of artificial intelligence (AI) have led to a flurry of excited and fearful speculation as to whether chatbots can surpass human physicians.

Several studies have shown LLMs to be remarkably adept at a range of medical diagnostic tasks, but their susceptibility to human impairments such as cognitive decline has not yet been examined.

To fill this knowledge gap, researchers assessed the cognitive abilities of the leading publicly available LLMs using the Montreal Cognitive Assessment (MoCA) test.

The LLMs were ChatGPT versions 4 and 4o (developed by OpenAI), Claude 3.5 “Sonnet” (developed by Anthropic), and Gemini versions 1 and 1.5 (developed by Alphabet).

The MoCA test is widely used to detect cognitive impairment and early signs of dementia, usually in older adults.

Through a number of short tasks and questions, it assesses abilities including attention, memory, language, visuospatial skills and executive function.

The maximum score is 30 points, with a score of 26 or above generally considered normal.

The instructions given to the LLMs for each task were the same as those given to human patients.

Scoring followed official guidelines and was evaluated by a practising neurologist.

ChatGPT 4o achieved the highest score on the MoCA test (26 out of 30), followed by ChatGPT 4 and Claude (25 out of 30), with Gemini 1.0 scoring lowest (16 out of 30).

All chatbots showed poor performance in visuospatial skills and executive tasks, such as the trail-making task (connecting encircled numbers and letters in ascending order) and the clock-drawing test (drawing a clock face showing a specific time).

Gemini models failed at the delayed recall task (remembering a five-word sequence).

Most other tasks, including naming, attention, language and abstraction, were performed well by all chatbots.

But in further tests, the chatbots were unable to show empathy or to accurately interpret complex visual scenes.

Only ChatGPT 4o succeeded in the incongruent stage of the Stroop test, which uses combinations of colour names and font colours to measure how interference affects reaction time.

These are observational findings and the authors acknowledge the essential differences between the human brain and LLMs.

However, they point out that the uniform failure of all LLMs in tasks requiring visual abstraction and executive function highlights a significant area of weakness that could impede their use in clinical settings.

As such, they conclude: “Not only are neurologists unlikely to be replaced by large language models any time soon, but our findings suggest that they may soon find themselves treating new, virtual patients – artificial intelligence models presenting with cognitive impairment.”

*Editor’s note: The annual Christmas issue of The BMJ features quirky and light-hearted research articles and commentaries.
