WHO warns against bias, misinformation in using AI in healthcare


FILE PHOTO: The words "AI Artificial Intelligence" are seen in this illustration taken May 4, 2023. REUTERS/Dado Ruvic/Illustration

(Reuters) - The World Health Organization called for caution on Tuesday in using artificial intelligence for public healthcare, saying data used by AI to reach decisions could be biased or misused.

The WHO said it was enthusiastic about the potential of AI but had concerns over how it will be used to improve access to health information, as a decision-support tool and to improve diagnostic care.

The WHO said in a statement that the data used to train AI may be biased, generating misleading or inaccurate information, and that the models can be misused to produce disinformation.

It was "imperative" to assess the risks of using generated large language model tools (LLMs), like ChatGPT, to protect and promote human wellbeing and protect public health, the U.N. health body said.

Its cautionary note comes as artificial intelligence applications are rapidly gaining in popularity, highlighting a technology that could upend the way businesses and society operate.

(Reporting by Shivani Tanna in Bengaluru; Editing by Nick Macfie)
