WHO warns against bias, misinformation in using AI in healthcare


FILE PHOTO: AI Artificial Intelligence words are seen in this illustration taken, May 4, 2023. REUTERS/Dado Ruvic/Illustration

(Reuters) - The World Health Organization called for caution on Tuesday in using artificial intelligence for public healthcare, saying data used by AI to reach decisions could be biased or misused.

The WHO said it was enthusiastic about the potential of AI but had concerns over how it will be used to improve access to health information, as a decision-support tool and to improve diagnostic care.

The WHO said in a statement that the data used to train AI may be biased, generating misleading or inaccurate information, and that the models can be misused to spread disinformation.

It was "imperative" to assess the risks of using generative large language model (LLM) tools, like ChatGPT, to protect and promote human wellbeing and safeguard public health, the U.N. health body said.

Its cautionary note comes as artificial intelligence applications are rapidly gaining in popularity, highlighting a technology that could upend the way businesses and society operate.

(Reporting by Shivani Tanna in Bengaluru; Editing by Nick Macfie)
