WHO says AI can transform healthcare if understood properly

GENEVA: Artificial intelligence has the potential to transform health treatment, but rapid roll-out without fully understanding how AI performs could end up harming patients, the World Health Organization said on Thursday (Oct 19).

The WHO said AI held great promise for healthcare but also came with challenges, notably around privacy and the potential to entrench existing problems.

The United Nations' health agency issued a new publication detailing some of the main regulatory considerations on AI for health, so that authorities can build or adapt their guidance on using it.

"With the increasing availability of health care data and the rapid progress in analytic techniques – whether machine learning, logic-based or statistical – AI tools could transform the health sector," the organisation said.

The WHO said AI could strengthen clinical trials, improve medical diagnosis and treatment and supplement medical knowledge and skills.

For example, AI could help in places with a lack of specialists by interpreting radiology images and retinal scans, it said.

However, the WHO added that AI is being rapidly deployed, sometimes without a proper understanding of how such technologies perform, "which could either benefit or harm end-users", both patients and professionals alike.

When using health data, AI systems could be accessing sensitive personal information, so robust legal frameworks are needed to safeguard privacy and integrity, the WHO said.

Pros and cons

"Artificial intelligence holds great promise for health, but also comes with serious challenges, including unethical data collection, cyber-security threats and amplifying biases or misinformation," said WHO chief Tedros Adhanom Ghebreyesus.

"This new guidance will support countries to regulate AI effectively, to harness its potential, whether in treating cancer or detecting tuberculosis, while minimising the risks," he added.

The WHO said AI systems depend on the code they are built with and the data they are trained on, and better regulation could help manage the risks of AI amplifying biases present in training data.

"For example, it can be difficult for AI models to accurately represent the diversity of populations, leading to biases, inaccuracies or even failure," the WHO said.

"To help mitigate these risks, regulations can be used to ensure that the attributes – such as gender, race and ethnicity – of the people featured in the training data are reported and datasets are intentionally made representative," the organisation added.

The WHO outlined six areas for regulating AI for health.

They include externally validating data, evaluating systems before release so as not to amplify biases and errors, looking at consent requirements on data privacy, and fostering collaboration between regulators, patients, governments and healthcare professionals. – AFP
