WHO says AI can transform healthcare if understood properly


GENEVA: Artificial intelligence has the potential to transform health treatment, but rapid roll-out without fully understanding how AI performs could end up harming patients, the World Health Organization said on Thursday (Oct 19).

The WHO said AI held great promise for healthcare but also came with challenges, notably around privacy and the potential to entrench existing problems.

The United Nations' health agency issued a new publication detailing some of the main regulatory considerations on AI for health, so that authorities can build or adapt their guidance on using it.

"With the increasing availability of health care data and the rapid progress in analytic techniques – whether machine learning, logic-based or statistical – AI tools could transform the health sector," the organisation said.

The WHO said AI could strengthen clinical trials, improve medical diagnosis and treatment and supplement medical knowledge and skills.

For example, AI could help in places with a lack of specialists by interpreting radiology images and retinal scans, it said.

However, the WHO added that AI is being rapidly deployed, sometimes without a proper understanding of how such technologies perform, "which could either benefit or harm end-users", both patients and professionals alike.

When using health data, AI systems could be accessing sensitive personal information, so robust legal frameworks are needed to safeguard privacy and integrity, the WHO said.

Pros and cons

"Artificial intelligence holds great promise for health, but also comes with serious challenges, including unethical data collection, cyber-security threats and amplifying biases or misinformation," said WHO chief Tedros Adhanom Ghebreyesus.

"This new guidance will support countries to regulate AI effectively, to harness its potential, whether in treating cancer or detecting tuberculosis, while minimising the risks," he added.

The WHO said AI systems depend on the code they are built with and the data they are trained on, and better regulation could help manage the risks of AI amplifying biases present in training data.

"For example, it can be difficult for AI models to accurately represent the diversity of populations, leading to biases, inaccuracies or even failure," the WHO said.

"To help mitigate these risks, regulations can be used to ensure that the attributes – such as gender, race and ethnicity – of the people featured in the training data are reported and datasets are intentionally made representative," the organisation added.

The WHO outlined six areas for regulating AI for health.

They include externally validating data, evaluating systems before release so as not to amplify biases and errors, looking at consent requirements on data privacy, and fostering collaboration between regulators, patients, governments and healthcare professionals. – AFP

