Paper raises ethical issues on use of AI and big data in human biomedical research


If research is done by an AI model, who should be held responsible for its wrong decisions? - ISTOCKPHOTO

SINGAPORE (The Straits Times/Asia News Network): Protecting the privacy and confidentiality of a person’s data is of paramount importance in biomedical research. So is obtaining informed consent and respecting the individual’s rights and autonomy.

But what if it is not feasible for researchers to get the person’s consent for the specific use of his data for a study powered by artificial intelligence (AI) and big data?

When should societal benefit take priority over data privacy, and vice versa? If research is done by an AI model, how does it come to a decision and who should be held responsible for its wrong decisions?

These are among the questions raised in an ongoing public consultation paper, titled Ethical, Legal And Social Issues Arising From Big Data And Artificial Intelligence Use In Human Biomedical Research.

The 103-page paper, released online in May, was initiated by the Bioethics Advisory Committee (BAC), an independent national advisory body set up by the Cabinet in late 2000 to review ethical, legal and social issues arising from biomedical research and its applications in Singapore.

The final advisory report will be used to guide various groups, including academics, researchers and healthcare professionals, that want to use big data and AI in human biomedical research.

Current safeguards include the Personal Data Protection Act (PDPA) 2012 that came into full effect in 2014, and the AI in Healthcare Guidelines 2021, which the Ministry of Health co-developed with the Health Sciences Authority and the Integrated Health Information Systems.

These do not cover all eventualities or possible developments in big data (which in healthcare includes data from clinical records, patient health records, results of medical examinations and health-monitoring devices) and AI.

It is a transformative, fast-growing area that can help with the early detection of diseases, disease prevention, better treatments, and better quality of life, for instance, said the chair of BAC’s big data and AI review group, Professor Patrick Tan from Duke-NUS Medical School’s cancer and stem cell biology programme.

However, he said that when it comes to big data research, individuals may sometimes not know what their data is used for, and it may not be possible to get consent from every individual each time their data is used.

Data privacy vis-a-vis societal benefits is an issue highlighted in the paper. The potential of big data research is huge, but risks to data privacy could increase, and there has to be a fair balance, Prof Tan added.

For individuals, one possible future risk is facing discrimination when buying insurance, if the privacy of their data is compromised.

“For instance, if your insurance company knows that you have the BRCA1 (breast cancer gene 1) mutation, but you don’t have breast cancer yet, should you still be eligible for insurance?

“In the United States, there are laws that prohibit this sort of pre-disease profiling,” said Prof Tan, who is also the executive director of the Genome Institute of Singapore and director of the SingHealth Duke-NUS Institute of Precision Medicine.

“There’s even some evidence of (researchers) using big data in AI where the cadence of your voice... the speed at which you type your letters on the keyboard are all registers of mental acuity.

“Let’s say that you’re at risk for something, and if you can be identified, would that put you at risk (of) being discriminated against?” he said, citing hypothetical examples.

One safeguard could be to ensure that users of the data will not re-identify the individuals by penalising them if they do so, he said.

There is currently a moratorium here that bans the use of genetic test results from human biomedical research in insurance underwriting. The question is whether that moratorium should be moved to legislation, Prof Tan added.

Another issue raised is whether current methods of de-identification and anonymisation are still applicable when large volumes of personal, health and medical data are used in AI research.

Anonymisation occurs when the identifiers that connect an individual to the stored data are permanently removed, while de-identification entails the removal of personally identifiable information from a data set so that individuals cannot be readily identified.

Still, there is a risk of re-identification, which is the identification of individuals from a data set that was previously de-identified or anonymised.

Prof Tan said it is not impossible for a person whose name and identification card number have been removed from a data set to be re-identified, if there is other information such as his place of residence and gender.
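To illustrate the linkage risk Prof Tan describes, here is a minimal, purely hypothetical Python sketch. All names, districts and diagnoses below are invented; it simply shows how a record stripped of its name and ID number can still be matched to a named public list when quasi-identifiers such as district, gender and birth year are unique in combination.

```python
# Hypothetical illustration of re-identification via quasi-identifiers.
# All data below is invented for the example.

deidentified_health_records = [
    {"district": "Bedok", "gender": "F", "birth_year": 1971,
     "diagnosis": "BRCA1 mutation"},
    {"district": "Jurong", "gender": "M", "birth_year": 1985,
     "diagnosis": "hypertension"},
]

public_directory = [
    {"name": "A. Lee", "district": "Bedok", "gender": "F", "birth_year": 1971},
    {"name": "B. Tan", "district": "Jurong", "gender": "M", "birth_year": 1990},
]

QUASI_IDENTIFIERS = ("district", "gender", "birth_year")

def reidentify(health_records, directory):
    """Return (name, diagnosis) pairs where the quasi-identifiers
    match exactly one person in the public directory."""
    matches = []
    for record in health_records:
        key = tuple(record[q] for q in QUASI_IDENTIFIERS)
        candidates = [p for p in directory
                      if tuple(p[q] for q in QUASI_IDENTIFIERS) == key]
        if len(candidates) == 1:  # a unique match re-identifies the person
            matches.append((candidates[0]["name"], record["diagnosis"]))
    return matches

print(reidentify(deidentified_health_records, public_directory))
# → [('A. Lee', 'BRCA1 mutation')]
```

The first record is re-identified because only one person in the directory shares that combination of attributes; the second is not, since no directory entry matches. Safeguards such as coarsening attributes (for example, age bands instead of birth years) work by making these combinations non-unique.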

Anonymisation may not be entirely possible when handling large volumes of data and may not be easily achieved with existing AI methods, but institutions should take steps to de-identify or reduce the risk of re-identification of confidential patient data, the paper said.

“The question is how do we build in the safeguards that will still enable (big data and AI) research to go on, but with the assurance that it is not total cowboy town,” said Prof Tan.

In recent years, the secondary use of data has become increasingly common as technological advances have made it possible to extract more value from existing real-world data sets, the paper said.

But individuals may not be aware that their data could be reused for purposes other than those originally intended.

In early 2021, news broke that the police could ask for the data on the TraceTogether app – which identified people in close contact with a Covid-19 patient via Bluetooth during the pandemic – to track down suspected criminals.

The public backlash prompted the Government to enact legislation to restrict the use of the data to investigations in seven categories of serious crimes, such as murder, terrorism and rape.

“That’s a perfect example of big data... We can do it. The question is, under what circumstances should that be allowed, or not?” said Prof Tan.

The public consultation to seek feedback on the report and the proposed recommendations will end on July 1. The public can submit their responses via the BAC website at www.bioethics-singapore.gov.sg or Reach portal at www.reach.gov.sg.
