PETALING JAYA: Artificial Intelligence (AI) should be regulated to ensure that its use does not infringe on consumer rights, as consumers are the biggest group of people whose lives will be impacted by the technology, says a legal adviser.
Federation of Malaysian Consumers Associations (Fomca) vice-president and legal adviser Datuk Indrani Thuraisingham said the Malaysian National Artificial Intelligence Roadmap 2021-2025, which was launched in 2021, had neglected consumer groups in its consultation process.
This was despite consumer groups representing the largest number of people whose lives would be impacted by technology innovation, including AI development and deployment, she said.
“AI will have an enormous impact on people’s lives in the ways we work, communicate, gather information and much more. It has major implications for consumers’ well-being, autonomy, self-determination, privacy, safety, fairness and security.
“It also raises questions about who should be held responsible if the output of an AI system has a detrimental effect on a consumer,” she said in an interview.
As such, she said Fomca has called on the government to regulate AI from a consumer rights perspective.
In view of World Consumer Rights Day on March 15, Indrani, who is also the National Consumer Complaints Centre’s chief executive officer, said that consumers’ voices must be heard in a digital world.
“To ensure generative AI is developed and used in accordance with consumer rights and social well-being, it is insufficient to rely on companies to regulate themselves, hoping that the effect would trickle down to the consumers’ level.”
Quoting Adobe’s latest State of Digital Customer Experience report, she said that in Malaysia, many brands have yet to adopt AI guidelines that meet consumer trust needs, with only 10% having internal usage policies.
She said policymakers and enforcement agencies should set boundaries on how technology is developed, deployed and used.
“Policymakers must pass laws and regulations which are necessary to provide safe and consumer-centric technology in the years to come,” she said.
To ensure generative AI is safe, trustworthy, fair, equitable and accountable, Indrani pointed to several consumer rights principles that could guide policymakers and enforcement agencies in approaching the opportunities and pitfalls of the technology.
These included the right to information when algorithms use personal data, the right to object and receive explanations, the right to have personal data deleted, the right to interact with humans instead of AI, the right to redress and compensation, the right to collective redress, and the responsibility of developers to establish systems safeguarding these rights.
Generative AI models are trained on large amounts of data to identify patterns and structures, allowing them to generate new content, whether text, images, audio or video, that can resemble human-made content.
ChatGPT, Midjourney, Stable Diffusion and DALL-E are among the generative AI-driven services.
Indrani said technology is not an uncontrollable force, and that fundamental rights, laws and societal values must adapt to and shape it.
“We are in the driver’s seat if we choose to be. Many of these challenges can be tackled using the laws.”
By prioritising consumer protection and applying consumer rights principles, she said, the government can establish a forward-looking regulatory framework that ensures the safety, reliability and fairness of AI technology.
“We should prevent consumers from being treated as experimental subjects for new technologies,” she added.