PETALING JAYA: There must be a legal framework to regulate the use of artificial intelligence and prevent abuse of AI-generated deepfakes, including those using images of celebrities to dupe Malaysians, say cybersecurity experts.
Cybersecurity laws expert Derek Fernandez said those using AI services must be made legally bound by terms and conditions, including an obligation to disclose whether AI was used to generate an output.
“AI use must be legally regulated like over-the-top (OTT) platforms and be subject to the same licensing controls in so far as cybersecurity is concerned,” he said in an interview yesterday.
OTT platforms are online content providers offering streaming services, usually involving entertainment and gaming.
He said it is essential to mandate the registration and identification of all companies or individuals providing AI services capable of generating deepfakes.
“These AI service providers should be required to watermark all AI-generated content and provide the necessary tools to detect such watermarks, ensuring accountability and traceability.”
Fernandez added that platforms hosting AI-generated content must clearly mark it as AI-generated.
“Companies providing AI services should supply detection tools or enforce contractual obligations for users to disclose AI-generated output, ensuring transparency and preventing misuse,” he said.
He cited a case in Hong Kong where a company was scammed out of RM110mil after a deepfake video of its “chief financial officer” convinced a staff member to transfer the money to the scammers.
“Unregulated, this technology can, in evil hands, interfere in elections, instigate riots and threaten peace, stability and harmony in any nation,” he said.
The Malaysian Communications and Multimedia Commission member added that a mandatory 24-hour or 48-hour cooling-off period should also be adopted to guard against deepfake scams, along with requiring service providers to carry digital insurance against such scams.
While AI tools to detect deepfakes do exist, Fernandez said such tools might not be readily accessible to the general public at the moment.
“AI tools that have been developed to detect the probability of deepfakes must be made available to all governments and ultimately to the public, so that their smartphones will be ‘smart’ enough to warn a user of the probability that a video or text is a deepfake,” he said.
He said that independently verifying subject matter posted on social media was also helpful in avoiding being duped by deepfakes.
“Always independently verify, preferably on a different device, using official contact information if it is a business or corporation.
“For members of the public, a simple method is to call back the person you thought you were talking to or video conferencing with on the number that you know is theirs,” he advised.
Prof Dr Mohamed Ridza Wahiddin noted that deepfakes would become more difficult to detect and more rampant in the near future.
“This is because the concept of digital twins is now extended to humans, with the term ‘human digital twins’ being more popularly used nowadays.
“These are electronic data perfected by AI. The dark side of the latter is that it makes deepfakes very difficult to detect,” he said when contacted yesterday.
Although some service providers, such as TikTok, may require a disclaimer when a deepfake is used in a video, he said such measures provide only limited protection.
The founder and patron of International Islamic University Malaysia’s Centre of Excellence for Cybersecurity said cutting-edge software programmes that can help detect deepfakes are currently available.
Among them are Sentinel, which caters to governments, media and defence agencies to protect against disinformation; Intel’s FakeCatcher, which can detect fake videos with 96% accuracy; and Microsoft’s Video Authenticator Tool, which analyses photos or videos to determine if they have been manipulated.
He said such software is, at the moment, primarily used by corporations and is not widely available to the general public.
“I anticipate that such tools will eventually become available as both open-source and proprietary software programmes,” he added.
For the moment, Mohamed Ridza said that education and social media literacy among the public were the best defence against deepfakes.
Federation of Malaysian Consumers Association (Fomca) chief executive officer Saravanan Thambirajah said consumers must practise their own fact-checking before making any purchases.
“For consumers, it is more important than ever to approach online content with a healthy dose of scepticism, especially when it involves endorsements from celebrities and public figures.
“If you come across a promotion that seems too good to be true, or if the content seems suspicious in any way, I strongly advise that you do not immediately trust it.
“Instead, take the time to verify the information through official channels or trusted sources,” he said when contacted.
Saravanan acknowledged that the use of AI-generated deepfakes to promote products or commit scams is deeply concerning, especially cases involving well-known local personalities.
“Public figures or anyone victimised by these AI manipulations should have the right to take swift legal action against the perpetrators, ensuring that such content is promptly removed and that those responsible are penalised,” he said.
He added that social media platforms must bear the responsibility of implementing robust detection mechanisms to identify and prevent the spread of deepfake content.