The saying “don’t believe everything you see online” has never been more relevant, thanks to the rise of artificial intelligence (AI) and deepfake technology.
Earlier this year, a clerk in Hong Kong was tricked into transferring HK$200mil (RM115mil) of company funds. She thought she was on a conference call with her company’s management, but the people on the call were actually deepfake creations made by scammers using publicly available videos. They deceived her into making payments to various bank accounts.
Malaysia has not been spared. Last Thursday, popular content creator Khairul Amin Kamarulzaman, better known as Khairul Aming, warned fans about an AI deepfake video mimicking his voice and likeness to scam people into buying a wok, falsely claiming it was to fund warehouse repairs.
In response, Communications Minister Fahmi Fadzil urged platforms to label “AI-generated content” to prevent such incidents.
In July, the Securities Commission cautioned the public about investment scams on social media that used deepfaked public figures as false endorsements.
One such incident involved badminton legend Datuk Lee Chong Wei, with scammers appropriating his likeness to create videos promoting a fraudulent financial scheme.
That same month, Fahmi urged the public to be wary of similar scams following reports of singer, songwriter, actress and businesswoman Datuk Seri Siti Nurhaliza being impersonated in live WhatsApp calls.
As the technology poses significant risks, it’s important to understand what being “deepfaked” means.
Putting words in their mouths
According to Tan Aik Keong, founder and CEO of Malaysian software development firm and IT services provider Agmo Group, deepfakes are “a type of synthetic media in which a person’s likeness is digitally manipulated to create realistic but false images or videos”.
“Using advanced AI and machine learning techniques,” he says, “deepfakes can convincingly replace a person’s face and voice in a video with someone else’s, or even generate completely fabricated media from scratch.”
To put it simply, AI mimics a person’s appearance and/or voice, creating a virtual “sock puppet” for various, and usually nefarious, purposes.
This encompasses scams, political misinformation, and non-consensual deepfake pornography, all of which tech companies like Google and Meta are actively working to combat.
Penny Chai, vice president of business development at identity verification provider Sumsub, highlights that deepfakes “pave the way for identity theft, scams, and misinformation campaigns on an unprecedented scale”.
Chai believes that the technology is a major threat to both cybersecurity and personal privacy, stating that a Sumsub study found that Malaysia experienced a 1,000% increase in deepfake incidents between 2022 and 2023.
“Our recent annual Identity Fraud 2023 Report revealed that the APAC (Asia Pacific) region has seen a significant rise in deepfake cases from 2022 to 2023, averaging 1,530%.
“Malaysians have valid reasons to be concerned about its growing use by cybercriminals. Deepfakes’ ability to mimic a person’s likeness so convincingly increases the risk of personal and financial harm.
“This erosion of trust in digital content is alarming, as cybercriminals can exploit this technology for identity theft, financial fraud, and even deceiving family members.
“Cybercriminals can use deepfakes for corporate espionage and fraud, impersonating senior executives to manipulate employees into transferring funds or disclosing sensitive information,” she says.
When scammers impersonate a legitimate business entity or employee through email to deceive and manipulate recipients, the tactic is known as business email compromise (BEC).
Chai says this attack vector becomes more convincing and dangerous when combined with deepfake technology, which creates realistic but fake audio, video, or images.
Tan adds that social media platforms are experiencing a surge in the spread of deepfake content.
“There are significant gaps in real-time detection and alert systems, and despite existing legislation such as the Communications and Multimedia Act 1998, Malaysia lacks comprehensive fraud prevention mechanisms,” he claims.
Imposter invasion
For David Rajoo, senior systems engineer specialist at cybersecurity firm Palo Alto Networks, the true danger of deepfakes lies in their customisability and convincing nature. And by exploiting information leaked from past data breaches, he says cybercriminals are capable of creating highly credible schemes.
“In the past, such information might have been used in relatively unsophisticated scams, like phishing emails or phone calls from strangers.
“However, with the advent of deepfake technology, the threat level has escalated dramatically. Imagine receiving a video message that appears to be from a trusted friend, family member, or notable figure, urging you to take some action.
“The personalised nature of these attacks makes them particularly dangerous, as victims are more likely to trust and act on the information presented in a convincing deepfake,” he says.
David further predicts that as deepfake technology advances and becomes more convincing, the situation will only deteriorate due to the difficulty in distinguishing between fake and real content.
“We see everyday users incorporating AI into their daily lives, and it is no different for criminals. As deepfake technology advances, we can expect the situation to get much worse.
“The quality of deepfakes will improve, making it increasingly difficult for both individuals and automated systems to distinguish between real and fake content.
“This could lead to a surge in misinformation and disinformation campaigns, with significant impacts on social stability, political processes and personal reputations,” he says.
Even more concerning, Chai notes that the tools to create deepfakes have become increasingly accessible, making it easier than ever to produce believable fake content.
“Fraudsters don’t require complex setups and Hollywood studios with high-end equipment. Today, anyone with a computer and Internet access can dabble in these technologies and find tools readily available on app stores.
“This accessibility comes with an obvious dark side. Many of the fraudsters employing AI-powered tools aren’t just random individuals trying their luck; they’re part of organised groups with the resources and intent to deceive on a large scale,” she says.
According to David, malicious actors with a higher level of technical expertise are not only employing more sophisticated techniques but are also offering their services for sale.
Tan, on the other hand, says that in its current state, the technology is already powerful enough to replace a face and voice in real time during video calls.
He says virtual cameras, a technology popularised by live-streaming software, allow scammers to seamlessly pipe pre-recorded or AI-generated video feeds into live video calls on platforms such as Google Meet and Microsoft Teams.
By selecting the virtual camera as the video source, he says, scammers can send the deepfake AI feed into the call instead of footage from a real camera.
“This allows scammers to impersonate someone else with high accuracy, mimicking facial expressions, lip movements, and even voice modulation,” he says.
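To make the mechanism Tan describes concrete, below is a minimal sketch of how a virtual camera feed works, assuming the open source pyvirtualcam and OpenCV Python libraries and a placeholder video file name. It simply loops a pre-recorded clip into a virtual webcam that conferencing apps can then select as a source; it illustrates the plumbing only, and a real attack would swap the clip for live deepfake output.

```python
# Minimal virtual-camera sketch. Assumes: pip install pyvirtualcam opencv-python,
# plus a virtual camera backend (e.g. OBS on Windows/macOS, v4l2loopback on Linux).
import cv2
import pyvirtualcam

video = cv2.VideoCapture("prerecorded.mp4")  # placeholder clip name
width = int(video.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(video.get(cv2.CAP_PROP_FRAME_HEIGHT))
fps = int(video.get(cv2.CAP_PROP_FPS)) or 30

with pyvirtualcam.Camera(width=width, height=height, fps=fps) as cam:
    print(f"Streaming to virtual camera: {cam.device}")
    while True:
        ok, frame = video.read()
        if not ok:
            # End of clip: rewind to the first frame and keep looping.
            video.set(cv2.CAP_PROP_POS_FRAMES, 0)
            continue
        # pyvirtualcam expects RGB frames; OpenCV decodes video as BGR.
        cam.send(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        cam.sleep_until_next_frame()  # pace output to the declared fps
```

Once such a device exists, Google Meet or Microsoft Teams simply lists it alongside physical webcams, which is why the platforms themselves cannot easily tell a looped or synthesised feed from genuine camera footage.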
Unmasking threats
Unlike traditional image manipulation tools like Photoshop, which require a certain level of skill and time to create convincing fake images, David says AI deepfake technology can quickly generate realistic videos and images with minimal effort.
“This rapid production capability allows cybercriminals to flood their targets with a high volume of deepfake content, increasing the likelihood of successful scams,” he says, adding that for targets that aren’t classified as high-value, cybercriminals typically adopt a quantity-over-quality strategy.
However, this does not mean that people are completely at the mercy of cybercriminals utilising deepfake technology.
There are still telltale signs that can be used to identify synthetic content. A recent Bloomberg report showed how crucial it is to stay alert and verify identities, using an attempted scam on carmaker Ferrari as an example.
An executive at Ferrari received texts from someone claiming to be CEO Benedetto Vigna. Shortly after, the executive got a call from a scammer using deepfake technology to mimic Vigna’s southern Italian accent.
The executive became suspicious when he noticed subtle voice inconsistencies and asked for the title of a recent book that the CEO had recommended. The scammer quickly hung up.
Tan highlights that video deepfakes often struggle with matching facial movements and lighting, advising the public to watch for these signs in online video calls.
“Look out for blurring, resolution differences, or colour mismatches between the face and the body, hair, or neck. These issues arise because deepfakes frequently overlay digitally modelled faces, which can result in noticeable differences.
“Additionally, pay close attention to lip movements and facial expressions, especially during head turns or when objects pass in front of the face, as deepfakes may have difficulty with accurate tracking in these scenarios,” he says.
Another dead giveaway is an audio mismatch, where a cloned voice fails to sync cleanly with the video.
David similarly advises remaining alert for unnatural eye and head movements, a lack of blinking, emotions and facial expressions that do not match the current context, poor video quality, and the inability to perform specific actions upon request in video calls.
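The resolution difference between the face and the rest of the frame that Tan mentions can even be roughed out in code. The sketch below is a toy heuristic using OpenCV, with an assumed file name and an arbitrary threshold of my own choosing; real deepfake detectors rely on trained models, and a crude comparison like this will produce plenty of false alarms.

```python
# Toy check for an overlaid face: compares the sharpness of the detected
# face region against the frame as a whole. Assumes: pip install opencv-python.
import cv2

def sharpness(gray):
    # Variance of the Laplacian is a standard, if crude, focus measure:
    # blurry regions score low, crisp regions score high.
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def face_background_mismatch(frame_bgr, ratio_threshold=2.0):
    """Return True if the face is much sharper or blurrier than the
    rest of the image, or None if no face is found. The threshold
    is an illustrative assumption, not a calibrated value."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    face_score = sharpness(gray[y:y + h, x:x + w])
    frame_score = sharpness(gray)
    ratio = max(face_score, frame_score) / (min(face_score, frame_score) + 1e-6)
    return ratio > ratio_threshold

# "suspect_frame.png" is a placeholder for a frame grabbed from a video call.
frame = cv2.imread("suspect_frame.png")
if frame is not None:
    print("Possible overlay artefact:", face_background_mismatch(frame))
```

As David warns, heuristics like this weaken as deepfake quality improves, which is why verifying identities through a second channel remains the more reliable defence.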
He also says that the public should make it a practice to verify the sources of suspicious videos or images.
“Before believing or sharing any questionable media, it is advisable to cross-check it with reliable sources, such as news articles, official statements, or fact-checking websites.
“It’s important to note that as technology improves, these signs may become less obvious. Therefore, it is worth it to always verify important information through multiple channels,” he says.