Cybersecurity expert: Easy targets plentiful


PETALING JAYA: Easy access to an individual’s data, including images posted online, allows scammers to more easily create ultra-realistic, manipulated or fake photos, says Assoc Prof Dr Selvakumar Manickam.

The Universiti Sains Malaysia cybersecurity expert said excessive sharing on social media and the frequent use of photo editing apps create opportunities for criminals to misuse such images for harmful purposes.

“Coupled with the wide availability of artificial intelligence (AI) at present, the issue of image doctoring has become even more challenging.

“AI increases the efficiency of creating and distributing deceptive images by criminals.

“AI-powered deepfake technology generates highly convincing fabricated videos and images, perpetuating falsehoods and impersonations,” he told The Star yesterday.

He said AI’s capacity to swap faces in visuals also helped facilitate revenge pornography, identity theft, privacy breaches, and the dissemination of false information.


Nonetheless, he emphasised that image manipulation existed long before AI became widely available and anyone with average image editing skills could easily create convincing fake images.

He also said online tutorials were readily available to help individuals acquire such skills.

“People engage in this practice for various reasons, including deceiving others, improving photo aesthetics, or producing fake images for harmful purposes.


“Besides fabricating naked images of individuals, it can also be exploited for malicious activities like political manipulation and fraud.

“Image alteration, or image doctoring, involves changing photos to make them appear different or misleading.

“These changes can be as minor as removing a blemish or as significant as creating dishonest portrayals,” Selvakumar said.

As such, he urged parents to monitor their children’s social media usage and, when possible, restrict their use of apps heavily reliant on visual content, such as TikTok, Snapchat, FaceApp and others.

This advice also applies to all “photo-sharing addicts”, he added.


Prof Datuk Dr Mohamed Ridza Wahiddin, fellow and chair of Information Technology and Computer Science at the Academy of Sciences Malaysia, also pointed out that such technology has existed since 2019.

He said the public and relevant authorities needed to be more proactive in facing such challenges.

“The Communications and Multimedia Act 1998 is due to be amended early next year.

“Section 233 of the Act is very relevant to addressing the issue raised here.

“As for internet service providers, they may also adopt and adapt deep learning software to protect internet users,” he said.
