The human touch still required, says MCMC man


PETALING JAYA: Combing through millions of posts on social media every minute of the day to ensure that they do not pose a threat to the country is an arduous task that falls on the Malaysian Communications and Multimedia Commission (MCMC).

Its chief network security officer Datuk Dr Mohamed Sulaiman Sultan Suhaibuddeen said that as of April 22, as many as 48,696 posts deemed inflammatory were taken down, out of the 65,650 that were flagged.

This is only the tip of the iceberg, as the MCMC has to scrutinise every post to determine whether it contains hate speech, provocative material or 3R (race, religion and royalty) violations.

While artificial intelligence (AI) has made huge leaps in assisting the MCMC in detecting “provocativeness”, he said human intervention is still needed to verify its findings.

This is due to Malaysia’s unique social landscape, as training the AI on the country’s complex culture and informal language is not easy.

While most social media platforms operating in Malaysia are not licensed or registered here, Mohamed Sulaiman said the commission has the authority to flag content that violates Malaysian laws or MCMC guidelines to these platform operators.

“The decision to take down content ultimately rests with the platforms themselves, based on their own community guidelines,” he said.

He added that while the MCMC upholds the right to healthy discussions and the freedom of expression of Malaysians, there are ways to do so without inciting public mischief or defaming others.

Mohamed Sulaiman said the removal of posts deemed as “provocative content” is not an automated process, as there is an appeal and remedial process.

“Provocative content is generally defined as material likely to cause offence, hatred, or violence. It can be subjective, but content that touches on sensitive issues such as the 3R and incites violence or discrimination is considered by the MCMC as provocative.

“The MCMC acts primarily on content that violates the Communications and Multimedia Act (CMA) 1998, particularly Sections 233 (offensive content) and 211 (multimedia content that can cause public mischief),” he added.

For content potentially falling under other laws such as the Sedition Act, the MCMC may collaborate with relevant law enforcement agencies that have jurisdiction over those specific offences.

“These agencies may then take appropriate legal action.

“Content removal can be initiated by the MCMC based on internal monitoring or complaints from the public or law enforcement.

“We prioritise content that poses a clear and present danger.

“Before removal, we will investigate the content’s nature and potential impact. During removal, the MCMC issues takedown notices to relevant platforms. Escalation may not always lead to content removal by the platform.

“After removal, the MCMC monitors the situation and may take legal action if necessary,” he said.

He also pointed out that content creators can appeal to the commission if their content is removed.

“Reinstatement is possible if the content is deemed not genuinely provocative,” he added.

Due to the overwhelming amount of content that needs to be monitored, the MCMC is now working hard on an AI portal to assist it in identifying “provocativeness”.

“The AI portal uses natural language processing and image or audio recognition to analyse content for potential provocativeness across various formats. This project is currently at the proof-of-concept level.

“The AI portal’s accuracy is constantly being improved. Human analysts review flagged content to ensure appropriate action.

“The portal uses a combination of AI algorithms and human expertise. MCMC analysts will then review AI-flagged content to make the final decision,” said Mohamed Sulaiman.
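To give a concrete, if simplified, picture of the kind of workflow he describes, the sketch below shows a generic human-in-the-loop triage loop: an automated model scores content, items above a threshold are queued for review, and only an analyst's confirmation results in a takedown request. The scoring function, threshold and terms are hypothetical illustrations, not details of the MCMC's actual portal.

```python
# Illustrative human-in-the-loop flagging pipeline (hypothetical, not the MCMC system).
from dataclasses import dataclass
from typing import List

@dataclass
class Post:
    post_id: str
    text: str

@dataclass
class Flag:
    post: Post
    score: float                       # model's estimate that the content is provocative
    analyst_decision: str = "pending"  # "takedown", "no_action", or "pending"

def score_post(post: Post) -> float:
    """Hypothetical classifier stand-in; in practice this would be an NLP model,
    with image/audio models handling other formats."""
    sensitive_terms = {"riot", "attack", "boycott"}  # placeholder terms only
    hits = sum(term in post.text.lower() for term in sensitive_terms)
    return min(1.0, 0.4 * hits)

def triage(posts: List[Post], threshold: float = 0.5) -> List[Flag]:
    """AI stage: flag posts scoring above the threshold for human review."""
    return [Flag(p, s) for p in posts if (s := score_post(p)) >= threshold]

def review(flag: Flag, analyst_says_provocative: bool) -> Flag:
    """Human stage: only an analyst's confirmation leads to a takedown request."""
    flag.analyst_decision = "takedown" if analyst_says_provocative else "no_action"
    return flag

if __name__ == "__main__":
    queue = triage([Post("1", "Join the riot and attack tonight"),
                    Post("2", "Lovely weather in Petaling Jaya")])
    for flag in queue:
        # In the real workflow an analyst inspects the content; here we simulate the decision.
        print(review(flag, analyst_says_provocative=True))
```

The point of the structure is that the automated score only decides what reaches the review queue; the final decision field is set by a person, mirroring the "AI flags, analyst decides" division of labour described above.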

He said that as Malaysia has a unique social landscape due to its many cultures, races and religions, the AI model has to be adapted to better understand local context and informal language use.

“The MCMC is reviewing its AI model to ensure it fits the expected criteria to adequately support content identification in line with the legal framework.

“The goal is to strike a balance between content regulation and freedom of expression. At this stage, AI is deployed to assist in identification, and human expertise is still required to confirm,” said Mohamed Sulaiman.

Cybersecurity consultant Fong Choong Fook concurred, saying that AI on its own cannot detect “provocativeness” or distinguish truth from falsehood without human intervention.

“Essentially, AI is just a tool. AI cannot be used to validate whether posts are true or fake.

“You still need humans to train AI to know whether the news is true. For example, if there is a post saying a public figure has tendered his resignation, AI cannot tell if this is indeed true until a human informs the AI model that it is a fact or false.

“Having said that, AI is good at detecting articles written by AI itself. As to whether it is myth or truth, AI cannot do so completely on its own,” said Fong, who is also LGMS Bhd founder and executive chairman.
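Fong's point about humans training the AI maps onto ordinary supervised learning: a classifier can only repeat patterns found in human-labelled examples and cannot establish a new fact by itself. The snippet below is a minimal illustration of that idea; the posts, labels and model choice are invented for the example and do not describe any system used by the MCMC.

```python
# Minimal sketch of human-labelled training data driving a text classifier
# (illustrative only; examples and labels are invented).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Human fact-checkers supply the ground truth; the model has no way to verify
# on its own whether a claim such as "a public figure has resigned" is true.
posts = [
    "Minister X has tendered his resignation",   # labelled false by a human
    "Budget tabled in Parliament today",         # labelled true by a human
    "Minister X confirms he is staying on",      # labelled true by a human
    "Leaked letter shows Minister X quitting",   # labelled false by a human
]
labels = ["false", "true", "true", "false"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# The model can only echo patterns in the labelled data; a genuinely new claim
# still needs a human to establish whether it is fact or falsehood.
print(model.predict(["Minister X resigns, says source"]))
```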
