AI learning how to push our buttons


Artificial intelligence (AI) is exponentially magnifying the fear, anger and hate that social media has already weaponised, journalist and Nobel laureate Maria Ressa has warned.

“If the first generative AI was (about) fear, anger and hate – weaponising those – this one now leads to weaponising intimacy,” Ressa, who won the Nobel Peace Prize in 2021 with Russian journalist Dmitry Muratov for standing up to authoritarian regimes, told The Straits Times.

Ressa, who founded the Philippine online news site Rappler, was in Singapore this weekend for the New.Now.Next Media Conference organised by the Asia chapter of the Asian American Journalists Association.

It was hosted at Google’s Singapore office from Thursday to Saturday.

She said the first iteration of AI – seen in machine-learning programmes – was meant to get users addicted to scrolling through social media, so that companies such as Facebook and Twitter could make more money from targeted ads and harvested data.

But what these programmes learnt was that lies “spread six times faster than really boring facts”, she said, adding that the algorithms that power social media platforms keep churning out lies.

“What that does to you is that... it pumps you with toxic sludge – fear, anger, hate – and when you tell a lie a million times, it becomes a fact,” Ressa said.

This, she said, has helped populist and autocratic leaders rise to power.

Ressa and Rappler had been in the crosshairs of a strongman, Rodrigo Duterte, who was elected President of the Philippines in 2016. He was aided by a massive social media campaign that pushed his populist platform, anchored by anti-crime rhetoric.

She is currently facing civil and criminal cases lodged by the Justice Ministry and regulators under Duterte that she sees as retaliation by the former president for Rappler’s critical coverage of his brutal war on the narcotics trade.

His anti-drug crusade left more than 20,000 suspects dead, killed in police raids or by unnamed vigilantes.

Ressa added that the impact goes beyond politics, citing a report issued by United States Surgeon-General Vivek Murthy last Tuesday that showed growing evidence that social media use may seriously harm children.

Dr Murthy said while social media can help children and adolescents find a community to connect with, it also contains “extreme, inappropriate, and harmful content” that can “normalise” self-harm and suicide.

Ressa said the new generation of AI – chatbots such as ChatGPT, created by Microsoft-backed OpenAI, and Google’s Bard – would spread lies even faster, more broadly and more intimately if “released into the wild” without guardrails.

“It’s like open-sourcing the Manhattan Project,” she said, referring to research that led to the development of the atom bomb.

Wrongly used, she warned, AI would allow “bad actors” to stoke more online hate and violence that could spill over to the real world, prettify the resumes of despots, and serve up even more “micro-targeted”, invasive ads.

She said even those responsible for coding these chatbots warn that there is a “10% or greater chance that this leads to an extinction-level event, not hitting another species, but humanity”.

“It’s like releasing something as dangerous as nuclear fission into the hands of people with no guardrails,” she said.

Ressa said OpenAI’s own chief executive Sam Altman has told US lawmakers about how dangerous AI can be.

“But no one asked him, ‘If it’s so dangerous, why are you releasing it?’” she said.

Microsoft’s chief economist Michael Schwarz has warned of the risk of “bad actors” causing “real damage” by making use of AI.

“I’m quite confident that, yes, AI will be used by bad actors, and, yes, it will cause real damage,” he said at an event hosted by the World Economic Forum on May 3.

“We have to put safeguards” in place, he said, to prevent hucksters and tyrants from profiting off AI with money-making scams and vote rigging.

Ressa said AI, as it is shaping up, has to be reined in, along with the rest of the technology sector, which she described as the “least regulated industry in the world”.

“The problem with a godlike tech is that it is being used for profit – and that’s what we need to stop. This is where governments need to come in and protect their citizens,” she said. — The Straits Times/ANN

