Opinion: Social media companies must curtail the spread of misinformation


About 500 hours of video is uploaded to YouTube every minute. The online video-sharing platform houses more than 800 million videos and is the second most visited site in the world, with 2.5 billion active monthly users.

Given the deluge of content flooding the site every day, one would surmise that YouTube must have an army of people guarding against the spread of misinformation — especially in the wake of the Jan 6, 2021, insurrection that was fuelled by lies on social media.

Well, not exactly.

Following recent cutbacks, there is just one person in charge of misinformation policy worldwide, according to a recent report in the New York Times. This is alarming, since fact-checking organisations have said YouTube is a major pipeline in the spread of disinformation and misinformation.

YouTube is owned by Google. The cutbacks were part of a broader reduction by Alphabet, Google’s parent company, which shed 12,000 jobs in an effort to boost profits that stood at around US$60bil (RM266bil) last year.

YouTube is not the only social media company easing some of the already limited safeguards put in place following the Russian disinformation campaign that helped elect Donald Trump in 2016.

Meta, which owns Facebook, Instagram and WhatsApp, slashed 11,000 jobs last fall and is reportedly preparing more layoffs.

Those cuts came as Facebook, which made US$23bil (RM102bil) last year, quietly reduced its efforts to thwart foreign interference and voting misinformation before the November midterm elections.

Facebook also shut down an examination into how lies are amplified in political ads on the social media site and indefinitely banned a team of New York University researchers from the site.

Twitter implemented even deeper cuts, laying off 50% of its employees days before the midterm election in November. The cuts included employees in charge of preventing the spread of misinformation. Additional layoffs in the so-called trust and safety team occurred in January.

It’s not just the spread of political misinformation that is misleading and dividing the public. Twitter recklessly ended its ban on Covid-19 misinformation, which will likely lead to more needless deaths.

Hate speech has also exploded on Twitter since Elon Musk purchased the company for US$44bil (RM195bil) in October.

In the weeks after Musk took control of Twitter, antisemitic posts jumped more than 61%. Slurs against Black people soared by more than 200%, while slurs against gay men increased by 58%. The hate spewed online has been linked to an increase in violence toward people of colour and immigrants around the world.

But Musk says he is a free speech absolutist — except when it impacts him. The billionaire temporarily suspended the accounts of several journalists and blocked others who rebuked him on Twitter. He also fired employees at SpaceX, one of his other companies, who criticised him.

More to the point, Musk fails to understand that freedom of speech is not absolute. As much as this board supports and cherishes the First Amendment, there are rules and regulations surrounding what can be said.

For example, you can’t harass or violate the rights of others. Just ask Alex Jones. The conspiracy theorist and Infowars founder was ordered to pay nearly US$1bil (RM4.44bil) in damages to the families of eight victims of the Sandy Hook Elementary School shooting for his repeated lies that the massacre was a hoax.

To be sure, the First Amendment makes it difficult to regulate social media companies. But doing nothing is not the answer. The rise of artificial intelligence to create sophisticated chatbots such as ChatGPT and deepfake technology will worsen the spread of fake news, further threatening democracy. Policymakers must soon strike a balance between the First Amendment and regulating social media.

Texas and Florida have already muddied the regulation debate by passing laws that will upend the already limited content moderation efforts by social media companies and make the internet an even bigger free-for-all. The US Supreme Court has put off deciding whether to take up the cases, leaving the state laws in limbo for now.

Meanwhile, the European Union is pushing forward with its own landmark regulations called the Digital Services Act. The measure takes effect next year and aims to place substantial content moderation requirements on social media companies to limit false information, hate speech, and extremism.

The spread of misinformation and disinformation is a growing threat to civil society. Social media companies can’t ignore their responsibility. – The Philadelphia Inquirer/Tribune News Service
