Researchers: Meta struggles to curb hate speech before US vote


Supporters holding flags during the Election Night rally for US Democratic presidential nominee Kamala Harris, in Washington, US. Some Facebook posts using hate speech are not being removed promptly ahead of the US vote, researchers say. — Reuters

LOS ANGELES: Meta – the owner of Facebook and Instagram – is struggling to fully contain and address hate speech ahead of the US election, according to research shared exclusively with the Thomson Reuters Foundation.

Non-profit Global Witness tested how Facebook was dealing with hate speech ahead of the presidential vote by analysing 200,000 comments on the pages of 67 US Senate candidates between Sept 6 and Oct 6.

When Global Witness researchers used Facebook’s reporting tool to flag 14 comments that they considered particularly egregious violations of Meta’s hate speech rules in its "community standards", Meta took days to react.

The comments flagged by the researchers referred to Jews as “inbred and parasitic”, and called one political candidate a “lezbo pig”.

Meta removed some but not all of the 14 comments from Facebook after Global Witness emailed the company directly, the researchers said.

"There was a real failure to promptly review these posts," said Ellen Judson, a researcher with Global Witness who oversaw the test.

The findings come as Meta has long faced criticism from researchers, watchdog groups, and lawmakers for not fostering a healthy information ecosystem during elections across the globe.

As recently as April, the European Commission opened an investigation to assess whether Meta may have breached EU online content rules ahead of the European Parliament elections.

Judson said Facebook’s handling of the comments flagged by Global Witness points to a breakdown in how the platform deals with hate speech.

In an email, a spokesperson for Meta said the Global Witness work was "based on a tiny sample of comments and we removed those that violate our policies".

"This is not reflective of the work our teams – including the 40,000 people working on safety and security – are doing to keep our platform safe ahead of the election," the spokesperson said.

Facebook’s community standards say content that "attacks individuals based on their race, ethnicity, national origin, sex, gender, gender identity, sexual orientation, religious affiliation, disabilities, or diseases is considered a violation".

While it is not clear how many users were exposed to the hate speech, Judson said the impact could be large.

"Online abuse can have negative psychological impact and can make people reconsider being in politics. For outside observers, seeing that kind of discourse can perhaps give them the impression that this isn’t a place for me," she said.

"A small amount of abuse can still do a lot of harm."

Lack of investment?

The failure is part of a broader lack of investment in election preparedness ahead of the upcoming US vote, said Theodora Skeadas, a former public policy official at Twitter – now X.

"They have laid off staff and decreased resources towards monitoring political content," said Skeadas, now CEO of Tech Policy Consulting, which addresses issues including AI governance and information integrity.

Over the past several years, Meta has reduced its headcount across multiple teams, including through company-wide layoffs.

Facebook and Instagram are the second and third most popular social media platforms in the United States, according to the US-based Pew Research Center, with 68% of US adults reporting that they use Facebook, and 47% saying they use Instagram.

Over a third of users rely on the platforms to get information about current events, the Pew Research Center found.

According to Meta, the prevalence of content that violates its hate speech rules is very low – about 0.02% of content views on Facebook and 0.02% to 0.03% on Instagram, meaning that for every 10,000 pieces of content viewed, roughly two to three would contain hate speech.

The Meta spokesperson told the Thomson Reuters Foundation that in the second quarter of 2024, Facebook took action against 7.2 million pieces of content for violating hate speech policies and 7.8 million pieces of content for violating its bullying and harassment policies.

But Jeff Allen, a former data scientist at Meta and co-founder of the non-profit Integrity Institute, said the automated systems used to flag hate speech often miss violations: they can fail to grasp the context of a comment, or be fooled by slang or oblique language.

Allen also said platforms like Facebook are wary of being too heavy-handed in removing posts, as doing so can reduce the amount of time people spend online.

"If you are more aggressive about taking down content, you see engagement go down – there are trade-offs," he said.

In a February blog post outlining its strategy for elections, Meta’s president of global affairs Nick Clegg wrote: "No tech company does more or invests more to protect elections online than Meta – not just during election periods but at all times."

Clegg said the company invested more than US$20bil in the effort leading into the 2024 US presidential election, highlighting Meta’s commitment to making political advertising transparent and to strengthening teams hunting down hate groups on the platform.

Transparency

Despite Facebook’s pledge to help safeguard elections, a number of recent reports point to instances where false advertising, election misinformation and hate speech have slipped through its defences.

In October, Global Witness tested major social media platforms’ advertising systems and found that some paid ads containing election misinformation were still being accepted and posted on Facebook, even though the platform had improved its review process.

Forbes reported in October that Facebook was running more than a million dollars’ worth of ads falsely claiming that the US election could be postponed or rigged, while the Bureau of Investigative Journalism published a report in November saying ecommerce companies were selling merchandise via Facebook that carried similar falsehoods. In both cases, Meta said it was reviewing the matter, according to the reports.

Researchers like Allen say that Meta could be much more transparent about how it tackles hate speech – by releasing data on how many users are exposed, explaining how often posts are submitted to human reviewers, and disclosing more about how its automated systems work.

Meta phased out "CrowdTangle", a tool widely used by outside researchers to track viral misinformation on the platform, in August. The move fueled complaints from groups and experts who relied on it, but the company said it had introduced new tools that give a fuller picture of activity on its platform.

"We need metrics on the scale of harms," Allen said.

Global Witness said that Facebook did not engage with it at all on the findings of its research – leaving it in the dark about how hate speech was being handled in the days before the US election.

Without more transparency, it is impossible to know how seriously the platforms are taking abuse at this critical time, leaving it to outside researchers to flag violations of the company’s own rules, said Judson, with Global Witness.

"For them, it’s always a ‘catch-up’ situation, it’s not pro-active," she said. – Thomson Reuters Foundation
