People are disinformation’s biggest problem, not AI, experts say


So far, the reach and influence of AI-generated disinformation remains unclear, but there is cause for concern. — Reuters

Lawmakers, fact-checking organisations and some tech companies are working to combat the threat of a new wave of AI-generated disinformation online, but experts say these efforts are undermined by the public's distrust of institutions and a general lack of literacy in spotting fake images, videos and audio clips online.

"Social media and human beings have made it so that even when we come in, fact check and say, ‘nope, this is fake,’ people say, ‘I don't care what you say, this conforms to my worldview,’” said Hany Farid, an expert in deepfake analysis and a professor at the University of California, Berkeley.

"Why are we living in that world where reality seems to be so hard to grip?” he said. "It's because our politicians, our media outlets and the internet have stoked distrust.”

Farid was speaking on the first episode of a new season of the Bloomberg Originals series AI IRL.

Experts have warned for years of the potential for artificial intelligence to accelerate the spread of disinformation. However, the pressure to do something about it increased notably this year after the introduction of a new crop of powerful generative AI tools that make it cheap and easy to produce visuals and text. In the US, there are fears that AI-generated disinformation could impact the 2024 presidential election. Meanwhile, in Europe, the biggest social media platforms are required under a new law to fight the spread of disinformation on their platforms.

So far, the reach and influence of AI-generated disinformation remains unclear, but there is cause for concern. Bloomberg reported last week that misleading AI-generated deepfake voices of politicians were being circulated online days ahead of a narrowly contested vote in Slovakia. Some politicians in the US and Germany have also shared AI-generated images.

Rumman Chowdhury, a fellow at the Berkman Klein Center for Internet & Society at Harvard University and previously a director at X, the company formerly known as Twitter, agreed that human fallibility is part of the problem in combating disinformation.

"You can have bots, you can have malicious actors,” she said, "but actually a very big percent of the information online that’s fake is often shared by people who didn't know any better.”

Chowdhury said Internet users are generally savvier at spotting fake text posts thanks to years of being confronted with suspicious emails and social media posts. But as AI makes more realistic fake images, audio and video possible, "there is this level of education that people need.”

"If we see a video that looks real – for example, a bomb hitting the Pentagon – most of us will believe it,” she said. "If we were to see a post and someone said, ‘Hey, a bomb just hit the Pentagon,’ we are actually more likely to be sceptical of that because we've been trained more on text than video and images.” – Bloomberg

