People are disinformation’s biggest problem, not AI, experts say



Lawmakers, fact-checking organisations and some tech companies are working to combat the threat of a new wave of AI-generated disinformation online, but experts say these efforts are undermined by the public's distrust of institutions and a general lack of literacy in spotting fake images, videos and audio clips online.

"Social media and human beings have made it so that even when we come in, fact check and say, ‘nope, this is fake,’ people say, ‘I don't care what you say, this conforms to my worldview,’” said Hany Farid, an expert in deepfake analysis and a professor at the University of California, Berkeley.

"Why are we living in that world where reality seems to be so hard to grip?” he said. "It's because our politicians, our media outlets and the internet have stoked distrust.”

Farid was speaking on the first episode of a new season of the Bloomberg Originals series AI IRL.

Experts have warned for years of the potential for artificial intelligence to accelerate the spread of disinformation. However, the pressure to do something about it increased notably this year after the introduction of a new crop of powerful generative AI tools that make it cheap and easy to produce visuals and text. In the US, there are fears that AI-generated disinformation could influence the 2024 presidential election. Meanwhile, in Europe, the biggest social media platforms are required under a new law to fight the spread of disinformation on their platforms.

So far, the reach and influence of AI-generated disinformation remains unclear, but there is cause for concern. Bloomberg reported last week that misleading AI-generated deepfake voices of politicians were being circulated online days ahead of a narrowly contested vote in Slovakia. Some politicians in the US and Germany have also shared AI-generated images.

Rumman Chowdhury, a fellow at the Berkman Klein Center for Internet & Society at Harvard University and previously a director at X, the company formerly known as Twitter, agreed that human fallibility is part of the problem in combating disinformation.

"You can have bots, you can have malicious actors,” she said, "but actually a very big percent of the information online that’s fake is often shared by people who didn't know any better.”

Chowdhury said Internet users are generally savvier at spotting fake text posts thanks to years of being confronted with suspicious emails and social media posts. But as AI makes more realistic fake images, audio and video possible, "there is this level of education that people need.”

"If we see a video that looks real – for example, a bomb hitting the Pentagon – most of us will believe it,” she said. "If we were to see a post and someone said, ‘Hey, a bomb just hit the Pentagon,’ we are actually more likely to be sceptical of that because we've been trained more on text than video and images.” – Bloomberg
