People are disinformation’s biggest problem, not AI, experts say

Lawmakers, fact-checking organisations and some tech companies are working to combat a new wave of AI-generated disinformation online, but experts say these efforts are undermined by the public's distrust of institutions and a general lack of literacy in spotting fake images, videos and audio clips.

"Social media and human beings have made it so that even when we come in, fact check and say, ‘nope, this is fake,’ people say, ‘I don't care what you say, this conforms to my worldview,’” said Hany Farid, an expert in deepfake analysis and a professor at the University of California, Berkeley.

"Why are we living in that world where reality seems to be so hard to grip?” he said. "It's because our politicians, our media outlets and the internet have stoked distrust.”

Farid was speaking on the first episode of a new season of the Bloomberg Originals series AI IRL.

For years, experts have warned that artificial intelligence could accelerate the spread of disinformation. However, the pressure to do something about it increased notably this year after the introduction of a new crop of powerful generative AI tools that make it cheap and easy to produce visuals and text. In the US, there are fears that AI-generated disinformation could sway the 2024 presidential election. Meanwhile, in Europe, the biggest social media platforms are required under a new law to fight the spread of disinformation on their services.

So far, the reach and influence of AI-generated disinformation remains unclear, but there is cause for concern. Bloomberg reported last week that misleading AI-generated deepfake voices of politicians were being circulated online days ahead of a narrowly contested vote in Slovakia. Some politicians in the US and Germany have also shared AI-generated images.

Rumman Chowdhury, a fellow at the Berkman Klein Center for Internet & Society at Harvard University and previously a director at X, the company formerly known as Twitter, agreed that human fallibility is part of the problem in combating disinformation.

"You can have bots, you can have malicious actors,” she said, "but actually a very big percent of the information online that’s fake is often shared by people who didn't know any better.”

Chowdhury said Internet users are generally savvier at spotting fake text posts, thanks to years of being confronted with suspicious emails and social media posts. But as AI makes realistic fake images, audio and video easier to produce, "there is this level of education that people need.”

"If we see a video that looks real – for example, a bomb hitting the Pentagon – most of us will believe it,” said said. "If we were to see a post and someone said, ‘Hey, a bomb just hit the Pentagon,’ we are actually more likely to be sceptical of that because we've been trained more on text than video and images.” – Bloomberg
