AI ‘supercharges’ online disinformation and censorship, report warns


A pedestrian looks at a mobile phone while walking in London’s central business district of Canary Wharf. — AFP

Rapid advances in artificial intelligence are boosting online disinformation and enabling governments to increase censorship and surveillance in a growing threat to human rights, a US non-profit said in a report published on Oct 4.

Global Internet freedom declined for the 13th consecutive year, with China, Myanmar and Iran having the worst conditions of the 70 countries surveyed by the Freedom on the Net report, which highlighted the risks posed by easy access to generative AI technology.

AI allows governments to “enhance and refine online censorship” and amplify digital repression, making surveillance and the creation and spread of disinformation faster, cheaper and more effective, said the annual report by Freedom House.

“AI can be used to supercharge censorship, surveillance, and the creation and spread of disinformation,” said Michael J. Abramowitz, president of Freedom House. “Advances in AI are amplifying a crisis for human rights online.”

By some estimates, AI-generated content could soon account for 99% or more of all information on the Internet, overwhelming content moderation systems that are already struggling to keep up with the deluge of misinformation, tech experts say.

Governments have been slow to respond: few countries have passed legislation on the ethical use of AI, and many justify AI-based surveillance technologies such as facial recognition on security grounds.

Generative AI-based tools were used in at least 16 countries to distort information on political or social issues between June 2022 and May 2023, the Freedom House report noted, adding that the figure is likely an undercount.

Meanwhile, in at least 22 countries, social media companies were required to use automated systems for content moderation to comply with censorship rules.

With at least 65 national-level elections taking place next year, including in Indonesia, India and the United States, misinformation can have major repercussions; deepfakes have already popped up from New Zealand to Turkey.

“Generative AI offers sophistication and scale to spread misinformation on a level that was previously unimaginable – it is a force multiplier of misinformation,” said Karen Rebelo, deputy editor at Boom Live, a fact-checking organisation based in Mumbai.

While AI is a “military-grade weapon in the hands of bad actors”, in India political parties and their proxies are the biggest spreaders of misinformation and disinformation, she said, and it is not in their interest to regulate AI.

While companies such as OpenAI and Google have imposed safeguards to reduce some overtly harmful uses of their AI-based chatbots, these can be easily breached, Freedom House said.

Even if deepfakes are quickly exposed, they can “undermine public trust in democratic processes, incentivise activists and journalists to self-censor, and drown out reliable and independent reporting”, the report noted.

“AI-generated imagery ... can also entrench polarisation and other existing tensions. In extreme cases, it could galvanise violence against individuals or whole communities,” it added.

For all its pitfalls, AI technology can be enormously beneficial, the report noted, so long as governments regulate its use and enact strong data privacy laws, while also requiring better misinformation-detection tools and safeguards for human rights.

“When designed and deployed safely and fairly, AI can help people evade authoritarian censorship, counter disinformation, and document human rights abuses,” said Allie Funk, Freedom House’s research director for technology and democracy.

For example, AI is being increasingly used in fact-checking and to analyse satellite imagery, social media posts, and images to flag human rights abuses in conflict zones. – Thomson Reuters Foundation
