Opinion: Will 2024 be the year fake news destroys democracy?


AI-based disinformation has already begun to proliferate – and gets harder to spot as fake with every passing month. — Photo by Jorge Franganillo on Unsplash

In 2024, democracy will face a test for which it is unready. For the first time since the Internet age began, the world’s four largest electoral blocs – India, the European Union, the US, and Indonesia – will hold general elections in the same year. Almost a billion people may go to the polls in the next 12 months, amid a storm of disinformation and digital manipulation unlike anything the world has yet seen.

The stakes are extraordinarily high for the future of democracy itself. In the US, the electoral favourite appears to revel in the possibility of becoming a dictator. In the EU, the far right is poised to surge continent-wide. Indonesia’s front-runner is a former general once accused of human-rights violations. And in India, a beleaguered opposition faces its last chance to stave off what may otherwise turn into decades of one-party rule.

We have known since at least 2016 that elections in the digital age are unusually vulnerable to manipulation. While officials responsible for election integrity have been working diligently since then, they are fighting the last war. Former President Donald Trump’s 2016 victory and other votes around that period were influenced by carefully seeded narratives, bot farms, and the like. In response, a small army of fact-checkers emerged around the world, and mechanisms to keep “fake news” out of the formal press multiplied.

The experience of India – which, given that it has the most voters, is also the world’s largest lab for election malpractice – demonstrates the limits of this work. The more scrupulous fact-checkers are, the more easily they can be overwhelmed by a flood of fake news. They are also, unfortunately, human, and therefore all too easy to discredit, however unfairly.

Some new ideas have begun to emerge. Even Elon Musk’s critics appear fond of the “community notes” he has added to X, formerly known as Twitter, which tag viral tweets with crowd-sourced fact-checks. Because these are crowd-sourced, they respond organically to the amount of fake news in circulation and, because they are not associated with any individual group of fact-checkers, they are harder to dismiss as biased.

Yet technology has moved even faster. AI-based disinformation has already begun to proliferate – and gets harder to spot as fake with every passing month. Oddly, stopping such messages from going viral is harder when they don’t immediately come across as offensive or particularly pointed. In Indonesia, for example, a TikTok video that appeared to show defence minister and presidential candidate Prabowo Subianto speaking Arabic was viewed millions of times. It was an AI-generated deepfake meant to bolster his diplomatic (and possibly his Islamic) credentials.

Nor can we assume that an increasingly digital-savvy electorate will be able to navigate this new information landscape without help. If there’s one thing we have learned from the information war that has accompanied various conflicts in 2023, it’s that people who grew up with the Internet are not the best equipped to identify even obvious propaganda. In fact, they seem to be the least able to tell fact from fiction.

The threat to democracy is transnational. The platforms being used are global; so is the messaging being deployed. Its defence, therefore, cannot be national. For one thing, it is not a task any government can accomplish alone. For another, it is not a task any one government can be trusted to pursue on its own.

But every country takes a different approach to securing its elections, and both would-be manipulators and the platforms they exploit have taken advantage of this disunity. The level of disinformation that will emerge over the coming year will wash away our individual defences unless we adopt a more strategic and unified approach.

We do not yet know which mechanisms – crowd-sourcing, transnational regulation of platforms, or shared norms on speech and de-platforming – will work best. What we will need, however, is to share information swiftly on which measures do seem to work, and to apply unified pressure on platforms to adopt them.

We can learn from each other: India’s TikTok ban seems to have been more effective than expected, for example. But we must also share a commitment to transparency. Regulators in India and Indonesia must be convinced that US-based platforms’ online norms are designed as much to protect their national cohesion and political integrity as they are to defend northern Californian speech shibboleths.

Above all else, we need to work together. The defence of democracy has always been one of the major reasons for multilateral action. In 2024, that defence must include the protection of our national elections. – Bloomberg Opinion/Tribune News Service
