Bollywood star or deepfake? AI floods social media in Asia


Bollywood actors Ranbir Kapoor (left), Bobby Deol (centre) and Rashmika Mandanna (right) pose for a photograph in Mumbai. — AFP

There was the Bollywood star in skin-tight lycra, the Bangladeshi politician filmed in a bikini and the young Pakistani woman snapped with a man.

None was real, but all three were credible enough to unleash lust, vitriol and, allegedly, even a murder, underlining the sophistication of generative artificial intelligence and the threats it poses to women across Asia.

The two videos and the photo were deepfakes, and they went viral in a vibrant social mediascape that is struggling to come to grips with technology capable of creating convincing copies that can upend real lives.

“We need to address this as a community and with urgency before more of us are affected by such identity theft,” Indian actor Rashmika Mandanna said in a post on X, formerly Twitter, that has garnered more than 6.2 million views.

She is not the only Bollywood star to be cloned and attacked on social media, with top actors including Katrina Kaif, Alia Bhatt and Deepika Padukone also targeted with deepfakes.

The lycra video, said Mandanna, was “extremely scary not only for me, but also for each one of us who today is vulnerable to so much harm because of how technology is being misused”.

While digitally manipulated images and videos of women were once easy to spot, usually lurking in the dark corners of the Internet, the explosion in generative AI tools such as Midjourney, Stable Diffusion and DALL-E has made it easy and cheap to create and circulate convincing deepfakes.

More than 90% of deepfake videos online are pornographic, according to tech experts, and most are of women.

While there are no separate data for South Asian countries, digital rights experts say the issue is particularly challenging in conservative societies, where women have long been harassed online and abuse has gone largely unpunished.

Social media firms are struggling to keep up.

Google’s YouTube and Meta Platforms – which owns Facebook, Instagram and WhatsApp – have updated their policies, requiring creators and advertisers to label all AI-generated content.

But the onus is largely on victims – usually girls and women – to take action, said Rumman Chowdhury, an AI expert at Harvard University who previously worked on reducing harm at Twitter.

“Generative AI will regrettably supercharge online harassment and malicious content ... and women are the canaries in the coal mine. They are the ones impacted first, the ones on whom the technologies are tested,” she said.

“It is an indication to the rest of the world to pay attention, because it’s coming for everyone,” Chowdhury told a recent United Nations briefing.

Deepfakes and the law

As deepfakes have proliferated worldwide, there are growing concerns – and rising instances – of their use in harassment, scams and sextortion.

Regulations have been slow to follow.

The US Executive Order on AI touches on dangers posed by deepfakes, while the European Union’s proposed AI Act would require greater transparency and disclosure from providers.

Last month, 18 countries – including the United States and Britain – unveiled a non-binding agreement on keeping the wider public safe from AI misuse, including deepfakes.

Among Asian nations, China requires providers to use watermarks and report illegal deepfakes, while South Korea has made it illegal to distribute deepfakes that harm “public interest”, with potential imprisonment or fines.

India is taking a tough stance as it drafts new rules.

IT Minister Ashwini Vaishnaw has said social media firms must remove deepfakes within 36 hours of receiving a notification, or risk losing their safe-harbour status that protects them from liability for third-party content.

But the focus should be on “mitigating and preventing incidents, rather than reactive responses”, said Malavika Rajkumar at the advocacy group IT for Change.

While the Indian government has indicated it may force providers and platforms to disclose the identity of deepfake creators, “striking a balance between privacy protection and preventing abuse is key,” Rajkumar added.

Women targeted

Deepfakes of women and other vulnerable communities such as LGBTQ+ people – especially sexual images and videos – can be particularly dangerous in deeply religious or conservative societies, human rights activists say.

In Bangladesh, deepfake videos of female opposition politicians – Rumin Farhana in a bikini and Nipun Roy in a swimming pool – have emerged ahead of an election on Jan 7.

And last month, an 18-year-old woman was allegedly shot dead by her father and uncle in a so-called honour killing in Pakistan’s remote Kohistan province, after a photograph of her with a man went viral. Police say the image was doctored.

Shahzadi Rai, a member of Pakistan’s Karachi Municipal Council, who has been the target of abusive trolling with deepfake images, has said they could exacerbate online gender-based violence and “seriously jeopardise” her career.

Even if audiences are able to distinguish between a real image and a deepfake, the woman’s integrity is questioned, and her credibility may be damaged, said Nighat Dad, founder of the non-profit Digital Rights Foundation in Pakistan.

“The threat to women’s privacy and safety is deeply concerning,” she said, particularly as disinformation campaigns gain steam ahead of an election scheduled for Feb 8.

“Deepfakes are creating an increasingly unsafe online environment for women, even non-public figures, and may discourage women from participating in politics and online spaces,” she said.

In several countries including India, entrenched gender biases already affect the ability of girls and young women to use the Internet, a recent report found.

Deepfakes of powerful Bollywood stars only underline the risk that AI poses to all women, said Rajkumar.

“Deepfakes have affected women and vulnerable communities for a long time; they have gained widespread attention only after popular actresses were targeted,” she said.

The heightened focus now should push “platforms, policymakers, and society at large to create a safer and more inclusive online environment”, she added. – Thomson Reuters Foundation
