Meta overhauls rules on deepfakes, other altered media


FILE PHOTO: People walk behind a Meta Platforms logo during a conference in Mumbai, India, September 20, 2023. REUTERS/Francis Mascarenhas/File Photo

NEW YORK (Reuters) - Facebook owner Meta announced major changes to its policies on digitally created and altered media on Friday, ahead of U.S. elections poised to test its ability to police deceptive content generated by new artificial intelligence technologies.

The social media giant will start applying "Made with AI" labels in May to AI-generated videos, images and audio posted on its platforms, expanding a policy that previously addressed only a narrow slice of doctored videos, Vice President of Content Policy Monika Bickert said in a blog post.

Bickert said Meta would also apply separate and more prominent labels to digitally altered media that poses a "particularly high risk of materially deceiving the public on a matter of importance," regardless of whether the content was created using AI or other tools.

The new approach marks a shift in the company's treatment of manipulated content: from one focused on removing a limited set of posts to one that keeps the content up while providing viewers with information about how it was made.

Meta previously announced a scheme to detect images made with other companies' generative AI tools, using invisible markers built into the files, but did not give a start date at the time.

A company spokesperson told Reuters the new labeling approach would apply to content posted on Meta's Facebook, Instagram and Threads services. Its other services, including WhatsApp and Quest virtual reality headsets, are covered by different rules.

Meta will begin applying the more prominent "high-risk" labels immediately, the spokesperson said.

The changes come months before a U.S. presidential election in November that tech researchers warn may be transformed by new generative AI technologies. Political campaigns have already begun deploying AI tools in places like Indonesia, pushing the boundaries of guidelines issued by providers like Meta and generative AI market leader OpenAI.

In February, Meta's oversight board called the company's existing rules on manipulated media "incoherent" after reviewing a video of U.S. President Joe Biden posted on Facebook last year that altered real footage to wrongfully suggest he had behaved inappropriately.

The footage was permitted to stay up, as Meta's existing "manipulated media" policy bars misleadingly altered videos only if they were produced by artificial intelligence or if they make people appear to say words they never actually said.

The board said the policy should also apply to non-AI content, which is "not necessarily any less misleading" than content generated by AI, as well as to audio-only content and videos depicting people doing things they never actually did.

(Reporting by Katie Paul; Editing by Chizu Nomiyama)
