Tech giants agree to child safety principles around generative AI


Amazon, Google, Meta, Microsoft and ChatGPT creator OpenAI are among the companies to have signed up to the principles designed to combat the creation and spread of AI-generated child sexual abuse material. — Photo: Philipp von Ditfurth/dpa

Some of the world’s biggest tech and AI firms have agreed to follow new online safety principles designed to combat the creation and spread of AI-generated child sexual abuse material.

Amazon, Google, Meta, Microsoft and ChatGPT creator OpenAI are among the companies to have signed up to the principles, called Safety By Design.

The commitments were drawn up by child online safety group Thorn and fellow nonprofit All Tech is Human, and see the firms pledge to develop, deploy and maintain generative AI models with child safety at the centre, in an effort to prevent the misuse of the technology in child exploitation.

The principles commit firms to developing, building and training AI models that proactively address child safety risks, for example by ensuring training data does not include child sexual abuse material, and to maintaining that safety after release by staying alert and responding to child safety risks as they emerge.

Generative AI tools such as ChatGPT have become the key area of development within the technology sector over the last 18 months, with an array of AI models and content generation tools developed and launched by the major firms.

The rapid rise has seen social media and other platforms flooded with AI-generated words, images and videos, with many online safety groups warning of the implications of more fake and misleading content being seen and spread online.

Earlier this year, the UK children’s charity the NSPCC warned that young people were already contacting Childline about AI-generated child sexual abuse material.

Speaking about the new agreed principles, Dr Rebecca Portnoff, vice president of data science at Thorn, said: “We’re at a crossroads with generative AI, which holds both promise and risk in our work to defend children from sexual abuse.

“I’ve seen first-hand how machine learning and AI accelerates victim identification and child sexual abuse material detection. But these same technologies are already, today, being misused to harm children.

“That this diverse group of leading AI companies has committed to child safety principles should be a rallying cry for the rest of the tech community to prioritise child safety through Safety by Design.

“This is our opportunity to adopt standards that prevent and mitigate downstream misuse of these technologies to further sexual harm against children. The more companies that join these commitments, the better we can ensure this powerful technology is rooted in safety while the window of opportunity is still open for action.”

