Predators exploit AI tools to generate images of child abuse


— Photo by Philipp Katzenberger on Unsplash

Child predators are exploiting generative artificial intelligence technologies to share fake child sexual abuse material online and to trade tips on how to avoid detection, according to warnings from the National Center for Missing and Exploited Children (NCMEC) and information seen by Bloomberg News.

In one example, users of a prominent child predation forum shared 68 sets of artificially generated images of child sexual abuse during the first four months of the year, according to Avi Jager, head of child safety and human exploitation at ActiveFence, a content moderation startup. The figure marks an increase from the 25 posts that ActiveFence observed during the final four months of 2022, said Jager, who declined to name the forum for safety reasons.

The images appear to have been created with artificial intelligence technology and don’t show real humans being harmed, ActiveFence said in a report published Tuesday.

In 2022, the NCMEC’s CyberTipline received more than 32 million reports of suspected child sexual exploitation, almost all of them concerning apparent child sexual abuse material, or CSAM. The tipline, run by the nonprofit, which was established by Congress, is the centralised system for reporting child exploitation in the US.

Forum users have posted “thousands” of times about how so-called GenAI tools could be used to produce other predatory content, including scripts for creating fake personas to earn minors’ trust and tips on where to find vulnerable targets, Jager said.

The use of this emerging technology to generate disturbing images depicting the sexual abuse of children comes as the NCMEC already receives roughly 80,000 reports of traditional abuse photographs and content daily, according to Yiota Souras, the organisation’s chief legal officer.

"We’re on the precipice of something new,” she said.

Although Souras cautioned that there hadn’t been an overwhelming surge yet, she said that as companies work to scrub their platforms of CSAM, they are now poised to encounter a wave of abuse images created by algorithms.

"We anticipate that this is going to get bigger,” Jager said. "This is just getting started.”

Many of the more popular GenAI tools block certain problematic keywords, but Jager said he observed people sharing tips on how to phrase prompts so as to evade the guardrails in place. Users on such websites recommended typing in languages other than English, as image and text generation tools tended to apply weaker filters to non-English phrases, he said.

Bloomberg News found one open web paedophile forum hosting a guide, posted last October, to generating fake child sexual abuse material with Stable Diffusion, an image generation tool created by London-based billion-dollar startup Stability AI.

Stability AI "strictly prohibits any misuse for illegal or immoral purposes across our platforms and our policies are clear that this includes CSAM,” said Motez Bishara, the company’s director of communications.

"Over the past seven months, we have taken numerous steps to significantly mitigate the risk of exposure to Not Safe For Work content from our models,” he said. "These include installing filters that block unsafe and inappropriate content in all our imaging applications, APIs and models, and training our imaging models on datasets that have filtered NSFW images.”

Reports of artificially generated child abuse material are already coming in. The NCMEC is in talks with lawmakers and platforms over what to do about what the centre fears may become “a flood” of such content, Souras said.

Souras said the organisation has observed a rise in sextortion, enticement and sex trafficking in which online predators appear to use AI-generated scripts to lure their victims. Another concern is how to distinguish AI-generated abuse content from real photography, a task that could drain law enforcement and content moderation resources, she said.

Generating fake images of children engaged in sexually explicit conduct is illegal in the US, according to the Department of Justice, and is likewise illegal in countries such as the UK, Australia and Canada. – Bloomberg
