This tool protects online images from being scraped to train generative AI


Researchers have developed a tool capable of ‘fooling’ the machine learning models that currently enable AI to generate images. — AFP Relaxnews

Unveiled in the fall and now available for download, Nightshade is a tool capable of ‘fooling’ the artificial intelligence models currently used for automatic image generation. The idea is to help artists prevent their work from being misused without their knowledge or consent.

Unless specifically blocked, generative AI models can be trained on virtually any content found on the internet. This poses an obvious problem for artists and content creators who do not want their work used as part of the AI training process. Nightshade is one of the few effective tools designed to ‘trick’ these generative AI models.

The tool was developed by a team of researchers at the University of Chicago. It enables artists to invisibly alter the pixels of their work in order to disrupt AI models that use such images without authorisation in their training process. Its creators even describe it as a potential “poison pill” for these models: if the practice became widespread, it could corrupt their training data, causing serious malfunctions and even rendering the models unusable.

The models referred to are those on which the best-known image-creating AIs such as Midjourney and DALL-E are based. In practice, these images would no longer be interpreted correctly, with a dog being mistaken for a cat, a house for a cake and so on.
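As an illustration only, the core constraint described above – changing pixels so subtly that a human viewer sees no difference – can be sketched in a few lines of Python. Nightshade's actual algorithm optimises the perturbation so that a model's feature extractor misreads the image (dog as cat, and so on); the toy function below merely adds bounded random noise to show what "invisible to humans" means numerically. The function name and the epsilon bound are illustrative assumptions, not part of Nightshade itself.

```python
import numpy as np

def perturb_image(pixels: np.ndarray, epsilon: float = 4.0,
                  seed: int = 0) -> np.ndarray:
    """Toy sketch of an imperceptible pixel perturbation.

    Each channel value (0-255) is shifted by at most +/- epsilon,
    far below what a human viewer would notice. A real poisoning
    tool would choose the perturbation adversarially, not randomly.
    """
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-epsilon, epsilon, size=pixels.shape)
    # Clip back into the valid pixel range before casting.
    return np.clip(pixels + noise, 0, 255).astype(pixels.dtype)

# Flat grey test image: 64x64 pixels, 3 colour channels.
image = np.full((64, 64, 3), 128, dtype=np.uint8)
shaded = perturb_image(image)
```

The perturbed image has the same shape and dtype as the original, and no pixel moves by more than the epsilon bound, which is why the change is invisible to the eye yet can still matter to a model trained on the raw pixel values.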

Ultimately, Nightshade's aim is not so much to break AI models as to ensure that licensing images from their creators one day becomes the norm. The tool is now available to download from the university's website.

Nightshade joins Glaze, another solution previously developed by the same researchers, which allows images to be manipulated so that AI models cannot exploit them. Of course, these tools will only remain effective as long as the models are unable to detect, recognise and decode the manipulations. The day AI becomes powerful enough to get past them, other solutions will have to be found. – AFP Relaxnews
