American researchers have developed a tool capable of “fooling” the artificial intelligence models used today for automatic image generation. The idea is to help artists prevent their works from being scraped by AI.
Named Nightshade, the tool is designed to let artists make invisible alterations to the pixels of their works, in order to disrupt AI models that ingest such images without authorisation during training.
Its creators describe it as a potential “poison pill” for these models: should the practice become widespread, the corrupted training data could cause serious malfunctions and even render the models unusable.
The models in question are those underpinning the best-known image-generating AIs, such as Midjourney and DALL-E. In practice, poisoned images would no longer be interpreted correctly, with a dog being mistaken for a cat, a house for a cake, and so on.
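To give a sense of the general technique, the sketch below illustrates one common form of feature-space data poisoning; it is not Nightshade's published algorithm. A small pixel perturbation, bounded so it stays visually imperceptible, is optimised so that a surrogate image encoder reads a "dog" photo as having "cat"-like features. The encoder here is a hypothetical, randomly initialised stand-in, and the poison function and all parameters are illustrative assumptions.

import torch
import torch.nn as nn

# Hypothetical stand-in for a surrogate image encoder; a real attack would
# target the feature extractor of an actual image-generation model.
encoder = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, stride=2, padding=1),
    nn.ReLU(),
    nn.Conv2d(8, 16, kernel_size=3, stride=2, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
)

def poison(image, target_image, eps=8 / 255, steps=50, lr=0.01):
    # Nudge `image` within a small pixel budget (eps) so that its features
    # move toward those of `target_image`, which depicts a different concept.
    delta = torch.zeros_like(image, requires_grad=True)
    target_feat = encoder(target_image).detach()
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        poisoned = (image + delta).clamp(0, 1)
        loss = nn.functional.mse_loss(encoder(poisoned), target_feat)
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Keep the perturbation small enough to remain imperceptible.
        with torch.no_grad():
            delta.clamp_(-eps, eps)
    return (image + delta).detach().clamp(0, 1)

# Toy usage with random tensors standing in for real photos:
dog = torch.rand(1, 3, 64, 64)
cat = torch.rand(1, 3, 64, 64)
poisoned_dog = poison(dog, cat)
print((poisoned_dog - dog).abs().max())  # stays within the eps budget

In this sketch the poisoned image looks essentially unchanged to a person, but a model trained on many such images would learn a skewed association between the picture and its label.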
This tool, developed by a team of researchers at the University of Chicago, aims to address the concerns of many artists who find their practices disrupted by AI, and whose works are already being used as “models” for images generated in this fashion. The idea is therefore to fight back against the infringement of these artists' copyright and intellectual property rights.
Were it ever to be rolled out, this tool would join Glaze, another solution developed by the same researchers, which allows images to be manipulated so that they cannot be exploited by AI models during training.
Of course, these tools will only be effective as long as the models are unable to detect, recognise and undo the manipulations. The day AI is powerful enough to get past them, other solutions will need to be found, or artists’ original creations risk becoming fodder for an industry of model training and AI generation. – AFP Relaxnews