Nightshade

Computer scientists at the University of Chicago have developed a new tool called Nightshade that is designed to “poison” digital artwork, making it detrimental as training data for image-generating AI models that engage in intellectual property theft, such as DALL-E, Midjourney, and Stable Diffusion.

Where did Nightshade come from? Nightshade was developed as part of the Glaze Project, which is run by a group of computer scientists from the University of Chicago led by professor Ben Zhao. The group previously developed Glaze, a tool designed to alter how AI training algorithms perceive the style of digital artwork to confuse the models.

How does it work? The program uses the open-source machine learning framework PyTorch to tag images at the pixel level. The tags aren’t obvious to humans looking at the image, but AI models perceive them differently, which corrupts the images as training data.
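The exact perturbation scheme isn’t described here, but the general principle, small pixel changes a human can’t see that are aimed directly at what a model computes, can be sketched in plain Python. Everything below (the pixel values, the stand-in “feature extractor,” the perturbation rule) is a hypothetical toy for illustration, not the actual Nightshade algorithm:

```python
# Toy illustration of pixel-level perturbation: changes far too small for a
# human to notice can still be aimed at whatever a model computes from the
# pixels. All names and numbers are made up; this is NOT Nightshade itself.

def toy_score(pixels):
    # Stand-in for a model's feature extractor: an alternating-sign
    # weighted sum of pixel values.
    return sum((p if i % 2 == 0 else -p) for i, p in enumerate(pixels))

def perturb(pixels, strength=2):
    # Nudge each pixel by +/- `strength` in whichever direction moves
    # toy_score the most (the same trick gradient-sign attacks use against
    # real models), clamped to the valid 0-255 range.
    return [
        min(255, max(0, p + (strength if i % 2 == 0 else -strength)))
        for i, p in enumerate(pixels)
    ]

image = [120, 121, 119, 122, 118, 120, 121, 119]  # one row of pixel values
shaded = perturb(image)

# No pixel moved by more than 2 out of 255 -- invisible to a human viewer...
print(max(abs(a - b) for a, b in zip(image, shaded)))  # 2
# ...yet the model-facing score shifts by 2 per pixel, 16 in total.
print(toy_score(shaded) - toy_score(image))  # 16
```

A real attack replaces `toy_score` with a neural network and uses its gradients to pick the per-pixel direction, but the asymmetry is the same: tiny for human eyes, large for the model.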

What’s the difference between Glaze and Nightshade? Glaze convinces training models that they’re seeing a different artistic style than a human looking at the image would see. For example, Glaze can convince an AI model that a “glazed” charcoal drawing is actually an oil painting, while any human looking at the image would still see a charcoal drawing. Nightshade goes further: instead of confusing styles, it convinces models that the content of an image is something other than what a human sees. A “shaded” image could convince an AI model that a photo of a cat is actually a photo of a dog. A model trained on that data would then, when a user prompts for a picture of a cat, return an image of a dog instead.
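The cat-to-dog effect can be sketched with a toy nearest-centroid “model” in plain Python. The 2-D feature vectors below are invented for illustration: the poisoned “cat” images look like cats to a human, but the features the model extracts from them sit in dog territory, so the model’s learned notion of “cat” drifts toward “dog”:

```python
# Toy sketch of data poisoning with hypothetical feature vectors.
# A nearest-centroid "model" learns one centroid per label.

def centroid(vectors):
    # Average each coordinate across a list of feature vectors.
    return [sum(xs) / len(xs) for xs in zip(*vectors)]

def dist(a, b):
    # Euclidean distance between two feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Hypothetical 2-D features the *model* extracts from training images.
dog_features = [[9.0, 1.0], [8.5, 1.5], [9.5, 0.5]]
clean_cat_features = [[1.0, 9.0], [1.5, 8.5], [0.5, 9.5]]
# Poisoned images labeled "cat": humans see cats, but the features the
# model extracts are dog-like.
poisoned_cat_features = [[9.0, 1.2], [8.8, 1.0], [9.2, 0.8]]

poisoned_model = {
    "cat": centroid(poisoned_cat_features),
    "dog": centroid(dog_features),
}

dog_center = centroid(dog_features)
cat_center = centroid(clean_cat_features)

# What the poisoned model now associates with "cat" lies closer to the dog
# region of feature space than to the cat region -- so a prompt for "cat"
# produces dog-like output.
print(dist(poisoned_model["cat"], dog_center)
      < dist(poisoned_model["cat"], cat_center))  # True
```

Real diffusion models are vastly more complex, but the failure mode is the same: enough poisoned examples shift what the model associates with a concept.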

This image shows how feeding ‘poisoned’ image data into AI models can confuse the system’s outputs.

What are the drawbacks? The glazing and shading processes add some noise to digital images. The level of distortion will vary from image to image and can be adjusted by the user.

So is this the end of image-generating software? Not at all. It’s also worth pointing out that the Glaze Project is not anti-AI; as noted above, Glaze and Nightshade themselves use open-source AI software to tag images. Rather, the programs are meant to create an ecosystem in which users of image-generating programs would need the approval of rightsholders to get unaltered access to training images.

What does the Glaze Project have to gain from this? According to Zhao, not money. On the group’s website, he writes:

Our primary goals are to discover and learn new things through our research, and to make a positive impact on the world through them. I (Ben) speak for myself (but I think the team as well) when I say that we are not interested in profit. There is no business model, no subscription, no hidden fees, no startup.