
Most generative AI models are trained on text and images scraped from the internet and compiled into enormous datasets. When we enter a prompt, they produce a seemingly new output derived from the patterns learned from that soup of data. But AI companies are rarely transparent about where their training data comes from, and they often skirt intellectual property laws and ethical concerns.
Many artists who post their work online have come forward to denounce this practice. Sarah Andersen, the woman behind the webcomic Sarah’s Scribbles, wrote for The New York Times about her experience with art theft. She laments that her art style has been repeatedly used to push neo-Nazi ideology. Andersen mentions the “violation” she felt when she found out someone had made a font of her handwriting, which she regards as “personal and intimate” and “a piece of [her] soul.” In the art world specifically, artificial intelligence raises many questions about the authenticity and value of art: What does it mean if an AI-generated image is indistinguishable from an artist’s “real” work? Do artists have any recourse if their art is used to train AI? More importantly, what can they do to protect their creations?
A team of students and professors at the University of Chicago has designed two tools for artists to protect their art. The team aims to recalibrate the relationship between technology and people, so that “human creatives retain agency and control over their work products and their use.” In 2022, the team created Glaze and Nightshade, defensive and offensive tools “to disrupt unauthorized AI training on [artists’] work product.” Both tools are free to use and available to everyone.
Glaze works by analyzing the AI models that are trained on human art and using machine learning algorithms to compute a minimal set of changes to an artwork, changes that are unnoticeable to the human eye but appear to AI models as a dramatically different art style. Its main function is to protect against style mimicry. “For example, human eyes might find a glazed charcoal portrait with a realism style to be unchanged, but an AI model might see the glazed version as a modern abstract style, a la Jackson Pollock. So when someone then prompts the model to generate art mimicking the charcoal artist, they will get something quite different from what they expected,” the Glaze website says.
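Glaze itself is closed-source, but the general idea it describes, computing a tiny, bounded perturbation that shifts how a model "sees" an image, can be sketched in miniature. The toy below (an assumption for illustration, not Glaze's actual method) stands in a fixed linear map for a model's feature extractor and uses projected gradient steps to nudge an image's features toward a different "style" target while clamping every pixel change to an imperceptibly small range:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an AI model's feature extractor: a fixed linear map.
# (Real systems use deep networks; this is only to illustrate the idea.)
W = rng.normal(size=(16, 64))

def features(x):
    return W @ x

def cloak(image, target_feats, eps=0.03, steps=200, lr=0.01):
    """Compute a perturbation, clamped to +/- eps per pixel, that pushes
    the image's features toward a different 'style' target."""
    delta = np.zeros_like(image)
    for _ in range(steps):
        # Gradient of ||features(image + delta) - target||^2 w.r.t. delta
        grad = 2 * W.T @ (features(image + delta) - target_feats)
        delta -= lr * grad
        delta = np.clip(delta, -eps, eps)  # keep the change imperceptible
    return delta

image = rng.uniform(0, 1, size=64)             # flattened toy "artwork"
target = features(rng.uniform(0, 1, size=64))  # features of another style

delta = cloak(image, target)
before = np.linalg.norm(features(image) - target)
after = np.linalg.norm(features(image + delta) - target)
```

After cloaking, the image's features sit much closer to the foreign style (`after < before`) even though no pixel moved by more than 0.03, which is the essence of a change models can see but humans cannot.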
While Glaze is not without limitations, it represents a step forward in protecting human art. The program is by no means a permanent defense against AI mimicry, as AI algorithms are ever-evolving. Still, the team has designed Glaze to be as robust as possible: filtering, compressing, blurring, or screenshotting the artwork does not strip out its protection, because the effect lives in features that only AI models perceive, not anything visible to humans. The team also offers WebGlaze, a web-based version that can be used from a phone or tablet.
Nightshade, on the other hand, is an offensive tool meant to disincentivize art theft. Though it works in a similar way to Glaze, the result is different: Nightshade “poisons” images, making them appear radically different to an AI model that trains on them. Where we would see a delicious fruit salad, the AI would see assorted nail polish bottles.
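The fruit-salad-to-nail-polish effect is a form of data poisoning: training samples carry one label but the features of another concept, so the model's learned notion of the first concept drifts. A deliberately simplified sketch (my toy illustration, not Nightshade's algorithm) shows the drift with a nearest-centroid "model":

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy feature clusters for two concepts a model might learn.
fruit = rng.normal(loc=0.0, size=(100, 8))   # "fruit salad" images
polish = rng.normal(loc=5.0, size=(100, 8))  # "nail polish" images

def train_centroid(samples):
    """A toy 'model': the average feature vector learned for a
    concept from its labeled training images."""
    return samples.mean(axis=0)

# Clean training: the learned concept sits near the true fruit cluster.
clean = train_centroid(fruit)

# Poisoned training: images still labeled "fruit salad" but whose
# features have been shifted toward "nail polish".
poison = rng.normal(loc=5.0, size=(60, 8))
poisoned = train_centroid(np.vstack([fruit, poison]))

# Measure how far each learned concept is from the wrong cluster.
d_clean = np.linalg.norm(clean - polish.mean(axis=0))
d_poison = np.linalg.norm(poisoned - polish.mean(axis=0))
```

With enough poisoned samples in the mix, the learned "fruit salad" concept lands closer to the nail-polish cluster than the clean one does (`d_poison < d_clean`), which is why a poisoned model can answer a fruit-salad prompt with nail polish.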
The team notes that Nightshade shares some of Glaze’s limitations, and the two tools unfortunately cannot yet be applied to the same image, though the team is working on compatibility. Nightshade also does not protect against style mimicry.
There are shortcomings to the programs. For an artist like Van Gogh, whose work has already circulated widely and been used to train AI for some time, Glaze cannot offer much help. For a smaller artist, however, it can still be an effective protection. The team also stresses that, to them, imperfect protection is still more valuable than no protection at all.
The issue that Glaze and Nightshade hope to address was recently displayed in full force. In March 2025, shortly after OpenAI added image generation to GPT-4o, an AI trend involving Studio Ghibli exploded overnight: the internet was flooded with AI-generated images mimicking the animation studio’s style. Criticism came quickly, with people pointing out that the trend ran against everything Hayao Miyazaki’s work stood for. Many recalled Miyazaki saying he was “utterly disgusted” when presented with an early form of generative AI in 2016, calling it “an insult to life itself.” The sentiment resonated widely online. Searches for “AI slop”—a term used to describe low-quality media characterized by an inherent lack of effort, logic, or purpose—have risen steadily since late 2024 and peaked in March.
In January 2023, Andersen and several others filed a copyright infringement lawsuit against several AI companies, including Stability AI and Midjourney. The lawsuit describes generative AI as a “complex collage tool” that creates “derivative works.” The trial is set to begin in September 2026, after a judge found sufficient grounds to move forward with the copyright infringement claim.
Currently, the team behind Glaze and Nightshade is continually updating the tools against new attacks and developments in AI. While not a complete solution, the programs offer artists a reprieve and a measure of protection. Until legal systems catch up with the advances in AI, freely available tools like these are vital in mitigating the technology’s harmful effects.