Authentication in the Expanded Field
NFTs could be an improvement on the paper certificates used to authenticate reproducible artworks. But much work needs to be done to realize that potential.
On July 20, OpenAI announced that its image generator DALL-E can be used for commercial projects. Reflecting on the possibility of hordes of creators lazily using AI to churn out low-effort NFTs, I tweeted that directly minting AI outputs as NFTs was a bad idea. I was mocked and called a Luddite. Had I rushed to judgment? To find out, I asked people to tag artists who were using AI image generators—whether DALL-E, Midjourney, or others—to make NFTs. The tweet received many enthusiastic responses, with dozens of artists tagging themselves and others. Many artists said they refined or manipulated AI outputs before finalizing them as NFTs. Others are simply minting what the AI gives them. None of the responses were particularly compelling artistically, however, and most seemed to illustrate the capabilities of the software rather than the capabilities of the artists themselves. Polishing or adding to the AI illustrations didn’t get at the foundational questions that arise as artists begin to incorporate this tool into their practice. Below are five notes on how artists might (or might not) find productive ways to use AI image generators.
A number of artists replied to my tweet with images that appeared very painterly. The sfumato and rich ochre colors are underwhelming on the small, glassy screens of phones and computers. These are not paintings, nor are they records of the activity of painting. One of the best things about looking at a painting is walking through the construction of the image with your eyes. Why did the artist arrange it this way? What did they change as they worked through the image? What happened fast, what happened slow? We can trace the artist’s path through the image. AI images, even ones that look painterly, don’t provide this experience. They average a vast canon of images. The difference is like that between one person’s life story and a demographic study of an entire population. Both describe human life, but one shows general truths through specificity, while the other presents a single image fed by a black box of innumerable generalities. When we look at AI images, we’re unable to match our subjectivity as viewers with the artist’s subjectivity as a creator. Instead of a particular human experience, we’re shown only averages.
There are arguments swirling on Twitter over whether the outputs of AI image generators are art. This debate is not interesting. Sure, they can be: if an artist calls an image art, it’s art. The more pressing questions are why that designation is made, what the designation does to our response to an image, and how the claim relates to the larger context of the artist’s work. The potential problem with declaring AI images to be art is in what they lack. An artwork is the record of a strategy that the artist devised to make it. An image or object carries the story of its own making, and the story of its maker. In a broader sense, it is a record of the artist’s way of being in the world, of how they make objects, of how they find their voice. This is just as true for JPEGs as it is for paintings and marble sculptures. The medium doesn’t matter. The trouble with AI image generators is that the strategy for making the image is hidden. In the case of DALL-E, it’s a proprietary formula owned by OpenAI. The only thing we can say about an artist’s way of being in the world from this kind of work is that they are a customer of OpenAI, a user of the software. Tweaking prompts in order to refine DALL-E’s outputs is akin to playing a powerful computer game. It’s not that it can’t be art. But artists should claim more agency. They should be making the games themselves, or systems of similar complexity, instead of accepting the role of mere players.
Some of the sharpest critiques of AI image generators have come from artists who claim, like David O’Reilly, that these companies have committed intellectual and artistic theft by training their models on the work of countless uncredited artists, illustrators, and photographers. This point seems shortsighted, given that the AIs are simply automating an important aspect of how image-making already works. Humans are visual sampling machines, quoting the vocabulary of pictures in the same way that we speak with words that we did not invent. An entirely original image would be illegible. All pictures, even abstract ones, use a language of visual reference and quotation passed down from other artists.
Something is changing, however. It used to be that there was a limited set of words, phrases, and narrative archetypes that elicited the visual elements of a genre. “Cowboy” and “pirate,” for example, are words that conjure images in our minds that are defined by our shared understanding of their respective genres. But the list of words and phrases that can now be visualized in a general way is no longer limited to our loose list of genres. Now that we can elicit images from any expression of language, all possible phrases have become microgenres. This is why O’Reilly’s accusation of plagiarism doesn’t hold up. AI is not exactly copying artists. It’s using their images to generalize visual depictions of everything language can express.
If all possible expressions of language have become microgenres, then it’s tempting to conclude that all possible images already exist, and that AI image generators are really a kind of search engine. This calls to mind conceptual artist Sherrie Levine’s 1982 statement in which she reflects on how the world is “filled to suffocating” with images. “We know that a picture is but a space in which a variety of images, none of them original, blend and clash,” she writes. “A picture is a tissue of quotations drawn from the innumerable centers of culture.” This is how she justifies an artistic practice where appropriation flirts with plagiarism.
The image saturation of the late twentieth century looks quaint compared to the glut of images we face today. If we were suffocating in 1982, then by 2022 we had been drowning for decades. In more practical terms, this new technology means that the “tissue of quotations” of image-making is no longer something individual artists need to assemble themselves. That work is already done. Image-making becomes searching, like a visual version of Borges’s “Library of Babel,” the story about an infinite library where every possible combination of letters is printed in never-ending volumes. The greatest works of literature already exist somewhere. The denizens of the library just have to find them.
NFTs are certificates that should signify three things: an artist made the work indicated by the certificate, the certificate and the media it points to are unique, and a person owns the media indicated by the certificate. AI-generated NFTs break two of these three tenets. The artist did not make the image; it’s more accurate to say they found it. The work is not unique but rather a sample from a stream of infinite images. The token can still be owned, of course. Perhaps AI-generated NFTs are like a picture of a fisherman proudly displaying their catch. They’re proof of what was found.
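The three tenets above can be read as a toy data model. The sketch below is purely illustrative: the names `Certificate` and `tenets_upheld` are my own inventions, not part of any NFT standard such as ERC-721, and the booleans stand in for claims that in practice are contested, not encoded on-chain.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Certificate:
    """Illustrative model of the three claims an NFT certificate makes."""
    artist_made_work: bool  # the named artist made the indicated work
    media_is_unique: bool   # the certificate and its media are one of a kind
    token_is_owned: bool    # a person owns the media indicated by the token

def tenets_upheld(cert: Certificate) -> int:
    """Count how many of the three tenets a given certificate satisfies."""
    return sum([cert.artist_made_work, cert.media_is_unique, cert.token_is_owned])

# A directly minted AI output, per the argument above: the image was found
# rather than made, and it is a sample from an infinite stream rather than
# unique — but the token itself can still be owned.
ai_mint = Certificate(artist_made_work=False, media_is_unique=False, token_is_owned=True)
print(tenets_upheld(ai_mint))  # prints 1: only ownership survives
```

The point of the sketch is only to make the arithmetic of the paragraph explicit: two of the three tenets fail, and what remains is proof of possession, the fisherman's photograph.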
My unease around the practice of minting AI images as NFTs comes from the unsettled question of the cultural significance of NFTs overall. Perhaps the ease of minting AI outputs signals a dead-end definition of NFTs: a financial asset attached to a vaguely painterly image that has nothing to do with the rich human experience of making or standing in front of a painting. My hope is that NFTs can delineate artworks that go beyond those made by averaging the connections between pictures and words. DALL-E is a powerful tool and artists should absolutely use it. But artists should also refuse to settle for being customers of technology companies. Artists should take tools, old and new, and push them past their limits. If AI can automate the work of making a picture, why not also use AI text generators to write prompts and use AI-based code generators to write smart contracts? Artists could even train an AI to analyze the market success of past NFT drops. The question of which elements of artistic practice could be replaced by AI is terrifying and thrilling, but it’s an investigation that should be driven by artists themselves.
Kevin Buist is a design strategist, curator, and writer based in Grand Rapids, Michigan.