Sofia Crespo & Fabiola Larios
What does the content produced by generative AI tell us about the data it is trained on? And who is responsible when these systems reproduce our biases?
In popular discourse, the term “artificial intelligence” is often used to conjure a mysterious, non-human agent of extraordinary power—one that, depending on your perspective, either threatens to conquer or promises to save us. The reality is more mundane: essentially, machine learning tools use available data to make decisions and predictions, and to generate new data. For Sofia Crespo and Fabiola Larios, this process is precisely what attracts them to the technology. What does the content produced by generative AI tell us about the information it is trained on? Crespo—who works both solo and as part of the collaborative studio Entangled Others—uses AI to explore the limits of our knowledge and creativity, particularly when it comes to visualizing the natural world. Larios, meanwhile, focuses on representations of people, from heavily filtered selfies put through a GAN to the offensive stereotypes in portraits churned out by image generators like Midjourney and DALL-E. Outland brought the artists together to discuss some of their key projects, as well as wider questions that come with working with AI. Above all, they agreed on the importance of taking responsibility for these systems, which are, despite their apparently superhuman abilities, an entirely human invention.
SOFIA CRESPO My interest in AI goes back to 2017, although my first project on the topic was more like speculative AI. I was introduced to the actual technology in 2018, when a friend, as a birthday present, invited me to a workshop by the artist Gene Kogan about generating images with AI. That opened up the possibility of using machine learning for my work, which had felt like something very distant that only people with PhDs in a specific area would be able to do. From there I was compulsively trying to learn everything I could online, on platforms like Coursera. Kogan had a lot of online material too.
What fascinated me was the ability to extract patterns from data and generate variations. I had never encountered a tool that allowed me to do something like that before. It felt really powerful visually. There were so many patterns that I wanted to look at and take out of context—for example, taking the texture of a sea anemone and putting it in the context of a plant. I’ve never been good at drawing, so I was looking for a way to automate this process. Then there was the element of randomness, the fact that you can’t control everything in the image, which became interesting because you have to think about the whole system when creating each of the works.
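Crespo doesn’t specify her tools here, but the move she describes, lifting the texture of one organism and draping it over another, is close in spirit to neural style transfer. As a rough sketch of that idea, not her actual pipeline, a pretrained arbitrary style-transfer model can repaint a content image with the textures of a style image (the file names and sizes below are placeholders):

```python
import tensorflow as tf
import tensorflow_hub as hub

def load_image(path, size=512):
    # Decode, scale pixel values to [0, 1], and add a batch dimension.
    img = tf.image.decode_image(tf.io.read_file(path), channels=3,
                                expand_animations=False)
    img = tf.image.convert_image_dtype(img, tf.float32)
    img = tf.image.resize(img, (size, size))
    return img[tf.newaxis, ...]

# Pretrained arbitrary style-transfer network (Magenta project).
model = hub.load(
    "https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2"
)

content = load_image("plant.jpg")       # supplies the structure
style = load_image("sea_anemone.jpg")   # supplies the texture
stylized = model(tf.constant(content), tf.constant(style))[0]

tf.keras.utils.save_img("anemone_plant.png", stylized[0].numpy())
```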
Then in 2019, I met Feileacan McCormick. We met on Twitter; back then he was working on scanning old trees, and we were brought together by our shared interest in visualizing nature digitally. We started a studio called Entangled Others, and our work focuses on this entanglement between technology and the natural world. We’ve recently been combining machine learning with quantum computing.
FABIOLA LARIOS I got interested in AI around 2017 as well. I visited Trevor Paglen’s exhibition “A Study of Invisible Images” at Metro Pictures in New York. The show was about the way machines look at us, from the data sets they are trained on to the images they produce. I was fascinated by this idea of making art with data, because I’m a data hoarder. I’m always saving things that I find on the internet. But I was living in Mexico at the time, and there weren’t many opportunities to learn about how to make art using machine learning. All the available classes were more about math and programming.
Then in 2020 my husband bought these courses from Derrick Schultz, who is an amazing teacher of AI art. We took his class, and started experimenting—initially with Runway, and then we started using Google Colab. It was such a great way to bring ideas to life. Like you, I’m not really a drawing person—I’m always frustrated because I can’t draw what’s in my imagination. When I paint, I can draw in a certain style, and I can draw from a picture, but a machine learning model can make new images. My first AI project, Internet Humans (2020), was about creating new people from the internet.
I’d wanted to make something with selfies—my college thesis in 2011 was about selfies and self-representation on the internet, and I’ve always been interested in how humans behave online. I grew up with the internet, but not with the level of immersion we have now, so I watched certain people become increasingly online and develop these alter egos. I was interested in the idea of performing, whether by telling lies on the internet or just making yourself look better with Instagram filters. So I scraped all these images from Instagram—I found them through the #selfiefilter hashtag. I had some ethical questions about using these images, but I thought that if people are using these hashtags and have a public account, then they want to be seen. Also, the data set I put together is not public.
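Larios doesn’t say what tooling she used for the scrape, and Instagram has since tightened access to hashtag feeds. Purely as a hypothetical sketch of this kind of collection, the open-source instaloader library can pull posts under a hashtag into a local folder, which could then be curated into a private training set like the one she describes:

```python
from itertools import islice
from instaloader import Instaloader, Hashtag

loader = Instaloader(download_videos=False, save_metadata=False)
# loader.login("username", "password")  # hashtag feeds may require a login

# Download a capped number of posts tagged #selfiefilter.
posts = Hashtag.from_name(loader.context, "selfiefilter").get_posts()
for post in islice(posts, 500):  # cap the crawl
    loader.download_post(post, target="selfie_dataset")
```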
CRESPO I was fascinated with generating creatures, and I realized that there was a whole field—spanning from robotics to chemistry to software—dedicated to the study of artificial life. I was blown away by the idea that people were trying to use software to replicate things they’d observed in the natural world. I also discovered an amazing work by Luigi Serafini, the Codex Seraphinianus, published in 1981. The book resembles an encyclopedia, but the language and the objects and creatures it depicts, all drawn by hand, are invented. They are based on languages and things we have seen, but combined in new contexts, so the brain has to work extra hard to make sense of what is happening.
One of my first AI projects exploring these ideas was Neural Zoo (2018–21). The concept is very simple: it’s about visualizing things that look like they belong in the natural world but aren’t recognizable as things you’ve seen before. I wanted to think about the way that human imagination and creativity work—by combining elements in our brains—and the limits of this. For example, we cannot imagine a color that we haven’t seen before. Through making the data sets I realized that an AI obviously also cannot create something if it hasn’t been given an existing example. Everything it does can be traced back to an example it received in the data set.
LARIOS I’ve read a lot about how data sets and AI technology are biased. I watched the documentary Coded Bias (2020), in which Joy Buolamwini talks about how she is perceived by technology. She gives the example of a soap dispenser whose light sensor only works with pale skin. All my life I hadn’t understood why soap dispensers didn’t work for me; it was because my darker skin wasn’t reflecting the light emitted by the sensor. Or a few years ago in China, people realized they could unlock other people’s phones because the facial recognition software wasn’t working properly. The data set behind it wasn’t inclusive enough.
This problem comes up again with AI image generation. Stereotypes are just perpetuated. That’s why I made Born into BIAS (2022). It’s a piece of net art that you can access in your browser; I also showed it on an old Apple computer in an exhibition at Panke Gallery in Berlin. The work focuses on stereotypes about Mexican people—the sombreros, the moustaches, the lazy worker sleeping on a cactus. At the top of the site there are cartoons, live-action clips, posters, bits of text, and then as you scroll down you start to see images that I prompted from DALL-E and Midjourney. This was a year ago, so those models have been modified since then and they are a bit better. But there are still a lot of stereotypes. If you give the prompt “beautiful Mexican woman,” for instance, it will show you a very light-skinned brown person with a Kardashian body.
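Larios prompted the models through their own interfaces. For anyone wanting to repeat this kind of audit programmatically, a minimal sketch using OpenAI’s image API might look like the following; the model choice and counts are illustrative, and Midjourney offers no comparable official API:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Request several images for the same prompt and collect their URLs,
# so the outputs can be reviewed side by side for recurring stereotypes.
response = client.images.generate(
    model="dall-e-2",  # unlike dall-e-3, accepts n > 1 per request
    prompt="beautiful Mexican woman",
    n=4,
    size="1024x1024",
)
for i, image in enumerate(response.data):
    print(i, image.url)
```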
If these models can’t even represent people fairly in images, what can we expect from the more practical uses of AI in our everyday lives? We’re going to depend on this technology in the future. We need technology to see everyone, we need technology to work for everyone. For me, making art is a way to get this message across. The satirical tone and the simplicity of my work are a way to make it accessible and easy for people to understand. With TikTok and Reels and Stories, you have like fifteen seconds to get people’s attention.
CRESPO In a lot of my work I’ve focused on creating creatures that don’t exist, but with Critically Extant (2022) I wanted to use AI to look at how creatures are represented online. Part of that work involved researching the hashtags used online for conservation efforts. You’ll find that cuter species attract much more traffic, and greater conservation efforts. The red panda, for instance, is very popular, while a grasshopper or a fungus might be endangered as well—and be just as pivotal to the ecosystem—yet very few people are tweeting or posting about them.
We don’t have an exhaustive open-source data set with all the species out there. So if a critically endangered species goes extinct, what data on it will we have? Will we remember it? I found that for many species the only information available is their scientific name. I decided to train a model to show that lack of data. I trained it on the minimal data available and then asked it to reconstruct a specific species. It would produce these creatures that don’t look quite like the real thing.
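Crespo hasn’t published her training setup, so the following is only a sketch of the reconstruction step she describes: asking a pretrained text-to-image model to render a species from nothing but its scientific name. The pipeline and the binomial below are stand-ins, not her model or data.

```python
import torch
from diffusers import StableDiffusionPipeline

# A general-purpose pretrained text-to-image model.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The scientific name alone is the entire prompt. If the species is barely
# represented in the training data, the "reconstruction" drifts away from
# the real organism, which is exactly the gap the work points to.
species = "Eriosyce chilensis"  # stand-in binomial
image = pipe(prompt=species, num_inference_steps=30).images[0]
image.save(species.replace(" ", "_") + ".png")
```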
When engineers hear about this project, they ask “Why the hell would anybody do that? That’s not the point of AI.” It is a misuse of the technology, in a way, because I’m using it to show its limitations rather than its potential. The whole point is to talk about how important data is and how we need to create better representations—in this case of species, specifically species that are on the edge of extinction.
LARIOS Sometimes people blame AI. I’ve heard people say, “AI is racist.” It’s not the AI, it’s the people behind the data sets and the fact that they’re not being inclusive. The AI is a reflection of people’s agendas. For instance, if you train a model on Reddit or Twitter, it’s going to start sounding racist and misogynistic because people on the internet feel comfortable trolling and saying things they wouldn’t always say in person.
We probably need to be careful about how we are treating people online, on our phones, and around devices like Alexa and Google Home. There’s more surveillance than we think, and all the data captured by these devices is going to be used to train AIs. If we want a good AI, then we need to be nice humans and respectful toward everyone. And we need people who are not just white middle-aged men working on the data sets. We can’t say that the AI just needs to work first, and that we can add more data and fine-tune it later. We need to prepare the data first, so it can work for everyone.
CRESPO Yes! One of my big issues with conversations about AI is that it gets attributed a kind of consciousness, which it doesn’t have. When we discuss whether nonhuman or more-than-human creatures have consciousness, we are evaluating them according to the same metrics that we would use to evaluate humans. But you can’t give an IQ test to a plant. It has a different kind of intelligence. I think that the reason we attribute so much to AI, even though it’s essentially software that runs on hardware, is that it reminds us of ourselves. But that’s problematic because if you grant an AI agency, then you are attributing responsibility to a system that ultimately shouldn’t be responsible. We should be responsible. If there’s bias, we are responsible for that bias.
We also need to get rid of the myth of the black box, that no one understands how AI works. It’s true to a point; we can’t understand everything that’s happening. But it’s important to acknowledge that there are people dedicated to understanding how learning systems work. If we are capable of creating the systems, then there is a logic of design behind an algorithm and how it learns. It’s not like AI is emerging on its own in a vacuum. When I work with these technologies, I want to help demystify them rather than add an extra layer of fear around them.
—Moderated by Gabrielle Schwarz