August 18, 2025

Minne Atairu on Shaping Our Own Image

Minne Atairu, artist and doctoral candidate in the Arts and Art Education program at Columbia University, speaks with Peter Bauman (Monk Antony) about shaping AI systems to better reflect Black people and other underrepresented communities. Atairu discusses her use of generative AI in art as a tool for activism, highlighting its biases as well as its potential for empowerment.
Minne Atairu, Blonde Braids Study II (Detail), 2023. Courtesy of the artist



Peter Bauman: As an artist, researcher and educator, you focus on generative AI-mediated processes. Some thinkers like Mat Dryhurst have described generative AI as an entirely new media paradigm on par with the early internet or the rise of digital in the 1960s. Does that framing resonate with you? How are you internalizing and contextualizing the rise of generative AI and its relative importance?

Minne Atairu: I grew up in Nigeria as a kid, when the Internet was just beginning to permeate that space. I became very curious about artificial intelligence as both an artist and a researcher because I'm fascinated by how teachers might start integrating new media technologies, particularly artificial intelligence.

This was five years ago, prior to the AI boom we're experiencing today. I started by looking at these more rigid, brutal systems that are not as popular as they used to be, like StyleGAN and machine vision. Generative AI is certainly transformative; it has changed my practice as well. I started playing around with StyleGAN while I was researching it. I'm not so much interested in the aesthetic, because I don't know that the works I created are aesthetically pleasing; they're not for everyone, and I've come to terms with that.

I'm more interested in understanding representation: what technologies can Black people like myself use, and how do we see ourselves in them?


From my observation as a researcher, what changes is how I can impact the direction that developers take in building and fine-tuning these systems to work for communities like mine. My art emerges from that, though I think about it in different ways. One lens through which I view that is hair, which is a very political form of expression for Black people, particularly for Black women today. In some places in the US, until the CROWN Act (2021) was passed, keeping your hair natural could cost you your job.

Minne Atairu, Fro Studies II, 2022. Courtesy of the artist



Take that as a microcosm through which to think about artificial intelligence and the kinds of questions that might inform or shape the representation of Black hair. It’s not just the universal, “Oh, this is an Afro.” When a Black woman is wearing a braid, what does that mean for the aura of the person depicted in the visualization? For example, one of the works I was exploring, Blonde Braids Study, played with that concept, thinking about blonde braids on Black women and what that means.

Basically, my work is more investigative than it is an expression of art in the classical sense.


Peter Bauman: You mentioned your Hair Studies work. Can you talk more about what your investigations reveal about the biases in these systems, biases that might have escaped the notice of their researchers?

Minne Atairu: Let's look at Blonde Braids Study. When Black people wear blonde braids, it's actually a reference to the hair extension, because we wear a lot of extensions. “Blonde” is a reference to the color of the extension that is then woven into your hair. When I prompted for blonde braids, and this was several model generations ago using Midjourney, there was a tendency for the system to also replicate blondeness in the figure’s natural hair. That then starts to tap into who has blonde hair and the texture that we start to see.

A more recent example: I've been working with Google’s Imagen 3. I was playing around with box braids, which is a very general and common style of braids for Black women. But the system was very case-sensitive. When I had “box braids” in uppercase, there was a tendency for it to represent them by creating some grid or tessellation on the figure's hair, which isn't uninteresting as a hairstyle, but that's not what the term refers to in the real world.

Peter Bauman: In a talk you gave at Columbia, you said that models are choosing to misrepresent Black people this way. You mentioned artistic ones but also ones that may not even align with their training data. How should we think about model-versus-human agency with these misrepresentations?

Minne Atairu: It certainly is reflecting human choices. We know generative AI is not magic, right? Whatever decisions or visualizations you generate are a function of data that is either present, absent, underrepresented or overrepresented. Sometimes it's simply not having enough of that data: underrepresentation. Or the data might be present, but the system has not learned those connections well enough to represent the concept.

Sometimes it's not really a lack of representation but whether there are enough of those samples within the dataset for the system to understand that when you prompt for blonde braids on a Black woman, the blondeness should be a function of the extension and not the natural hair. When that mapping isn't clear within the system, it extrapolates from “blondeness,” which is overrepresented, and from what blonde braids look like on a white woman. Then it maps that onto a Black person, making it look totally different.

Minne Atairu, Portrait of Mami Wata II, 2023. Courtesy of the artist



Peter Bauman: Are the researchers taking your concerns seriously? Have you seen any progress as the models are updated?

Minne Atairu: Absolutely. Blonde Braids Study was from a couple of years ago. With Imagen 3, which is what I've been working with recently, those distinctions are clearer. But even Imagen 3 still does not understand braids by name. What does it mean to have butterfly braids? It does not understand and tends to generate very generic representations of braids. You can't really be specific with it unless you describe the style, the design you want to see, rather than name the braid.

It's like saying, “I want a green house.” But rather than saying “green,” you're saying, “I want to mix blue and yellow to make green for me.” That's really difficult, especially thinking about embedding these systems in the real world. As an artist who has worked with generative AI for a long time, I know how to prompt. I can try to push the system using prompts. But if you're deploying systems like this in the real world and making claims that they're transformative, going back to how we started, not so much. I'm thinking about my braider, who might want to use a system like this to generate a braid. It's going to be frustrating for her.

I don't think people in the real world want to take a class on how to prompt before using a system. They just want to use it. 


There are so many claims from these companies that make AI seem like magic, some mystical thing capable of every single thing humans can do, but that's not always true. Much of what we see are just really good samples selected by people, whether it's artists or the developers themselves, and they're not a true representation of the capabilities of any single system.

Peter Bauman: Speaking of these capabilities, you've also explored areas like consistent skin tones. Do you see this as primarily a dataset problem, like the underrepresentation we mentioned, or as something more structural, tied to deeper, culturally embedded hierarchies?

Minne Atairu: I think it's all of the above. It's also an issue with those who are building the systems. If you're not familiar, like you said, with braids, then you don't understand the details, the nuances and the differences in name, structure and vocabulary.

So who is building these systems and what do they know about Black hair? What do they know about women who live in Harlem? 


The dataset is also a function of the structure. Who's gathering the dataset, and how are they thinking about underrepresentation? Some concepts are very easy to represent in a given dataset: dogs, cats, houses. But when it comes to minorities or vulnerable groups, those are more difficult even to find on the Internet today. And when we do find them, in most cases they are stereotyped; they come out of the history of the world. For colonized communities like Nigeria, a lot of those representations carry deeply embedded, colonized perspectives. So much violence has shaped our history and our representation on the Internet.

So how do we sieve all those things and try to design systems that are more true to us? The solution is not up to Google. We can certainly advocate for better systems that are universally deployed or become foundational to what we build.

I think it's up to our communities to start thinking about how to design tools that work for us. Ultimately, it is up to us to start shaping them in our own image because we know best what represents who we are—better than anyone else.


Peter Bauman: With all these challenges, how do you balance exploring the flaws of these tools while also advocating for their ethical use as an educator? What draws you to pedagogical engagement rather than outright avoidance?

Minne Atairu: It goes back to what we were saying earlier: how do we shape these systems that already permeate and are infused in everything we do? It's almost impossible to use any technology today that does not involve some form of AI, whether generative or not. For our communities, historically we've found, and research has shown, that we're always the last to have these systems deployed. And often when they are deployed, they're not used in a way that benefits us.

I'm very interested in how we engage in this conversation proactively rather than reactively.


I'm very interested in building from the ground up and that's what I see education—and myself as an educator—doing: working with Black kids, teaching them about AI, teaching them how to work with AI so they can become builders, not just users.

Even my role as an artist now is shifting to thinking more about designing systems that I want to use rather than using systems that are not designed for me in the first place.


Peter Bauman: You mentioned your role as an artist and how it's tied to investigation and now system design. Yet artists engaging with generative tools tend to have their work flattened to cliches. Often prompt-based work is reduced to hollow superficiality even by prominent voices. How does your work challenge those critiques?

Minne Atairu: That's an interesting question. I’m of the belief that if you baptize anything as art, it is art. If it's a tool that is adopted by an artist, maybe it can be used to make art. I don't think that AI itself without human input can make art because most systems cannot make decisions on their own. Well, maybe you could code an agent to do that today.

Minne Atairu, The Virgin and Child (Madonna), 2024. Courtesy of the artist



Peter Bauman: One of your works that challenges the supposed superficiality of prompt-based work is THE VIRGIN AND CHILD, where you interrogate photography through AI. How do you see generative tools reshaping—or even destabilizing—our relationship to image-making? Your work seems to emphasize empowerment—righting historical wrongs or making tools more accessible to underrepresented cultures.

Minne Atairu: I think in time, generative AI will get to the point where we don't need to make this argument, where it can defend itself. In terms of THE VIRGIN AND CHILD—and all of my work in which I've used generative AI as a tool to reimagine what has not existed or what could have been—it's a choice I make. I could have used other mediums to do the same because I went to art school, and I think I have a pretty good handle on other mediums as well.

Generative AI is just the tool I choose to express myself as an artist because I'm interested in this emerging form of technology.


It’s beginning to be recognized more in relation to fields like photography. I was invited about a year ago by Aperture to talk about how generative AI was transforming photography and the representation of Black folks.

I struggle to describe my work as art alone because so many different groups engage with it and derive meaning from it differently. When I'm speaking to art historians, they're thinking of my work through art history. When I'm working with photographers, they're thinking about it through the lens of photography. If I'm working with OpenAI, it's an example demonstrating how these tools work in the real world for Black folks. For artists, this is art. If you're primarily functioning in the art world, then maybe you're thinking about how to define your work as art. But for me, it really doesn't matter.

If you engage with it as art, that's fine. If it's history, that's fine. If you don't like it at all, that's also good. You're learning something from that experience. But I'm not fighting for it to be art at all. It just is what it is. It doesn't have to be anything. I just appreciate how it's able to connect with different people and different groups and professions in different ways. They're able to make meaning and learn from it. And that's all it has to be for me.

Peter Bauman: New audiences have recently been able to engage with your work as it was exhibited in Detroit at MOCAD’s Code Switch and then at Beyond the Human?—part of PST last year. What do you hope your work is communicating to this more general museum audience?

Minne Atairu: That's a good question. I want the work to speak to them more on a conceptual level than an AI level. AI is so ubiquitous today and there's so much AI art on the internet.

My work is multidimensional and not just about the medium; it's the message, the stories I choose to tell with this medium. 


I hope they connect with those stories—whether it's the Benin Bronzes, the Black Madonna or my Hair Studies—that they're able to focus on the message and not so much the technology. It’s both but more the message.

Peter Bauman: In addition to speaking to these museum audiences, through your work in K-12 education, you also interact with children. When you observe these younger generations engage with AI, what do you notice that's different from adults?

Minne Atairu: The kids I work with, who are typically my research participants, are all growing up in the age of AI. It's so natural to them.

In a way, the younger kids especially cannot conceptualize the world without artificial intelligence.


They are engaging with AI in so many different ways, more through downstream apps than the way I engage with it, which I found interesting in a study I conducted earlier this year. When I was talking about AI, I was thinking about Midjourney and ChatGPT, but they were showing me all of these apps that they use in their everyday lives. That, to them, was AI. So they're more accepting and open to it.

In the US especially, most of the schools I work with in my district ban AI or limit access to it at the network level. So students do not have access to artificial intelligence in their classrooms, but they're engaging with it nonetheless through their phones and at home. Whether you ban it or not, it's still natural to their existence and to how they engage with the world through their social groups and online communities.

They're not so opposed to it. I've seen more resistance from some kids who are interested in traditional art forms. Not that they don't use it. One kid was like, “I'll use it for my homework but I don't want to use it to make my art. I just want to sit down and draw my anime and do my thing.” He enjoys the experience of making these characters, drawing and playing with color.

And I think that's just the future. Some of us will work with AI and some people won't. 

I don't know that it's going to take over the world. I still use other mediums as a part of my artistic process; it's not just AI. For artists, or at least for some of us trained in traditional methods, there's always this interest in leaning on what we already know and fusing different methods to tell a story.



- - - - -



Minne Atairu is a researcher and interdisciplinary artist interested in generative artificial intelligence. Utilizing AI-mediated processes and materials, Atairu’s artistic practice is dedicated to illuminating understudied gaps and absences within Black historical archives. Atairu’s academic research focuses on generative AI, art and educational policy in urban K-12 art classrooms. She has exhibited at The Shed, New York (2023); Frieze, London (2023); The Harvard Art Museums, Boston (2022); Markk Museum, Hamburg (2021); SOAS Brunei Gallery, University of London (2022); Microscope Gallery, New York (2022); and Fleming Museum of Art, Vermont (2021).

Peter Bauman (Monk Antony) is Le Random's editor in chief.