Aaron Hertzmann on Caring about People

AI scientist, researcher and artist Aaron Hertzmann spoke with Peter Bauman (Monk Antony), arguing that AI can automate certain tasks but that true authorship remains a human social construct rooted in personal relationships and agency. They cover the enduring question of whether machines can make art, AI’s historical analogies from photography to streaming, and what is genuinely new about this moment.
Peter Bauman: In 2025, I wrote that we’re in a phase of post-AI art, which takes a question you’ve been asking for years, “Can computers/AI make art?”, as foundational but treats the answer as assumed.
Why is the question of whether machines can make art still relevant today?
Aaron Hertzmann: A few reasons. The first-order question is “Can the computer be the artist?” The second-order question is “What does it mean to use these new technologies in the artistic process?” And, “What is the art we make with these new technologies?”
The first question is still important because so many people haven't participated in that broader discussion and aren't past it. People have been asking it since the 1960s. Whatever the next development is ten years from now, people will still say, “Well, maybe now the computer is the artist.” Post-AI thinking tackles versions of that question but in a more nuanced way.
I come back to the question partly because people care about it and partly because it's a starting point for discussing the whole history of these different technologies that have changed the way people make art.
Those different histories have a lot to say about the current moment. A hundred and fifty years ago they asked, “Will photography kill art?” Fifty years ago: “Will computer animation make animators obsolete?” Once artists start playing with the tool, they say, “Oh, actually it doesn't make me obsolete.” Then what does it actually mean to make art with photography? What does it mean for computer graphics to be making pictures, animations or movies?

I see the post-AI idea as the advanced stage of the process. At Carnegie Mellon they had a program called Stage Three. The idea was that stage one is when you invent a new technology and play around with it. Stage two is when you begin mimicking old media. And stage three is when the art form comes into its own.

Post-AI seems related to stage three. It's not about mimicking paintings or conventional computer art in the same way. It's doing something new with AI art that's different from previous art forms.
Peter Bauman: In that post-AI article, I proposed that levels of entanglement with the technology might distinguish between AI and post-AI art. Gene Kogan today—with Abraham and his extensive work with agents—seems to have a deeper relationship with the technologies than he did while he was tinkering with GANs. Is that fair?
Aaron Hertzmann: From the perspective of the creative process, it's not clear to me what it would mean not to be entangled with the media that you use. When you paint with oil or watercolor, you have a very different relationship with the medium, and you're very tied to it.
Over the years I’ve heard stories of people trying to get computer animation companies to adopt new algorithms. But the artists do not want to use them. They don't want to change their workflow. They're so embedded in a particular way of working.
People who have studied the violin and got really good at it do not want you to move their strings around or “improve” anything about how it works.
Whenever you're really in deep with any tool, you really are entangled with how it works. You've learned it as an extension of your own affordances, your own ability to move and think.
Peter Bauman: I wonder if we’ll reach a point where, like life itself, the distinction between human and non-human becomes less clear. You’ve summed up your thinking on computation and authorship as, “We care about people. Only people make art. Computers are not people.”
Do we privilege this idea of the human too much? The way we privilege human needs over other biological creatures, for instance?
Aaron Hertzmann: There's something important about our biological pedigree. We have more in common with animals than with things, including computers. My dog is more of a “person” than any computer. If she cared about aesthetic artifacts, she could be an artist.
I took a lot of the “man is dead” quotes from your article as an argument against Romantic-era notions of humans as being very special, above nature or society. I would agree with the erosion of the Enlightenment notion of people as these independent, rational beings.
The Romantic era is important because that’s when the notion of the “artist” was really invented. Artists had to advertise themselves in a way that they hadn't before. So artists created this mythology around art as being this moment of creation and genius in ways that had not existed prior to that era.
We’ve seen that a lot of what artists do can be automated. That's what we saw with photography. It's natural but incorrect to look at all the ways artistic activities turned out to be automatable and conclude that all of art is mechanical and automatable.
Peter Bauman: One of the reasons you've said that computers can't be considered artists is that they don't exhibit social behavior. Might that be changing with the rise of AI agents that can now interact with each other and people?
Aaron Hertzmann: Let me start by clarifying a few things. I'm not merely making a normative statement that we shouldn't consider computers to be artists. I'm offering more of an explanation. We've had computers that make amazing pictures for decades and we haven't yet considered any of them to be artists. So why is that?
Take Botto, which I think is a super fascinating piece. It challenges a lot of past work that takes the form of: I write the code, I execute it, I select the results and therefore I'm the artist. Botto has more autonomy than that. But I would still say that it's a system Mario Klingemann has built and set up. The system doesn't have the agency to say, “You know what, I'm sick of taking human input. I'm going to do it a different way.”
In terms of autonomy, I'd say that Botto is very similar to the Electric Sheep system that Scott Draves made in the 90s. It ran an evolutionary system on its own with user feedback.
I had it running as a screensaver on my computer for years and I would just watch it. It was so cool and so creative and continued to be fascinating for a long time. The main differences between Botto and Electric Sheep are that Botto uses pretrained neural networks instead of procedural image generation. That and all of the language around AI autonomy.
The other thing I want to clarify is that I'm not saying being an artist is just about having social behaviors. That would suggest any social agent could be an artist, like Eliza, the chatbot from the 1960s. People thought it was alive and real for a while, even though it's a very simple rule-based system.
My claim is that we care about other people; we have social relationships with other people. Art came about as a product of those social relationships.
Merely having the ability to engage in social-like interactions the way chatbots or agents do is not what makes you a person or an artist.
That said, we are increasingly seeing people treat chatbots like people, which I think is scary. The more we treat AI agents like people, and the more we replace human relationships with simulated, artificial relationships, the more people will accept these agents as artists.
Treating computers as artists means treating computers as people.

Peter Bauman: Earlier you said you don’t think computers themselves are really changing. What about software? Andrej Karpathy has talked about software 3.0. The idea is that 1.0 is code, 2.0 is neural networks and 3.0 is prompts. Each progression has cannibalized the previous one, which he demonstrates using real-world data from Tesla’s internal code.
How do you see that shift from hard programming—things like Harold Cohen's AARON—to deep learning impacting what art is and how it’s made?
Aaron Hertzmann: Defining art is so contextual. It means something very different in a fine art, contemporary gallery context versus a professional design context. In the professional context, it's not clear that it's really changing what art is. One small aesthetic shift is that people resist the AI look. Even if someone made something by hand that looks AI-generated, people reject it.
For the process of making art, there's a whole new generative space that may be more about manipulating weights or how you train models than just the code you write. A lot of artists have worked by training their own models. But it's hard to say broadly that this is a really different kind of art or a different meaning of art. It's just one more medium, one more place to work, where the process will change the qualities of the final output.
Peter Bauman: Isn’t Karpathy’s concept of software 3.0 also about how these forms of computation are opening up to new people?
Aaron Hertzmann: Totally. Take filmmaking. Making a movie a hundred years ago took lots of budget, crew and specialists, potentially with a deep knowledge of chemistry and optics. By the 80s, one person could make a movie on their own. That doesn't mean every movie made by one person was a brilliant masterpiece. It meant more people could make amateur movies and occasionally some of those amateurs, like Steven Spielberg, who started making home movies as a kid, get really good at it.
When people say democratization, they often seem to be saying we won't need great artists anymore, that we won't need talent or skill because the computers will have the talent and skill.
Even as the tools get better and better, the bar gets raised, and there's still a real separation between people who make works that resonate deeply and have meaning and significance beyond just what the tool can do.

We're now in this huge period of transition where we don't really have a sophisticated understanding of what these tools can do, both as creators and as an audience. When the tools become more mature—past even stage three—we'll have a better sense of what it means to make art with these tools. What the pieces are that we really care about.
What the artist's role was in making that work something special beyond just what anyone could type into a prompt. As the audience gets more sophisticated, the art can too.
We can’t predict how it will evolve. It has to be figured out over time, with people experimenting with the tools, making better tools and seeing how others respond.
Peter Bauman: One thing that's come up a few times—and it’s one of the reasons I wanted to speak with you—is your fluency in historical analogies. I find them very helpful. History doesn't repeat precisely but there do seem to be broad patterns we can glean something from.
What are some of the major historical analogs to the rise of AI, and what can we learn from them? Are deep learning generative models as disruptive as something like the digital revolution or the internet? Or are we hyping them?
Aaron Hertzmann: Yeah, I think these are genuinely disruptive technologies that are going to change a lot in ways that are very hard to understand. Looking at historical analogs is valuable because it helps us overcome cognitive biases.
With each of these new technologies, it’s tempting to say, “This never happened before. This is totally different. Everything is changing. Artists will be obsolete.” Looking at historical analogs is useful for saying, “Okay, this actually has happened before. Is it really different?”
If you want to make the case that it is, there should be good reasons why it's genuinely different and not just a manifestation of similar historical patterns. There are ways the AI boom is different. If it were exactly the same, it wouldn't be a big disruptive event.
The most useful analogs are 1) photography, 2) film and video and 3) music recording, distribution and streaming. Photography is a clear example. It seemed to automate what an artist does, making pictures. It's just pressing a button, which is exactly the same thing people said about the first text-to-image systems.
With sound recording, no one says the recording device is an artist. Yet people campaigned against it in terms of job loss and copyright. In the campaigns they said the recorded performance doesn't have a soul, that it can't compare to a live human performance because it's not authentic. And copyright was actually changed because of music recording to protect composers.
Music and video distribution technologies have had huge effects on the nature of the medium in ways people didn't predict. We can't imagine music like hip hop existing without recorded music. The distribution side has also had a big impact on artists' jobs in ways that arise from the exploitation of the technology. The 2023 WGA writers' strike is a good example.
Here’s how I think the new technologies are different: they make it very easy to create images and text where you cannot tell whether it was made the old way or not. Photography replaced a lot of painters, but you could always tell the difference between a painting and a photograph.
Today you often cannot tell whether a picture was AI-generated or made in some existing way. And this is a fact with serious societal implications beyond art.
Peter Bauman: What's interesting about these historical analogs is that you can really see the human nature and psychology of it all. First we're a little afraid of the new thing, then we want to see what it can do, then we mostly accept it.
But what's striking is that those early photographers were really tinkerers and craftsmen. It reminds me of people like you, Alec Radford, Memo Akten and Anna Ridler, who immersed themselves in these tools and explored them for creativity's sake. How does that fit into the broader pattern, like with photography?
Aaron Hertzmann: The pattern is that the early people who play with a new technology, once they see what they can make with it, start to explore. Either it's individuals who have both technical and creative sides or it's artists and technologists working closely together.
Take the early photographer Talbot: he wouldn't have called himself an artist. But his works are now shown in art museums.
The early days of Pixar were driven by technology people like Ed Catmull and Alvy Ray Smith, who really wanted to make movies with computers but didn't have the artistic skill themselves. So they brought in people like John Lasseter, who had the animation skill and experience. It was these individuals working very closely together that created that new medium.
Peter Bauman: It really reminds me of today’s machine learning researchers who wouldn't call themselves artists but are making incredibly creative work that impacts society on a large scale.
How do you think about the supposed difference between engineers and artists?
Aaron Hertzmann: I think there's a lot in common, especially between inventors and artists. If we're talking about the kind of artist who wants to push the envelope, there are many shared skills and behaviors around creativity. It’s that impulse of "What happens if I try this?" It’s having a sense of what's worth trying, the technical skill to actually pursue that exploration and not having a fixed outcome, making discoveries along the way.

That can happen in art and in engineering. Sometimes there's a cultural gulf where people say, "I'm not an artist; all I know how to do is write code or do math." They separate themselves from the people who say, "I'm terrible with numbers; I'm a creative." But there are people who don't make those cultural distinctions.
That distinction was also invented in the Romantic era. The nineteenth century was about specialization of knowledge. Before that, philosophers were just philosophers, whether it was natural philosophy, ethics or poetics. The same person might be writing poetry, drawing pictures and trying to figure out how magnetism works.
The concept of the scientist was invented when people became more specialized in understanding the natural world and formed their own societies. The concept of the artist-as-we-understand-it was only invented when people making poetry and painting had to advocate for themselves as a special societal class.
Leonardo could not have said whether he was an artist or a scientist because neither concept had been invented yet.
Peter Bauman: It does seem arbitrary, but the distinction remains. When A. Michael Noll was making some of the very earliest computer art at Bell Labs in the early ‘60s, they wouldn't allow him to call it art. And he didn't really call himself an artist back then. The same thing happened with those early photographers you mentioned, with Pictorialism.
Aaron Hertzmann: It's a useful point: these distinctions we have may be useful, but they're a little arbitrary.
Regarding historical analogies, I think what we're experiencing with AI and art is a combination of two things: the historical trends of new technology affecting art and everything specific to AI right now.
The AI moment is about what these things mean as agents. Are they intelligent? How are they disrupting and threatening society, politics, culture?
It's that along with all the historical trends in art that makes this moment unique. So when we're talking about whether algorithms can be artists, that's really about whether these AI systems are intelligent or how much like humans they are. That's not specific to the art trends.
We're seeing all the same trends with AI-impacted art that we've seen with other technologies. Everything that’s unique and new about this moment is really what's unique and new about AI as a technology.
------
Aaron Hertzmann is a researcher, writer and artist. He is Principal Scientist at Adobe Research and an Affiliate Professor at the University of Washington, where his work spans computer graphics, human vision, and the theory of art and computation.
Peter Bauman (Monk Antony) is Le Random's editor in chief.
