Kyle McDonald on Computer Softness

Kyle McDonald is an artist working with code. He spoke with Peter Bauman (Monk Antony) about his wide-ranging engagements with technology and culture. They cover McDonald's chatbot-to-AI-researcher-to-artist origin story, how 2015 made computers feel softer and more human, plus the idea that AI is building a dense mirror world out of every trace of our digital past.
Peter Bauman: In late 2025, you spent fifty days in the South Pacific working on digital infrastructure and trying to capture a rare flash of light called te lapa. How was the experience part of your art practice?
Kyle McDonald: My practice is wide-ranging with everything from machine learning to traditional boat building. But it's all connected to culture and technology broadly, with all its different shapes.
That got me connected to some folks in the Pacific on an island called Taumako in the nation of Solomon Islands as well as Basilaki Island in Papua New Guinea. They still sail traditional boats similar to what was used to settle the Pacific three or four thousand years ago.
I got connected to them because one of the many traditional signs they use for navigation is a mysterious phenomenon that's not explained or understood by science called te lapa, which means “the flashing.” It's a burst of light that happens over the open ocean at night leading to distant islands one hundred fifty miles away or more, which is still a few days of sailing.
No other signs tell you an island is there at that distance so it’s very useful. It happens rarely, only a few times every night and it's very faint. You have to be taught to see it.
It piqued my interest as someone who thinks about different types of light and mysterious things that are at the intersection of light and technology.
I've been working with folks on these islands for the last five or six years to document te lapa. We only recently started to have the right camera equipment for documenting it because it's so low light. So we haven't captured it yet.

I've started making simulations, interviewing people and doing reverse photography to create representations of times people saw it. It turns out they converge, which is anthropologically and culturally interesting. People see the same thing.
I've also gotten involved in their other goals related to digital technology, which include Internet access and solar power. So I've now gone to both of these islands to help install Internet for the first time. That itself was a really fascinating dialog to have and one of the most incredible moments that I've seen of culture and technology intersecting in my life.
Working on these islands, at the intersection of contemporary digital technology and traditional methods, has really changed my perspective on what the roles of technology, art and design are in society.
Peter Bauman: It seems like that multidisciplinary instinct has been with you from the beginning. As a university student, you were originally planning to be an AI researcher before deciding to become an artist. What prompted that decision? What about your skills and interests were more conducive to art-making?
Kyle McDonald: I made art from the first moment I could pick up a pencil, from the first moment I could write code.
I knew what I did with the pencil was art but it took me until my late teens to realize that programming could be art, too.
When I was in high school around 2001, I got really obsessed with the idea of chat bots. I was building these very simple programs that would sit in IRC [Internet Relay Chat] channels and try to pass as human. I got a kick out of that.
I thought, “There's something here; this is the future somehow. I don't know how it's going to work but this is what the future will feel like.” I wanted to research and study it. I went to university to do AI research and make machines that helped us reflect on what it means to be human—maybe could even pass as human.
After studying a few years, I noticed I was skipping many of my computer science classes to go to arts classes instead. It was around that time that my AI lab director told me, “Kyle, I think you might actually be an artist. You've got all these crazy ideas and you're not really guided by the same kinds of research questions that a scientist or an engineer is guided by. Maybe you should take that more seriously.”
What I've learned since then is, yes, I'm an artist. If anyone is wondering if they're an artist, the way you can tell is that you can't stop making art. If you can stop making art, it's a hobby. You're not an artist.
That's been happening my whole life. I've been unable to stop. I've always got new ideas for what I want to make. That transition out of an AI research lab into an MFA was the moment I started realizing it was serious and inescapable.
Peter Bauman: That was way before this current deep learning AI boom; it was a fascinating and experimental—even aimless—time in AI research.
Kyle McDonald: If it was a decade later or something, yeah, it would have felt very different. Then in my AI research lab, one of the topics I was working on was “reference resolution.” It’s for ambiguous sentences like “I put the orange on the table. Can you get it for me?” What does “it” refer to? Humans intuit that “it” refers to the orange.
At the time in AI research, that was a really complicated problem that you had to solve with a lot of code and special cases. In 2007, I was telling my lab director, “There are new techniques based on neural networks that were tested in the '80s but never really took off. I have this feeling that there's something there. Maybe if we had enough data, we could train a neural network to find the answer for us.”
He was like, “Oh, that's never going to work. That sounds like a terrible idea.” If it had been just five years later, it would have been 2012 and AlexNet would have been out. There would have been something I could point to that would say neural networks are coming back. They weren’t lost in the '80s.
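The rule-based style of reference resolution described above can be sketched as a toy recency heuristic. This is purely illustrative, not the lab's actual code, and the hand-maintained noun list is an assumption. Tellingly, the naive "most recent noun" rule confidently answers "table" for the orange example, the wrong answer, which is exactly why such systems accumulated so many special cases.

```python
# Toy rule-based pronoun resolver: resolve "it" to the most recently
# mentioned known noun. The lexicon is hand-maintained (an assumption
# for this sketch), as such systems often required.
KNOWN_NOUNS = {"orange", "table", "book"}

def resolve_pronoun(sentences, pronoun="it"):
    """Return the most recent known noun seen before the pronoun."""
    candidates = []
    for sentence in sentences:
        words = sentence.lower().replace(".", "").replace("?", "").split()
        for word in words:
            if word in KNOWN_NOUNS:
                candidates.append(word)
            if word == pronoun and candidates:
                return candidates[-1]
    return None

# The recency rule picks "table", though a human knows "it" is the orange.
print(resolve_pronoun(["I put the orange on the table.",
                       "Can you get it for me?"]))  # prints "table"
```

The failure is the point: patching cases like this one by one is the "lot of code and special cases" McDonald describes, and what learned models eventually replaced.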
Peter Bauman: The level that neural nets were considered dead is endlessly fascinating. I spoke with Ian Goodfellow in late 2025, who said something similar. His professor at the time was even Andrew Ng. Ian asked him whether the course would cover neural nets at all. Apparently, Ng was like, “Oh, they don't work. Don't bother learning about them.” Andrew Ng, now one of deep learning’s superstars!
So you finished your MFA in 2009. How and when did you begin experimenting with machine learning in your art practice?
Kyle McDonald: At the end of undergrad, I was already doing some experiments with neural networks. The experiments were broadly with AI, but I especially loved neural networks. They were more music-adjacent because at that time, most of my practice was sound-focused.
I built a system in 2005 that tried to clap along with you. It tried to understand the rhythm that you were clapping in real-time and follow along. It trained a neural network and treated it as a probabilistic problem.
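The clap-following idea can be sketched without any neural network at all. The version below is a stand-in for the probabilistic system described here, predicting the next clap by averaging recent inter-onset intervals; the timing data is invented, and the real 2005 system used a trained network rather than this simple average.

```python
def predict_next_clap(onsets, window=4):
    """Predict the time of the next clap from recent inter-onset intervals.

    A stand-in for the probabilistic/neural follower described in the
    interview: average the last few gaps between claps and extrapolate.
    """
    if len(onsets) < 2:
        return None  # need at least one interval to extrapolate
    intervals = [b - a for a, b in zip(onsets, onsets[1:])]
    recent = intervals[-window:]
    average = sum(recent) / len(recent)
    return onsets[-1] + average

# Invented clap times (seconds), roughly 120 BPM with human jitter.
claps = [0.0, 0.5, 1.0, 1.52, 2.01]
print(predict_next_clap(claps))
```

A real-time system would re-run this prediction after every detected onset, which is what gives the follower its improvisatory, call-and-response feel.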
Another system would try to continue drawing strokes and interpret those drawing strokes as musical gestures. I could sit with a Wacom tablet, like a pen interface and draw these abstract lines. I was training a neural network in real-time to imitate my drawings. It would continue drawing once I took the pen off the tablet. That would then get synthesized as this very noisy performance. There were other experiments like that.
I was really fascinated by the application of neural networks for real-time sonic exploration, thinking of them as this stand-in for what happens in a performance context between multiple musicians who are improvising together.
That turns out to be one of the least common applications of them today. Mostly, we still think of neural networks as non-real-time processors that are designed to receive a prompt and then spit something out—not with real-time dynamics. I'm still interested in those real-time dynamics when I make new work with AI today.
I’m in Tokyo now working with Daito Manabe on a project called Transformirror, which we built in 2023, originally with SDXL Turbo. That was the first model that did image style transfer with prompt-driven generative aesthetics at a real-time frame rate of more than ten or fifteen frames per second.
We've been building variations on that piece for the last two years and installing them at different festivals. I love that when you stand in front of this piece, it really shows you what's happening internally in these systems.
You can explore it with your body and understand it intuitively, which I think is one of the most important roles of the arts and of artists: we are here to give people new intuition and direct experience.

Peter Bauman: I’m fascinated by that transition to deep learning in the 2000s and its impact on creativity. Systems art has a deep relationship with sound and music, going back to Musikalisches Würfelspiel [eighteenth-century “musical dice games”], digital computers and ‘80s neural nets with people like Peter Todd.
Kyle McDonald: There's a reason for that: music is very low-dimensional compared to video, images and even text. There was also a lot of early experimentation with text, which is step number two after music, starting with CharRNN.
Before that, there was a system I was really influenced by in the early 2000s called MegaHAL (1998) by Jason Hutchens. It was the first semi-human-passing chatbot I ever saw. Of course, it broke really quickly. After more than two questions, you could tell it was a bot.
But the individual answers were not something you’d expect from a computer. It was based on a Markov chain system with some similarities to the way LSTM-based text models were used around 2015.
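The Markov-chain idea behind systems like MegaHAL can be sketched as a word-level bigram model: learn which words follow which in a corpus, then walk the table from a seed word. This is a minimal illustration, not Hutchens's implementation (MegaHAL was considerably more elaborate), and the training corpus is made up.

```python
import random
from collections import defaultdict

def train(corpus):
    """Build a word-level bigram table: word -> list of observed next words."""
    model = defaultdict(list)
    words = corpus.split()
    for current, following in zip(words, words[1:]):
        model[current].append(following)
    return model

def generate(model, seed, max_words=10, rng=None):
    """Walk the chain from a seed word until a dead end or max_words."""
    rng = rng or random.Random(0)
    out = [seed]
    for _ in range(max_words - 1):
        options = model.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)

# Invented toy corpus; a real chatbot would train on chat logs.
corpus = "the cat sat on the mat and the cat saw the dog"
model = train(corpus)
print(generate(model, "the"))
```

Individual outputs can look surprisingly plausible because every adjacent word pair was seen in real text, but coherence collapses over longer spans, which is exactly the "breaks after two questions" behavior McDonald describes.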
Peter Bauman: Speaking of 2015, when I researched early artists responding to deep learning, your name came up because that year you were engaging with things like GANs from very early on. When Alec Radford first posted about DCGANs, you were one of the first artists to really notice. Radford, of course, then went on to lead the GPT research behind ChatGPT.
Kyle McDonald: I'm a huge fan of Alec. He's one of the nicest people I know in this space and one of the smartest. He's your favorite DJ's favorite DJ. Everybody who knows knows who he is but he's not out there getting the headlines. I'm happy for him because it means he can focus a little bit more.
I saw DCGANs really quickly, which was around the same time that style transfer was starting to pick up.
Peter Bauman: 2015 was really the year with DCGAN, style transfer and Deep Dream. You also mentioned CharRNN. How did those developments impact what you were working on?
Kyle McDonald: CharRNN was the first in May 2015. Then came Deep Dream and style transfer. That was just an insane year. Another one of those red-pill moments for me was learning about word2vec. They all represented a drastic change in what computers were, to me, at least.
It was also around that time I learned about t-SNE, even though it had been around for a little while. It required me to completely rewire my brain in terms of how I saw what the computer does.
It made me feel so much more comfortable with computers in 2015. I felt like computers were soft again in this way that I'd lost from too much time spent programming. Computers were fuzzy and human again.
They made mistakes and they glitched in ways that weren't programmatic glitches.
I had this new idea for vector embeddings and similarity functions that could be applied to anything. Before 2015, I'd heard about these technical ideas. I was familiar with dynamic time warping for audio similarity and edit distance for text similarity. But I didn't really have this general idea of embedding distance.
That’s what made computers soft to me. It meant that there was a way that you could treat an entire set of things as a collection of numbers where you could work with them the way you work with other numbers. That made me feel like the world was my oyster. I could work with images, text and anything I could create an embedding of.
I can have these high-dimensional spaces that work similarly to the way my brain feels. I feel like I'm constantly sorting, organizing, creating hierarchies and conceptual infrastructure for myself. But I never felt that I could really do that easily with the computer. That year, 2015, I felt like machine learning had the breakthrough it needed for me to relate to the computer that way. That was the big conceptual change for me.
Latent space was how I conceptualized the world already; I just didn't have the tools, name or code for it.
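The embedding-distance idea McDonald describes reduces to a few lines: represent each item as a vector of numbers and compare vectors with cosine similarity. The three-dimensional "embeddings" below are invented for illustration; real word2vec-style vectors have hundreds of dimensions and come from a trained model, not a hand-written dictionary.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-d "embeddings", invented for illustration. A real model would
# place semantically similar items near each other automatically.
embeddings = {
    "cat":   [0.9, 0.8, 0.1],
    "dog":   [0.8, 0.9, 0.2],
    "piano": [0.1, 0.2, 0.9],
}

print(cosine_similarity(embeddings["cat"], embeddings["dog"]))    # high
print(cosine_similarity(embeddings["cat"], embeddings["piano"]))  # low
```

The generality is the point: the same similarity function works on embeddings of images, text, or audio, which is what made "everything as a collection of numbers" feel like a new way to relate to the computer.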
Peter Bauman: Your inquiries into machine learning have consistently been critical and not just experiments with the latest buzzy tech. I’m thinking of Terrapattern (2016) with Golan Levin, David Newbury, and others.
Kyle McDonald: I was doing work that was about AI before that 2015 moment. But I don't know that audiences could understand it well back then. In 2015, I worked on this project called Exhausting a Crowd, which is a twelve-hour video from Piccadilly Circus in London. It's online and you can click on something that you think is interesting and then leave a note about what you saw.
Everything is happening there. People are making out. Pigeons are landing on people's heads. Couples are going for walks. Kids are acting up. There's breakdancing. I had this idea that in the future, these kinds of spaces will probably be policed by automated systems in ways that are far beyond what any individual human could pretend to accomplish.
Maybe because of the way that convolution networks were improving, we could build an early version of that, not to celebrate it, but to bring a little bit of the future into the present and try to see how it makes us feel and reflect on it.
I realized I was still a year or two early. The tech stack was not there yet but I could build a preview by crowdsourcing it and treating humans as the AI. Now when you go there, there's this strange feeling of a computational intelligence looking at this scene, even though it's made up of hundreds of thousands of people who have visited and left notes. So for me, that project is really about AI. But if you go there, it might not immediately feel that way.
There's some earlier work I did with Lauren Lee McCarthy. For pplkpr [People Keeper] in 2014, we had an automated system for managing your social life based on biometric data. It would look at your stress level over the course of the day. If someone stressed you out all the time, it would just block them from your contact list. If someone made you feel good, it would start sending them messages to schedule a date.
Lauren and I were thinking a lot about the agency we give up to automated systems, what we would now call AI, knowing that those systems were going to keep getting more accurate and that we would want to give them more agency.
It's very disconcerting to be controlled by something that moves in a very non-human way.
The only reason I was making these projects is because I'd already spent ten years at that point thinking about AI. It just wasn't until 2015 that we really had the tools to make work relevant to wider audiences.
Peter Bauman: Part of remaining relevant is having a sense of the future. In July 2024, you talked about the future of engaging with AI as “its own open world which we access through a tiny portal.” In May 2025 you said, “I feel like people still aren't putting it together that this is probably the future of most computation—a shared, simulated hallucination, loosely structured by a story/script instead of code.”
How do you see the future of world-building with AI?
Kyle McDonald: Thank you for picking up on that. These ideas for me are connected to a feeling I had in 2011 when I got my first phone with Internet. I felt like there was this bubble of access that just expanded, touching everywhere I went. Before that, if I wanted Internet I had to find a cafe and pull out my laptop. Then I needed a WiFi connection and password. Now it was with me all the time.
Since then, I started noticing other expanding bubbles of access and connection. I took a picture off the Santa Monica Pier around 2016. Looking at the photo after uploading it to the internet, I realized people's faces weren't recognizable; I couldn't run face recognition on them. But if you cross-referenced the photo against everything else that was publicly available from that day, you could probably figure out who a lot of those people were.
I was thinking a lot about AI and the way that AI will feel when it becomes that investigative researcher that ties everything together. Today, we have these LLMs that have ingested everything. They have the ability to look up information and piece things together.
We're rapidly converging into this future where our access to the past is multiplying and things that we used to think of as indecipherable are becoming legible to AI.
I have this sense that there's this bubble of history and shared reality that's expanding. We’ll have the ability to investigate any moment from any perspective. That's the feeling. It's really exciting and weird and scary to think that all of the traces we've left across our digital lives could be legible in a totally different way.
The people who know what this is like are the people who have had a level of celebrity where people have tried to learn everything about them. We don't know what that's like for everyone to have that experience all the time.
There's a way that all of the data we've ever put online, every bit of documentation that's ever been captured, every street view photo, every satellite image, every book that's been written, every blog post, every bit of personal information that's been linked to the Internet—it's all coming together in a way that creates the first singularity, the singularity before our identities are merged.
The deeper we go down this road, we will come across more and more surprising outcomes that will be a side effect of this weird, secondary, mirror world that AI inhabits and mediates for us.
A big part of the map of human history and society is being populated in a way that is so much more dense than we understand. Then these other parts are still completely unknown. That worries me sometimes but I'm hopeful those spaces will be maintained as well. It’s a huge world model that I see getting slowly constructed.
This is connected to the idea you asked about but it's not the same. That tweet was more about the way we interface with computation. I'm looking at the new research where people are using video models to simulate desktop environments. Instead of a video model generating a video of Will Smith eating spaghetti, the video model is generating a video of someone using their computer.
It turns out when you do that, the computer acts normally. When it generates a cursor and you double-click on an app, it opens and acts as usual.
With LLMs providing a general-purpose conversational layer to computation, I think this will all merge to become an interface layer for this big world model I mentioned.
------
Kyle McDonald is an artist working with code. He crafts interactive installations, sneaky interventions, playful websites, workshops, and toolkits for other artists working with code. He explores the possibilities of new technologies to understand how they affect society, to misuse them and to build alternative futures.
Peter Bauman (Monk Antony) is Le Random's editor in chief.
