Jess Tucker on Longing for a Face
Multidisciplinary artist Jess Tucker examines how contemporary surveillance technology tracks our bodies and manipulates our minds. The artist spoke with Peter Bauman (Monk Antony) about the machinic gaze, entanglement and misusing face and body tracking to expose control systems.
Peter Bauman: When did you first get interested in the deep learning space?
Jess Tucker: I started working directly with machine learning a couple of years ago. When it came into my practice, it was with a specific relation to questions I'd been asking with other media technologies.
I jumped in at the diffusion moment and found myself wishing I’d gotten into it earlier. At the same time, I'm pleased that my questions were already well developed before I started using it, so it wasn't arriving without that underpinning of my own investigative interests.
Indirectly, I started using it because I'd been using face tracking for as long as that's been a thing. Face tracking originally wasn't dependent on deep learning but increasingly became so. Noticing the aesthetic outcomes of that switch was something I was thinking about in my work. I missed those early glitches of face tracking when it wasn't as good.
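A minimal sketch of that shift, with assumptions flagged: OpenCV's classical Haar-cascade detector stands in here for the pre-deep-learning era of face tracking, and MediaPipe's learned face mesh for the current one; neither is confirmed as Tucker's actual toolchain.

```python
# Sketch of the two eras of face tracking: a classical Haar-cascade
# detector (hand-engineered features, prone to glitchy dropouts) versus a
# deep-learning face mesh that rarely loses the face. Assumes the
# `opencv-python` and `mediapipe` packages and a webcam at index 0.
import cv2
import mediapipe as mp

cap = cv2.VideoCapture(0)

# Pre-deep-learning era: a cascade of hand-crafted Haar features.
haar = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

# Deep-learning era: a trained model returning 468 dense landmarks.
mesh = mp.solutions.face_mesh.FaceMesh(max_num_faces=1)

for _ in range(300):  # a few seconds of frames
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # `boxes` flickers in and out from frame to frame -- the old glitch aesthetic.
    boxes = haar.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    # `result.multi_face_landmarks` is dense and stable -- the totalizing view.
    result = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))

cap.release()
```

The classical detector tends to drop out and jitter in the way Tucker describes missing; the learned mesh rarely loses the face.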
Peter Bauman: Now, it’s scarily good. You mostly studied media studies and film, and I wonder how you see deep learning impacting those fields?
Jess Tucker: My first undergrad was cinema and media studies. I was really working through philosophy and thinking about subjectivity, the gaze and how media deals with that. Coming out of my first undergrad, I started making my own artwork. I was really interested in the Internet selfie culture of that time.
Facebook had just opened up to more than college students and YouTube vlogging was really big at the time. I was like, “Okay, I've been studying how we consume and digest and metabolize dominant media, make that part of ourselves and reproduce the ideologies carried through those mediums. But now we're increasingly the producers of that media ourselves, making ourselves its content and circulating and distributing media in this social way rather than top-down.”
So I made a video installation where I was sampling a lot of these YouTube vloggers and I was just very interested in this new situation of the gaze that it represented. Who are you talking to when you make a vlog?
How is it that the person is actually seeing their own reflection in the moment of creating that index, while everyone else who sees it is seeing it later?
Is this a space of presence or a space of absence?

As for deep learning in film and media, I became increasingly interested, too, in what the machinic gaze is within these interactions. It's not just these displaced and fragmented human gazes but there's a very active machine gaze that gets stronger and stronger. I think with deep learning that becomes even more true. A lot of the content we're seeing is now mimicking that self-created content.
Peter Bauman: I’m interested in how many post-Internet ideas, like the ones you touched on early in your career, are now manifesting again in the rise of AI.
You’ve said before: “I explore cracks in machines as in humans, finding empathy in our entangled faults and fortes.” How do you see the relationship between these technologies and yourself? Can they be separated or is it an entangled relationship?
Jess Tucker: Entanglement is a key word throughout a lot of my research and experimentation, even my belief in reality.
But I think, discursively, separation is not only possible but important sometimes. So entanglement, for sure, and a deemphasis on the individual is definitely something I'm thinking about.
At the same time, I've been asked a lot since I started working with AI, “How does it feel to be collaborating with AI?” And I find that's not quite the right word for it because I think we keep getting really caught up on the agency of the AI and the human meeting.
I'm more interested in the breakdown of our definition of agency and intelligence to begin with and a blurring of that independence. So if I say that the machine has agency and I'm collaborating with that agency, it also implies a possibility for consent from the machine where it agrees to work with me.
But it’s important to note that's not the nature of that relationship. It's a different encounter.
Entanglement is also not necessarily something new to art made with AI. There have been artists working with chance-based systems or sourcing randomness from nature who have deemphasized their own artistic genius or individual vision.
Peter Bauman: Yeah, even the person considered the originator of AI art, Harold Cohen, thought AARON was a form of self-portrait and an extension of himself. So this idea has been around since the beginning, even when it was just a hard-coded expert system.
Back a bit to media studies, I wonder to what extent the ideas of Donna Haraway and cyborg feminism have impacted your work.
Jess Tucker: I have one series of sculptures called something other than a shroud, which is a direct quotation from A Cyborg Manifesto. When I read it, I was still quite young, but it stuck with me in many ways because, on the one hand, I've always been interested in how technology is entangled with our experiences of embodiment and selfhood and identity.

I'm also thinking about that through a feminist lens and trying to find my own feminist lens. My first undergrad was half at MIT and half at Wellesley College, which is this all-women's college in Massachusetts.
A lot of the work I was doing was thinking about feminist film studies and the male gaze. While a lot of that was really influential and revolutionary for me, I kept feeling like something was missing and that a lot of the feminist proposals that I was dealing with were falling short.
I felt like they often reinforced a lot of the problems and these binaries. And A Cyborg Manifesto is a real call to undo binary thinking and universal truth, to embrace something more entangled and more dynamic and not to come up with a stable doctrine for the way things are.
Peter Bauman: Those ideas of Cohen and Haraway are part of a post-AI tradition that I and others have written about.
Writers like Joanna Zylinska and Martin Zeilinger have also framed post-AI art as embracing post-humanism. Do you see your practice as embracing more of this tradition?
Jess Tucker: This was one of the proposals that was a little new to me: the idea of AI art versus post-AI art, because I do find a kinship with this experimental tinkering approach, whether it's early or later. But the latter is dealing with things that have already emerged within the cultural reality, where “AI art” is already participating in reality rather than being the dawn of that thing, still unfamiliar to the general public.
It's like that becomes part of the material that I'm working with as well. It's not just the AI technology itself. It's the role of that aesthetic and the cultural awareness of it becoming part of the work as well. That's part of why my work makes sense right now: there is this familiarity to the aesthetic but I'm also playing with it in this unusual way.
I'm tinkering but I'm just tinkering at a later point so I have different material to include within that.
Peter Bauman: With other technologies, like photography, the digital and the Internet, after this initial phase of exploration, it seems like disrupting and breaking these technologies became central. Do you see your interest in disrupting as signifying this shift?
Jess Tucker: It’s more misusing versus breaking because I'm not fully breaking it, right? I am also capitalizing on what it's uniquely capable of doing. It's more that I see what it's set up to do and I see how people are trying to improve its ability to achieve those purposes.
I'm more interested in its failures to achieve those purposes: the mistakes it makes in creating bodies or placing the face in the right place. I try to emphasize those mistakes and to think about what effect that has.
One of the things that's been developing is face- and body-tracking technology that's now much more boosted by AI, improving its accuracy and its understanding of where the face goes, never losing that tracking information, making it realistic, making it complete, this totalizing view.
Part of my misuse is to try to either directly get in the way of that accuracy or to even spiral it into such a level of absurdity that the mistake is no longer the mistake but the purpose.
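To make that misuse concrete, here is a hypothetical sketch, not Tucker's actual workflow: take the landmark positions a face tracker returns and compound their error frame by frame, so the tracked face drifts from slight inaccuracy into absurdity.

```python
# Hypothetical sketch of "spiraling the mistake into the purpose," not
# Tucker's actual workflow: take the (x, y) landmarks a face tracker
# reports and displace them with noise that compounds every frame, so the
# rendered face drifts from slight inaccuracy into outright absurdity.
import random

def spiral_landmarks(landmarks, frame_index, growth=1.05):
    """Return landmarks displaced by noise that grows with frame_index."""
    scale = growth ** frame_index  # error compounds exponentially over time
    return [
        (x + random.gauss(0, 0.002) * scale,
         y + random.gauss(0, 0.002) * scale)
        for x, y in landmarks
    ]

# Feed each corrupted frame back in as the next frame's input, so the
# mistake accumulates instead of being corrected away.
face = [(0.4, 0.45), (0.6, 0.45), (0.5, 0.7)]  # toy eyes-and-mouth triangle
for t in range(120):
    face = spiral_landmarks(face, t)
```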

That relates to the earlier question of this empathetic relationship of cracks in machines as in humans. Especially when it's done on a face or a body, we have this innate way of relating to that failure, that uncanniness or that horror, when the machine makes a mistake in its representation.
It reveals something about us: our longing for a face or a complete picture, and the built-in predictive machinery of what we expect to see in an image.
When it fails, there's an emotional response to that failure.
We train our machines to achieve the things we long for and desire. When it fails, we feel that as well.
Peter Bauman: I'm very interested in human empathy with different intelligences because it gets to the more interesting questions today about the nature of these intelligences, like AI. But empathizing with a machine is something different because it's not living. Is it just a ludicrous idea in the first place?
Jess Tucker: It's a question about what the function of empathy is because when I say empathizing with machines, there's something that it reveals about ourselves. So it's also getting in touch with something of yourself that maybe you weren't aware of.
I don't know the future of machines and the levels of agency that will be achieved and therefore what kinds of rights we should think about for machines.
But for now, the empathy is not so much a subject meeting a subject, a mind meeting a mind, but a mind meeting a mirror.
Although technology definitely contributes a lot to the image that comes out of the mirror, we're in a feedback, entangled relationship with machines for sure.
Peter Bauman: And that watchful mirror can even be used for control. I love something you said: “I'm particularly interested in face- and body-tracking technologies and their relationship to historical systems of visual control, from the development of linear perspective as a totalizing viewpoint to religious and imperial imagery that rendered bodies as either sacred or deviant.”
This highlights how these systems are never neutral and derive from apparatuses of power. What do you hope your work achieves by making these systems visible?
Jess Tucker: So systems of visual control, and especially thinking about surveillance more explicitly, are not precisely where I started with making work. I wasn't coming from this tactical or political activist approach that we should refuse these systems. I was more in this abstract, poetic, philosophical relation to it.
But more and more over time I realized how surveillance and control mechanisms are something we increasingly internalize through our visual training.
This visual training is something that relies on and evolves through the technologies used to create it and these technologies are often pursued with the goal of imposing forms of control and power.
These aren't always obvious to us in the ways that we participate within them. I think increasingly people are aware but it's not there at the front. It is purposefully hidden. So the impression is just received through the participation in those images.
It's not necessarily a one-to-one communication like, “You need to be aware of this, and I hope my work is doing that.” But it's hopefully something people walk away with and think about more afterwards.
Peter Bauman: Technology has always had these complicated relationships with power, going back to digital computers that were basically developed for war-making.
Do you think that will happen with any of these technologies? Is it that we always need to be careful when we are dealing with things as sensitive as surveillance and body tracking? Or is it that, oh, eventually, this is just how self-driving cars are going to be used?
Jess Tucker: I think it's always both at the same time, and that's why it's so effective: you can't really separate them. I'm thinking, for instance, of the ways that we track our steps or our music preferences, and we feel really satisfied with this perspective we can get on ourselves or that the machine gives back.
It can feel like it's serving our desires but embedded within that is the trade-off, right? This also renders your intimate life visible to an invisible entity that doesn't necessarily have your interests at heart.
There's this trade-off of convenience, efficiency and effectiveness and things that can serve our needs, but a realization, too, that to enable that, we're giving up something that can be used for these other purposes.
Peter Bauman: Your work explores those trade-offs with our intimate lives a lot. For example, your project cycles uses your own photos and videos as part of a custom AI workflow.
Jess Tucker: People work with this in a lot of different ways. Everyone always says, “You haven't gone to art school until you've done a piece naked.” And I was never that person. I'm pretty shy about that kind of thing. But then dealing with AI, especially with what deepfake porn has become, I've had multiple friends who've had to deal, unfortunately, with their images being used non-consensually to generate this content.
Just thinking about how much of our private lives we do give over to machines, whether that's naked or not. They're just with us all the time and we're trusting that our privacy will remain private but knowing more and more that it's not.
I for sure wanted to use my own material because it was like, “How can I take my own material into my hands and not have it used by someone non-consensually?” Rather, I was saying, “I'm going to use it in my own way. And it's my material for sure, and I'm not taking anybody else's.”
How do I play with what is actually visible in that private material? Because I basically overexpose my face to the machine such that I've trained my own models to reproduce everything they see out of only my face.
On the one hand, it's this hyper-exposure of self and my identity but it covers everything I see.
When I feed it an image of myself somewhat naked, that's never actually seen in the end. Or one of the video pieces, fickleporno, is a motion capture sequence of my partner and me but there's no image, actually.
It's just data points, just motion capture. The image only emerges through these AI processes and you do not see anything sexual or naked in the final outcome. It's only faces and you can tell by the forms that there was something intimate that happened there, yet it's hidden.
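A plausible shape for that pipeline, offered as a sketch under stated assumptions rather than Tucker's published code: the motion-capture points are drawn as a bare skeleton image, and a pose-conditioned diffusion model, combined with weights fine-tuned only on the artist's face, generates the visible frame. The model IDs and the LoRA path below are illustrative assumptions.

```python
# Plausible sketch of a mocap-to-image pipeline like the one described
# above, not Tucker's published code: capture points become a sparse
# skeleton image, and a pose-conditioned diffusion model plus weights
# fine-tuned only on the artist's face produce the final frame. The model
# IDs and the "./my-face-lora" path are assumptions for illustration.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from PIL import Image, ImageDraw

def mocap_to_skeleton(points, size=(512, 512)):
    """Render normalized (x, y) capture points as a bare pose image."""
    img = Image.new("RGB", size)  # black canvas, no photographic content
    draw = ImageDraw.Draw(img)
    for x, y in points:  # only data points, never an image of the body
        cx, cy = x * size[0], y * size[1]
        draw.ellipse([cx - 4, cy - 4, cx + 4, cy + 4], fill="white")
    return img

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
    torch_dtype=torch.float16).to("cuda")
pipe.load_lora_weights("./my-face-lora")  # hypothetical self-trained weights

skeleton = mocap_to_skeleton([(0.5, 0.2), (0.45, 0.4), (0.55, 0.4)])
frame = pipe("a figure made only of faces", image=skeleton).images[0]
```

Under this reading, the intimate act survives only as point coordinates; everything visible is hallucinated by a model that has seen nothing but the artist's face.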
-----
Jess Tucker is an artist whose performances and installations combine video, electronic music, prints, sculptures, and digital interactivity to playfully examine how machinic mediation shapes our experiences of embodiment, selfhood, and desire. Tucker participated in Art on Tezos: Berlin this November with The Second Guess and was a finalist for the Lumen Prize 2025 Still Image Award. Tucker is teaching a new course on surveillance at NYU Berlin and even has a recent album out.
Peter Bauman (Monk Antony) is Le Random's editor in chief.
