A Mean Defense

Peter Bauman (Monk Antony) questions the common perception that generative AI art is inherently "average" or derivative. He considers how artists use images to make meaning in 2025 when new methods and tools flood the visual landscape and individual images lose their weight. Has critical discourse kept pace? Or is the backlash repeating average metaphors?
Buckle up. Things are about to get mean.
AI critics, even artists, like to use the “average critique.” You’ve heard it before: that due to generative AI art’s inscrutable inner workings, its outputs converge on the average, the statistically typical, the already known. Their point is not crude technical confusion; they understand these systems don’t literally blend pixels. Their concerns are cultural and political: that models trained on massive datasets often yield safe, normative or derivative results. In this view, AI art becomes a kind of automated banality, a slide into pastiche—what’s commonly referred to as slop. This essay reviews three of the more thoughtful and eloquent average critiques from Ted Chiang, Hito Steyerl and Gary Marcus.

For Chiang, generative AI models like LLMs “take an average of the choices that other writers have made… That average is equivalent to the least interesting choices possible,” which is why AI outputs are “often really bland.” I call this the technical claim because he relates the blandness of AI output directly to their technical functionality.
To Steyerl, AI-generated images “converge around the average, the median; hallucinated mediocrity. They represent the norm by signaling the mean.” I call this the cultural claim because her framing of the average as a site of mediocrity reflects a deeper cultural dysfunction, defined by extraction and greed.
Marcus’s many claims demonstrate the average critique does not only apply to text or image generators; it applies to the technology at large: “Generative AI hasn’t created memorable songs, books, and movies, despite insanely large amount of input. Why not? ... 1. Everything Generative AI does is derivative and tends to be a kind of average. 2. Generative AI it [sic] lacks a deep conceptual understanding of human experience.” I call this the creative claim because it targets the perceived absence of originality or conceptual depth in the outputs themselves.
But do these critiques oversimplify?
Technically, they conflate statistical sampling with literal averaging or merging.
Culturally, they imply that AI systems must reinforce norms, when in fact they’re capable of doing the opposite.
Creatively, they imply AI systems are inherently derivative and given to pastiche, when in reality they can produce any visual imagery—static or moving—not only the imaginable but the unimaginable.
Technical(ly) Foul
Critics like Chiang claim that AI models, LLMs in particular, produce “average,” safe or bland outputs—statistical mediocrity masquerading as creativity. But this simplification misrepresents how they actually work.
Generative models—from language models (LLMs) to artist-hacked pipelines like BigGAN + CLIP to corporate models like DALL-E—do not output literal “averages.” There’s no copying, averaging or merging of training examples. They sample from learned distributions, generating new content based on internal representations, not by blending past examples.
The model learns patterns and then samples from those patterns. For example, the model might learn that most human faces have two eyes spaced apart above a nose in the center with a mouth below the nose—usually set against a neutral background. It doesn’t average all faces into one blurry image; it generates new faces that follow its learned patterns but vary in skin tone, expression, hairstyle, lighting or angle—each with its own probability of appearing.
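The distinction can be made concrete with a deliberately toy sketch. Nothing here resembles a real model—the “faces” are just pairs of numbers and the “learned distribution” is hard-coded—but it shows why averaging the training data and sampling from a learned distribution produce categorically different results:

```python
import random

# Toy "training set": each example is a (eye_spacing, skin_tone) pair.
# Two distinct clusters stand in for two different kinds of faces.
training = [(1.0, 0.2), (1.1, 0.25), (3.0, 0.8), (3.1, 0.85)]

# Literal averaging collapses everything into one blurry midpoint --
# the caricature the "average critique" describes.
average_face = tuple(sum(vals) / len(training) for vals in zip(*training))

# A (trivially) "learned" model instead samples from the clusters it
# found in the data, producing varied outputs -- never the midpoint.
CLUSTER_MEANS = [(1.05, 0.22), (3.05, 0.82)]  # assumed, for illustration

def sample_face(rng):
    mean = rng.choice(CLUSTER_MEANS)
    return tuple(m + rng.uniform(-0.1, 0.1) for m in mean)

rng = random.Random(0)
samples = [sample_face(rng) for _ in range(5)]
```

Every sampled face lands near one of the learned clusters and varies within it; none of them is the blurred midpoint that literal averaging yields. Real diffusion models and LLMs are vastly more complex, but the basic point carries over: they sample, they don’t average.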

The outputs aren’t deterministic, which means surprise and idiosyncrasy are baked into the system—and are often what artists value most.
This is why “averaging” is such a misleading metaphor. It flattens what is in fact a generative, probabilistic process into something mechanical and dull. The critics use deterministic metaphors—“regurgitation,” “pastiche”—to describe systems that are inherently non-deterministic. That mismatch exposes the real issue: a failure to engage with the actual behavior of deep learning models.
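The non-determinism is easy to see in miniature. Below is an illustrative sketch, not a description of any production system: a made-up next-word distribution, where greedy decoding (always take the most probable word) yields the “bland” mode, while temperature sampling can surface the unlikely choices:

```python
import math
import random

# Invented next-word scores a language model might assign (illustrative only).
logits = {"the": 2.0, "a": 1.5, "luminous": 0.2, "recursive": 0.1}

def softmax(logits, temperature=1.0):
    """Convert scores to probabilities; higher temperature flattens them."""
    exps = {w: math.exp(v / temperature) for w, v in logits.items()}
    total = sum(exps.values())
    return {w: e / total for w, e in exps.items()}

# Greedy decoding is deterministic: always the single most probable word.
greedy_choice = max(logits, key=logits.get)

# Sampling is not: rarer words appear with nonzero probability.
rng = random.Random(42)
probs = softmax(logits, temperature=1.5)
words = list(probs)
draws = rng.choices(words, weights=[probs[w] for w in words], k=50)
```

Greedy decoding always lands on the same word; fifty weighted draws almost surely do not. The “average critique” describes the greedy mode of these systems; sampling is where the surprise lives.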
Generative AI is not inherently bland—it’s capable of surprise and subversion. Whether it delivers depends entirely on how it's used and by whom.
This gap reveals a deeper lack of imagination: understanding generative AI as only an instrument of the present’s inertia, rather than a new paradigm for media, through which never-before-seen visual, linguistic and conceptual languages can emerge.
Are there real biases? Yes. And they need exposing and fixing. Are outputs often dull? Of course. But that’s not because the model can’t be novel. It’s because it wasn’t prompted to be. Like any creative tool, generative AI is shaped by the hands that use it. To say it produces only averages is like blaming a paintbrush for a boring painting.
But is this being fair to the critics? Their deeper concern is cultural, not technical. Doesn’t their core point hold? These systems—unless actively subverted or restructured—privilege an increasingly nasty norm, not anything new. They then do it in an uncaring, unsustainable way.
Cultural Static
“Okay,” the critics may say, “even if AI can surprise, its outputs still reinforce harmful norms and biases while contributing to the loss of all culture with either ecological collapse or AGI.” It’s a Choose Your Own Destruction!
These are valid points. My aim is not to outright dismiss (or ignore) the critics. It’s to hold the critique up to scrutiny and then determine how valid it still is—99%, 50%, 20%? Each reader will have their own percentage and make their own determination.
So let’s understand the cultural critique. After all, it’s the deeper concern for Steyerl, Marcus and Chiang. Even if these models can generate novelty, they’re embedded in systems—tech platforms, APIs, commercialized recommendation engines—that optimize for safety, scale and a familiarity that passes for novelty. The “default AI output” isn’t weird or radical—it’s frictionless and capitalist-friendly, nudging its creator in that direction as well. It risks automating at best the status quo or at worst a regression, reinforcing long-dominant ideologies, all under the guise of creativity.
Steyerl calls it “hallucinated mediocrity... the norm by signaling the mean.”
Ouch. But what exactly are they criticizing in the end? Generic text-prompt generative AI? The digital art community has a word for this: slop. We know it when we see it; no one mistakes it for art proper—nor should they!—at least in its raw context.
That would be like looking to criticize something as nebulous as painting, ignoring Gentileschi and choosing to target that hotel room reproduction of a valley.
We already agree!
Is slop even meant—or pretending—to be art? Often, no. Many use it for quick visualizations to post or share with friends—images that, in the past, would likely have gone unvisualized: that Ghibli-style family photo, tacky but mostly harmless.
For AI-generated work like Minne Atairu’s Box Braid in Uppercase, it’s apparent at a glance that it’s not slop. I think of such works more like Beeple’s EVERYDAYS, which the artist himself describes as sketches: serial images that carry meaning in context but are less legible in isolation. Both bodies of work still display hand, intentionality, tongue-in-cheek humor, prolonged dedication to craft and searing contemporary critique. Taking a page from EVERYDAYS, Atairu’s Studies series—far from reinforcing norms—uses the model’s capabilities to subvert them. The ability to create quickly, like Beeple working manually or Atairu with generative AI, doesn’t diminish the work.

The brutally honest Beeple told Verse Talks in May 2025, “To me it’s more interesting to try a bunch of different things because they’re all very experimental. That’s really at the heart of what I do: to make something that I have not seen before. That carries over not just to the EVERYDAYS but to the boxes, sculptures, studio space and events. It’s all seeing something I’ve never seen before. To me, that is the role of the artist.”
For Beeple or Atairu, their imagery is about experimentation and subversion as opposed to conformity. That they are more often than not seen as merely “sketches” does not diminish their power, particularly in aggregate.
Beeple continues, “I don’t go back to the EVERYDAYS. Once they’re done, they’re done. So they’re all sketches. It’s just some happen to turn out better.”
What might come off as a self-deprecating jab at one’s own work is actually much more profound. It’s a summation by a preeminent contemporary artist of the state of the individual image in 2025.
Creative Block
For the critics, even if AI-generated work can appear culturally relevant, in reality it’s still superficial and empty, “derivative,” lacking “a deep conceptual understanding.” This, however, assumes the individual image is still the core unit of artistic meaning.
This brings us to the core creative shift: the individual image, as an endpoint, is transforming. Within the context of infinite generative systems, a single image is no longer the final object—it’s a brushstroke, a chisel mark.
Take painter and sculptor Lee Ufan. His work centers restraint, repetition, relation—each gesture is considered, each a single idea within a larger framework. In isolation, a single gesture might appear banal—even average—but as part of a larger whole, it reveals its intent and resonance.
Code-based generative artists operate the same way. They don’t necessarily treat a single image as a final product—though they can—but as one of many iterations in an endless, recursive system. Meaning doesn’t reside in a single output but in its relation to the others and to the system as a whole—across prompts, weights and histories.

This is not new. It’s a continuation of postwar systems thinking combined with an understanding that technology has long been both analog and digital. Blown pigments gave way to painting, which gave way to photography, film, video, networks, code and data. Now, the artist works not just on the canvas, but inside the system, exploring either an output or latent space. They curate not just visuals, but logics. Which is why so much of the most powerful generative art isn’t meant to be seen alone.
A 1/1/X can rarely be understood, explained or displayed in isolation.
As Dmitri Cherniak said of his Ringers to Right Click Save, “The idea was that you could mint more than one—maybe you would get a fancy one, a regular one, maybe one with a color flourish or one that’s a little dull—and then put them all together and curate your own grid.”
Looking at a single Ringer is like trying to understand this essay from a single word. Good luck!
The artists most thoughtfully engaging with output and latent spaces understand the reduced value of the single image. They are not endpoints, imitating something old or producing something necessarily average. They build vocabularies, question tools and sustain meaning across outputs and time.
Closing
But the best work emerging now refuses to be subservient to the model. It doesn’t just use the tool. It questions it, probes it, breaks it.
As MoMA's Chief Curator Michelle Kuo recently told me, “It is this combination of challenge and experimentation that characterizes some of the most exciting work at the intersection of art and AI today.”
The most exciting work certainly does not allow the model to lead. But it does recognize the creative space the model opens up.
It leverages that space to fulfill its creative vision. That’s the real defense. Not of the mean but of the artists who refuse to settle for it.
----
Peter Bauman (Monk Antony) is Le Random's editor in chief.