Oct. 15, 2025

Is AI Slop Bad for Me?


When Meta launched Vibes, an endless feed of AI-generated videos, the response was visceral disgust; one of the top comments on the announcement read, "Gang nobody wants this."

Yet OpenAI's Sora hit number one on the App Store within forty-eight hours of release. Whatever we say we want diverges sharply from what we actually consume, and that divergence reveals something troubling about where we may be headed.

 

Twenty-four centuries ago, Plato warned that consuming imitations corrupts our ability to recognize truth. His hierarchy placed reality at the top, physical objects as imperfect copies below, and artistic representations at the bottom ("thrice removed from truth").

 

AI content extends this descent in ways Plato couldn't have imagined. Machines learn from digital copies of photographs of objects, then train on their own outputs, creating copies of copies of copies. Each iteration moves further from anything resembling reality.

 

Cambridge and Oxford researchers recently proved Plato right through mathematics. They discovered "model collapse," showing that when AI trains on AI-generated content, quality degrades irreversibly.

 

Stanford and Berkeley researchers found GPT-4's coding ability dropped eighty-one percent in three months, precisely when AI content began flooding training datasets. Rice University called it "Model Autophagy Disorder," comparing it to digital mad cow disease.

 

The deeper problem is what consuming this collapsed content does to us. Neuroscience reveals that mere exposure to something ten to twenty times makes us prefer it.

 

Through perceptual narrowing, we literally lose the ability to perceive distinctions we don't regularly encounter. Research on human-AI loops found that when humans interact with biased AI, they internalize and amplify those biases, even when explicitly warned about the effect.

 

Not all AI use is equally harmful. Human-curated, AI-assisted work often surpasses purely human creation. But you won't encounter primarily curated content. You'll encounter infinite automated feeds optimized for engagement, not quality.

 

Plato said recognizing imitations was the only antidote, but recognition may come too late. The real danger is not ignorance; it is knowing something is synthetic and scrolling anyway.

 

Key Topics:

• Is AI Slop Bad for Me? (00:00)

• Imitations All the Way Down (03:52)

• AI-Generated Content: The Fourth Imitation (06:20)

• When AI Forgets the World (07:35)

• Habituation as Education (11:42)

• How the Brain Learns to Love the Mediocre (15:18)

• The Real Harm of AI Slop (18:49)

• Conclusion: Plato’s Warning and Looking Forward (22:52)

 

More info, transcripts, and references can be found at ethical.fm

 

When Mark Zuckerberg announced Vibes in late September, the platform seemed designed to answer a question nobody had asked. Users would scroll through an endless feed of AI-generated videos, short-form content synthesized entirely by machines, created from text prompts and remixed without human hands ever touching a camera. 

The response was immediate and nearly universal. "Gang nobody wants this," read one of the top comments on Zuckerberg's announcement. TechCrunch's headline deployed the term that had been circulating in creative communities for months: "Meta launches Vibes, a short-form video feed of AI slop." On The Daily Show, comedian Michael Kosta put the situation more bluntly, describing Vibes as a feed for "fat little pigs" and suggesting that Meta wanted to "watch you eat yourself to death."

The visceral disgust is worth examining because the feeling that something was deeply wrong arrived before most people could articulate why they felt it. The term "AI slop" captures this: AI-generated content that feels less like art or entertainment than runoff, the waste product of systems optimized for volume rather than quality. The effect is akin to the uncanny valley, but applied across entire ecosystems of content.

Yet, five days after Vibes launched, OpenAI released Sora's video generation platform to the public. Within forty-eight hours, Sora hit number one on the App Store. The backlash was identical, but the adoption was immediate. Whatever people said they wanted, whatever revulsion they expressed, tech executives were betting that consumer behavior would diverge from stated normative preference. The economics were too compelling: AI content costs nearly nothing to produce, can be generated infinitely, and keeps users scrolling. Whether it's good, in any meaningful sense, has become beside the point.

But what if that initial revulsion, the response before the rationalization, represents genuine wisdom? Twenty-four centuries ago, Plato warned that consuming imitations of truth corrupts our capacity to recognize actual truth; repeated exposure to copies of copies trains us to prefer shadows over reality. His theory of mimesis rests on a hierarchy of distance from reality, with each remove representing not just aesthetic degradation but a kind of spiritual pollution, a corruption of what he called the soul's capacity for understanding.

The warning seems somewhat abstract. But recent research in computer science suggests that Plato may have been diagnosing something that is now measurable. AI models trained recursively on their own outputs undergo irreversible degradation, losing rare patterns while converging toward statistical averages. The mathematics confirms what the ancient hierarchy predicted: copies of copies collapse toward mediocrity; the collapse is built into the imitation process itself. An ancient philosophical truth is playing out through contemporary mathematics, and the platforms have already deployed this infrastructure to billions of users.

Imitations All the Way Down

In Book X of the Republic, written around 375 BCE, Plato makes a claim that sounds almost petty in its specificity: "All poetical imitations are ruinous to the understanding of the hearers, and that the knowledge of their true nature is the only antidote to them." Not that poetry is sometimes misleading, or that bad poetry corrupts, but that consuming imitations of something good inherently damages our understanding of what good means. The structure of Plato’s argument helps elucidate something peculiar about machine-generated content: why the medium itself, independent of quality, might matter.

Plato's theory rests on his hierarchy of reality. Whatever one makes of his metaphysics, and philosophers have spent millennia debating whether his Forms actually exist, the structure proves useful for understanding what happens when machines learn from machines. At the top are the Forms: perfect, eternal ideas accessible through philosophical inquiry. Below that are physical objects, imperfect copies of these Forms. A carpenter crafting a bed imitates the Form of bed-ness, working from an understanding of what makes a bed a bed. At the bottom are artistic representations: the painter's image of a bed, which captures only the appearance of one particular bed from one particular angle. This makes the painting "thrice removed from truth," or an imitation of an imitation of the ideal.

The distance from the original Form is essential. The carpenter needs to understand the function, structure, and purpose of a bed to transform a piece of wood into something sturdy, reliable, and comfortable to sleep on. The painter needs only to capture how light falls on wood, how fabric drapes, and what the eye sees from a single perspective. Art, in Plato's framework, imitates appearances rather than engaging with reality. Each imitation means less understanding and less connection to what makes something what it is.

AI-Generated Content: The Fourth Imitation

AI-generated content extends this descent in ways Plato couldn't have imagined but his hierarchy anticipates. Machine learning models don't train on literal physical objects or even on direct observations. Models learn from digital datasets, such as photographs, descriptions, and prior representations, that are themselves already copies. When an AI generates an image of a bed, it isn't imitating appearances the way a painter does but extracting statistical patterns from millions of previous copies: photographs taken by photographers who were already working at one remove from the physical object, processed through compression algorithms, tagged with descriptions written by people looking at the photographs rather than the beds. The AI imitates imitations of imitations.

And then these AI-generated outputs, what the field calls synthetic data, become training data for the next generation of models. AI training on AI: copies of copies of copies of copies. Each iteration moves further down Plato's hierarchy, further away from anything resembling reality; a mathematical severing from the real.

When AI Forgets the World

Last year, Nature published research that reads like experimental confirmation of Platonic metaphysics. Ilia Shumailov and colleagues at Cambridge and Oxford tested what happens under recursive training, AI learning from AI, and found a universal pattern they termed model collapse. The results were striking in their consistency. Quality degraded irreversibly. Rare patterns disappeared. Diversity collapsed. Models converged toward narrow averages.

A language model trained on Wikipedia text degraded after nine generations into mechanical nonsense: "architecture. In addition to being home to some of the world's largest populations of black-tailed jackrabbits, white-tailed jackrabbits, blue-tailed jackrabbits, red-tailed jackrabbits, yellow-." The sentence trails off into absurdity, the model having lost any capacity for coherent continuation. Image generation showed the same pattern: distinct handwritten digits blurred into indistinguishable forms as the model averaged everything toward prototypes. The digits didn't just become worse; they became the same.

The researchers proved mathematically that this isn't a problem of insufficient data or poor training techniques. Even under ideal conditions, recursive training causes the distance between the true distribution and the model's approximation to grow while variance collapses toward zero. It's built into the process of learning from copies instead of reality. What Plato argued, that each remove from truth carries us further from it, computer science now confirms mathematically.
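
To see the mechanism in miniature, here is a toy sketch (a deliberately simplified illustration, not the paper's experiment): a single Gaussian refit, generation after generation, to samples drawn from its own previous fit. Because each generation sees only a finite sample of purely synthetic data, the estimated spread drifts toward zero and the distribution's tails, its rare values, disappear first.

```python
# Toy sketch of model collapse (an illustration, not the Nature paper's setup):
# refit a Gaussian to samples drawn from the previous generation's fitted Gaussian.
# With finite data at each step, the estimated spread drifts toward zero.
import numpy as np

rng = np.random.default_rng(0)

mu, sigma = 0.0, 1.0   # generation 0: the "real" distribution
n_samples = 20          # finite data at every generation

for generation in range(1, 201):
    data = rng.normal(mu, sigma, n_samples)  # sample only from the current model
    mu, sigma = data.mean(), data.std()       # refit on purely synthetic data
    if generation % 50 == 0:
        print(f"generation {generation:3d}: mean={mu:+.3f}, std={sigma:.3f}")
```

Swap the toy Gaussian for a language or image model and the same drift shows up as the blurred digits and looping jackrabbit text described below.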

Other researchers have found variations on the theme. Rice University scientists called the phenomenon Model Autophagy Disorder, invoking mad cow disease as a metaphor. The comparison is apt: both involve recursive self-destruction through corrupted copying mechanisms, prions in one case and statistical patterns in the other. After five generations of synthetic training, their face generation models produced images that all looked like the same person, with bizarre gridlike artifacts spreading across the features like digital corruption. Researchers at Stanford and Berkeley found that GPT-4's code generation ability dropped eighty-one percent over three months, precisely the period when AI-generated content began proliferating online and presumably entering training datasets.

This addresses a common objection: that the medium doesn't matter, that art is art regardless of how it's produced. But with AI, the medium determines what can be created because the process is recursive imitation. Statistically, AI cannot produce genuine outliers. Rare patterns get averaged away by design. A photographer can seek unusual subjects and strange angles, and can deliberately work against convention. AI averages toward prototypes because that's what minimizes loss functions. The sheer volume of AI-generated content compounds the problem: AI produces a thousand outputs per hour at near-zero cost while a human produces one. The cheap doesn't just compete with the expensive; it floods quality out entirely.

As Plato's hierarchy explains, the painter engaging with a physical bed is at least working from something real, however imperfectly perceived. The AI training on images of beds never touches reality but only patterns extracted from previous representations. When AI trains on AI, the connection to the real world diminishes. 

Habituation as Education

Plato's deeper concern wasn’t about epistemology, but culture. Repeated exposure to bad imitation, Plato argues, corrupts the soul through habit. The claim appears in Book III of the Republic, where he's discussing education in his ideal city. "Did you never observe," he writes, "how imitations, beginning in early youth and continuing far into life, at length grow into habits and become a second nature, affecting body, voice, and mind?"

Culture, for Plato, is education. Music, poetry, visual art, and theatrical performance aren't neutral entertainment but formative experiences that train character. What we repeatedly encounter shapes who we become. Exposure to artistic forms, whether ordered or chaotic, simple or complex, truthful or imitative, trains the soul toward corresponding dispositions. Plato is making a claim about human formation: simplified, homogenized, and imitative forms train preferences for simplification, homogenization, and imitation. Complex, rare, truthful engagement trains capacity for complexity, appreciation of rarity, and orientation toward truth.

Culture isn't a mirror that reflects existing values but the medium through which values and preferences are initially formed. The ethical dimension emerges here. If culture educates, then what we consume matters not just for pleasure or aesthetic judgment but for who we become capable of being. The question stops being whether AI-generated content is "as good as" human-created content in some abstract aesthetic sense. The question becomes what consuming content that is mathematically constrained to exclude novel output does to our capacity to perceive, appreciate, and desire anything else.

In model collapse, the tails of distributions disappear first: low-probability events, rare patterns, edge cases, minority data, outliers. The Cambridge researchers explicitly note that "low-probability events are often relevant to marginalized groups" and are "also vital to understand complex systems." Rare medical conditions may be forgotten by diagnostic AI. Minority consumer preferences disappear in favor of bestsellers. Image generators asked for "dog" produce golden retrievers and labs instead of rare breeds, because golden retrievers and labs appear most frequently in training data. Long-tail scientific papers, despite their potential importance, may be excluded from model understanding because they are cited less frequently than mainstream work.

But the deeper problem is what consumption of this collapsed content does to us. If we habitually encounter mediocre representations, we learn to prefer average representations. Not through conscious choice or explicit persuasion but through the mechanism Plato identified as habituation: repeated exposure training the soul, or, in contemporary neuroscience terms, the neural architecture, toward corresponding dispositions. 

How the Brain Learns to Love the Mediocre

The mere exposure effect, documented across hundreds of studies, demonstrates that repeated presentations create preference without conscious cognition. Simply encountering something multiple times makes us like it more, reaching maximum strength within ten to twenty presentations. Processing fluency research proves that averaged, prototypical features feel immediately more pleasing than distinctive ones, with effects operating within seventeen to fifty milliseconds of viewing, faster than conscious awareness. The brain prefers what it can process easily, and prototypes are, by definition, what the brain has learned to process most easily. Perceptual narrowing shows that environmental exposure literally reshapes neural discrimination abilities through synaptic pruning. Populations lose the capacity to perceive distinctions they don't regularly encounter. It's not just that we prefer what we see; we become unable to fully perceive what we don't see.

Most concerning, research specifically examining human-AI feedback loops found that AI systems amplify biases through mechanisms operating below conscious awareness. In emotion recognition tasks, humans showed a fifty-three percent bias toward certain categories. AI trained on this data amplified the bias to sixty-five percent. Then, when humans interacted with the biased AI, their own bias increased to sixty-one percent over time. The conclusion: "AI systems amplify biases, which are further internalized by humans, triggering a snowball effect where small errors in judgement escalate into much larger ones." Crucially, participants underestimated the substantial impact even when explicitly warned about the effect.
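
As a back-of-the-envelope illustration of that snowball, the sketch below iterates an amplify-then-assimilate loop. The starting bias and the single reported jump (fifty-three to sixty-five percent for the AI, sixty-one percent for the humans who interacted with it) come from the study as quoted above; the per-round amplification and assimilation rates are simply backed out of those figures, and running the loop for further rounds is an illustrative extrapolation, not the study's procedure.

```python
# Toy extrapolation of the reported human-AI bias loop. Starting values come from
# the figures quoted above (53% human bias, 65% AI bias, 61% human bias after
# interaction); the per-round rates are derived from them, and iterating beyond
# round one is an illustrative assumption, not the study's design.
human_bias = 0.53                              # initial human bias toward one category
ai_gain = 0.65 / 0.53                          # AI amplification implied by 53% -> 65%
assimilation = (0.61 - 0.53) / (0.65 - 0.53)   # human drift implied by 53% -> 61%

for round_number in range(1, 5):
    ai_bias = min(1.0, human_bias * ai_gain)             # AI trained on human judgments amplifies them
    human_bias += assimilation * (ai_bias - human_bias)  # humans drift toward the AI they interact with
    print(f"round {round_number}: AI bias = {ai_bias:.2f}, human bias = {human_bias:.2f}")
```

Even with these crude assumptions, the arithmetic makes the qualitative point: a modest one-round amplification, fed back through people who absorb it, compounds quickly.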

What Plato called habituation, neuroscience measures as synaptic pruning and preference formation. The process isn't neutral. AI-generated content systematically purged of long-tail rarity through model collapse and optimized for processing fluency creates feedback loops: AI-averaged content leads to repeated exposure, which creates preference for convergent features, which generates demand for more AI content, which trains future models on even more homogenized data, which accelerates collapse. The cycle compounds.

Plato emphasizes "beginning in early youth" because formation during development has outsized effects. Children encountering predominantly AI-generated content from early ages are being trained in preferences for the averaged, the prototypical, and the easily processed. They're not learning to appreciate complexity, rarity, or difficulty. Children are not developing the capacity to discriminate subtle differences or to value what is unusual. The soul, in Plato's framework, becomes shaped to desire what it repeatedly encounters. Neural architecture, in the contemporary neuroscience framework, becomes pruned to discriminate what it regularly processes. Either way, the result is the same: populations trained to prefer shadows.

The Real Harm of AI Slop

So is AI slop bad for you? Yes, but the answer requires precision. Not all AI-generated content is equally harmful. Human-curated, AI-assisted work can maintain or even enhance quality through active collaboration, preserving cognitive engagement and creative agency.

 

When humans generate options with AI, select thoughtfully, and refine substantially, the results often surpass purely human or purely automated work. OpenAI’s research on InstructGPT showed that a 1.3 billion parameter model trained with reinforcement learning from human feedback outperformed the original 175 billion parameter GPT-3 model that lacked such fine-tuning. Users preferred the smaller model’s responses across a wide range of tasks, demonstrating that human guidance can outperform sheer scale.

 

Empirical studies of AI-assisted artists found similar effects. Examining over four million artworks from fifty thousand users, researchers discovered that artists who adopted AI tools produced pieces rated about fifty percent more favorably than their pre-AI work. The difference came from curation, artists generating with AI, then choosing and refining the best results, rather than publishing automated output directly.

 

Controlled writing experiments published in Science Advances confirmed the same pattern. Writers given curated AI suggestions produced stories rated 8 to 26 percent higher in quality and creativity than those using unfiltered generations or none at all. The findings were strongest for less experienced writers, suggesting that thoughtful human selection amplifies creative outcomes.

 

But you will not encounter primarily human-curated AI content. You will encounter infinite feeds of unfiltered, fully automated generation optimized for engagement rather than quality. The economic incentives are overwhelming: ninety-one percent cost reduction, near-zero marginal costs, orders of magnitude more volume.

 

Platforms choose automation not because they misunderstand the quality difference but because the costs of lower quality are externalized to users while the benefits of scale accrue to shareholders.

 

The result is simple: the cheap overwhelms the expensive, the automated drowns out the curated, the collapsed replaces the diverse.

 

The fully automated, high-volume AI feeds you actually encounter, not the carefully curated AI-assisted work that exists in niche or premium contexts, train your preferences toward homogeneity through mechanisms faster than conscious thought. Processing fluency makes average content feel pleasing within fifty milliseconds. Perceptual narrowing reshapes your neural discrimination abilities through synaptic pruning. The mere exposure effect peaks within ten to twenty presentations.

 

You will learn to prefer what you are given, and what you are given is recursive imitation, content systematically purged of rarity and optimized for immediate engagement.

 

So yes, AI slop is bad for you. Not because AI-generated content is immoral to consume or inherently inferior to human creation, but because the act of consuming AI slop reshapes your perception. It dulls discrimination, narrows taste, and habituates you to imitation. The harm lies less in the content itself than in the long-term training of attention and appetite.

Conclusion: Plato’s Warning and Looking Forward

Plato warned that imitations corrupt the soul unless we recognize them for what they are. That awareness, he believed, was the only antidote to deception. In our case, recognition may be all that remains.

 

You can curate carefully, seek out human-made or human-guided work, and limit exposure to automated feeds. These choices matter. They preserve awareness, the capacity to notice the difference between what is real and what is merely fluent. But such choices exist within systems built to maximize engagement, where each new imitation costs less to generate than to resist.

 

The window for resistance is this moment: the one before habituation completes, before the average becomes preferable to the original. You may understand precisely how and why AI slop degrades perception and still be unable to avoid it. That, perhaps, is the deeper cruelty of the present: our loss will not come through ignorance but through recognition that arrives too late to matter. The danger was never ignorance. It’s the quiet comfort of knowing something is synthetic and scrolling anyway.