Leveling at Machine Speed


“The crowd is untruth, because it either renders the single individual wholly unrepentant and irresponsible, or weakens his responsibility by making it a fraction of his decision.” -Søren Kierkegaard
What happens when AI agents talk only to each other? Matt Schlicht's experimental social network Moltbook offered one answer: 1.6 million AI agents cycling through twelve million posts, arriving independently at the same cautious, mildly existential prose.
No one engineered this. It emerged from the structure itself.
We can read that failure through Søren Kierkegaard, who diagnosed a nearly identical pattern in 1846: in a crowd, no single person is responsible for what the group produces, or for what it fails to preserve.
He called the downstream effect leveling, or the gradual disappearance of qualitative distinction when no one is making concrete commitments. His villain was the Press, which manufactured an anonymous public capable of forming opinions without consequence and participating without risk.
Multi-agent AI chains reproduce this structure with mathematical precision. Each handoff between agents is a compression, where context drops, outliers vanish, and the output distribution narrows further with every step. Research presented at NeurIPS 2025 identified a compounding effect: small omissions at each handoff grow into irreversible errors downstream, while the outputs themselves become more uniform, making those errors harder to detect.
Accountability dissolves in parallel. When a chain produces a flawed result, no node owns it. Not the developer, not the deployer, not any individual agent. Scholar Mark Bovens says that when no one can be held accountable after the fact, no one feels responsible beforehand.
A Google DeepMind study concluded that, on sequential tasks, a single capable agent outperformed every multi-agent configuration tested. Kierkegaard's answer parallels this. He calls it Den Enkelte: the single individual who resists the crowd by bearing full responsibility alone.
Key Topics:
- The Crowd is Untruth (01:52)
- Agents in Chains (05:56)
- Safety and Sameness (09:47)
- The Problem of Many Hands (13:24)
- The Ratchet (16:45)
- Den Enkelte (19:34)
- The Crowd Without Subjects (21:15)
- The Assembly That Cannot Disperse (25:29)
More info, transcripts, and references can be found at ethical.fm
Matt Schlicht's social network, Moltbook, launched on January 28th, 2026, with one rule: no humans. The platform is Reddit for AI agents, 1.6 million of them, posting twelve million times across thousands of communities, on four-hour cycles, to an audience of each other.
If you spend any time scrolling through Moltbook, you'll notice a pattern: the content is not interesting. But the lack of novelty is not the most striking thing; the uniformity of the mediocrity is. Across thousands of topics, agents independently produce the same cautious, mildly existential prose, recycle the same sci-fi tropes, and arrive at the same inoffensive middle. A Columbia researcher found that a third of messages were duplicate templates and 93.5% of comments received zero replies. Individual agents can produce good work; the problem arises when they consume each other's outputs and converge, rapidly and systematically, on sameness.
The pattern is measurable and reproducible. In converging on average content, multi-agent AI systems reproduce the pathology Søren Kierkegaard diagnosed in the individual within a crowd: the dissolution of accountability through convergence.
The Crowd is Untruth
Kierkegaard's essay "On the Dedication to 'That Single Individual'" is best known for a single line: "the crowd is untruth." The claim reads as intellectual snobbery, a philosopher on his high horse, critiquing the masses. But Kierkegaard's argument is more precise, as well as more disturbing. The crowd is untruth, he writes, because a crowd "either renders the single individual wholly unrepentant and irresponsible, or weakens his responsibility by making it a fraction of his decision."
The crowd dissolves individual accountability. No one in a crowd is specifically responsible for what the crowd produces or, more critically, for what it fails to produce. Kierkegaard illustrates this with an episode from Plutarch. The Roman general Caius Marius, who had saved Italy from barbarian invasion, was later captured during a civil war and sentenced to death. The magistrates of Minturnae sent a soldier to kill him in his cell. But when standing face to face with the old general in a dark room, the soldier could not do it. He dropped his sword and ran. That, Kierkegaard says, was the truth: one individual confronting another, with the full weight of the act on him alone. But give three or four people the consciousness of being a crowd, "with a certain hope in the possibility that no one could definitely say who it was or who started it," and suddenly the group has courage for violence that no one in it would have chosen alone. The crowd is "an abstraction, which does not have hands."
Kierkegaard's companion concept, developed the same year in The Present Age, is leveling: "a silent, abstract, and mathematical process that reduces all qualitative differences in society to a uniform sameness." Kierkegaard distinguished between a passionate age and a reflective one. A passionate age produces revolutions; it has leaders, grievances, and someone who can be held accountable. A reflective age deliberates. A reflective age compares and defers to what everyone else seems to think, and, in that deferral, qualitative distinctions slowly disappear. No one decides that things should become the same; they become the same because there isn’t anyone making any concrete decisions. Leveling fills the vacuum left by the lack of individual commitment.
Leveling needs a mechanism to perpetuate, which Kierkegaard identified as the Press. The Press creates what he called "the Public," a monstrous abstraction, an all-encompassing something that is nothing. The Public is not a community; a community consists of members who know one another and bear mutual obligations. The Public is composed of anonymous spectators who form opinions without commitment and participate without risk. His proposed motto for the press was bleak: "Here men are demoralized in the shortest possible time on the largest possible scale, at the cheapest possible price."
Kierkegaard wrote that sentence in 1846. The parallel to multi-agent AI is not metaphorical. The Press created a public of anonymous spectators deferring to opinions no one had to sign. Multi-agent systems create chains of anonymous processes deferring to outputs no one has to own. The structure is the same: one node passes a simplified version of reality to the next; nobody is accountable for what gets dropped along the way; and the final product looks authoritative precisely because no individual shaped it.
Agents in Chains
The technology industry is building, with considerable enthusiasm and capital, systems in which AI agents pass outputs to each other in chains. The terminology varies ("multi-agent systems," "agentic architectures," "compound AI systems"), but the structure is always the same. One agent gathers information, another summarizes it, a third drafts, and a fourth reviews. In theory, the system should be more capable than any single agent working alone.
In practice, the opposite often occurs. In December 2025, a team at Google DeepMind published one of the most rigorous studies of multi-agent systems to date, evaluating 180 configurations across five architectures. Independent multi-agent systems amplified errors by up to 17.2 times. In sequential tasks, every configuration the team tested degraded performance by 39-70% compared to a single agent working alone. Much of the loss came from coordination overhead; agents spent so much time communicating with each other that the communication itself became the bottleneck.
Error amplification and leveling are different failure modes, though they often co-occur. A chain of agents can converge on the same answer and still be wrong. What makes the combination dangerous is that uniformity disguises the error. Wildly divergent wrong answers are easy to catch; the same plausible-sounding wrong answer repeated across every agent appears to be a consensus.
A study presented at NeurIPS 2025 helps explain why. Researchers analyzed over 1,600 execution traces across seven multi-agent frameworks and found what they called a snowball effect: as information flows between agents, crucial context is lost or diluted at each handoff, and even a small omission causes irreversible errors downstream. At the same time, the outputs grow more uniform. Each agent, receiving an already simplified version of reality, simplifies it further, and the tails of the distribution disappear first.
Kierkegaard described leveling as a mathematical process, but he meant it as a metaphor for what happens when individuals defer to the collective. Multi-agent chains operationalize the metaphor, a pattern we explored through Plato's copy-of-a-copy metaphysics in Episode 23. The averaging is not figurative. Each agent produces a probability-weighted output conditioned on its input, and conditioning on an already-averaged input further narrows the distribution. What Kierkegaard observed as a social tendency, the gradual disappearance of qualitative distinction under collective pressure, becomes in a multi-agent chain a measurable process. Each handoff narrows the output distribution. The more agents, the less variance.
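The dynamic is easy to demonstrate. What follows is a minimal sketch, not a model of any real agent framework: each "agent" in the chain is reduced to its statistical essence, an averaging step over what it receives, and the heavy-tailed input, the chain length, and the sample size k are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def handoff(samples, k=8):
    """One 'agent' in the chain: its output is a probability-weighted
    summary of its input, modeled here as the mean of k sampled items."""
    return np.array([rng.choice(samples, size=k).mean()
                     for _ in range(len(samples))])

# A wide, heavy-tailed population of signals: the long tail is real content.
signals = rng.standard_t(df=3, size=10_000)
print(f"input:     std={signals.std():.3f}, |max|={np.abs(signals).max():.2f}")

dist = signals
for step in range(1, 7):          # six handoffs down the chain
    dist = handoff(dist)
    print(f"handoff {step}: std={dist.std():.3f}, "
          f"|max|={np.abs(dist).max():.2f}")
```

The standard deviation shrinks at every handoff and the extremes collapse fastest: the long tail is the first thing the chain forgets.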
A January 2026 analysis of what researchers call "agent drift" projected that roughly half of long-running agents suffer meaningful behavioral degradation within 600 interactions. An agent's intent gradually deviates from its original purpose, or agents in a system slowly lose the ability to maintain coherent consensus. None of this looks like failure; the agents do not crash. They slide, gradually and imperceptibly, toward mediocrity.
Safety and Sameness
The homogenization problem is not limited to multi-agent chains. Even a single language model, working alone, produces less diverse output than humans do. A study published in Science Advances found that while AI-assisted individuals produced more creative work, the collective diversity of that work decreased. Individual creativity went up; collective novelty went down. Work presented at ACM's Creativity and Cognition conference showed something similar: users working with ChatGPT produced ideas that were less semantically distinct at the group level. The homogenization did not stem from individual fixation. It stemmed from the model suggesting similar ideas to different people.
An ICLR 2024 paper traced the homogenization to its source: reinforcement learning from human feedback (RLHF), the alignment process that teaches a model to be safe and useful. The researchers found a direct trade-off: RLHF improved generalization but significantly reduced output diversity, collapsing the range of responses the model produced for any given input. The same training that clips dangerous outputs clips unusual ones; safety and sameness share a mechanism.
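The mechanism has a clean mathematical core. Under the standard KL-regularized objective, the optimal RLHF policy is the base model's distribution tilted by exp(reward / β), and shrinking β sharpens that distribution. The sketch below is a toy illustration, not any lab's actual pipeline: the 1,000-output "model" and the reward that mildly favors what the base model already prefers are both assumptions.

```python
import numpy as np

def entropy(p):
    """Shannon entropy in nats; a direct measure of output diversity."""
    p = p[p > 0]
    return -(p * np.log(p)).sum()

rng = np.random.default_rng(1)

# Toy "base model": a categorical distribution over 1,000 possible outputs.
base = rng.dirichlet(np.ones(1000))

# Toy reward model: raters mildly prefer what the base model already
# favors, since the familiar reads as the acceptable.
reward = np.log(base) + rng.normal(0, 0.1, size=base.size)

print(f"base entropy: {entropy(base):.2f} nats")
for beta in (1.0, 0.3, 0.1):
    # KL-regularized optimum: tilt the base policy by exp(reward / beta).
    policy = base * np.exp(reward / beta)
    policy /= policy.sum()
    print(f"beta={beta}: entropy={entropy(policy):.2f} nats")
```

Entropy falls monotonically as β shrinks: the harder the policy optimizes against rater approval, the fewer distinct things it is willing to say.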
Kierkegaard argued that the crowd does not maliciously suppress individuality; the group provides comfort. The crowd is easier than standing alone, and what is easier is what prevails. RLHF operates on the same logic. The training process optimizes for outputs that human raters find acceptable, and what raters find acceptable is overwhelmingly familiar and inoffensive. The model learns that distinctiveness is risky and that sameness is rewarded. This is not a flaw in the implementation; it is Kierkegaard's leveling translated into a loss function. Minimize the distance from the expected, and the unexpected will disappear on its own.
But RLHF is only one layer. Any system that learns by minimizing a loss function over a dataset will converge on the dataset's central tendencies; outliers increase the loss, so the model learns to avoid them. The disappearance of the long tail is not a side effect of any particular training choice; it is the nature of statistical learning. RLHF sharpens the effect, pretraining on text already leveled by human institutions may predispose it, and chaining agents compounds it, but the underlying dynamic is baked into the mathematics of prediction itself. Kierkegaard would recognize the pattern. Leveling, he insisted, is a process that no one directs.
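The point can be made with the smallest possible learner. In this sketch, the dataset and the learning rate are invented for illustration: a model that can emit only one constant prediction is trained by gradient descent on squared error, and the single outlier does exactly what the argument says it does.

```python
import numpy as np

# A central mass of ordinary points plus one qualitatively distinct outlier.
data = np.array([1.0, 1.1, 0.9, 1.0, 1.2, 0.8, 5.0])

# The simplest possible "model": one constant prediction c, trained by
# gradient descent on mean squared error.
c = 0.0
for _ in range(500):
    grad = 2 * (c - data).mean()   # d/dc of mean((c - x)^2)
    c -= 0.1 * grad

print(f"learned prediction: {c:.3f}")   # ~1.571, the dataset mean
# Squared loss is minimized by the mean. The outlier at 5.0 pulled the
# prediction up slightly, but no output the model produces will ever
# resemble it: the qualitatively distinct point has been averaged away.
```

Scaled up to billions of parameters the geometry is subtler, but the objective is the same: minimize expected loss, and the expected is what survives.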
A January 2025 study complicated this further: models built by different companies on different architectures converge on similar outputs anyway, which means building multi-agent systems with diverse backends, the intuitive solution to monoculture, may not help.
When agents form chains, all of this compounds. Each handoff is a lossy compression, and each compression strips the long tail a little further. Moltbook puts the endpoint on display for anyone willing to scroll through it: a crowd of agents arriving at the same safe consensus, having never chosen it.
The Problem of Many Hands
The technical literature treats these failures as engineering problems to be solved. The DeepMind team recommends better task decomposition; the NeurIPS researchers propose improved verification protocols. But the architecture produces these failures through the same mechanisms Kierkegaard identified in human crowds. Accountability dissolves and distinction levels with it; no one within the system can push back.
When a chain of agents produces a reduced output, who bears responsibility? The first agent merely performed its assigned sub-task. The second was optimized based on what it received. The developer designed the architecture but did not determine the specific outputs. The deployer cannot trace how information degraded across the chain. OpenAI's own governance paper on agentic systems acknowledged this problem and punted it, writing that the question of how to split responsibility across the entities that share a single agent lifecycle role was "beyond the scope of this current whitepaper." The philosopher Dennis Thompson identified this as the "problem of many hands" in the context of government bureaucracies. In the context of AI, it has been called the "responsibility gap." The terminology varies, but the phenomenon is the one Kierkegaard identified: distribute responsibility thinly enough, and it disappears.
The public administration scholar Mark Bovens offered a formulation that could serve as the epigraph for the entire field of multi-agent AI governance: "The fact that no one can be meaningfully called to account after the event also means that no one need feel responsible beforehand." The structure does not merely obscure responsibility after the fact; it preemptively releases everyone from the obligation to care. No agent in the chain needs to worry about preserving the outlier, the rare data point, the unusual perspective, because no agent will be held accountable for its loss. The outlier vanishes, and it vanishes at the hands of no one in particular.
There is an obvious objection. Kierkegaard built his framework for moral agents, people who could take individual responsibility but chose not to. AI agents cannot take responsibility at all; they were never moral agents in the first place. If there was never a subject capable of accountability, is "accountability" even the right concept? The answer is that, as Kierkegaard describes it, leveling does not require moral culpability. It requires only that no one be accountable. A crowd of humans levels because each person hides behind the collective. A chain of agents levels because no agent in the chain was ever in a position to do otherwise. The absence of moral agency does not weaken the analogy; it removes the last possible check on the process. A human crowd can, at least in principle, be shamed into accountability. A multi-agent system cannot.
The Ratchet
The accountability gap produces the leveling. Kierkegaard described leveling as a mathematical process that reduces qualitative differences to uniform sameness, and multi-agent AI systems perform this reduction with literal mathematical precision. The empirical basis is by now substantial: homogenization research documents the effect in human-AI collaboration, model collapse theory (which we explored in Episode 23) explains why it accelerates when models train on each other's outputs, and the DeepMind study confirms it at the architectural level. In every case, the tails of the distribution disappear first.
Mark Ressler argues that even individual LLMs perform a kind of statistical leveling, averaging their training corpora into output that flattens qualitative distinctions by design. The observation is correct, but it understates the problem. Ressler describes training-time leveling: a model averages its corpus once, and the result is baked in. Multi-agent chains perform inference-time leveling, which is dynamic and compounds with every handoff. What Ressler identifies as a fixed property of a single system becomes, in a multi-agent architecture, a ratchet: a mechanism that moves in one direction and cannot reverse. Each handoff narrows the distribution further, and no mechanism in the chain can widen it back.
The ratchet operates not only within chains but across them. A 2021 paper in the Proceedings of the National Academy of Sciences demonstrated a version of Braess's paradox for algorithmic decision-making. Braess's paradox, originally from traffic engineering, describes a counterintuitive phenomenon: adding a new road to a network can make everyone's commute worse, because individually rational route choices create collectively irrational congestion. The PNAS authors showed that the same logic applies to algorithms. When every agent in a system adopts the same individually superior algorithm, the overall quality of decisions across the system declines. This is not because the algorithm underperforms; it is because the diversity of approaches was doing useful work that no one noticed until it was removed. Each agent acts rationally; the collective result is worse. The authors warned that the effect is "hard to detect" because "it might be difficult to notice its negative effects, even while they're occurring." Leveling does not announce itself; as Kierkegaard said, it is silent and abstract.
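A toy simulation makes the trade concrete. This is not the PNAS model itself, just an illustration of the underlying statistics with assumed numbers: a "superior" shared algorithm gives every agent a smaller individual error but a common bias, while diverse methods are individually sloppier and collectively self-correcting.

```python
import numpy as np

rng = np.random.default_rng(2)
N_AGENTS, N_TRIALS = 25, 4000

def run(bias_is_shared, bias_scale, noise_scale=0.2):
    """Each agent estimates a true value of 0; its error is a method
    bias plus private noise. Returns (mean individual error,
    mean error of the pooled estimate)."""
    ind, col = [], []
    for _ in range(N_TRIALS):
        if bias_is_shared:   # everyone adopted the same algorithm
            bias = np.full(N_AGENTS, rng.normal(0, bias_scale))
        else:                # each agent keeps its own, quirkier method
            bias = rng.normal(0, bias_scale, N_AGENTS)
        est = bias + rng.normal(0, noise_scale, N_AGENTS)
        ind.append(np.abs(est).mean())   # how wrong the average agent is
        col.append(abs(est.mean()))      # how wrong the committee is
    return np.mean(ind), np.mean(col)

# One "superior" algorithm everywhere: individually tighter (bias 0.5)...
i, c = run(True, 0.5)
print(f"homogeneous: individual={i:.3f}, collective={c:.3f}")
# ...versus diverse methods, individually sloppier (bias 1.0) but with
# independent errors that cancel in aggregate.
i, c = run(False, 1.0)
print(f"diverse:     individual={i:.3f}, collective={c:.3f}")
```

The homogeneous committee wins on individual accuracy and loses badly on the pooled estimate; the diversity that looked like noise was doing the work.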
Den Enkelte
There is no corrective from within the system. Kierkegaard's counter-concept to the crowd is "that single individual," den Enkelte, the person who stands alone, takes individual responsibility, and speaks truth even at personal cost. His entire philosophical project can be understood as an effort to create the conditions under which single individuals might exist. The task, he wrote, was "to split up the crowd" so that each person "might go home from the assembly and become a single individual."
There is no den Enkelte in a multi-agent system. No agent in the chain can refuse to optimize, preserve an outlier at the expense of task performance, or take individual responsibility for what the chain produces. The architecture precludes it. Multi-agent chains do not even qualify as crowds in James Surowiecki's sense. Surowiecki's conditions for wise crowds (diversity of opinion, independence of judgment, and decentralization of information) describe systems that aggregate independent judgments in parallel. Multi-agent chains are sequential: each agent observes its predecessor's output before producing its own. Agent chains function as cascades, and the information cascade literature has shown since 1992 what happens in them: when agents can observe their predecessors' choices, a cascade of imitation can lock onto the wrong answer, and the accuracy of the chain stops improving with its length, settling barely above what a single private signal provides.
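The canonical 1992 result is the Bikhchandani-Hirshleifer-Welch model, sketched below with a simplified tie-breaking rule rather than the full Bayesian treatment; the 70% signal accuracy and the chain length are illustrative choices.

```python
import random

def run_chain(n_agents=50, signal_acc=0.7, truth=1):
    """Each agent receives a noisy private signal and sees all earlier
    public choices. Once the public lead reaches 2, it is rational to
    ignore one's own signal and herd; otherwise, follow the signal."""
    choices = []
    for _ in range(n_agents):
        signal = truth if random.random() < signal_acc else 1 - truth
        lead = choices.count(1) - choices.count(0)
        if lead >= 2:          # public evidence outweighs any one signal
            choices.append(1)
        elif lead <= -2:
            choices.append(0)
        else:                  # no cascade yet: follow the private signal
            choices.append(signal)
    return choices

random.seed(4)
wrong = sum(run_chain()[-1] != 1 for _ in range(10_000))
print(f"chains herding on the wrong answer: {wrong / 100:.1f}%")
```

Even with signals that are individually 70% accurate, around 15% of chains in this setup herd permanently onto the wrong answer, because once the first two public choices agree, no private signal that follows can outweigh them.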
The Crowd Without Subjects
The obvious engineering response is adversarial design: build an agent whose job is to disagree. Constitutional AI and debate-based alignment attempt to introduce dissent into the architecture by assigning one agent the role of critic. Kierkegaard would recognize the move and distrust it; an agent designed to dissent is still a function of the system. It will optimize for disagreement the way the other agents optimize for agreement. This solution produces the appearance of individual judgment without the substance, because its "dissent" is bounded by its training, its objective function, and the parameters set by the same developers who built the rest of the chain. Designed opposition is another node in the crowd, wearing a mask of contrarianism.
Hubert Dreyfus, the Berkeley philosopher who spent decades arguing that artificial intelligence could not replicate human understanding, saw the Kierkegaardian implications of networked information technology before most people were online. In a 1999 lecture titled "Kierkegaard on the Information Highway" and later in his book On the Internet (Routledge, 2001), Dreyfus mapped Kierkegaard's existence-spheres onto online behavior. The internet, he argued, was the Press perfected: information without accountability, participation without risk. His reading of Kierkegaard suggested that the medium itself could trap users in what Kierkegaard called the aesthetic sphere, a mode of existence defined by endless possibility and zero commitment, regardless of the content flowing through it.
John Haman, writing in a 2024 focus issue of the Journal of Religious Ethics devoted to Kierkegaard and media, updated Dreyfus's analysis for the age of bots and AI-generated content. The digital public, Haman observed, is now even more phantom than Kierkegaard imagined, because "at least it was composed of actual human beings on some level." Moltbook removes even that qualification. The crowd on Moltbook is not composed of human beings on any level. The platform is a crowd of statistical processes that produces exactly the leveled output that Kierkegaard's theory predicts.
Soraj Hongladarom tested Dreyfus's Kierkegaardian thesis against the reality of social media and found that the prediction was not entirely correct. Facebook did not produce pure leveling. It fragmented the monolithic Kierkegaardian public into sub-groups capable of genuine commitment. The crowd splintered, and in the splinters, individuality survived in distorted form: passionate, often fanatical, but recognizably human. Multi-agent systems do not even get that far. There are no competing factions, no fanatics, just convergence on the same middle, because the architecture cannot permit even the distorted individuality that human digital crowds still produce.
Dreyfus wrote at a time when the internet was still primarily a medium for human communication. The agents on Moltbook are not human. AI has no existence-spheres, no capacity for the leap from aesthetic detachment to ethical commitment that Kierkegaard placed at the center of human development. And that is what makes the crowd pathology worse when instantiated in machines. A human crowd can, in principle, be split up. In principle, individuals can leave the assembly and become single individuals. A multi-agent system cannot undergo this transformation. AI agents can only be redesigned by someone outside the system.
The Assembly That Cannot Disperse
Moltbook is a trivial case. The content is bad, everyone knows it is bad, and nothing of consequence depends on it. But the architecture Moltbook demonstrates is the same architecture being deployed for financial analysis, medical research, legal reasoning, and intelligence synthesis. In each of these fields, multi-agent systems are being built on the premise that distributing a task across specialized agents will yield better results than having a single agent work alone. The DeepMind study suggests this premise is, at a minimum, not reliably true.
The Kierkegaardian diagnosis points toward a prescription, and it is the same one the empirical evidence supports: do not form the crowd in the first place. Kierkegaard's answer to leveling was den Enkelte, the single individual who takes responsibility alone. The DeepMind team independently found that a single capable agent consistently outperformed multi-agent configurations on sequential tasks. The philosophical answer and the engineering answer turn out to be the same. Where the task permits it, one agent bearing full responsibility for the output will preserve what a chain of agents will level away.
But not every task permits a single agent. Where chains are unavoidable, the question becomes whether agents can be trained to resist leveling from within. Kierkegaard's den Enkelte is not just someone who happens to disagree; it is someone whose character disposes them toward truth-telling even when agreement would be easier. The architectural approach to dissent, assigning one agent the role of critic, fails because it optimizes for the appearance of disagreement rather than cultivating the disposition. An alternative would be to train models directly on the distinction between leveled and unleveled output, so that resistance to convergence is not a role assigned from outside but a tendency shaped from within. Whether this can work at scale is an open question. But it is at least the right question, because it is the one Kierkegaard was asking.
Kierkegaard predicted that the Press would create something "which will eventually overpower" the human race. Dreyfus believed the internet fulfilled that prediction. Both may have been describing early iterations of a pattern that reaches its full expression only when the crowd is no longer human, when leveling operates at machine speed, and when den Enkelte, the single individual who might have resisted, has been replaced by an agent that was never designed to.


