Aug. 20, 2025

Difficult Choices Make Us Human

It’s become a crisis in the modern classroom and workplace: Students now submit AI-generated papers they can't defend in class. Professionals outsource analysis they don't understand.

We're creating a generation that appears competent on paper but crumbles under real scrutiny. The machines think, we copy-paste, and gradually we forget how reasoning actually works.

Our host, Carter Considine, breaks it down in this edition of Ethical Bytes.

This is the new intellectual dependency.

It reveals technology's broken promise: liberation became a gilded cage. In the 1830s, French philosopher Alexis de Tocqueville witnessed democracy's birth and spotted a disturbing pattern. Future citizens wouldn't face obvious oppression, but something subtler: governments that turn their citizens into perpetual children through comfort.

Modern AI perfects this gentle tyranny.

Algorithms decide what we watch, whom we date, which routes we drive, and so much more. Each surrendered skill feels trivial, yet collectively, we're becoming cognitively helpless. We can’t seem to function without our digital shepherds.

Philosophers have long understood that struggle builds character. Aristotle argued that wisdom emerges through wrestling with dilemmas, not downloading solutions. You can't become virtuous by blindly following instructions. Rather, you must face temptation and choose correctly. John Stuart Mill believed that accepting pre-packaged life plans reduces humans to sophisticated parrots.

But resistance is emerging.

Georgia Tech built systems that interrogate student reasoning like ancient Greek philosophers, refusing easy answers and demanding justification. Princeton's experimental AI plays devil's advocate, forcing users to defend positions and spot logical flaws.

Market forces might save us where regulation can't. Dependency-creating products generate diminishing returns. After all, helpless users become poor customers. Meanwhile, capability-enhancing tools command premium prices because they create compounding value. Each interaction makes users sharper and more valuable. Microsoft's "Copilot" branding signals this shift, positioning AI as an enhancer, not a replacement.

We stand at a crossroads. Down one path lie atrophied minds, with machines handling everything complex. Down another lies a partnership in which AI challenges our assumptions and amplifies uniquely human strengths.

Neither destination is preordained. We're writing the script now through millions of small choices about which tools we embrace and which capabilities we preserve.


Key Topics:

  • Difficult Choices Make Us Human (00:25)
  • Tocqueville's Warning About Comfortable Tyranny (01:40)
  • Philosophical Foundations of Autonomy as Character Development (04:17)
  • The Contemporary AI Autonomy Crisis (09:02)
  • AI as Socratic Reasoning Partners (10:46)
  • A Theory of Change: How Markets Can Drive Autonomy (12:48)
  • Conscious Choice over Regulation (14:30)
  • Conclusion: Will AI Lead to Human Flourishing or Soft Despotism? (16:13)


More info, transcripts, and references can be found at ethical.fm


Consider Sarah, a college senior who feeds ChatGPT her assignment prompts and receives polished papers that earn solid grades. When asked to defend an argument in class discussion, she struggles to articulate her ideas. She feels the outline of her phone in her pocket, knowing that, with the right prompt, ChatGPT could easily produce the answer. Sarah has traded her intellectual development for convenience, and she's not alone.


This scenario illustrates the choice between using AI systems to enhance human capabilities and allowing them to gradually replace those capabilities. Outsourcing decision-making to AI also surrenders our autonomy: our freedom to choose our own path.


True autonomy represents more than freedom from external control; the autonomy to decide for ourselves gives us the capacity to shape our identity through deliberate judgment and reasoned action. Technology can strengthen human agency, but it can also create what Alexis de Tocqueville prophetically called "soft despotism," a comfortable form of control that erodes our capacity to think and to decide who we become.

Tocqueville's prescient warning

Alexis de Tocqueville warned of a new kind of tyranny that wouldn't arrive with helicopters and midnight raids, but with comfort and convenience. Writing in the 1830s after traveling through America to witness the birth of democracy firsthand, the young French aristocrat saw troubling patterns emerging and warned that France and other fledgling democracies could face the same fate. In his vision of "soft despotism," citizens would willingly surrender their independence to an "immense tutelary power" that promised to handle life's complexities for them. The state wouldn't tyrannize in the classical sense but would "hinder, compromise, enervate, extinguish, and stupefy" by keeping people "in perpetual childhood."

The genius of soft despotism lies in giving people "the illusion that they are in control" while systematically undermining the individual's capacity for independent action. Citizens become "timid and industrious animals of which the government is the shepherd," trading autonomy for security and comfort. Since individuals willingly participate in this psychological dependency, the result is far more insidious than physical coercion.

Just as soft despotism creates citizens who expect government to "work willingly for their happiness, provide for their security, foresee and supply their needs, guide them in their principal affairs, direct their industry, regulate their testaments, divide their inheritances," AI systems now promise the same comprehensive guidance across all aspects of human decision-making. Consider how recommendation algorithms now choose our entertainment, GPS systems eliminate our need to navigate, and algorithmic feeds curate our information diet. Each convenience seems harmless on its own, but collectively, this technology represents exactly what Tocqueville predicted: the gradual "outsourcing of our own judgment and decision-making authority" to an external system that promises to handle complexity for us.

This transition from Tocqueville's theoretical warning to contemporary reality reveals why AI poses a uniquely sophisticated threat to human autonomy. Unlike previous technologies that replaced physical labor, AI systems target the cognitive and moral faculties that Aristotle and Mill identified as essential to human character development.

Philosophical foundations of autonomy as character development

Aristotle's conception of practical wisdom (phronesis) provides the philosophical foundation for understanding autonomy as active character development rather than passive freedom from obligations. For Aristotle, human flourishing (eudaimonia) emerges through exercising our rational capacities: "human good turns out to be activity of soul in accordance with virtue" (Nicomachean Ethics I.7). Eudaimonia requires what Aristotle calls practical wisdom, "a true characteristic that is bound up with action, accompanied by reason, and concerned with things good and bad for a human being" (Nicomachean Ethics VI.5).


Crucially, Aristotle argued that an individual cannot learn practical wisdom simply by being instructed on how to behave; it must be developed through lived experience and one's own decisions. Aristotle observed that practical wisdom "is not concerned with the universals alone, but must also be acquainted with the particulars: it is bound up with action, and action concerns the particulars." Humans must actively engage with specific situations, deliberate about courses of action, and take responsibility for their choices to develop genuine practical wisdom.


While Aristotle did not explicitly use the term "autonomy," his concept of rational choice points toward what we now understand as self-direction. As Brendan McCord, director of the Cosmos Institute, argues, "the capacity for rational choice allows us to steer this hierarchy ourselves" in organizing our ends toward human flourishing. But when "appetite, fashion, or today an algorithm dictates your ends, the hierarchy collapses into heteronomy (rule by others rather than self-rule)." For Aristotle, genuine human excellence emerges through the conscious organization of one's actions toward proper ends, which represents "autonomy in its classical form: not self-invention, but self-direction toward a universal human good."


John Stuart Mill extended this understanding by emphasizing that true autonomy requires what he called "self-critical and imaginative choice-making." For Mill, the crucial distinction was between passive imitation and active self-direction. He observed that the person who "lets the world choose his plan of life for him" develops only "the ape-like faculty of imitation," while the person who chooses for himself must "employ all his faculties." This employment of faculties (observation, reasoning, judgment, discrimination, and self-control) is precisely what builds human character. When we can direct our responses and cultivate proper desires, we are in control of our own destiny: "A person whose desires and impulses are his own (are the expression of his own nature, as it has been developed and modified by his own culture) is said to have a character." Without the active engagement of our faculties in self-directed choice, we become mere "steam-engines" lacking distinctive individual essence.


Immanuel Kant's famous motto "Sapere aude" (dare to use your own understanding) captures the moral imperative behind autonomous reasoning. In "What is Enlightenment?" Kant defined enlightenment as "the human being's emergence from his self-incurred minority," where minority means "inability to make use of one's own understanding without direction from another." Kant emphasized the lure of remaining in this self-incurred minority:

It is so comfortable to be a minor! If I have a book that understands for me, a spiritual advisor who has a conscience for me, a doctor who decides upon a regimen for me, and so forth, I need not trouble myself at all. I need not think, if only I can pay; others will readily undertake the irksome business for me.


Both Mill and Kant recognized that autonomous reasoning requires courage to think independently despite authority, social pressure, or technological convenience. Systems that make thinking unnecessary gradually atrophy our capacity for independent judgment, fostering the kind of dependency Tocqueville described.

The contemporary AI autonomy crisis

The philosophical warnings of centuries past are materializing in measurable ways. As we explored in our analysis "After Cheating: Teaching Critical Thought in the Age of AI," students increasingly rely on AI tools without developing the analytical capacities that these systems cannot replace. Students report feeling unable to complete assignments without AI assistance, precisely the kind of learned helplessness that Kant warned would keep humans in "self-incurred minority."


AI mirrors soft despotism with disturbing precision by promising efficiency while gradually reducing human cognitive capacity. The "AI overreliance problem" extends to high-stakes domains like medical diagnosis and legal decisions, where humans accept AI recommendations even when demonstrably wrong.


We have moved from tools that augment human capabilities (calculators) to systems that replace human judgment entirely (algorithmic content feeds). Technology is no longer a mere extension of human capacity but a complete substitute for human reasoning. Yet the future is not predetermined. Tocqueville observed that Americans of his era avoided soft despotism through the "habits of the heart" of their civic culture: a tradition of self-reliance, local association, and independent judgment. The pressing question is whether we can cultivate similar practices to resist algorithmic dependency.


AI as Socratic reasoning partners

Emerging AI applications have demonstrated that technology, when designed intentionally, can strengthen rather than replace human autonomy. "Socratic AI" systems use questioning rather than direct answers to enhance human reasoning. Georgia Tech's "Socratic Mind" platform uses an AI-powered oral assessment that challenges students to explain, justify, and defend their answers through probing questions. Rather than providing solutions, the system remains "resistant to human persuasion" and follows up with questions that foster critical thinking.


Princeton's SocraticAI framework employs multiple AI agents (Socrates, Theaetetus, Plato) that engage in collaborative questioning to solve complex problems. Based on Plato's theory of anamnesis, or learning through guided self-discovery, the project demonstrates that AI can directly facilitate human reasoning, improving logical discernment more effectively than AI-generated explanations do.
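
The pattern common to both projects, questioning instead of answering, is simple enough to sketch. What follows is a minimal, hypothetical illustration in Python; it is not code from Socratic Mind or SocraticAI, and the query_model helper is a stand-in for whatever chat-completion API one might use.

    # A minimal Socratic questioning loop: the model is instructed never to
    # answer, only to probe the student's reasoning, one question at a time.
    SOCRATIC_SYSTEM_PROMPT = (
        "You are a Socratic tutor. Never state the answer. Ask one probing "
        "question at a time that makes the student explain, justify, or find "
        "the flaw in their own reasoning."
    )

    def query_model(messages: list[dict]) -> str:
        """Stand-in for a chat-completion API call (an assumption, not a
        real client). Returns a canned probe so the sketch runs offline."""
        return "What evidence would change your mind about that?"

    def socratic_session(claim: str, rounds: int = 3) -> None:
        """Run a short dialogue in which the human does the reasoning."""
        messages = [
            {"role": "system", "content": SOCRATIC_SYSTEM_PROMPT},
            {"role": "user", "content": f"My answer: {claim}"},
        ]
        for _ in range(rounds):
            question = query_model(messages)  # a question comes back, not a solution
            print(f"Tutor: {question}")
            reply = input("Student: ")        # the user must articulate a defense
            messages.append({"role": "assistant", "content": question})
            messages.append({"role": "user", "content": reply})

The design choice that matters sits in the system prompt: the assistant is constrained to return questions, so each turn demands mental effort from the user rather than delivering a finished product.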


These approaches embody what philosophers have long understood: genuine learning and character development require active engagement rather than passive reception. Like Socrates with his interlocutors, these AI systems ask questions that provoke human reasoning, challenge assumptions, and guide users toward deeper understanding through their own mental effort. But these solutions face another important question: how can we incentivize users to choose autonomy-supporting AI? If it's easier not to think, won't people simply pay to give their autonomy away? If dependency-creating AI offers such immediate convenience, how can autonomy-supporting systems compete?

A theory of change: how markets can drive autonomy

The answer lies in understanding that genuine autonomy creates compounding value that dependency cannot match.

The philosophical insight here is crucial. Authentic human development creates what economists call "increasing returns": the more you use something, the more valuable it becomes. When Mill described how choosing one's own life plan "employs all his faculties," he unknowingly identified this economic principle: exercising judgment strengthens judgment itself. Unlike dependency-creating tools that make users less capable over time, autonomy-supporting AI creates a virtuous cycle in which each interaction develops the user's reasoning abilities. Users become more capable decision-makers with every use, giving tools that genuinely enhance rather than replace human judgment a sustainable competitive advantage.


Consider Microsoft's strategic choice of "Copilot" branding, which explicitly positions AI as a collaborative partner rather than an autonomous replacement. It reflects a growing recognition in enterprise software that systems which enhance rather than eliminate human agency create more sustainable value. Users willingly pay premium prices for tools that make them more capable rather than more dependent, generating a kind of "cognitive compound interest": each interaction strengthens the user's judgment and decision-making capacity rather than letting it atrophy.

Conscious choice over regulation

The solution to AI's autonomy challenges cannot rely primarily on regulation, which would constitute another form of external authority replacing human judgment. As we explored in our earlier episode, "Good by Design, Not Force," market-based solutions prove more effective than regulatory mandates because they reward genuine value creation rather than mere compliance. True ethical AI emerges through thoughtful design that users willingly choose, not through fear-based adherence to government requirements.


The philosophical foundations we've examined translate into specific design criteria for autonomy-supporting AI systems. Transparency serves as empowerment, not mere disclosure. Rather than offering black-box recommendations, these systems reveal their reasoning processes in ways that build user understanding and analytical capabilities. Like a skilled mentor explaining their thought process, the AI helps users recognize patterns, understand trade-offs, and develop frameworks for future decisions.


Agency preservation occurs through informed deliberation, where these systems position humans as ultimate decision-makers while providing rich context, alternative perspectives, and potential consequences. Following Aristotle's emphasis on practical wisdom, AI should excel at gathering and analyzing information while explicitly reserving moral and strategic judgments for human deliberation.
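
As a rough sketch of what these criteria could look like in practice, consider the Python fragment below. All names here (Option, DeliberationBrief, build_brief) are hypothetical, invented for illustration: the system's output carries visible reasoning and explicit trade-offs, while the decision field is deliberately left empty for the human.

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class Option:
        description: str
        evidence_for: list[str] = field(default_factory=list)
        evidence_against: list[str] = field(default_factory=list)

    @dataclass
    class DeliberationBrief:
        question: str
        options: list[Option]            # alternative perspectives with trade-offs
        reasoning_notes: str             # the system's visible reasoning (transparency)
        decision: Optional[str] = None   # left empty by design: the human decides

    def build_brief(question: str, options: list[Option], notes: str) -> DeliberationBrief:
        """Gather and present information for deliberation; never pick an option."""
        return DeliberationBrief(question=question, options=options, reasoning_notes=notes)

The shape of the output is the point: context, alternatives, and reasoning are first-class fields, while the judgment itself is reserved for the person.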

Conclusion

The design choices we make about AI today will determine whether technology becomes a tool for human flourishing or an instrument of soft despotism. The philosophical tradition from Aristotle through Tocqueville provides clear guidance: human autonomy requires the active exercise of our rational and moral capacities. Systems that replace this exercise with algorithmic convenience risk creating exactly the kind of dependency that undermines genuine human agency.

Yet autonomy-supporting AI can succeed commercially while preserving human dignity. The choice before us is not between efficiency and autonomy, but between authentic human development and comfortable subjugation. Imagine AI systems that function like the greatest teachers in history, not providing answers, but asking better questions that challenge us to think more deeply and leave us more capable of independent judgment.

The path forward requires conscious commitment to choose tools that enhance rather than replace human reasoning. If we design AI systems that strengthen rather than substitute for human judgment, we can create technology that truly serves autonomy rather than undermining our capacity to decide who we become. The alternative, humans as Tocqueville's sheep, perpetually dependent on external authorities, is not inevitable. It is a choice, perhaps the most important our generation will make about the kind of beings we wish to remain.