After Cheating: Teaching Critical Thought in the Age of AI
Nearly 90% of college students now use AI for coursework, and while AI is widely embraced in professional fields, schools treat it as cheating by default. This disconnect became clear when Columbia student Roy Lee was suspended for using ChatGPT, then raised $5.3 million for his AI-assisted coding startup. Is the real issue AI use itself, or how we integrate these tools into education? Our host, Carter Considine, breaks it down in this episode of Ethical Bytes.
When students rely on AI without engagement, critical thinking suffers. Teachers have shared countless accounts of students submitting AI-written essays they clearly never read.
It’s telling that a 2025 Microsoft study found that overconfident AI users blindly accept results, while those confident in their own knowledge critically evaluate AI responses. The question now is how teachers can mold students into the latter.
Early school bans on ChatGPT failed as students used personal devices. Meanwhile, innovative educators discovered success by having students critique AI drafts, refine prompts iteratively, and engage in Socratic dialogue with AI systems. These approaches treat AI as a thinking partner, not a replacement.
The private K-12 program Alpha School demonstrates AI's potential: students spend two hours daily with AI tutors, then apply learning through projects and collaboration. Results show top 2% national performance with 2.4x typical academic growth.
With all this in mind, perhaps the solution isn't banning AI but redesigning assignments to reward reasoning over mere information retrieval. When students evaluate, question, and refine AI outputs, they develop stronger critical thinking skills. The goal could be to teach students to interrogate AI, not blindly obey it.
This can prepare them for a future where these tools are ubiquitous in professional environments, a future in which they control the tools rather than being controlled by them.
Key Topics:
- Generative AI in Classrooms in the Past Two Years (00:28)
- The Cost of Convenience (02:22)
- Between Resistance and Reinvention (05:28)
- Inside a High-Performance, AI-Driven Classroom (07:21)
- AI and Critical Thinking (09:29)
- Conclusion (12:56)
More info, transcripts, and references can be found at ethical.fm
In the past two years, generative AI has swept into classrooms, challenging old assumptions about how students learn and what counts as original work. Tools like ChatGPT can draft essays, solve equations, summarize texts, and generate code, often more fluently than the students submitting them. Nearly 90 percent of college students now use AI for coursework, according to New York Magazine. While AI is embraced across professional fields, in classrooms it is treated as dishonest by default.
The case of Columbia student Chungin “Roy” Lee highlights this tension. After relying on ChatGPT for much of his coursework, Lee developed a tool to help peers cheat on technical coding interviews. In response, Columbia University suspended him for an academic year. Shortly after the suspension went viral, Lee raised $5.3 million for that same startup. If students are being suspended for using tools their future employers expect them to know, then perhaps the problem isn’t misconduct; it’s misalignment.
This podcast argues that framing AI use in education as inherently wrong misses an essential, underlying problem: the erosion of critical thinking. Losing the ability to challenge ideas and form one’s own worldview is caused not by AI itself, but by the educational system’s failure to integrate the technology well. Drawing on recent research and real-world case studies, such as the experimental Alpha School, this podcast examines how to integrate AI into education in ways that preserve meaningful intellectual engagement. Students are already using these tools; the question is whether educators will integrate AI to strengthen learning or let the system, through omission, undermine the development of critical thinking.
The Cost of Convenience
What happens when AI is easier to use than your own mind? Many educators and researchers warn that excessive reliance on AI tools for writing, problem solving, or ideation may undermine the core capacities education is meant to foster: analytical reasoning, creativity, and independent thought. In classrooms across the country, teachers are seeing signs of a shift. Gina Parnaby, an Advanced Placement English teacher in Atlanta, says she has already seen students using AI as a way to outsource their thinking. Her concern is that if teens rely on ChatGPT to compose arguments or interpret texts, they may not develop the writing and analysis skills they need. Another teacher, Alexa Borota of New Jersey, observes that frequent AI use can worsen short attention spans and erode the stamina needed to tackle longer tasks.
A 2025 Microsoft survey reinforces the concern that uncritical reliance on generative AI diminishes students’ capacity for independent reasoning. The study found that participants who had high confidence in AI tended to blindly accept its outputs, while those confident in their own knowledge were more likely to verify and critically engage with AI-generated content. The pattern points to a distinction at the heart of how people engage with AI: passive reliance erodes reasoning, while active, critical use strengthens it.
Real-world examples, such as those documented in New York Magazine, illustrate this distinction. A high school English teacher in Los Angeles described a student who submitted an AI-written essay on The Great Gatsby that referenced characters who don’t appear in the novel. When asked about the mistake during a follow-up quiz, the student admitted they hadn’t even read the AI’s output. Similarly, a community college professor observed that students using AI to generate STEM answers often struggled in labs and problem sets. These students were not engaging with AI as a tool for deeper learning but using it as a replacement for original thought.
Brendan McCord, founder and chair of the Cosmos Institute, a nonprofit dedicated to cultivating philosophical depth in technology leaders, argues in Reason that AI doesn’t degrade thinking by default. The danger, he writes, lies in treating AI as a tool to automate rather than stimulate critical thinking. Students who defer entirely to AI miss the opportunity to sharpen their own judgment. In contrast, those who treat AI as a sparring partner engage more actively. As McCord emphasizes, the future of learning depends not on whether AI is present, but on whether students and educators develop the habits of mind needed to use it well.
Between Resistance and Reinvention
Many educators responded to the initial ChatGPT wave with panic. In early 2023, New York City’s Department of Education temporarily blocked access to ChatGPT on school networks and devices, citing concerns that it “does not build critical-thinking and problem-solving skills.” Other institutions followed suit. But bans proved porous, with students using mobile hotspots and personal devices to work around the restrictions. Teachers began to sense that AI wasn’t going away; it was already deeply embedded in student life.
Yet even as AI transformed how students approach assignments, most teachers didn’t adapt. A 2024 EDUCAUSE survey found that over 60% of faculty hadn’t integrated generative AI into their teaching, largely due to a lack of training or clear guidance. Meanwhile, innovative educators found success by incorporating AI into pedagogy itself: asking students to critique AI-generated drafts, compare chatbot summaries with their own, or use AI to generate questions rather than answers.
These early experiments suggest that teachers are not powerless in the face of change, but they need support that ranges from the technical to the institutional. Professional development aimed at meaningful AI integration could help more educators move from resistance to experimentation. Clearer policies and shared case studies could also equip teachers to discern when AI use enhances learning and when it undermines it. But without the right incentives, classrooms will continue to wage war against a technology that is not going away.
Inside a High-Performance, AI-Driven Classroom
While traditional schools treat AI as a threat, some institutions use it to push learning further. At Alpha School, a private K-12 program, students spend just two hours a day with AI tutors and the rest of their time on hands-on projects, public speaking, coding, and collaboration. The results are startling: Alpha students perform in the top two percent nationally, with 2.4 times the academic growth of their peers.
Alpha’s model uses AI not as a shortcut but as a personalized tutor. Software adapts to each student’s pace and skill level, offering tailored lessons. Teachers, instead of lecturing, act as coaches and mentors. Students receive targeted instruction from AI, then apply what they learn in collaborative settings. AI becomes a tool for deeper engagement and not just a way to avoid thinking.
Still, Alpha’s success may not be replicable at scale. Alpha is a private school with selective admissions and significant resources, including a student body already primed to excel. Access to small class sizes, parent involvement, and cutting-edge tools may contribute as much to outcomes as the AI itself. Moreover, the novelty of the model may skew early results. As with any educational reform, what works in a well-funded pilot may look very different in under-resourced public schools.
Nevertheless, Alpha offers a glimpse of what is possible when AI is used thoughtfully. Students there are not memorizing answers handed to them by a machine, but are learning to move faster, think harder, and stretch their autonomy further. Alpha’s approach underscores a broader principle: when educators use AI to amplify inquiry rather than replace it, students become more active participants in their own intellectual growth.
Practical Strategies for Using AI to Foster Critical Thinking
Emerging research suggests that generative AI, when used well, can support rather than sabotage critical thinking. The key is not the presence of AI itself, but whether assignments require active engagement rather than passive consumption.
One effective strategy is having students evaluate AI-generated drafts instead of accepting them at face value. A 2024 study titled “The Metacognitive Demands and Opportunities of Generative AI” finds that interacting with AI outputs, such as analyzing, questioning, or refining them, requires users to actively monitor and regulate their own thinking. In classroom settings, teachers can ask students to identify logical flaws, evaluate the credibility of claims, or rework AI-generated paragraphs to improve clarity and coherence. Editing AI outputs pushes students to detect gaps in reasoning and make their own rhetorical choices.
A second approach emphasizes iterative prompt refinement. Students begin with a basic question to an AI model and revise it repeatedly to improve the quality, precision, or tone of the answer. Research into “prompt literacy” shows that this process deepens students’ understanding of how language guides thought. By refining inputs, learners must clarify their objectives, detect ambiguity in their phrasing, and adjust based on the AI’s interpretations. This method makes students more aware of the assumptions embedded in their questions and gives them a more nuanced grasp of how framing influences responses. It’s important to note that while prompt engineering supports critical thinking now, its long-term relevance is uncertain. The way information is elicited from models is constantly evolving: models are increasingly capable of generating effective prompts on their own, which suggests that the need for human-crafted prompts may decline over time.
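To make the exercise concrete, here is a minimal sketch, in Python, of what a round of prompt refinement might look like. Everything in it is an illustrative assumption: ask_model is a hypothetical stand-in for whatever chat interface a class actually uses, and the prompts are examples a student might write, not prescriptions.

```python
# A minimal sketch of iterative prompt refinement.
# ask_model() is a hypothetical placeholder, not a real API; it echoes
# the prompt so the example runs without any model or key.

def ask_model(prompt: str) -> str:
    """Stand-in for a call to a generative AI model."""
    return f"[model response to: {prompt!r}]"

# Each revision narrows the objective, names an audience, and adds
# constraints: the moves that "prompt literacy" research associates
# with clearer thinking about one's own question.
prompts = [
    "Explain the green light in The Great Gatsby.",
    "Explain the symbolism of the green light in The Great Gatsby "
    "for a high-school audience.",
    "In two paragraphs for a high-school audience, explain the symbolism "
    "of the green light in The Great Gatsby, citing one specific scene.",
]

for i, prompt in enumerate(prompts, start=1):
    print(f"--- Revision {i} ---")
    print(ask_model(prompt))
```

The code is beside the point; the visible progression is the lesson. Each revision forces the student to say more precisely what they actually want to know.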
Another promising practice involves using Socratic dialogue with AI systems. A 2024 study on Socratic LLMs found that students who interacted with these models exhibited improved critical reasoning and self-reflection. Socratic LLMs are designed to mimic the method of Socrates, who taught through probing dialogue rather than direct instruction. By posing open-ended questions, the models challenge students to explain their thinking, justify their claims, and consider counterarguments. The AI functions less as a source of authority and more as a heuristic nudge that slows students down and encourages deeper reflection.
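For readers curious what such a system looks like in practice, below is one hypothetical way a Socratic mode might be configured. This is a sketch under assumptions: the system prompt wording and the socratic_reply placeholder are invented for illustration and are not drawn from the study itself.

```python
# A hypothetical "Socratic" configuration for a chat model.
# Nothing here comes from the 2024 study; it only illustrates the idea
# that the model is instructed to question rather than answer.

SOCRATIC_SYSTEM_PROMPT = (
    "You are a Socratic tutor. Never state the answer directly. "
    "Reply to each student message with one probing, open-ended question "
    "that asks the student to explain their reasoning, justify a claim, "
    "or weigh a counterargument."
)

def socratic_reply(student_message: str) -> str:
    """Placeholder for a chat call that would pair SOCRATIC_SYSTEM_PROMPT
    with the student's message; here it returns a canned question."""
    return ("What evidence supports that claim, and can you think of "
            "a detail that might cut against it?")

print(socratic_reply("Gatsby's parties prove that he is happy."))
```

The design choice doing the work is the constraint itself: by forbidding direct answers, the configuration shifts the burden of reasoning back onto the student.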
These methods are united by an emphasis on active cognitive engagement. Generative AI is not inherently antithetical to deep thought, but assignments must be redesigned to reward reasoning over mere information retrieval. With thoughtful design, educators can ensure that AI strengthens the habits of inquiry, skepticism, and reflection that lie at the heart of critical thinking.
Conclusion
Generative AI is already embedded in how people work, learn, and create. Treating its use as inherently dishonest creates a disconnect between educational institutions and the professional environments students are preparing to enter. The question is no longer whether students will use AI, but how they can do so in ways that support genuine learning.
Rather than banning emerging tools or relying on outdated assessments, educators can shift their focus toward the quality of students’ engagement with ideas. That shift involves aligning teaching practices with the habits of inquiry, skepticism, and reflection that AI cannot automate.
Integrating AI into education does not mean endorsing shortcuts; it means designing learning experiences that treat AI as a resource to be interrogated, not obeyed. AI can sharpen reasoning, expand perspective, and offer new ways to think. The opportunity is not to reject the future, but to teach students how to meet it with discernment. If the classroom becomes a space where AI is neither banned nor blindly accepted, but questioned and shaped by human insight, then education may not only survive the AI age but evolve into something deeper.