The debate over AI in education is almost always framed as a moral and pedagogical crisis: cheating, integrity, authenticity, learning outcomes. These concerns recur with ritual predictability. They are not wrong, but they miss the issue entirely.
Every few weeks, this framing reappears in the same form: a New York Times profile contrasting a professor who has never touched ChatGPT with another proudly building a custom chatbot for their course. The opposition is familiar, and beside the point.
Two facts, obvious and structurally inconvenient, are consistently omitted.
First: most student work is not produced to learn anything. It is produced to receive a grade.
Second: the primary function of academia is not to produce practitioners. It is to produce academics.
Once these facts are acknowledged, the panic around AI largely evaporates.
In theory, student work exists to develop understanding. In practice, it exists to satisfy an evaluation mechanism.
Students optimize for the reward structure presented to them. This is not corruption; it is rational behavior. When the reward is a grade, students produce whatever maximizes the probability of receiving that grade at the lowest possible cost.
This has always been true.
The essay, the problem set, the discussion post: these are not intrinsically meaningful activities. They are inputs into an assessment pipeline. The student does not ask, “Is this insight valuable?” but “Will this satisfy the rubric?”
AI does not introduce this logic. It merely exposes it.
If an AI system can complete an assignment in seconds and the evaluator cannot distinguish that output from student work, the conclusion is not that AI is cheating. The conclusion is that the assignment had near-zero epistemic value to begin with.
High-value intellectual work does not collapse under automation. Low-value ritualized labor does.
The second omission is more uncomfortable.
Most academic programs are not designed to prepare students for the world as it exists. They are designed to prepare a small fraction of students to remain inside academia.
Curricula privilege:
- citation over synthesis
- theoretical compliance over practical utility
- symbolic rigor over experiential feedback
This makes sense if the goal is to reproduce academic norms. It makes little sense otherwise.
Outside the university, nobody asks how the work was produced. Nobody audits the process. Nobody cares whether a tool was used. They care about whether the outcome works.
When you check your email, you do not ask whether the spam filter was AI-generated. When you use software at work, you do not care whether parts of it were written by humans or machines. When you turn on Netflix, you do not interrogate the provenance of the recommendation algorithm.
You evaluate the experience. Nothing else.
Academia is one of the last domains insisting that process legitimacy matters more than outcome quality. That insistence survives only because academia largely evaluates itself.
Defenders of the current system argue that students must struggle in order to learn. This is true, but deeply misapplied.
Struggle is only meaningful when it is causally linked to value creation. Most academic labor is not. It is simulated difficulty designed to justify grading.
If AI “nukes” a course, rendering its assignments trivial or obsolete, the problem is not AI. The problem is that the course mistook effort for value.
Making something that people will actually read, use, or rely on is an extraordinarily high bar. It is the bar the world enforces automatically.
No one gives partial credit for effort. No one rewards intent. No one asks how hard it felt.
They ask one question: Does this work?
Education structured around artifacts that must survive real use would be largely immune to AI. Most current coursework is not structured that way.
The anxiety around AI in education is not about learning. It is about control.
Grades require surveillance. Surveillance requires legible effort. AI destroys legibility.
When effort can no longer be reliably observed, grading loses its moral authority. And when grading loses authority, the academic hierarchy begins to wobble.
This is why the debate focuses on banning tools rather than redesigning incentives. Tool bans preserve the fiction that the system was working.
It was not.
AI has not broken education. It has revealed what education was already optimizing for.
If a task collapses under AI, it was never a measure of understanding, only compliance. If learning requires the absence of tools, it was never preparation for reality. If the value of work depends on how it was produced rather than whether it holds up, it was never valuable.
The future of education is not about preventing students from using AI.
It is about deciding whether education intends to produce grades and academics, or people capable of producing work that survives contact with the world.
Only the first objective is incompatible with AI.
Yes, education should be a protected space: a place to practice, to fail safely, to receive feedback. But protection is not the purpose. Education exists to prepare people to enter institutions as they are. And those institutions do not grade effort or reward sincerity. They evaluate outcomes.
What happens when students learn only to satisfy rubrics no one else uses, to perform labor no one else recognizes, to confuse effort with value?
AI forces education to choose between being a training ground for reality and a theater of evaluation.
It cannot be both.