Teaching in the Age of AI Is Breaking Me

I've spent years helping students learn. Now I spend most of my time trying to figure out if they did.

I fell into teaching college Earth science courses the way people fall into most things they love — gradually, without fully realizing what was happening. The pay is bad. The job security is nonexistent. I've been doing it alongside other work for years because there's a particular kind of satisfaction in watching someone finally understand something difficult. That satisfaction is hard to find elsewhere.

I'm not sure I'd make the same choice today.

The problem isn't the students. It isn't even really the technology, though that's where it lives now. The problem is that everything that made teaching feel worth doing — building trust, designing meaningful challenges, watching someone struggle toward an answer — is being steadily dismantled. Not by policy. Not by budget cuts. By a chatbot.

Let me be precise about what's actually happening.

I teach asynchronous online courses — recorded video, flexible pacing, no scheduled class time. These classes have always demanded more from students in terms of self-direction, and more from me in terms of creative engagement strategies. A student can drift without anyone noticing. That was the challenge before. I was used to it.

What I wasn't prepared for was the moment when the question shifted from "will students do the work?" to "did anything I'm looking at actually come from a student?"

A recent College Board survey found that 84 percent of high school students had used generative AI for schoolwork. College numbers aren't friendlier. When a substantial majority of students in any given class have a tool that can produce a plausible, structured, grammatically clean answer to nearly any question in seconds, the instructor's role quietly changes. You're no longer just teaching. You're investigating.

That investigation is what's killing me.

Each suspected case requires documentation. Evidence. A defensible paper trail that can survive an appeal to multiple layers of institutional review. A single instance can eat four to eight hours — hours I spend feeling not like an educator but like a low-grade prosecutor building a case I'll probably lose anyway, because there is no definitive test for AI-generated work. None. Which means guilty students sometimes walk, and occasionally an innocent one can't prove they didn't cheat.

I used to grade a student's essay in twenty minutes. Now that same assignment, if I suspect something, becomes a half-day ordeal. So I cut the assignment.

The thing people keep missing is what learning actually is.

It's not the output. It's the process of getting there. When I ask students to figure out how they'd study wind erosion without waiting centuries for a boulder to wear down, I'm not looking for the right answer — I'm watching them think. The struggle toward the answer is the whole point. That's what builds the mental architecture for scientific reasoning.

I've been tracking responses to that question since 2019. Before ChatGPT, about one in three students figured it out independently. Impressive, actually, for a genuinely hard conceptual leap. In the last two years, the success rate has jumped past fifty percent. The language students now use to explain their answers closely mirrors how ChatGPT phrases its response to the same prompt.

They solved the problem. Nothing was learned.

A popular analogy compares using an LLM to write your essay to driving a forklift through a weight room. The weights technically get moved. No one gets stronger. That's right as far as it goes, but I'd push further: the student doesn't just miss the workout. They walk away convinced they've already done it.

The adaptive strategies instructors are turning to are largely worse than what they're replacing.

Oral exams. Handwritten in-class work. Supervised sessions with no devices. These are all reasonable responses, but they come with real costs — in time, in fairness, in access. The written exam, where every student gets the same prompt under the same conditions, became standard for good reasons. Oral exams require enormous time and introduce new vectors for bias. And none of these options is available to me at all in an asynchronous format.

And online classes are not a luxury. They serve students with disabilities, students in rural areas, and students working full-time while trying to finish a degree. If the only LLM-resistant pedagogy is in-person and synchronous, those students pay the price for a problem they didn't create.

Meanwhile, the genuinely good assignment — creative, rigorous, engaging — is simply going away. I used to ask students in a natural disasters course to write a Hollywood disaster movie pitch that combined real science with intentional implausibility. They loved it. It required mastery and imagination simultaneously. Now an LLM produces something like it in ten seconds. I cut that too.

The advice instructors used to get about plagiarism was to design better assignments — more reflective, more personal, harder to outsource. Fine advice for Wikipedia-era cheating. Completely useless now. Prompting ChatGPT to fabricate a personal reflection is no harder than prompting it to define a term. No assignment type buys you safety. There are just varying levels of effort required to fake it.

The institutional response has been, in a word, insulting.

Administrators are signing enterprise AI contracts and instructing faculty to teach students how to "use AI effectively." The model assignment they keep circulating: have students generate an essay with AI, then critique it. The stated goal, when I ask, is usually to help students understand why they shouldn't rely on AI to write for them.

The assignment designed to teach students not to use AI begins by having them use AI.

I've stopped trying to explain the contradiction. No one upstairs seems particularly interested in what instructors think about this. We're offered AI grading tools to evaluate AI-generated submissions for AI-designed assignments, and told that's progress.

The thing I keep coming back to is a conversation I overheard a few months ago.

Two students were talking about their workloads. One mentioned an assignment due that night. The other asked why they wouldn't just have ChatGPT do it. The first one said — and I've thought about this a lot since — "This is my major. I actually need to learn stuff in this class. I use AI for my other classes."

Nobody using an LLM to complete their coursework thinks they're learning. Students know the difference. They're not confused about what's happening. It's workload management. Survival math. And the classes that get the real effort are the ones where they've decided the knowledge will matter to them later.

Which means the rest of us — the instructors of the courses they've written off — are not actually part of their education anymore. We're just boxes to check.

I don't know what happens if AI access becomes more limited, or if the economics of these systems eventually shift. But the version of this that's true right now is not the one being sold: AI is not enhancing learning. It is not democratizing education. It is making it almost impossible to do the things that have always worked — the slow, effortful, sometimes frustrating process of actually understanding something — while everyone responsible for the mess congratulates themselves on being forward-thinking.

I'm tired. And I'm not sure the mountain is worth climbing if I'm the only one trying to reach the top.