Everyone’s Using AI To Cheat at School. That’s a Good Thing.
It’s not just the college students. I’m a professor—and these AI models offer keener, smarter, and more thorough suggestions than I do.
There’s an epidemic of cheating in American education right now. The cheaters are the students, of course, but the names of the cheating aids might be familiar to you: ChatGPT, Claude, Gemini, Grok, and Llama.
These characters are capable of doing extraordinary things: writing your persuasive essay in under a minute; knowing virtually all of history; and performing first-rate synthetic analyses of complicated questions. They are not yet geared up to do mathematics, but the best of these programs can already pass medical and bar exams. Oh, and they can do it for millions of students all at once, sometimes even all the way up to graduate-level work.
I’m talking, of course, about large language models (LLMs).
Accurate data is hard to come by, but one estimate suggests that up to 90 percent of college students have used ChatGPT to do their homework. Rather than debating the number, professors and teachers simply ought to assume (as I do) that their students have an invisible, very high-quality helper. As current norms weaken further, more students learn about AI, and the competitive pressures get tougher, I expect the practice to spread to virtually everyone.
This state of affairs has set off a crisis among educators, parents, and students. There has been a flurry of recent stories capturing how the cheating is done, how hard it is to catch, and how it is wrecking a lot of our educational standards.
Unlike many people who believe this spells the end of quality American education, I think this crisis is ultimately good news. And not just because I believe American education was already in a profound crisis—the result of ideological capture, political monoculture, and extreme conformism—long before the LLMs.
These models are such great cheating aids because they are also such great teachers. Often they are better than the human teachers we put before our kids, and far cheaper, at that. They will not unionize or attend pro-Hamas protests. But in the meantime, the doomers are right about at least one thing: It will feel very painful.
The first problem the LLMs expose is that our evaluation systems are broken, inefficient at sorting, and also unfair. If one student gets an A and the other a B, do we know that reflects anything other than a differential willingness to use LLMs? We never will, yet decisions for fellowships, graduate school admissions, and jobs all will be made on this basis. It stinks.
This isn’t just a modest problem. It is an out-of-control one, and it will only get worse.
The second problem is that the current proposed solutions will make things worse. For instance, I commonly hear the following as potential remedies: Enforce anti-AI rules through the honor code; grade based only on proctored, closed-book, in-class exams; and give oral exams.
But if the current AI can cheat effectively for you, it can also write better than you. In other words, our universities are not teaching our citizens sufficiently valuable skills; rather, we are teaching them that which can be cloned at low cost. The AIs are already very good at those tasks, and they will only get better, at a rapid pace.
Whatever you think of the intrinsic merits of the proposed solutions—can a tougher honor code really work?—they are missing the point. The trouble with all of these remedies is that they implicitly insist that we must do everything possible to keep wasteful instruction in place. The current system is misleading students about the skills they will need to succeed in the future, and providing all the wrong incentives and rankings of student quality.
And the list of problems does not stop with the students. It includes professors. Including me.
Lately I have been using the o3 model from OpenAI to give my PhD students comments on their papers and dissertations. I am sufficiently modest to notice that it gives keener, smarter, and more thorough suggestions than I do. One student submitted a dissertation on the economics of pyramid-, tomb-, and monument-building in ancient Egypt, a topic about which I know virtually zero. The o3 model had plenty of suggestions. How about: “Table 6.5’s interaction term ‘% north × no-export’ is significant in model 3 but not 4. Explain why adding period FE erodes significance; maybe too few clusters? Provide wild-bootstrap p-values.” Of course I would have noticed that point as well.
Maybe they are not all on-target—how would I know?!—but the student, who has studied the topic extensively, can judge that for himself. In any case the feedback was almost certainly much better than anything I might come up with.
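To make that kind of comment concrete: a wild cluster bootstrap is a standard remedy when a regression has too few clusters for the usual cluster-robust p-values to be trusted, which is exactly the worry o3 raised. Here is a minimal sketch in Python on simulated data; the toy panel, the variable names, and the number of replications are my own illustrative choices, not anything from the student’s dissertation.

```python
# A minimal sketch of a wild cluster bootstrap p-value for one coefficient.
# The data below are simulated; a real analysis would plug in the actual
# panel and the coefficient under scrutiny.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Toy panel: few clusters, one regressor of interest, cluster-level noise.
n_clusters, per_cluster = 12, 30
g = np.repeat(np.arange(n_clusters), per_cluster)        # cluster ids
x = rng.normal(size=g.size)
y = rng.normal(size=g.size) + rng.normal(size=n_clusters)[g]  # true beta_x = 0
X = sm.add_constant(x)                                   # columns: const, x

def t_stat(y, X, groups, k=1):
    """Cluster-robust t-statistic for coefficient k."""
    fit = sm.OLS(y, X).fit(cov_type="cluster", cov_kwds={"groups": groups})
    return fit.params[k] / fit.bse[k]

t_obs = t_stat(y, X, g)

# Impose the null (beta_x = 0) by fitting on the constant alone, then
# resample by flipping each cluster's residuals with Rademacher weights.
restricted = sm.OLS(y, X[:, [0]]).fit()
y0, u0 = restricted.fittedvalues, restricted.resid

reps, hits = 999, 0
for _ in range(reps):
    w = rng.choice([-1.0, 1.0], size=n_clusters)[g]      # one sign per cluster
    hits += abs(t_stat(y0 + w * u0, X, g)) >= abs(t_obs)

print(f"wild-bootstrap p-value: {(hits + 1) / (reps + 1):.3f}")
```

The point of flipping each cluster’s residuals as a block is to preserve the within-cluster correlation that makes the naive p-values unreliable in the first place.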
Suddenly we are realizing that the skills we trained our faculty for are also, to some degree, obsolete. Would it be so crazy to put o3, or some other advanced AI model, on the dissertation committee, in lieu of the traditional “outside reader”?
It gets worse yet. I have, at times, proposed that we devote one-third of the college curriculum to teaching students about AI and how to use it. Imagine what could be taught. What are the strengths and weaknesses of the various models? How do you spot and minimize AI hallucinations? What makes for a good prompt? How can you check the work of your AI agents? How can you use multiple AI models to get an even better answer? And so on. After all, those are the skills that will be required for the jobs of the future, or in some cases the jobs of the present.
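To give a flavor of that last skill, here is a minimal sketch of putting one question to two models and asking one of them to adjudicate. It assumes the OpenAI and Anthropic Python SDKs with API keys in the environment; the model names, the question, and the adjudication prompt are illustrative stand-ins, not a recommendation.

```python
# A minimal sketch of "use multiple AI models to get an even better answer."
# Assumes OPENAI_API_KEY and ANTHROPIC_API_KEY are set; model names are
# illustrative and may need updating.
from openai import OpenAI
import anthropic

QUESTION = "In what year did the Bank of England gain operational independence?"

gpt = OpenAI()
claude = anthropic.Anthropic()

gpt_answer = gpt.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": QUESTION}],
).choices[0].message.content

claude_answer = claude.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=400,
    messages=[{"role": "user", "content": QUESTION}],
).content[0].text

# Let one model adjudicate: do the two answers agree, and if not, why?
verdict = gpt.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": (
            f"Question: {QUESTION}\n\nAnswer A: {gpt_answer}\n\n"
            f"Answer B: {claude_answer}\n\n"
            "Do A and B agree? If not, which is better supported, and why?"
        ),
    }],
).choices[0].message.content
print(verdict)
```

The same pattern scales to checking an AI agent’s longer outputs: generate with one model, critique with another, and reserve human judgment for the disagreements.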
I remain excited by the prospect. Yet a major problem remains—and one I didn’t originally foresee. I have observed that college and university students, on average, know more about LLMs than do their professors. Much of that knowledge likely comes from cheating (or what we currently regard as cheating) with AI. But it is knowledge nonetheless. And it is knowledge driven by incentives, which can be a pretty powerful method of learning, whether we like it or not.
Given that reality, my suggestion for a one-third AI curriculum is a nonstarter. We do not have the personnel to teach it—which is yet another condemnation of our sluggish professors and the state of our current system.
So here we have this entire system of higher education, stuck out on a limb with no easy way down.
How did this happen?
My interpretation of the history is simple, and it does not require the postulation of major villains. The system evolved in the 20th century around what it is possible to teach, measure, and test with ease. It was the most direct path to stability and success. The professors could teach from standardized notes and grade by repetitive formula, the students could follow a hard-work recipe for advancement, the administrators had clear standards for who could graduate or not, and the parents understood the tactics for getting their kids into better schools. The deans stood above it all, happy that everyone else was happy enough.
It’s true that the system was biased in favor of those who could afford SAT prep courses and tutors—or, for that matter, fancy high schools and prep schools. But overall this system allowed for a level playing field and relative meritocracy because when the time came to hand in your work, most (not all!) of the time it had to be done by you.
It all worked. Until it didn’t.
We are now in a situation where everyone is continuing to go through the motions, and probably will do so until the students and the tuition-paying parents rebel. Because college is fun, and parental wealth is rising, I do not think that will happen soon. We will all continue to march just a little bit further toward the edge of the cliff.
Maybe the system will ultimately blow up. Or perhaps it will just slouch on this way, becoming increasingly unfair and pathetic. Employers will learn to disregard grades. Graduate schools will rely all the more on letters of recommendation. Personal networking will continue to rise in importance.
Higher education will persist as a dating service, a way of leaving the house, and a chance to party and go see some football games. It also will become all the more important as a path to building out a personal network of peers.
The ostensible mission of college—learning—will become ever more optional. Many students will seize the opportunity to study with their AI models, liberated from the onerous demands of having to write all those “A quality” papers themselves. A few “rebels” will do their classwork on their own, but everyone else will wonder what exactly they are planning on doing with the writing skills they develop.
Enrollments will shrink, and conditions for faculty will deteriorate. But the enterprise will just keep on chugging along, a sign of how much of it was based on a big dose of illusion in the first place.
In the meantime, some will fast-forward to what is to come. Soon enough, or perhaps even today, the real learning will come from those who treat colleges as the annoyances they are becoming. Those students will either skip college entirely, as increasing numbers of hyperdriven achievers do, or go for fun and do their real learning from AIs, groups of sharp peers, and inspirational mentor-professors. The latter two elements may not sound so different from past practice, but what is different is that the curriculum itself is now radically obsolete.
Not long ago I had lunch with a friend of mine in Mexico. He is in his early 20s, and did not go to college. He picked up five different jobs as a software engineer, using the o3 model from OpenAI to do most of the work. He shows up for meetings for the different jobs, when that is required. He is earning a very comfortable living, and still has plenty of time to read and explore the world. Am I supposed to believe he would have done better by sticking it out and getting an MBA? Instead, you could say he has a self-taught PhD in multitasking, programming, and of course using AI models.
So that’s the students.
What about the professors?
The most ambitious professors will learn how to use the AI models themselves, and give their students the same quality of feedback the students will be able to get on their own. The best among the professors will learn to be inspirational mentors, coaches, and networking connectors. They will very directly help their students get somewhere in the real world. Those are skills the AIs cannot copy, at least not anytime soon.
I asked the o3 model, one of OpenAI’s advanced models available with full access at $200 a month, about these problems and what we should do. It gave a lengthy and detailed answer, along with a timeline for proposed changes. Here is one excerpt:
Upskill and realign the faculty—and let students help
Mandatory AI boot camps. Only 37 percent of institutions now offer systematic upskilling. A weeklong, stipend-supported boot camp before each semester, benchmarked to the Educause landscape survey, can push that above 80 percent in three years.
Reverse-mentorship studios. For every boot-camp cohort, pay 4–6 advanced undergraduates or graduate students (who already cheat/learn with AI) to coach faculty on workflow hacks, data cleaning, and prompt refinement. This directly exploits the incentive-driven student knowledge the essay highlights. (AAC&U)
AI as committee member. Insert a model such as o3 into dissertation reviews as an “outside reader,” with its report released to the whole committee. Pilot data from three U.S. PhD programs show faster iteration cycles and no drop in rigor.
Do read the whole thing, as I could not do better myself, which is part of the problem. The final difficulty? I asked o3 if university faculty are likely to support such reforms. The answer was “mostly not.”
https://www.thefp.com/p/ai-everyones-cheating-thats-good-news
For more coverage on the Artificial Intelligence revolution, read here:
AI will change what it is to be human. Are we ready?
This is the most important essay we have run so far on the artificial intelligence revolution. I’m excited for you to read it.
https://www.thefp.com/p/ai-will-change-what-it-is-to-be-human