I’ve been editing college application essays for about 10 years now. Every year, I encounter a broad array of writing abilities, ranging from high school seniors who submit fifth-grade-level essays to seventeen-year-olds who write better than their teachers. The fact is that “smart” kids have always been the minority, and despite never-ending claims that our students are getting dumber every year, I have nonetheless caught glimpses of the next generation’s most brilliant minds in every graduating high school class. These are the students who, whether matriculating into Ivy League schools or more modest universities, will go on to shape our society through medical discoveries, technological advancements, and bold ideas. They may approach problems in unexpected ways, but they have never been significantly duller than their older counterparts.
Not until this past year, however.
The graduating class of 2026 is the first cohort of students who have gone through high school with ready access to artificial intelligence (AI); as a result, they have never had to write an entire essay from scratch or map out a cohesive argument on their own. And while the return of the Blue Book exam structure has somewhat restored a baseline of individual accountability, it is now virtually impossible, with Large Language Models (LLMs) such as ChatGPT and Claude at students’ fingertips, to simulate the experience of having to compose original argumentative essays from the ground up. The result is a sharp decline not only in student writing skills but also in the general capacity for critical thinking, as manifested in the cohesive formulation of written ideas.
But the most curious component of the AI phenomenon is that the approximate distribution of writing ability itself has not changed much over the past several years—there are still the same number of bad writers, mediocre writers, and good writers in every graduating senior class—at least when it comes to command of grammar and syntax. What’s changed, instead, is the prevalence of students who possess a high degree of technical writing fluency yet a low level of intellectual competence, resulting in a greater number of students who can produce perfectly structured sentences that say absolutely nothing.
How is that possible?
It’s simple: The same number of students with a natural aptitude for writing will still learn how to write, but they will no longer learn how to write well. Where previous generations learned to write from books, newspaper articles, and other written materials, the latest generation of students will be most influenced by their new primary source of information: LLMs.
In other words, because students now use ChatGPT and other AI tools to outline essays, skim readings, and solve homework problems—to perform nearly every assigned task—the majority of the writing they encounter will be AI-generated.
The problem is not just quantity, but quality: ChatGPT and other LLMs produce language that often says very little.
Here is an example of a paragraph it generated when I asked it to predict the next section of my essay:
What this reveals, more than anything, is that we have mistaken fluency for thought. A student who can produce a clean, grammatically sound paragraph—complete with varied sentence structure and the occasional well-placed em dash—now gives the impression of intelligence without having engaged in the difficult, often uncomfortable labor of actually forming an idea. But writing, in its truest sense, has never been about polish; it has been about resistance. It is the act of pushing against one’s own vagueness, of confronting half-formed intuitions and forcing them into clarity.
Read the first two sentences. What can you deduce from ChatGPT’s argument? We learn that a) students have mistaken “fluency” for “thought” and that b) students can now write clean sentences without having gone through the “labor of actually forming an idea.” There is truth to both claims, but the first is too broad to communicate anything substantial, and the second simply repeats an idea I’ve already established without deepening it in any meaningful way. ChatGPT is a pro at regurgitating surface-level observations without actually saying anything of substance.
Its next two claims, however, are the most egregious offenders. ChatGPT goes on to tell us that writing is not about “polish” but “resistance.” Garbage political bias aside, it is bad enough that this statement by itself means absolutely nothing—what follows is somehow even worse: an explanation that wastes an entire sentence on buzzwords and does little to elucidate the meaning of this so-called “writing as resistance.”
All of this is to say that ChatGPT likes to spew nonsense.
So what happens when students read this bad writing, and only this bad writing, daily?
For one, their own writing begins to resemble ChatGPT’s circumlocutory prose. This year, for instance, I’ve received an overwhelming number of student essays that feature the formulation “It’s not just this, it’s that”—one of ChatGPT’s signature rhetorical tics, which appears in the sample paragraph above. While many of these students admit to using AI in their writing, some vehemently insist that their writing is their own—even if their essays sound almost wholly AI-generated.
It might be tempting to assume that these students are simply lying through their teeth. But what is most remarkable is that when asked to produce their own writing on the spot, many of these students will recreate ChatGPT-sounding sentences without resorting to their AI sidekicks.
What this means is that students are beginning to write exactly like AI.
You are what you read, after all, and AI writing is the only writing that they have ever known.
As a result, an increasing number of students are beginning to sound like one another, and an increasing number of student essays have become indistinguishable from robot prose.
But is this the end of critical thinking in our society?
Not necessarily. After all, the rise of the electronic calculator in the 1970s convinced an entire generation of pedagogues that students would grow dumber, but the result was simply a shift in intellectual priorities. While the general public is veritably worse at mental math today than it was 50 years ago, the handful of individuals who can multiply large numbers in their heads are now far more valuable in certain fields of our society. Whenever a technology automates a basic skill, therefore, the small minority who retain that skill gain a disproportionate advantage.
I predict that we will soon see the same phenomenon in writing.
After all, few students could really write well before the advent of ChatGPT. With that number now dwindling further, writers are about to become a valuable societal commodity.
As Peter Thiel said in a recent interview, the future looks bright for the “word people” because good writers may very well define our future.