Faculty are right that AI output is mediocre. They’re wrong about why

If AI amplifies what you bring to it, the liberal arts mission of developing critical thinkers becomes not nostalgia but practical necessity, says Nicholas Creel

Published March 24, 2026; last updated March 24, 2026
[Image: a man lifts a boulder with a lever, illustrating the force multiplier effect of AI. Source: duncan1890/Getty Images]

Earlier this month, a viral Substack post by political scientist Alexander Kustov issued a blunt challenge to his fellow academics: wake up on AI.

Kustov, an associate professor at the University of Notre Dame, argued that AI can already produce publishable social science research and that most faculty opposition to the technology is “status protection dressed up as principle”. The piece ricocheted across academic social media, generating the predictable mix of enthusiastic agreement and defensive outrage. Both reactions missed something important.

Kustov is largely right about the disruption coming for academic research and publishing. But his focus is squarely on how professors produce research. More urgent but less examined is the question of what universities are supposed to be developing in their students – and whether they still know how to do it in an era when AI can generate a passable essay in seconds. That question has a more uncomfortable answer than most AI optimists are willing to give.

Ask a roomful of faculty members what they think of “AI-assisted” student work and the overwhelming reaction is that it is mediocre. Formulaic. Hollow. The kind of writing that fulfils the technical requirements of an assignment while signalling, to any trained reader, that no genuine thinking occurred. They’re not wrong to notice this. However, they’re wrong to draw the conclusion they typically do.


The anti-AI academic consensus has hardened into something close to doctrine: AI produces “slop”, and universities that tolerate it are abetting the degradation of intellectual life. Experienced faculty can tell at a glance which work was created with genuine student effort and which was not, and their frustration at being drowned in slop is legitimate.

But frustration is not analysis, and the faculty consensus has confused the symptom with the disease. AI does not produce mediocre work. Mediocre thinkers produce mediocre work. AI just lets them do it faster and at higher volume. That is a real problem – but it is a problem with the operator, not the tool.


AI is already highly capable. It has passed the bar exam by a comfortable margin, outscored medical students on complex clinical reasoning exams and achieved diagnostic accuracy comparable to that of non-expert physicians across a wide range of conditions. Whatever one thinks of AI aesthetically or pedagogically, the claim that it inherently produces “slop” is simply not defensible.

Imagine AI as a force multiplier. It will genuinely improve a mediocre student’s writing quality and speed. But that writing will still be below average, because the mediocre student cannot recognise the gap between what AI gave them and what excellent output actually looks like. Note, too, that they could not evaluate the quality of their own arguments before AI existed either: removing the tool would not develop that judgement.

Now place an expert in front of the same tool: a senior scholar, a seasoned journalist, an experienced attorney. AI will not write at their level unprompted. But that is not how experts use tools. They prompt with precision. They provide intellectual architecture. They specify the argument, the structure, the things the output cannot get wrong. AI drafts; they direct. Then they edit, interrogate, revise, effectively coaching the output up to their standard – which is something only they can do because recognising that standard requires expertise they have spent years building. The result is work produced at their level of quality but with greater velocity. The multiplier is real and, for the expert, genuinely powerful.

Research on AI in the workplace bears out this asymmetry, showing that novices gain productivity from AI, performing at levels that previously required significantly more experience. But novice gains in productivity are not the same as gains in quality.

Kustov’s own piece inadvertently illustrates the point, too. He revealed in a postscript that the article was generated by agentic AI working from his social media posts and notes. The output was sharp and persuasive precisely because Kustov is an expert. The ideas were his, the judgements were his, the decade of domain knowledge was his. The AI executed. That is not a diminishment of the piece: it is the clearest possible demonstration of the multiplier in action. A novice feeding the same tool their half-formed social media posts would not have produced that article.


The right response to AI for higher education is therefore to redouble our commitment to developing the capacities that make the multiplier powerful. This is, or should be, the core purpose of a liberal arts education.

The case for liberal arts as the model for AI-era education is not new. Universities that produce generalists who can think across disciplines and navigate novel problems are far better positioned for a world where AI handles routine cognitive tasks. But the AI mediocrity debate adds a sharper edge to that argument. If AI amplifies what you bring to it, then the liberal arts mission of developing critical thinkers, not just credentialed producers, becomes not a nostalgic aspiration but a practical necessity. The student who enters the workforce with deep expertise in how to think and judge will wield AI with power. The student who arrives with surface knowledge and borrowed competence will be undone by it.

This does not mean AI belongs everywhere in the university. It doesn’t. There are courses and assignments where the struggle to think without a scaffold, to write badly before writing well, is precisely the point. Developing expertise requires doing the cognitive work that builds it. AI multiplication only works if there is something to multiply.


Some assignments should therefore be AI-free by design. Others should actively engage AI, teaching students to evaluate, direct and improve its outputs – a sophisticated intellectual skill that requires genuine domain knowledge.

AI does not make expertise obsolete. It makes expertise matter more. The gap between what a novice and an expert can produce with AI is wider than the gap between what they could produce without it. The task of universities is to produce people who are on the right side of that gap.

That is not a technology problem. It is a mission problem. And it is entirely ours to solve.

Nicholas B. Creel is associate professor of business law at Georgia College & State University.

