Invoking AI’s harm to critical thinking is a weak defence for universities

Calls to ditch AI because it is destroying students’ analytical skills ignore institutions’ terrible record at developing complex reasoning, says Ian Richardson

Published April 15, 2026
[Image: Rodin’s Thinker statue falling from its pedestal. Source: David Madison/Getty Images (edited)]

A historical pattern has emerged in higher education when it comes to new technology. Something arrives. Institutions recoil. Arguments are voiced about standards, about the sacred nature of knowledge, about the irreplaceable human element of learning. In time, the technology is accepted and the arguments are quietly retired. Until the next thing arrives – when the same reassuringly familiar laments can be aired once again.

It’s interesting to question where we might have been if only we’d managed to resist the tide of technological “progress”. The printing press wouldn’t have debased the authority of texts by disintermediating venerable and learned hands. The photocopier wouldn’t have allowed accumulation of vast quantities of text without proper engagement. Scientific calculators wouldn’t have hollowed out numerical intuition. Searchable databases wouldn’t have stripped away contextual judgement by preventing us from spending years thumbing card indexes and scouring dog-eared journals. Word processors wouldn’t have produced sprawling, undisciplined prose. The internet wouldn’t have degraded research quality and distracted lazy students with a swamp of unreliable content.

Each generation of educators has convinced itself that new technology poses a unique threat to authentic learning, intellectual rigour and human connection – only for those fears to fade as the technology becomes normalised. It’s a pattern that presents itself as principled, cautious engagement with technological change while concealing, rather more discreetly, a near-perfect record of confusing intellectual virtue with institutional and parochial self-interest.

In a Guardian article published on 10 March, a dozen humanities professors describe mostly despairing experiences at the arrival of artificial intelligence in their classrooms. “It’s driving so many of us up the wall,” confided one. Another described it as “the bane of [her] existence” and wished she could “push ChatGPT off a cliff”. Elsewhere, the question of what it would do for us as a species reflected broader existential concerns about AI’s emergence.


These are not fringe voices. They’re senior academics at Stanford, Berkeley and Penn State, among others. And the argument they’re making is, at its heart, a straightforward one: AI threatens critical thinking, and critical thinking is what universities exist to cultivate.

It’s a powerful argument, and one that has become a rallying cry within the sector. It’s also, when held up to even modest scrutiny, one that many institutions are making with little support beyond fine words and good intentions.


It’s 15 years since Richard Arum and Josipa Roksa’s Academically Adrift was dropped on an unsuspecting sector, highlighting the roughly 45 per cent of US undergraduates who demonstrated no significant improvement in critical thinking, complex reasoning or writing skills during their first two years of college – findings that were later extended to all years of college life and, the authors claimed, left graduates poorly prepared for workforce transition.

Not everyone agreed, of course. Some suggested flawed methods could account for the findings, while others pointed to differences between disciplines. Elsewhere, studies were produced showing higher levels of gain. Nevertheless, the central criticism landed, and the sector was clearly on the defensive. Outcome-based learning approaches were embedded, writing-across-the-curriculum programmes were expanded, “high-impact practices” were formalised, and the Collegiate Learning Assessment (CLA/CLA+) was adopted by many as a measure of value-added learning. These developments, initiated in the US, have significantly influenced international approaches – notably through quality assurance and accreditation frameworks designed to address rising demands for accountability.

However, a decade later, the 2022 OECD report Does Higher Education Teach Students to Think Critically?, which assessed more than 120,000 students across six countries, found only a modest improvement in students’ abilities. Given “the importance that most higher education programmes attach to promoting critical thinking skills, the learning gain is smaller than could be expected”, it reported. With half of all exiting students performing at the two lowest levels of mastery, the conclusion was unequivocal: a university qualification is not a reliable measure of the critical thinking expected by the global marketplace.

Despite this, critical thinking appears prominently in institutional mission statements, quality frameworks, programme descriptions and learning outcomes across the sector. But the gap between aspiration and evidenced attainment remains wide – and it’s clear that institutions have been evasive in this area when it comes to comparative measures. The OECD’s Ahelo project, for instance, was abandoned in 2015 because of insufficient institutional support – notably from elite universities. As Andreas Schleicher, director for education and skills at the OECD, said at the time, the biggest resistance had come “from those institutions that fear they will never do as well as their reputation suggests”.

So what is actually being defended in the Guardian article?

The professors interviewed are intelligent people grappling with a genuine disruption. AI does present real challenges to meaningful assessment. The case for preserving the productive difficulty of unaided thought – the struggle with a text, the construction of an argument from scratch – is defensible and important. The argument that universities must move towards metacognition, teaching students to interrogate their own reasoning rather than merely deploying it, reflects serious engagement.


But the framing – seriously? “Self-lobotomisation.” Students deliberately made “helpless” by tech companies. “Intellectual ravages.” AI threatening us “as a species”. Beneath the intellectual veneer lies an emotional texture entirely familiar from the history of technological resistance: the language of desecration, of theft, of an irreplaceable thing in jeopardy.

And what is this irreplaceable thing, exactly – unaided human cognition, or something else entirely? For many, the concerns are sincere: a genuine belief that something important about how humans learn and reason is being bypassed, and a commitment to students that deserves to be taken seriously. But sincerity and self-interest are not mutually exclusive.


Resistance to technology has never really been about technology – it is about what technology does to existing structures of power and privilege. Those structures depend on control, and technology has a habit of loosening it.

Each transformation has been resisted most fiercely by those with most to lose. AI threatens to perform functions that have long conferred authority and status on those who perform them – and it is doing so at the very moment when institutional performance is being questioned like never before.

None of this argues against taking AI’s risks seriously, but it does argue that we need to be driven more by a sense of what we’re trying to achieve than by what we’re trying to defend (not least because the thing we’re defending is on rather shaky ground). Responses to AI within the sector – for instance, oral examinations, competency-based assessment, explicit instruction in critical reasoning – are frequently presented in opposition to the technology when, in fact, they can all be facilitated rather than undermined by it. AI’s broader potentialities must be seriously explored – the flat suggestion that the technology is antithetical to the development of critical enquiry is emotional, oversimplistic nonsense.

Professors are asking their students to trust that the struggle is worth it – that the friction of unaided thought produces something no machine can replicate. They may well be right. But a sector that has spent 15 years evading serious measurement in this area, and 500 years resisting technologies that threatened its authority, cannot demand that trust on faith alone.

The critical thinking stance is available to higher education, but it has to be earned – and not by pushing technology off a cliff.

Ian Richardson is a faculty member and director of executive education at Stockholm Business School at Stockholm University. He is co-founder of the national Swedish programme AI for Executives, which seeks to drive board-level understanding and organisational adoption of AI across industries and sectors.


