Reaction online to my recent opinion piece in Times Higher Education on universities’ failure to strategically engage with artificial intelligence (AI) has been both fierce and illuminating.
Some criticisms were measured and thoughtful; others were reflexive, polemical or rooted in deeply held convictions about what universities are – and what they must never become. Together, however, they inadvertently reinforce the point that I was making: that resistance to change in the sector is so entrenched that it has become part of its identity. And that resistance now poses a genuine threat to its long-term well-being.
A number of responses centred on definitional nit-picking. Why refer to higher education as a “sector”? Why invoke “Enlightenment principles”? Such definitional questions, while valid, exemplify a particular challenge of criticising higher education. It is tempting to be drawn down this rabbit hole and deflected from the larger issue: why is the sector so reluctant to interrogate its own structures, norms and assumptions?
Elsewhere, critics asserted that AI is over-hyped and may be another phlogiston – an intellectual dead end or chimera. Why, they ask, must universities engage? Shouldn’t they resist fads, as they have rightly done in the past?
This argument, popular among faculty, invokes the precautionary principle, but in practice represents an abdication of adaptive responsibility. It assumes that the status quo is safe, neutral and inherently more virtuous than the unknown. Yet universities themselves have long taught that knowledge – and society – advance through enquiry, experimentation and engagement, not by entrenchment.
What makes this line of reasoning particularly problematic is that it is often most forcefully espoused by those with the least technological literacy. Many such criticisms demand a “rigorous case” for AI, but such a case is difficult to recognise without an understanding of data, machine learning or emerging practice. Contrary to some assertions, academic integrity and technological adoption are not mutually exclusive; indeed, preserving a meaningful conception of academic integrity now requires an informed understanding of the technologies that challenge it.
Another set of responses framed my argument as morally suspect – an endorsement of extractive digital oligarchies, a capitulation to marketisation. This is familiar territory. For some academics, technology adoption is indistinguishable from the neoliberal creep they perceive to be “hollowing out” universities. But such a framing, again, obscures more than it clarifies.
When I noted that banks have embraced AI in a way that universities have not, I wasn’t suggesting that banks are paragons of virtue. I was merely noting that even these most conservative of institutions have been able to reconfigure themselves in response to technological change, and the existential crisis it represents. That universities, with their vast intellectual resources and claimed devotion to societal progress, lag so far behind should give us all pause.
Concerns were also expressed about the harms of AI: the ecological costs, the erosion of critical thinking, the risk of over-dependence. These are important issues and deserve serious attention, but they do not justify strategic disengagement. Indeed, universities must explore AI adoption in all corners of institutional practice while proactively leading ethical, pedagogical and ecological responses to it. Ignoring the technology’s transformational possibilities will do little to preserve the sanctity of the student experience; it simply cedes leadership to actors outside the academy.
Some reactions to the article were openly dismissive: “naive”, “hyperbolic”, “written by a bot”, “a black plague [of mass adoption]”. These comments are emotionally revealing and suggest that a deeper objection relates not so much to the technology as to the perceived threat it poses to identity, expertise and authority. Such anxieties, while understandable, are not evidence against the argument. Rather, they are evidence in favour of it.
There were some more constructive responses. Several noted that universities are more proactively engaged with AI than the article suggested, and I’m very happy to acknowledge examples of thought leadership in the sector. But exceptions do not make the rule. Resource constraints, governance gaps and a disinclination to challenge existing practices all conspire to impede strategic consideration of how technology aligns with institutional purpose and design. AI is treated as an ad hoc add-on, delegated to committees and bolted on to legacy systems.
When AI transformation begins with use cases that build on existing logics, rather than pilot projects that challenge present institutional design, the result is inevitable: more of the same, with the promise of faster and cheaper (if you’re lucky).
Crucially, some commentators highlighted a deeper cultural problem: the fact that universities talk about preparing students for the future but rarely treat students as partners in shaping it. This failure of “reverse mentoring” reflects a custodial mindset at the heart of institutional resistance. For some, challenging the underlying logics of how universities organise knowledge, learning and governance is not only uncomfortable but sacrilegious. But clinging to rituals of transcendental purpose will not preserve the university’s social value.
The irony is that the pursuit of truth and understanding is a dynamic process that demands constant questioning of existing beliefs and a readiness to revise ideas based on new evidence. It is this process that, over many decades, has contributed to the development of AI – and universities have been central to that development. But when it comes to internal transformation – rethinking curricula, governance, research practice, pedagogical models – the will and curiosity are mysteriously absent.
To restate: the greatest threat to higher education is not AI. It is institutional inertia, supported by reflexive criticism that mistakes resistance for virtue. AI did not create this problem, but it is exposing dysfunctions and contradictions that have accumulated over decades.
Whether universities engage with AI enthusiastically or reluctantly is ultimately less important than whether they do so strategically, imaginatively and with a willingness to question their own design. Because if they don’t, others will.
Ian Richardson is a faculty member and director of executive education at Stockholm Business School, Stockholm University. With a background in technology media, he is co-founder of the national Swedish programme AI for Executives, which seeks to drive board-level understanding and organisational adoption of AI across industries and sectors.