AI has moved into universities’ engine room, but no one is at the controls

Ever more processes rely on artificial intelligence, yet our governance is still stuck at the level of ‘is it OK for students to use ChatGPT in essays?’, says Tom Smith

Published on January 19, 2026
Last updated January 19, 2026
Image: a ship’s engine room, illustrating university governance. Source: ES3N/iStock

By now, most universities have an artificial intelligence policy. It probably mentions ChatGPT, urges students not to cheat, offers a few examples of “appropriate use” and promises that staff will get guidance and training.

All of that is fine. But it misses the real story.

Walk through a typical UK university today. A prospective student may first encounter you via a targeted digital ad whose audience was defined by an algorithm. They apply through an online system that may already include automated filters and scoring. When they arrive, a chatbot answers their questions at 11pm. Their classes are scheduled by algorithms matching student numbers with lecture theatre availability, and their essays are screened by automated text-matching and, increasingly, other AI-detection tools. Learning analytics dashboards quietly classify them as low, medium or high risk. An early-warning system may nudge a tutor to intervene.

At every step, systems powered by machine learning and automation are making recommendations, structuring choices and sometimes triggering decisions. But it might not be clear to students – or to staff – which parts of this journey are “AI-enabled”, how those systems work, or who is accountable for their outputs.

We talk about AI as if it were an app. In reality, it is becoming the operating system of the university.

That shift matters. Apps are optional extras: you download them, try them, delete them. Operating systems are different. They sit under everything else. They determine what can be installed, who has permissions, which processes are possible, and what data is collected along the way. When you change the operating system, you change the whole machine.

Beyond student recruitment, learning, assessment and support, AI touches professional service areas such as estates planning, financial forecasting and all manner of procurement. Yet our governance is still stuck at the level of “is it OK for students to use ChatGPT in essays?”.

Three uncomfortable truths follow.

First, most institutions do not have a reliable map of their own AI infrastructure. Ask a vice-chancellor, a PVC for education or a chair of council for a single document listing every significant use of AI or automated decision-making across the institution and you are likely to get a polite silence.

Responsibility is fragmented. IT manages some systems. Registry manages others. Teaching and learning committees see bits related to assessment. Data protection officers look at some risk assessments. Procurement teams sign contracts that bake in AI-enabled features nobody has really interrogated. Vendors offer “smart” functions as part of standard upgrades that glide past existing oversight mechanisms.

If you cannot map your AI, you cannot govern it. Universities need a live map of their AI infrastructure: not a glossy digital strategy, but a working inventory of where AI or significant automation is in play, what decisions it shapes, whose data it uses, and which groups are responsible for it. This is basic due diligence, not a nice-to-have.

Second, accountability is fuzzy. When a plagiarism detector produces a spurious match, who is responsible: the software provider, the central team that configured the thresholds, or the academic who is told to “use their judgement” but is under pressure to be consistent? When an analytics system flags a student as “at risk” and nobody contacts them – or flags them falsely and triggers an unnecessary welfare intervention – whose fault was that?

In high-stakes domains such as academic progression, allegations of misconduct or pastoral concerns, universities have legal and regulatory obligations around fairness, due process and equality. It is no longer adequate to shrug and say “the system suggested it”. If we outsource parts of judgement to machines, we need a much sharper understanding of where human responsibility begins and ends.

Governing bodies should insist on clear lines of accountability for any AI system that materially affects students’ progression, classification, or welfare. That means explicit answers to simple questions: Who owns this system? Who can change its parameters? What recourse do students and staff have if it appears to be wrong?

Third, AI is quietly redistributing power and reshaping professional roles. Data and analytics teams, central services and external vendors gain influence. Academic autonomy is constrained by template workflows and automated feedback systems. Students experience judgements that are increasingly mediated by unseen code.

This redistribution is not necessarily bad. Used well, AI can free staff time for higher-order tasks, support more consistent decisions, and surface patterns of inequity that would otherwise remain invisible. But without deliberate design, the risk is of a kind of quiet de-professionalisation: academics become supervisors of automated marking rather than assessors; tutors become implementers of nudges generated elsewhere; professional staff become responsible for outcomes they did not design.

We need cross-cutting oversight, not just scattered committees. A serious AI governance group should bring together academic staff, professional services, students, IT, data protection and equality specialists. Its remit should not be to rubber-stamp tools, but to scrutinise the architecture: how these systems interact, which values and assumptions they encode, and what risks they create.

And we must take student voice seriously on such matters. If a student’s educational experience is being shaped by an invisible operating system, they have a stake in how it is designed.

The AI revolution in higher education will not arrive with humanoid robots giving lectures. It is already here, humming quietly in the background systems that allocate attention, label risk and shape opportunity. The challenge now is making sure someone is actually in charge of the engine room – and that the people whose lives are affected by it know where the controls are.

Tom Smith is the academic director of the Royal Air Force College and an associate professor of international relations at the University of Portsmouth.

Readers’ comments (8)

Timetabling has nothing to do with AI! There are algorithms and heuristic methods that are used to generate timetables. By all means have oversight of how and where "AI" has infected your processes, but do understand that AI and IT are not the same thing. Many of the functions of a university that you describe might well make use of IT systems without recourse to "AI".
I've just delivered a talk to Computer Science final-year project students about the use of generative AI in their work. I won't bore you with all of it, but the gist was on learning how to give clear and effective prompts and to critically analyse the results, and the all-important point that it's great at doing the boring repetitive stuff, thus freeing the human brain for the creative, compassionate, innovative thinking that sets it apart from artificial intelligences... no replacement for the real deal, but a useful assistant.
I tend to disagree with you, respectfully. It's a very nice idea, but the boring repetitive work (such as reading substantially for your literature review to give you breadth and expertise) is actually part of the process. By way of analogy, a musician spends many hours learning the skills to play or to compose. Likewise an artist. It's the same for academic work. What we are talking about here is taking "short cuts", simple as that. And as for the creative, AI programs can produce creative pieces which are frighteningly good and authentic. The irony of all this is that it looks like, in the end, AI will actually end up doing all those high-end tasks and we'll end up out of work, sweeping the floors and cleaning the toilets.
Well, yes, AI will replace all the call assistants etc, but what then will they do? We were sold this lie about how automation etc would free human beings from the drudgery of labour, and now we have all those sweatshops around the world paying subsistence wages. All that happens is that wages are driven down in the areas that are not automated and wealth is increasingly concentrated in the hands of a few people.
This article has been a great read and is genuinely thought-provoking, Tom. It taps into something that has been playing on my mind for some time, particularly given the recent acceleration of AI and other emerging technologies in Higher Education. I think the dominant idea of AI as an operating system of the university is especially compelling. What stands out for me is, as you note, the “quieter hum” of AI and emerging technologies already permeating institutional infrastructure, processes, and systems, some explicitly, but much of it implicitly. This feels like an area that warrants far closer scrutiny. For me, the article raises deeper questions about whether the building blocks that currently bind the university together are still fit for purpose, and whether it is time to revisit more fundamental questions about what actually constitutes a university today. Discussions about unbundling, reimagining, and reassembling the university have surfaced many times before, only to fizzle out. It feels as though these debates now need to be rekindled and re-examined in light of this new technological context, particularly if universities are to remain resilient in the face of future, and increasingly inevitable, shocks.
Controls... of engine room.... come on, man!
When all is said and done, AI is just another form of commodification of human life and labour. I think the author's point about timetabling etc is actually a very valid one, as these things are interconnected. Of course the substantive issue is the use of generative AI in academic work, but there is the larger context to consider, especially the collection of data, data protection and surveillance. Students are now monitored when they enter the library or when they use their passes to go to the classrooms, and this is described as being in their interests.
As a Neo-Marxist in methodology, I see AI as just another form of capitalism, which endlessly seeks to push new frontiers and establish new territories in which to accumulate and to drive down the costs of production (usually subsidised by the state). So these are all areas where it moves forward: areas such as learning, writing and human creativity, and also human relationships with chatbots as romantic partners or friends. Previously private and emotional areas of human life are thus commodified, marketed and sold in a way that was unthinkable a few years ago. These software products (including student assessment aids) are marketed via social media and clickbait. Of course it drives inequality. It's OK to celebrate AI as an assistant, but this leads to certain human labour becoming obsolete. If you want to know what occupations I think are at serious risk: I guess if you are reading this, then yours is one of them.
