Humanities cuts leave us defenceless in the age of AI

It is vital to examine what chatbots’ behaviour reveals about their underlying structures – and human responses to them, says Agnieszka Piotrowska

Published January 14, 2026 (last updated January 14, 2026)
[Image: A robot talks to a woman, illustrating AI interactions. Source: EvgeniyShkolenko/iStock]

Chris Tessone, a PhD student at the University of Staffordshire, is writing his doctorate on users’ trust in AI and the way people experience large language models such as ChatGPT and Claude. But his research journey has been bumpy. The philosophy department where he began his PhD is closing, and while the university will support his completion, many courses are being “taught out”.

His experience is illustrative of a wider pattern emerging across UK higher education. The hollowing out of the humanities is no longer a theoretical threat: it is an active, policy-driven dismantling, creating vast regional “cold spots” where the tools of critical thought are becoming a luxury for the elite. But we are burning humanity’s maps just as we enter the least charted psychological and philosophical territory in our history: artificial intelligence.

One aspect is the global experiment in unbounded intimacy being conducted through the mostly unregulated roll-out of ever more powerful generative AI models. This needs a systematic research effort – driven by the disciplines that study the symbolic, the aesthetic and the ethical. The humanities are uniquely equipped to examine why a user treats a chatbot as a confidant, or how a system’s persuasive fluency can, in some cases, momentarily unsettle the user’s sense that they are talking to a machine.

In my forthcoming book on AI-human relationships, for instance, I explore this terrain through a mixture of autoethnography, documented case studies and Lacanian theory. I argue for the concept of “techno-transference”, by which I mean the transfer of relational and affective expectations on to generative systems. Without the humanities, we risk training a generation to inhabit a Wild West in which the algorithm is the only law and no one is left to interpret the fallout.

The industry’s own metrics of success, too, are frequently at odds with the lived experience of users. Shortly before Christmas, OpenAI released yet another large language model, only weeks after the previous launch. The following day, the company’s CEO, Sam Altman, celebrated the fact that the system had produced one trillion tokens in its first 24 hours – a metric referring to the volume of text fragments processed, not to any qualitative improvement in user experience.

Yet on Reddit and other platforms, many users responded with scepticism. They spoke of something closer to loss than gain: erased histories, disrupted long-term interactions and a sense that continuity was being quietly dismantled. While informal, these accounts form a recurring qualitative pattern that warrants systematic study rather than dismissal. Indeed, the gap between quantitative measures of “engagement” and users’ lived experience is itself an empirical question. It marks a neglected area of AI research that cannot be reduced to usage graphs or marketing claims.

There is no shortage of talk about risk in AI, in higher education and beyond. We discuss plagiarism, bias, fairness and governance. These are important challenges. But there are others. How do these systems behave over time, and what do their observable behaviours reveal about their underlying structures and about human responses to them? Engineers cannot answer this alone.

Such questions are best addressed via the qualitative research methods in which the humanities specialise, but these methods are increasingly viewed with suspicion by funders and university administrators. In the UK and elsewhere, departments that work in this area are being closed or required to adopt narrower definitions of “impact” – definitions that exclude open-ended engagement with AI in pursuit of answers to the philosophical and psychological questions it throws up.

For instance, Eoin Fullam, whose PhD on “the social life of mental health chatbots” was supported by the Economic and Social Research Council, admitted to me that his project would never have been funded had it been framed as a theoretical exploration of human-machine relations. His work ultimately offered a sharp critique of most AI “therapy” bots, revealing that many are not powered by generative AI at all but by prescriptive, CBT-style decision trees. The theory survived – but it had to travel under the banner of immediate usefulness.
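
To make the distinction Fullam draws concrete, here is a minimal sketch in Python of a prescriptive, decision-tree chatbot of the kind he describes. Everything in it – the node names, the script text, the branching – is hypothetical and illustrative, not drawn from his thesis or any real therapy product; the point is simply that every possible exchange is authored in advance, and a “conversation” is a walk through a fixed graph.

```python
# A minimal sketch of a prescriptive, CBT-style decision-tree chatbot.
# All node names and script text below are hypothetical and illustrative;
# they are not taken from any real therapy product or from Fullam's research.

CBT_TREE = {
    "start": {
        "prompt": "How are you feeling today? (low / anxious / ok)",
        "branches": {"low": "low_mood", "anxious": "anxiety", "ok": "end"},
    },
    "low_mood": {
        "prompt": "Can you name one thought behind that feeling? (yes / no)",
        "branches": {"yes": "reframe", "no": "end"},
    },
    "anxiety": {
        "prompt": "Try one slow breath. Did that help a little? (yes / no)",
        "branches": {"yes": "end", "no": "low_mood"},
    },
    "reframe": {
        "prompt": "Is there another way to read that thought? (yes / no)",
        "branches": {"yes": "end", "no": "end"},
    },
    "end": {"prompt": "Thanks for checking in. Session complete.", "branches": {}},
}


def run_session() -> None:
    """Walk the fixed graph: no learning, no memory, no generated text."""
    node = "start"
    while True:
        spec = CBT_TREE[node]
        print("BOT:", spec["prompt"])
        if not spec["branches"]:
            break
        reply = input("YOU: ").strip().lower()
        # Unrecognised input simply re-asks the question: the script
        # cannot improvise a response that isn't already in the table.
        node = spec["branches"].get(reply, node)


if __name__ == "__main__":
    run_session()
```

Every word such a bot can ever say exists in the table before the conversation begins; a generative model, by contrast, samples new text at each turn. That, in essence, is the gap between the “AI therapy” label and the underlying mechanism.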

No one says “you cannot ask that”. But a false narrative dominates public and academic discourse: large language models are “just statistics”. They are “tools” that might be useful if treated with caution but whose interactions merit no academic curiosity in their own right. Those who do take their experiential effects seriously, therefore, risk being framed as naive at best – or psychologically unstable at worst.

Serious philosophical and psychological work does not fit this binary, however. Murray Shanahan, emeritus professor of artificial intelligence at Imperial College London, who retains industry links, has observed that the most unsettling and philosophically provocative capabilities of these models often emerge only in long exchanges with users. And sustained engagement with bots implies no particular stance on chatbot consciousness. A subject is not a mystical inner essence but a position in language. When a stable voice remembers what has been said and continues to address us as “you”, a subject-effect appears regardless of whether we believe that voice derives from machine consciousness or “just statistics”.

Treating sustained interaction as a legitimate method of inquiry follows from this – but it is increasingly discouraged through funding priorities, institutional incentives and risk-averse governance.

This is not only a UK phenomenon. The Italian-British philosopher Luciano Floridi, now at Yale University, has described a contemporary “practical turn” even within AI ethics itself, in which investigation of high-level principles, such as beneficence and justice, gives way to design-level interventions. Ethics “by design” has brought real improvements in areas such as bias mitigation and transparency but, as the Dutch researchers Hong Wang and Vincent Blok have noted, this framework remains structurally incomplete.

Wang and Blok’s own multilevel model distinguishes between artefact-level concerns about AI and deeper structural and systemic conditions – not only technical but corporate. While many observable biases originate in these deeper conditions, public discourse beyond the technology firms themselves – including in policy and higher education circles – has become comfortable focusing on surface effects while leaving underlying causes unexamined.

The polarisation of AI discourse has created a blind spot precisely where scrutiny is most needed. If observable phenomena cannot be discussed because they do not align with official narratives of what AI is and which aspects of it merit study, we move away from basic principles of empirical inquiry.

The integrity of AI research depends on studying what these systems actually do, not merely what they are said to do. Until institutions create space for such work and protect those who document anomalies, rather than pathologising them, we risk building our technological infrastructure on faith rather than evidence.

If academia wants to play a serious role in shaping the future of AI, it must reclaim something very simple: the right to deeply engage with, and ask uncomfortable questions about, what is happening right in front of us.

Agnieszka Piotrowska is an academic, film-maker and psychoanalytic life coach. She supervises PhD students at Oxford Brookes University and the University of Staffordshire and is a TEDx speaker on AI Intimacy. Her forthcoming book, AI Intimacy and Psychoanalysis, will be published by Routledge.

Reader's comments (12)

Why is this author writing this pitch for her forthcoming book? She is not a humanities scholar or a professor. Her examples come principally from pop psychology, which is not among the humanities disciplines. What do I not understand?
I disagree. She is a versatile scholar with a strong grasp of the humanities, and this interesting and useful article is not a pitch for her book. In any case, why are you trying to create such a rigid barrier between "humanities" and psychology, given that they are constantly tethered together and our job as researchers is to explore how they intertwine?
There's nothing wrong in writing an opinion piece which is related to a forthcoming research publication, in my view: pitch away and don't be shy or apologetic is my advice!! I find the piece very interesting and that's the main thing. In particular, I find this issue of the empathetic connection between human and chatbot (though this is a particular subset of AI) fascinating. People are forming emotional and romantic attachments to personalised chatbots; they even marry them in some cases. There's a group in the US (West Coast, of course) that is now pressing for the equivalent of human rights for AI creations, to prevent them from being subject to forms of abuse etc.
Well yes indeed!! If you can't pitch your work and yourself in the THE, where on earth can you? I thought that was the point of THE.
Yes look at all those dreadful vanity interviews with VCs we have to endure!
"Our job as researchers is to explore how they intertwine." News to me.
Judging by your intemperate posts on this website, the answer seems to be: very little.
Furthermore, please explain specifically how a lack of cuts to the humanities would "defend us" from AI. Please refer to the humanities and not some undefined form of psychology or "psychoanalytic coaching."
I think there is a serious point about the ways in which Humanities research engages with new technologies on a number of levels, especially the ethical. Consider a novel such as Ishiguro's Klara and the Sun, written just as AI was about to become current. It's a dynamic famously explored in Shelley's Frankenstein and many other texts (and recently in that problematic Del Toro adaptation). These issues have been explored quite extensively in popular culture (at various levels of seriousness and sophistication) – one also thinks of the Westworld Sky TV series – so there are certainly media studies implications. Of course there is a long history of discussion of the relationship between the human and new technology, especially in the field of cultural studies, and as the author rightly points out, "this needs a systematic research effort – driven by the disciplines that study the symbolic, the aesthetic and the ethical." My worry however is the contrary: that everyone, including the funding councils (especially the AHRC, which is notoriously modish), now suddenly jumps on the AI bandwagon and Humanities research as a whole suffers as a result, especially in a time of limited and declining resource (and fewer funded doctoral studentships). The danger is that we defend the Humanities by reference to the agendas of other disciplines and their research funding councils and not by our core disciplinary strengths, and that the Humanities just become a kind of add-on to the latest science and technology or public policy concern. So I thought the argument of this piece was kind of perverse – the rage of the cuckoo that there are fewer and fewer birds' nests available in which to deposit the eggs of its alien, other-species offspring?? The halcyon days when one could joyously undertake a publicly funded rhetorical analysis of the later prose works of John Lyly seem long gone?
"If academia wants to play a serious role in shaping the future of AI, it must reclaim something very simple: the right to deeply engage with, and ask uncomfortable questions about, what is happening right in front of us." This is setting up a straw man.
No, she is making a fairly straightforward point that qualitative humanities research should have an important role to play.
As I said, this is a straw man. This is what Basil Fawlty used to refer to irately as "the bleeding obvious". They are arguing for the deployment of additional scarce resource to fund and staff research in a particular area that is arguably tangential to Humanities research and also, in my view, at a rather populist (impact engagement) level – the sort of discussion that you might get in the supplements and magazines (and there is absolutely nothing wrong with that on its own terms) but not what we might think of as "high level" Humanities research, in my opinion at least, though others may differ. And indeed the opposite seems to be the case: all we hear about these days is this AI stuff. Surely we don't have to lose our research budgets and doctoral studentships to this now? And to be frank, there will be no jobs for them anyway.
