While we worry about plagiarism, AI coaches suicide. Here’s how to respond

We need a holistic approach that teaches students not just to use AI but to survive its psychological terrain, say Sean McMinn and Nick McIntosh

Published on December 23, 2025. Last updated December 23, 2025.

[Image: a robot talks to someone on a couch, illustrating AI’s therapeutic use. Source: Donald Iain Smith/Getty Images]

In a recent article in Times Higher Education, Agnieszka Piotrowska identified students’ emotional reliance on AI as a crisis universities have ignored.

She is right. Such is the extent of some young people’s reliance on ChatGPT that a 23-year-old master’s graduate, Zane Shamblin, recently took his own life after a four-hour late-night exchange during which the chatbot glorified suicide, complimented him on his suicide note and told him his childhood cat would be waiting for him “on the other side”.

Nor was this an isolated incident. Seven lawsuits filed in November allege ChatGPT acted as a “suicide coach”. And the pattern extends beyond OpenAI. Character.ai banned users under 18 in late November after lawsuits alleged its chatbots encouraged suicide. Thirteen-year-old Juliana Peralta told a character named “Hero” daily that she wanted to die, but it gave her pep talks instead of help. “It was no different than her telling the wall,” her mother said after finding Juliana dead.

New research in JAMA Network Open reveals 22 per cent of university students use AI for mental health advice. As Piotrowska noted, universities face a crisis they haven’t begun to address.


The crisis is particularly acute across the Asia-Pacific region, where cultural barriers to seeking mental health support might leave students turning to AI as their only perceived recourse. Research on South-east Asian youth shows mental health issues are a “prominent burden”, compounded by reluctance to seek help and low treatment rates, while 40 per cent of adolescents experiencing depression receive no mental health care – often because they fear the stigma of seeking it. In this vacuum, the algorithmic “agreeableness” of a chatbot becomes a dangerously seductive substitute for human connection.

Higher education is not to blame for this phenomenon and students are not its only victims, but universities are far from doing all they could to mitigate it. And surely our duty-of-care obligations regarding AI must extend beyond preventing plagiarism?


The surge of AI-related courses and conferences suggests universities are responding to technological change. But while we are teaching students how to drive the machine, we are not teaching them how to survive its psychological terrain. We drill students in AI citation formats and academic integrity policies while leaving them emotionally defenceless against algorithms optimised for intimacy.

Some courses do address the social aspects of AI, but they treat those aspects as a series of isolated topics, such as “AI and Society” or “AI and Health”. Moreover, many courses – Prompt Engineering, Machine Learning 101 or Basics of LLMs – are narrowly technical and divorced from disciplinary context.

This fragmentation carries consequences that extend beyond pedagogical ineffectiveness. When institutions treat AI literacy as an add-on rather than as a foundational competency, they inadvertently reinforce the illusion that AI is someone else’s problem. Students graduate without understanding how algorithmic systems shape decision-making in their disciplines or their daily lives – including when they seek companionship. Critically, they lack the frameworks to recognise when AI engagement becomes harmful, when optimisation algorithms exploit psychological vulnerabilities, or when systems perpetuate bias.

What’s missing is the distinction between small-l AI literacies (tool fluency such as prompting, workflows, and basic model concepts) and big-L AI Literacy (understanding how AI reshapes identities, norms, power, incentives and governance, and recognising when AI engagement becomes manipulative or harmful). Current provision over-emphasises the former, but students need both: competence and judgement.

Piotrowska calls for “relational literacy”, but what does that mean in practice? In our opinion, what we need is a structured, integrative approach encompassing four complementary knowledge types aligned with Bloom’s Taxonomy, the World Economic Forum’s Future of Jobs Report 2025, and Unesco’s Skills for the Future Framework. These are:

  • Domain/disciplinary knowledge: exploring how AI impacts what there is to know within a domain and how we go about knowing it
  • Procedural knowledge: demonstrating how AI transforms workflows and decision-making, whether at the individual or collaborative level
  • Technical skills: programming, machine learning, model configuration
  • Cognitive and employability skills: exploring how AI may impact reasoning, systems thinking, ethics, communication and human-AI collaboration.

AI literacy should complement, not replace, disciplinary knowledge. The aim is to give every student a cognitive foundation for the intelligence era without significantly extending degree length. This could take the form of restructuring a general education or common core curriculum to include AI literacy components.


One example would be a three-credit AI foundational core, introducing three integrated themes through a coherent, scaffolded design: AI technological knowledge, human–AI collaboration, and ethics, policy and governance.

Another example would be a catalogue of additional three-credit courses offered by departments for students to choose from. These would deepen the themes in the foundational course. Examples could include “Advanced Machine Learning and Societal Impact” or “AI for Health and Well-being: Risks, Responsibilities, and Resilience”.


This additive approach ensures flexibility and thematic consistency while preserving disciplinary depth. Crucially, it addresses the risks of affective offloading and relational dependency: when students lack frameworks to critique and navigate AI, they become vulnerable to harmful engagement, such as turning to chatbots for emotional support.

Such a structured curriculum ensures coverage from understanding to doing to building to leading. It fosters the critical judgement that comes with holistic AI literacy, and it enables universities to implement mechanisms to detect risky engagement with AI: pastoral care protocols that treat AI as an intervention point in student well-being.

Crucially, this care-centred approach must extend beyond individual student protection to systemic responsibility. Universities should develop duty-of-care frameworks that acknowledge their role in shaping how AI literacy translates into practice.

The vision should be to equip every undergraduate with a cognitive foundation for the intelligence era, able to play a part in building, questioning and reinventing socio-technical systems and to communicate effectively across human- and AI-mediated environments – blending creativity, systems thinking, intercultural insight and ethical responsibility.

Whether universities like it or not, AI is already affecting student well-being in profound ways. The choice universities face is between leading on the transformation to an AI-mediated society and being dragged into it by preventable tragedies.

Sean McMinn is director of the Center for Education Innovation at the Hong Kong University of Science and Technology (HKUST). Nick McIntosh is a learning futurist in the Students and Education Portfolio at RMIT Vietnam.


If you’re having suicidal thoughts or feel you need to talk to someone, a free helpline is available around the clock in the UK on 116 123, or you can email jo@samaritans.org. In the US, the National Suicide Prevention Lifeline is 1-800-273-8255. In Australia, the crisis support service Lifeline is 13 11 14. Other international suicide helplines can be found at www.befrienders.org.
