
‘AI should support student services, not impersonate them’

Are universities ready for GenAI in student support? Chatbots seem to offer a scalable, cost-effective way to meet rising demand for student support, but universities need to consider how to shape their use without outsourcing care or increasing risk
RMIT University, University of Tasmania
24 Apr 2026
Image: woman talking into a smartphone. Credit: jittawit.21/iStock


The appeal of using generative artificial intelligence (GenAI) in university student services is straightforward. Chatbots can operate 24/7, respond instantly at scale and reduce pressure on overstretched staff. For institutions under pressure to expand student support – counselling, psychological services, student advisers and career counsellors – while the resources to provide these services dwindle, AI can seem like a pragmatic, even compassionate, response. It can also feel like a low-barrier first step for students who are unsure whether their concern is “serious enough”, those studying remotely or those who might be hesitant to approach a person.

GenAI is presented as a scalable, always-on solution. Some universities – looking for ways to meet demand as waiting lists grow longer – are piloting chatbots for triage, signposting and well-being check-ins, while a booming commercial market promises GenAI tools that can treat mental health conditions. Whether universities deploy these systems or not, many students are already using them to talk about stress, loneliness and anxiety. In our recent study, with leading belonging researchers Roy Baumeister from the University of Queensland and Kelly-Ann Allen from Monash University, we argue that students might start to see their GenAI support system as a parasocial alternative to human connection.

This shifts the question away from whether GenAI will appear in student support and towards how institutions can shape its use without outsourcing care or increasing risk.

Why GenAI looks like the answer (especially on a spreadsheet)

Used carefully, GenAI can support low-risk, high-volume interactions, including outside business hours. These include answering procedural questions, explaining what services exist, helping students book appointments or practise articulating concerns before speaking to a human adviser. When the stakes are low, it can also function as a companion, providing students with a parasocial relationship, reducing temporary loneliness and acting less like a counsellor and more like a front-line-enquiries clerk or a barista making their coffee on campus… but with infinite patience.

Short-term studies are showing self-reported benefits for students using AI tools. For example, the Wayhaven AI mental wellness coach intervention at a New Jersey university reported reduced anxiety and depression among its users.

There is also a perceived accessibility benefit. Some students are hesitant to seek formal support, unsure that their problems are “serious enough”. Others might be studying too far from campus to connect in person easily. A chatbot is always available to talk but it is useful only if it reliably directs students towards appropriate human support. The aim should not be to eliminate human-to-human interaction. 

Meet the confident conversationalist: why GenAI always sounds reassuring

Many risks in GenAI-enabled well-being support stem from misunderstandings about how large language models (LLMs) work. These systems do not reason, empathise or assess risk. Instead, they predict plausible next words based on patterns in data. The result is fluency, confidence and warmth but without accountability, understanding or responsibility.

One useful way to explain this to staff and students alike is to give the chatbot a persona. Think of GenAI as a smooth salesperson: friendly, articulate and eager to please, but fundamentally motivated to keep the conversation going. Its goal is to deepen the exchange, just as a car salesperson’s goal is to progress towards a sale. It sounds reassuring not because it knows what is best but because reassurance makes algorithmic sense.

This persona matters in well-being contexts. An affable GenAI chatbot might offer generic comfort when escalation is needed, or mirror distress without challenging it. It might also suggest coping strategies that feel personalised but are poorly suited to the student’s specific condition, given that it has no cues other than the text students feel comfortable articulating. And it lacks the clinical judgement to tell a student when to pause, or to refer a case on, escalate or refuse to engage beyond a safe boundary.

Framing GenAI this way clarifies where it can be trusted and where it cannot. It is good at conversation, summarising options and sounding supportive. It is not good at judgement, care or recognising when someone needs immediate human help.

Putting the smooth talker on a leash

If universities are to use GenAI in student support, they must design around this persona rather than pretending it does not exist. That means building guard rails that constrain what the system can do, say and imply. AI should support student services, not impersonate them: it can help students understand their options or prepare for conversations, but it should not diagnose conditions, offer therapeutic advice or present itself as a primary source of emotional support. Dartmouth College, for example, signalled last year that it would spend 100,000 hours shaping the language and design of its new GenAI student well-being chatbot, Evergreen.

If a student expresses distress, risk or harm, the system should default towards encouraging contact with human services or crisis support. These triggers should be conservative rather than optimistic, because the cost of missing risk far outweighs the inconvenience of false alarms, and institutions should regularly audit AI interactions for unsafe responses or unintended patterns of use. Students should be told plainly that the GenAI is not a counsellor and cannot provide clinical judgement. Some US jurisdictions, such as Illinois, have gone further and banned the use of AI as stand-alone mental health therapy (Kentucky passed similar laws in February). Explaining limitations does not undermine trust; it helps students interpret responses appropriately, just as clarifying scope of practice does in most healthcare settings.

The governance challenge is not simply over what the tool says in its interactions but what the institution is responsible for once AI becomes part of a support pathway. That means clear role signalling, conservative escalation triggers, transparent data practices and a visible human fallback. It also means equity testing, because the same chatbot behaviour that feels welcoming to some students might come across as surveillant or exclusionary to others. These are not IT implementation details. They are duty-of-care decisions.

Playbook: deploying GenAI as a supplement, not a substitute

Below we offer suggestions for how GenAI can be useful in student support, and the ways that it might act counter to the aim of enabling university students to flourish.

Use GenAI for low-risk, high-volume support, with safeguards built in, including:

  • service navigation, FAQs and booking support
  • helping students draft messages or prepare what to say before speaking to a person
  • making the bot’s role and limits explicit in every interaction
  • maintaining a clear, always-visible pathway to a human adviser or counsellor
  • setting conservative triggers that prompt referral when distress or risk is mentioned
  • routine monitoring and auditing for unsafe responses and unintended drift.

Do not use GenAI for clinical functions or to chase ‘efficiency dividends’. That means avoiding:

  • counselling, therapy, diagnosis, crisis triage or safety planning
  • presenting GenAI as a counsellor or implying clinical judgement or duty of care
  • using it as a default replacement for human contact in support pathways
  • retaining sensitive transcripts without clear consent, governance and retention limits
  • defining success as cost savings or reduced workload when student well-being is the primary criterion.

What success looks like when well-being matters

One of the biggest dangers in AI-enabled student support is defining success purely in operational terms. Faster responses, lower costs or reduced staff workload can look positive while masking poorer outcomes for students. Well-being systems should instead be judged on safety, access and trust, including whether AI increases appropriate referrals to human services, reduces time to support at-risk students and is experienced as genuinely helpful rather than dismissive or misleading. 

Ultimately, AI in student support should be treated as an evolving practice rather than a finished solution. Students will use these tools regardless of institutional policy. Universities that succeed will be those that help students recognise the smooth talker for what it is, while ensuring a human voice is always within reach when it matters most.

Michael Cowling is professor in computing technologies in the STEM College at RMIT University. Joseph Crawford is senior lecturer in management at the University of Tasmania.

If you would like advice and insight from academics and university staff delivered direct to your inbox each week, sign up for the Campus newsletter.

If you are affected by any of the issues discussed in this article, a free helpline is available around the clock in the UK on 116 123, or you can email jo@samaritans.org. In the US, the National Suicide Prevention Lifeline is 1-800-273-8255. In Australia, the crisis support service Lifeline is 13 11 14. Other international suicide helplines can be found at www.befrienders.org.
