What your students are actually doing with GenAI

A student survey explored how and why students use GenAI tools in their studies, and how they feel about it. Find out how the results can inform teaching
2 Feb 2026

When generative AI tools first swept through higher education, the dominant narratives were strikingly polarised. Either GenAI would revolutionise learning or it would destroy academic integrity as we know it. What has been largely missing from this debate is the voice of students themselves. How do they actually use tools such as ChatGPT, Google Gemini or Grammarly? How confident are they in what these systems produce? And how is their thinking about GenAI shifting as the tools become embedded in everyday study practices?

A new institution-wide study at our university offers some insight. Drawing on survey data from 441 students across three academic colleges, our research sought to understand not only which tools students use, but how and why they use them and with what consequences. The findings challenge prevailing assumptions and suggest the need for a more nuanced and more constructive sector conversation.

A multilingual, multicultural GenAI user base

The survey responses paint a clear profile of the typical GenAI user on campus. Most respondents were undergraduate or master’s students, multilingual, internationally diverse and experienced in using more than one AI platform. While ChatGPT dominated, students also used Grammarly, Google Gemini and tools such as QuillBot, Midjourney and Kimi AI.

Notably, GenAI use was not evenly distributed across the student lifecycle from foundation to doctoral level. Several doctoral students voiced concerns about hallucinated references and epistemic risks. This echoes recent studies that show lower adoption among researchers wary of compromised scholarly rigour.

For the majority, however, GenAI tools were now part of the everyday academic toolkit. Most students had one to two years of experience and used AI several times per week for academic tasks: evidence of mainstream integration rather than novelty-driven experimentation.

What students actually do with GenAI

Across the dataset, academic writing emerged as the dominant area of engagement. The picture, however, is more complex than students simply using the tools to draft assignments. Students reported using GenAI most frequently for:

  • Brainstorming and idea generation
  • Clarifying unfamiliar concepts
  • Planning and structuring assignments
  • Editing and improving clarity
  • Summarising sources or readings

Only a small proportion of students used GenAI to check references or verify factual content. That some attempted this at all, however, suggests an awareness of the tools' capacity to generate misinformation.

When asked at which stage of the writing process they used GenAI, students overwhelmingly positioned the tools at the earliest stages: generating ideas, planning and producing a first draft. Comparatively few used AI to critique or refine a completed piece of writing, a missed opportunity given emerging research showing the value of AI–human hybrid feedback models.

Students also linked GenAI use to non-writing tasks, such as preparing for presentations, revising lecture notes or supporting reading comprehension, suggesting its integration into a broader ecology of learning practices.

From trust to criticism

One of the most striking findings concerns students’ shifting perceptions of GenAI. While 44 per cent reported feeling confident or very confident in using AI tools, they were far more cautious about trusting the outputs. Most students insisted on checking, verifying or cross-referencing AI-generated content.

Qualitative comments reveal why. Several students described initially trusting AI too readily, only to realise its limitations later:

  • “I never trust what the AI says and always do further research.”
  • “It’s useful for ideas, but I check everything myself.”
  • “After reviewing and assessing, I feed the feedback back to the AI to go deeper.”

In other words, experience often leads to reduced trust but increased strategic use. This evolution from initial uncertainty to critical and reflective engagement counters the assumption that students unthinkingly rely on AI. Many described developing sharper awareness of hallucinations, bias and inaccuracies, and adjusting their practices accordingly.

In this sense, GenAI is inadvertently fostering a new form of critical literacy, GenAI Literacy (GenAIL), in which students scrutinise, triangulate and justify their use of technological tools rather than accepting outputs at face value. This emerging literacy is epistemic as well as ethical. Students recognise that AI can support learning, but only if approached as a collaborator, not a surrogate for thinking.

Satisfaction is highest for language support, lowest for research tasks

Students expressed the highest satisfaction when using GenAI for low-stakes, language-focused tasks such as grammar checking, structuring or summarising. Satisfaction declined sharply when the tools were used for higher-stakes, fact-dependent tasks such as research, literature searches or data interpretation.

This pattern highlights an important gap in student expectations. When AI is framed as a smart writing assistant, satisfaction is high and frustration low. When framed as an alternative to academic research, dissatisfaction quickly follows.

This has implications for how universities guide students: we must be clear about appropriate uses of GenAI if students are to avoid false expectations and epistemic risk.

GenAI as a mediator of learning – not a threat to it

Across the dataset, a broader shift was visible. Many students now view GenAI not as a danger, but as a mediator of their learning. Several positioned AI as a tool that helps them articulate ideas, organise thoughts or manage academic labour more effectively. One student captured this emerging mindset:

“I use it as a tool to help, not to replace my thinking. I have a mind, and I use it.”

This resonates with sociomaterial perspectives on learning, which view technologies not as neutral add-ons but as active participants in shaping academic practices. Students’ identities as learners appear to be evolving in tandem with their use of GenAI towards more reflective, agentic and digitally literate positions.

What educators should do next

Our findings suggest the need for an urgent shift away from fragmented or punitive approaches to GenAI and towards institutionally supported literacy-building. Specifically, educators should:

  1. Embed GenAI literacy across curricula – not as optional training, but as a core academic competency.
  2. Develop discipline-specific guidelines, recognising that GenAI use in nursing, architecture or law will differ substantially.
  3. Train students to critique AI outputs, not simply generate them.
  4. Support staff to mediate student use, particularly around ethical, epistemic and assessment-related questions.
  5. Address equity of access, ensuring all students, not only the confident or digitally experienced, can use AI tools responsibly and effectively.
  6. Regularly review and update institutional policies and teaching practices to keep pace with rapid developments in GenAI, ensuring guidance remains current, practical and pedagogically aligned.

GenAI is not going away. The question is whether universities will shape its use proactively or struggle reactively to catch up with student practice.

A turning point for the sector

Far from the narrative of students as reckless or naïve adopters, our evidence shows a cohort of learners developing increasingly mature, reflective and critical relationships with AI tools. Their practices are evolving, their awareness is growing, and their expectations of their universities are rising.

If the sector wants to meet this moment, institutions must move beyond alarmist discourse and design pedagogies that harness GenAI’s potential while safeguarding academic values. Students are already navigating this terrain. The challenge now is for us to catch up.

Julio Gimenez is a principal lecturer, and Katherine Mansfield and Richard Paterson are senior lecturers, all at the University of Westminster.

