Generative AI is not just a tool for learning. It shapes how students think

Framing this technology as a helpful ‘tool’ disguises how much it guides every element of critical thinking that academics seek to cultivate, says James Garvey

Published on February 24, 2026 | Last updated February 24, 2026
[Image: a man holding a hammer with an AI chip, circuitry covering his face. Source: iStock montage]

We might not agree about what artificial intelligence (AI) is or what it means for the future of higher education, but we have settled on how to talk about it. In university policies and course handbooks, AI is mostly framed as a tool, a tutor or a helpful assistant. These metaphors seem to bring clarity to the discussion. They feel neutral and pragmatic. They are not.

Metaphors have well-known and carefully researched influences on the way we think about complex things. This is partly why they are used so effectively by political speechwriters, public relations firms, spin doctors and advertisers. Metaphors open up some avenues of thought and close down others, and they do this mostly without us noticing.

In a famous study by Stanford psychologists Paul Thibodeau and Lera Boroditsky, participants were given a short paragraph about crime in the fictional city of Addison. For one group, crime was framed as a beast and, for the other, as a virus. When asked to recommend solutions, those who read about beasts suggested policing and punishment, while those who read about viruses were significantly more likely to suggest social reform and rehabilitation.

As the authors put it, “far from being mere rhetorical flourishes, metaphors have profound influences on how we conceptualise and act with respect to important societal issues”. What is striking is that participants did not recognise the role metaphors had in shaping their thinking. They explained their answers by pointing to the same crime statistics – just 2 per cent mentioned metaphors at all.


Like picturing crime as a beast, thinking of AI as a tool opens some lines of thinking and closes down others. It suggests moral neutrality – tools can be used for good or ill – and a high degree of control. A hammer is just a hammer. This way of thinking places moral responsibility squarely on the user. When things go sideways, the problem is misuse or misapplication. So we talk about the responsible use of AI, the importance of critically evaluating outputs and AI ethics – all under the banner of AI literacy.

This framing makes AI’s role in actively shaping our thoughts harder to see. We stop asking important questions about the moral responsibilities of the companies and programmers that design and market these systems.


We are less likely to wonder how AI is influencing our interpretations, pushing the synthesis of our ideas and manipulating our judgement. What kinds of intellectual habits does it encourage or undermine? It is hard, after all, for creatures who think of themselves as tool users par excellence to imagine a tool participating in our thinking, perhaps even partly constituting our understanding of the world. But AI systems increasingly make our thoughts what they are.

Is it alarmist, then, to talk about the emergence of a second hidden curriculum? The assistant metaphor suggests a hierarchy in the classroom, with faculty and students in control and AI offering help when asked. This obscures those many moments when an AI tutor is not simply assisting but directing learning – structuring explanations, foregrounding interpretations, modelling ways of thinking about new topics and suggesting what to think about next.

This is a remarkable shift in how we come to know – an AI-mediated curriculum running alongside our intended lesson plans and learning objectives. I’m not sure we, or our students, always recognise when this is happening. Again, metaphor makes it harder to work out what is really going on.

Metaphors can mislead us in less dramatic ways too. When AI produces troubling outputs, we are quick to say, for example, that “Grok is racist” or the system has “gone rogue”. Anthropomorphising AI does not just misdescribe what these systems are and what they are doing. It makes it easier to look away from our own role as users, and from the institutional and corporate choices embedded in the systems we lean on. Responsibility drifts, and with it our capacity for critical judgement.


Both the tool and assistant metaphors invite trust where there should be more scrutiny, hesitation and care. What follows from all this?

Metaphors can be helpful when we are trying to get a handle on something unfamiliar. But for the slower, more careful work of lesson planning and policy-writing, they are now getting in the way. At this point, it helps to consciously shift into more technical vocabulary, choosing words carefully, not conveniently.

When we do this, some important things come into better view. We can stop talking about AI “suggesting”, “brainstorming”, or “answering questions” and instead talk about algorithmic outputs produced under particular constraints. We are no longer “asking AI” but engaging in probabilistic text generation.

That shift makes our own interpretive and evaluative role harder to ignore. If we move away from talk of “hallucinations” and instead speak of “predictive text failure”, verification becomes ordinary practice, not an optional step. Prompting begins to resemble what it actually is – experimental language design, not using a tool such as a typewriter or having a conversation with a tutor.


Insisting on technical precision when it matters can help us resist the easy idea that AI is a helper, ready to assist. Instead we must keep our focus on the uncomfortable thought that these systems partly and covertly structure what we say, think and learn.

We do not need better or different metaphors for artificial intelligence. We need fewer, along with a more disciplined way of thinking, talking and writing that keeps moral responsibility, judgement and pedagogy in our human hands.


James Garvey is chair and professor in liberal arts at the College for Creative Studies in Detroit.

