Careful, collective deliberation is key to educational success with AI

Some of the most important questions around accuracy, bias, authorship and appropriate use might not be immediately visible, says Santa Ono

Published on April 9, 2026

The central question facing higher education today is not whether artificial intelligence will shape our future. It already is. The question is how.

When and under what conditions should we choose to use it? By what standards should we judge its contribution to learning? How do we ensure that it deepens understanding rather than simply making it easier to produce answers? And how do we meet our responsibility to anticipate misuse and ethical risks?

These are not technical questions. They are human ones. They go to the heart of what a university is for.

At a recent lecture at Saïd Business School at the University of Oxford, I returned to these questions with renewed urgency. We are living through a moment in which the creation and transmission of knowledge are being reshaped in real time. That demands not only innovation but judgement – and humility.


I have had the privilege of observing this moment from within institutions willing not only to experiment with AI but to question its implications for teaching and research: the University of Michigan, the University of Oxford and now the Ellison Institute of Technology Oxford. Each has approached this challenge with a mix of openness and caution, recognising that the task is not simply to adopt new tools but to understand what they mean for the future of knowledge itself.

At Michigan, we began not with technology but with governance. The provost convened a faculty-led committee to examine how AI might affect teaching, research and academic life. From there, we moved to implementation. AI tools were made broadly available – not so that we would be a first mover but so that we would better understand, through experience, whether these tools could deepen learning, support teaching and reduce unnecessary burdens on faculty.


The early results were encouraging. Students used AI to explore ideas more fully and extend learning beyond the classroom. Faculty found that certain routine tasks could be eased, allowing greater focus on mentorship and intellectual engagement. There were early signs of movement towards a more individualised model of education.

But universities are not defined by their tools. They are defined by trust. And trust is not built quickly.

Even with a thoughtful process, concerns emerged. Faculty raised serious questions about authorship, academic integrity and the possibility that these tools might weaken habits of thinking that education is meant to cultivate. Some of these concerns took formal shape in faculty governance, with calls for deeper assessment of AI’s impact on teaching and standards. Other faculty members pointed to risks of bias, uneven adoption and a pace of change that exceeded understanding. These responses were not resistance. They were a sign of institutional health. A university at its best moves thoughtfully rather than quickly.

In response, an additional committee was convened – not to defend earlier decisions but to listen, learn and examine consequences. That also reflected something fundamental about academic life: progress is iterative. It depends on a willingness to revisit assumptions and adjust course.

The conclusion was that if we are to use AI well, we must define success clearly. Does it improve learning in ways that endure? Does it strengthen the relationship between teacher and student? Does it expand opportunity without introducing new inequalities? And are we attentive to risks that may only become visible over time?

Emerging research suggests a mixed picture. AI can enhance engagement and support more personalised learning, meeting learners where they are and making education more flexible and responsive. But without sufficient structure, it can foster dependence and weaken independent thinking. Students might perform better with assistance in the moment but struggle when that support is removed.


These findings do not argue against the use of AI. They argue for careful design and thoughtful integration.

Moreover, if AI becomes a primary way that students encounter knowledge, the integrity of that encounter becomes central. Questions of accuracy, bias, authorship and appropriate use cannot be secondary concerns. Yet some of the most important effects might not be immediately visible.


During my time at Michigan, this reflective work unfolded alongside a broader national effort. The American Council on Education brought institutions together to share what they were learning, and the Department of Education encouraged a more coordinated and thoughtful national conversation. That kind of collaboration remains essential. No institution will navigate this transition perfectly on its own.

In my current role at the Ellison Institute of Technology Oxford, I see these same questions being taken up within the Oxford ecosystem.

At Oxford, AI is accelerating discovery, particularly in areas such as human health, where the stakes are immediate and global. Through its partnership with the institute and its own academic programmes, the university is integrating AI into research and education while examining hard questions: can we be sure that what the AI is telling us is true? Do we have sufficient safeguards in place to mitigate the risk of hallucinations? And how do we address ethical issues in AI-generated responses?

What stands out is not simply the scale of activity but the posture: a willingness to engage, paired with a commitment to reflection. A recognition that technological progress must be accompanied by intellectual seriousness.

If we approach this moment with that level of care – with a willingness to listen, learn and adjust – I feel certain that AI can strengthen higher education.

Santa J. Ono is president of the Ellison Institute of Technology Oxford, visiting professor of ophthalmology and immunology at the University of Oxford and senior research fellow at Worcester College, Oxford. He is former president of the University of Michigan and the University of British Columbia and served on a US Department of Education task force on AI in higher education.


