AI in higher education: dystopia, utopia or something in between?

To understand how HE can incorporate AI successfully, we need to think about how humans will interact with the technology and change their behaviour, says Ben Swift

13 Oct 2022

Created in partnership with

Australian National University


AI applications are already part of the higher education experience for students, instructors and administrators. These include chatbots and intelligent tutoring systems, auto-grading and feedback apps, and tools for academic integrity breach detection and exam proctoring.

We’re also on the crest of a wave of new AI apps for text/image synthesis, where you give the AI a prompt such as “What role did Sir John Kerr play in the 1975 Australian constitutional crisis?” or “Draw a picture of a red unicorn playing a Fender Stratocaster” and it will spit out a “response” which, while not always perfect, in most cases could pass for something hacked together by a harried student in the few hours before an assessment deadline.

I’m writing as someone with 10 years’ experience as a lecturer and course convener in computer science and cybernetics. I’ve taught both large (400-student) compulsory courses and 10-student special interest courses. I’ve also built software tools for automating some aspects of these courses, although they more commonly use normal “if-then-else” programmes rather than AI ones.

However, as an AI researcher, I also build AI-powered tools – and I can certainly see the convergence between the “AI research and tool-building” part of my job and the teaching part.

To understand the way in which AI will transform higher education, it’s useful to consider the interactions between human and AI parts of the system, rather than focusing on individual AI tools in isolation. For example, will the AI essay generators stay ahead of the AI plagiarism-detection bots? Will the AI tutoring apps lighten the workload of our teaching assistants, or will the workload just shift to helping the students use the AI tutoring apps?

To understand what is happening with the introduction of AI into the higher education experience, it’s crucial to realise that so much of the student and instructor experience in a course is about flows of information. For example, an instructor creates an assignment spec, which is sent to the student. In response, the student (synthesising many sources of information, from both the course curriculum and elsewhere) produces an assignment artefact (such as an essay). This artefact is graded by an instructor, and both a numerical mark and qualitative feedback are sent back to the student – another information flow, which will inform the student’s work in subsequent assignments.
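For readers who think in code, that flow can be sketched as a toy pipeline. Everything here is hypothetical and purely illustrative (the types, the `grade` function and its crude scoring rule are invented for this sketch, not drawn from any real learning management system or AI product); the point is simply that each step is an information flow that an AI tool could amplify, dampen or reroute.

```python
# Illustrative sketch only: the assignment "information flow" as a pipeline.
# All names (AssignmentSpec, Submission, Feedback, grade) are hypothetical.
from dataclasses import dataclass


@dataclass
class AssignmentSpec:
    prompt: str            # instructor -> student information flow


@dataclass
class Submission:
    text: str              # student -> instructor information flow
    ai_assisted: bool      # increasingly hard to observe in practice


@dataclass
class Feedback:
    mark: float            # instructor -> student information flow,
    comments: str          # informing the student's next submission


def grade(spec: AssignmentSpec, sub: Submission) -> Feedback:
    # A stand-in for human (or AI-amplified) grading: a deliberately
    # crude check of whether the submission engages with the prompt.
    addresses_prompt = spec.prompt.split()[0].lower() in sub.text.lower()
    mark = 0.7 if addresses_prompt else 0.3
    comments = "Solid attempt." if addresses_prompt else "Engage more with the prompt."
    return Feedback(mark=mark, comments=comments)


spec = AssignmentSpec(prompt="Explain the 1975 constitutional crisis")
sub = Submission(text="Explain? The 1975 crisis arose when...", ai_assisted=False)
fb = grade(spec, sub)
print(fb.mark)  # -> 0.7
```

Each dataclass above marks a point in the loop where an AI tool could be inserted: generating the spec, drafting the submission, or producing the mark and feedback.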

Don’t get me wrong, I’m not saying that this is all there is to participating in a university course – crucially, the human community aspect is missing in the above description, for a start. However, thinking about the above information flow gives us a helpful perspective for considering where AI might amplify or dampen the different information flows within the system, or where it may give rise to new ones.

There are three potential “system dynamics” I’m on the lookout for as AI becomes more deeply integrated into higher education.

First, while it’s less clear whether the aforementioned AI text and image synthesis tools will make the best student work even better, it’s pretty clear that they will allow students who only care about passing without actually attaining the course learning outcomes to do so with much less effort. The implication for instructors is that if you’re grading a text/image artefact, it’s now much harder to tell whether the artefact is solely the student’s own work or whether they had the help of an AI tool to create it. In other words, if the question of whether AI was involved in the creation of an artefact really matters, it will be increasingly hard to give a definitive answer, especially without specialised expertise and under the time pressures that instructors have to complete grading.

Second, there are going to be feedback loops involved. For example, a big selling point of AI chatbot products is that they let you teach larger classes than you would otherwise have the instructors to support (or create entirely new classes). AI text summarisation tools could also help with grading/triaging, especially given the limits on budgets for teaching assistant hours. One potential endgame for this dynamic is that instead of having to cap places on high-demand degree programmes, class sizes could grow until student demand is satisfied.

The risk here is that such a class would become incredibly reliant on those AI tools to handle its teaching workload without burning out all the humans involved in the process. And humans will still be involved, since (almost) nobody is proposing that we have purely automated classes in higher education.

Third, human-AI co-creation isn’t going anywhere, so make it part of your assessments. Get students to design new front ends and workflows that other students can try out. How about an essay-writing assignment where the students are encouraged to write the topic sentences for each paragraph and use AI to complete the rest? The students could then critically reflect (and be assessed) on their process of iteratively poking the AI (via the topic sentence prompts) to ensure a coherent overall argument for the essay. Alternatively, using the “reverse assignment” approach, the instructor could enlist the help of AI to write an assignment spec and have the students come up with a rubric and suggested improvements to their assignment spec as their deliverable.

Finally, I do wonder whether (and hope that) some of these AI tools might make contract cheating less profitable as a business, because the humans that provide those services will be automated away as well – although, admittedly, the cheating-industrial complex is well positioned to take advantage of the AI-enabled future of higher education, as those involved have probably got the best databases of instructor- and student-created content on the planet.

The main takeaway here is that AI tools in higher education won’t operate in isolation; they’ll become part of the system, where students can churn out passable essays faster, but instructors can also grade them faster. It’s unclear which “side” of this transaction will win out, or which balancing mechanisms (natural or regulatory) will be required in response, so it’s important to design your class so that such “AI content arms races” aren’t so likely.

There’s a dystopian “future of AI in education” scenario in which AI-generated assignments are graded by AI grading and feedback bots, with dull-eyed human teachers and students who are largely disconnected and disenfranchised. But I’m not in this dystopian camp. I am, however, keeping an eye out for how human students and instructors change their behaviour in response to the changes in the information ecosystem in which we exist.

Ben Swift is educational experiences lead and associate director (education) at the ANU School of Cybernetics. The ANU School of Cybernetics is activating cybernetics as an important tool for navigating major societal transformations through capability building, policy development and safe, sustainable and responsible approaches to new systems. 


