As artificial intelligence compels workers to prioritise skills that robots cannot replicate, it will also force academics and other teachers to find unoccupied professional niches, a new report argues.
The report says that universities and other education providers will increasingly have to specialise in areas not catered to by AI. The “horizon scanning” report, published by the Australian Council of Learned Academies, lists existing and future AI systems capable of teaching “well-defined” subject areas, particularly in science, technology, engineering and mathematics.
They include individualised tutors, “virtual pedagogical agents” and “language-learning AI systems” specialising in “experiential digital learning driven by virtual roleplay”.
AI has particular applications in special needs education, the report adds. They include voice-activated interfaces for people who cannot use keyboards and “intelligent tutoring systems” to help stop anxious students becoming “confused or overwhelmed”.
AI can also be used to enhance virtual reality-based teaching and make educational apps more flexible and accessible. “Many of these technologies can be used by educators to augment a more traditional learning experience,” the report says.
“If AI-based learning tools begin to displace some aspects of teaching, it will be important for teachers to focus on areas of knowledge acquisition and learning where AI is ineffective, such as meta‑intelligence.”
The 233-page report, commissioned by chief scientist Alan Finkel and the federal government’s National Science and Technology Council, was penned by a who’s who of Australian academic minds.
They include UNSW Sydney AI professor Toby Walsh, Australian National University cultural anthropologist and technologist Genevieve Bell, University of Western Australia reconstructive surgeon and former Australian of the Year Fiona Wood, and University of South Australia social theorist Anthony Elliott.
The report says widely divergent views about AI, which range from “extreme optimism” about the benefits to pessimism about the risks, tend to obscure its “wide-ranging and perhaps less obvious impacts”.
While AI is here to stay, the report says, its future role “will be ultimately determined by decisions taken today” in areas including regulation, governance, access and education.
The report calls for “an independently led AI body” bringing together people from government, academia, industry and the public sector to lead the development of new technologies, promote engagement with international initiatives and “develop appropriate ethical frameworks”.
The body should include an institute where researchers, developers and policy experts can address “issues spanning human rights, psychology, regulation, industrial relations and business”.
The report says that curricula at all levels – but particularly in higher education – must evolve so that students can develop the skills and capabilities required for “changing occupations and tasks”.
“Education systems [should] focus on elements of human intelligence and how to protect basic human rights, dignity and identity,” it insists, saying “ethics should be at the core of education for the people developing AI technology”.
The report says that AI specialists often lack specific knowledge about industries where AI has obvious application, such as agriculture, energy, health and mining. “Similarly, those in specific sectors do not necessarily have the technical knowledge to apply AI to their area. Education and training programmes may help address this gap.”
The report also says that AI could be harnessed to develop mini courses, "given that micro-credentials are typically certified through online platforms".