Using AI in university admissions ‘could reverse equity progress’

Departing Australian human rights commissioner expects vice-chancellors to be among the students of his new AI ethics initiative

July 29, 2021

Using artificial intelligence for student admissions could undo years of work by universities to become more inclusive, warns Australia’s outgoing human rights commissioner, who expects to see university leaders among those taking the AI ethics training he will run.

Edward Santow, who from September will lead a new AI ethics initiative at the University of Technology Sydney (UTS), said AI was “reawakening old forms of discrimination” in areas such as job recruitment, banking, social services and justice. Women have been denied mortgages by machine learning systems trained on “decades of previous home loan decisions”, while men have been parachuted into senior executive appointments based on 40-year-old employment data.

Similar things could happen in universities – and possibly already are – as machine learning systems conclude that applicants with private school educations and university-educated parents are best placed to succeed, he warned. “That’s precisely what we want to avoid,” Mr Santow said.

But sidestepping such pitfalls might be difficult, despite funding incentives favouring applicants from under-represented groups. Mr Santow said there were many examples of organisations turning to AI to eliminate entrenched prejudices, only to achieve the “exact opposite” of what they had intended.

“[There is a] growing awareness that the problem exists, but no one has cracked a really good solution,” he said. “Developing these algorithms and setting them loose is quite a complex process. You really need to know what you’re doing.”

At UTS, Mr Santow will develop three types of training in his capacity as “industry professor – responsible technology”. The first aims to help chief executives and senior government figures understand AI well enough to know when and how to deploy it.

The second level of training is for middle managers who implement AI systems, while the third targets the general workforce.

Mr Santow said he expected vice-chancellors and deans to feature among his students as universities considered AI for proctoring, recruitment, assessment and other purposes. “Unless you have at least a baseline understanding of the technology and how to use it, you’re much more likely to make a mistake,” he added.

Notorious AI failures include its use by US courts to assess the probability of recidivism. Research suggests that this has left black defendants up to twice as likely as their white counterparts to be misclassified as high risk, leading to longer sentences and denial of bail.

Australia’s Robodebt scheme, an automated debt collection system that illegally extracted more than A$720 million (£390 million) from almost 400,000 social security recipients, has been linked with at least one suicide. “We need to make sure that whoever is developing and implementing that tech has a clear sense of where problem areas might arise,” Mr Santow said. This was particularly important in universities because their size meant “you can cause harm at scale quite quickly”.

Training for people who implemented AI programmes was just as important, he continued. “They may be working with a tech company but don’t know what risks to look out for. Procurement processes tend to go wrong really frequently.”

AI is increasingly prominent in university operational areas such as customer service and automated assessment. Some institutions are moving away from using agents for international student recruitment and marketing, instead favouring AI-powered services.

Text-matching company Turnitin employs AI for tools used by academics to detect plagiarism and mark papers, and by students to revise assignments. AI is harnessed by Blackboard, Pearson and other widely used edtech services for student engagement, recruitment and retention.

Mr Santow is a former solicitor and academic who ran the Public Interest Advocacy Centre, an independent legal assistance body, before being appointed human rights commissioner in 2016. Earlier this year, he released a report on the human rights and social implications of AI, with UTS providing expertise and resources.

His term as commissioner ends this month. Unlike his predecessor Tim Wilson, who moved into federal politics, Mr Santow said he had decided to focus on AI because he regarded it as “one of the three biggest challenges of our times”, along with climate change and population.

“The moment is now to go deep on these issues [to] give us the future we want and need, and not the one that we fear,” he said. “If we don’t get it right now, we’ll be paying the price for generations to come.”

john.ross@timeshighereducation.com

