AI advocates ‘not angry enough’ about university funding

Panel stresses importance of ethics training for field that generates moral questions ‘on steroids’

September 2, 2020

Governments are overthinking the regulatory challenges of artificial intelligence and neglecting its skill needs, Times Higher Education’s World Academic Summit has heard.

AI experts told the summit that the field promised solutions to some of the world’s “toughest problems”, so long as ethical hazards could be navigated. But capacity constraints in universities could prove a deal-breaker.

Elisabeth Ling, senior vice-president of analytics company Elsevier, said universities were experiencing huge demand for AI education. “Friends who are professors tell me so many people [are] applying for ethics [and technology] courses. They’re banging on the door to be better equipped,” she said.

But Toby Walsh, professor of AI at UNSW Sydney, said universities were not equipped to satisfy the demand. “In Australia, in the US, in Europe and probably many other places…universities are often last in line for the handouts that have been given to support businesses through the pandemic,” he said.

“If we are going to equip young people with the skills to grow our economies again, university is the place where that’s going to happen. We really need to push back against politicians who seem to have put universities at the end of the line. We’re not angry enough, and we’re not vocal enough.”

Professor Walsh stressed the need to ensure that people were trained for the right tasks. An ethical focus on transparency, for example, was “overrated” because candidness was undesirable in many situations – for reasons of privacy, security or commercial protection.

He suggested that AI professionals did not necessarily need the skills to build transparent systems. The greater need was for people capable of constructing “the policy, formal and informal, that allows us to trust decisions taken by machines”.

Professor Walsh said that while new frameworks were constantly being developed to regulate AI, many of these efforts duplicated each other – and previous endeavours. AI “isn’t so different” from other technologies, he insisted. “It’s not magic, and many of the concerns are ones that we had to address when we introduced medical technologies, electricity or anything else.”

AI generated ethical questions “on steroids” because of its scale and speed. “We see that with questions around things like surveillance and privacy. But most are questions that thousands of years of philosophy have prepared us to answer, so most of the frameworks repeat all the same things,” Professor Walsh said.

Swathi Young, chief technology officer with Virginia-based consultancy Integrity Management Services, said AI ethics was “the need of the hour”. She cited New York research into using AI to decide bail applications. “We are going to a place where we might automate things that in the past were human decision-making.”

Tao Zhou, a big-data specialist at the University of Electronic Science and Technology of China, said bias was a major ethical problem. He cited an AI system developed to evaluate candidates for top-tier jobs such as chief executives and company partners – 90 per cent of which traditionally went to men.

The AI skewed recommendations even more heavily towards males, Professor Zhou said. “The computer is very fast to learn this bias.”

Professor Walsh said AI systems were often trained “on historical data that captures the biases of…society. This goes beyond just being a business problem,” he added. “This is a societal problem. We don’t want to be perpetuating these biases into the future.”

john.ross@timeshighereducation.com
