Korean university ‘killer robots’ boycott called off

Robotics experts cancel boycott after KAIST denies developing autonomous weapons

April 9, 2018
A robot wearing a sign saying 'Campaign to Stop Killer Robots'
Source: Getty

Artificial intelligence experts have called off a threatened boycott of one of Korea’s top universities, after it undertook not to use a new research centre to develop so-called killer robots.

Fifty-six AI and robotics researchers said they would maintain academic contacts with the Korea Advanced Institute of Science and Technology (KAIST) after its president, Sung-Chul Shin, gave public assurances about the activities of its Research Centre for the Convergence of National Defence and Artificial Intelligence.

Korean media reports had suggested that the centre was working on autonomous weapon projects including “AI-equipped unmanned submarines and armed quadcopters”.

Researchers also had concerns about KAIST’s partner in the centre, Korean arms company Hanwha Systems, which they said had developed cluster munitions and an autonomous “sentry” robot called the SGR-A1.

The experts aired their concerns in an open letter scheduled for release on 4 April. However, KAIST vehemently denied any intention to work on autonomous weapons systems after it was contacted by Times Higher Education.

The researchers said they had now accepted KAIST’s guarantees. “Given this clear commitment, the signatories to the boycott have rescinded the action,” organiser Toby Walsh, Scientia professor of artificial intelligence at the University of New South Wales, said in a statement.

“They will once again visit and host researchers from KAIST and collaborate on scientific projects.”

KAIST was ranked 95th in the world, and second among Korean institutions behind Seoul National University, in this year’s THE World University Rankings. Professor Walsh described it as “the MIT of Korea”.

He added that AI had legitimate military applications. “No one, for instance, should risk life or limb clearing a minefield – this is a perfect job for a robot,” he said.

“But we should not hand over the decision of who lives or who dies to a machine. This crosses an ethical red line and will result in new weapons of mass destruction.”

A United Nations group is meeting in Geneva this week to discuss the humanitarian and international security challenges posed by lethal autonomous weapons systems. Twenty-two participating nations have backed a call for a pre-emptive ban on such weapons.

john.ross@timeshighereducation.com
