Call to reimagine artificial intelligence for developing world

Higher education institutions that are developing AI have a responsibility to ensure that it is not biased against developing nations, conference hears

January 15, 2019
artificial-intelligence

Universities must lead efforts to ensure that the evolution of artificial intelligence tools does not discriminate against developing nations, a conference has heard.

Speaking on a panel at Times Higher Education’s Emerging Economies Summit, Sarah Anyang Agbor, commissioner for human resources, science and technology at the African Union, said that the need for global diversity had been overlooked in the development of machine learning technology.

“It is defined by the perspective of the West…the language of the machine is not the language of Africa,” she told the event at Qatar University. “Who is designing the machines that will dictate the future of all of us and do they come from a position of bias?”

Professor Anyang Agbor, who was previously a deputy vice-chancellor of the University of Bamenda, in Cameroon, said that universities must play a key role in correcting the omission of perspectives from the developing world.

“It is the job of the university to make [AI] more inclusive and participatory. It can be done because we have the intellect to do it [and] because we have the vision of what our society is supposed to be,” she said. “We should take our responsibility seriously…and create timely and appropriate policies that will involve more stakeholders.”

Tshilidzi Marwala, vice-chancellor of the University of Johannesburg, cited reports of, and his own experience with, facial recognition software failing to recognise Africans. He added that the development of AI must “take into account…economic disparities [between nations] so that machines do not reflect the imbalances in the world”.

Mikhail Strikhanov, rector of the National Research Nuclear University MEPhI in Russia, added that “society should not let [AI] develop without a plan”. Universities must work with policymakers to create new ethical frameworks, he added.

“Universities should be prepared to meet the challenge with faculty and research,” Dr Strikhanov said.

anna.mckie@timeshighereducation.com


Reader's comments (1)

Any AI system like facial recognition is only as good as the 'training material' it is supplied with. People who see flaws with a system need to step up and ensure that it meets their perceived needs, that the AI is equipped with appropriate tools to do the job they would like it to do. How do Africans perceive one another? I've been told that the way one recognises another is different to the cues that other ethnicities use, so their input would be helpful in training facial recognition AIs to be able to 'see' an African face appropriately...
