Emerging Economies Summit: Call to reimagine artificial intelligence for developing world

Higher education institutions that are developing AI have a responsibility to ensure it is not biased against developing nations, conference hears

January 15, 2019
artificial-intelligence

Universities must lead efforts to ensure that the evolution of artificial intelligence tools does not discriminate against developing nations, a conference has heard.

Speaking on a panel at Times Higher Education’s Emerging Economies Summit, Sarah Anyang Agbor, commissioner for human resources, science and technology at the African Union, said that the need for global diversity had been overlooked in the development of machine learning technology.

“It is defined by the perspective of the West…the language of the machine is not the language of Africa,” she told the event at Qatar University. “Who is designing the machines that will dictate the future of all of us and do they come from a position of bias?”

Professor Anyang Agbor, who was previously a deputy vice-chancellor of the University of Bamenda, in Cameroon, said that universities must play a key role in correcting the omission of perspectives from the developing world.

“It is the job of the university to make [AI] more inclusive and participatory. It can be done because we have the intellect to do it [and] because we have the vision of what our society is supposed to be,” she said. “We should take our responsibility seriously…and create timely and appropriate policies that will involve more stakeholders.”

Tshilidzi Marwala, vice-chancellor of the University of Johannesburg, cited reports of – and his own experience of – facial recognition software failing to recognise Africans. He added that the development of AI must “take into account…economic disparities [between nations] so that machines do not reflect the imbalances in the world”.

Mikhail Strikhanov, rector of the National Research Nuclear University MEPhI in Russia, added that “society should not let [AI] develop without a plan”. Universities must work with policymakers to create new ethical frameworks, he added.

“Universities should be prepared to meet the challenge with faculty and research,” Dr Strikhanov said.

anna.mckie@timeshighereducation.com



Reader's comments (1)

Any AI system like facial recognition is only as good as the 'training material' it is supplied with. People who see flaws with a system need to step up and ensure that it meets their perceived needs, that the AI is equipped with appropriate tools to do the job they would like it to do. How do Africans perceive one another? I've been told that the way one recognises another is different to the cues that other ethnicities use, so their input would be helpful in training facial recognition AIs to be able to 'see' an African face appropriately...
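The commenter's point about training material can be illustrated with a minimal, purely hypothetical sketch: the group labels and evaluation records below are invented, but they show how a headline accuracy figure can mask much weaker performance on an under-represented group, which is why audits of systems such as facial recognition typically report results per group rather than overall.

```python
# Hypothetical illustration: overall accuracy can hide poor performance
# on a group that is under-represented in the evaluation (and training) data.

from collections import defaultdict

# Invented evaluation records: (group label, was the prediction correct?)
results = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", True),
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

# Tally correct predictions and totals per group.
per_group = defaultdict(lambda: [0, 0])  # group -> [correct, total]
for group, correct in results:
    per_group[group][0] += int(correct)
    per_group[group][1] += 1

overall_correct = sum(c for c, _ in per_group.values())
overall_total = sum(t for _, t in per_group.values())
print(f"overall accuracy: {overall_correct / overall_total:.2f}")

# Per-group accuracy exposes the disparity the single overall number hides.
for group, (correct, total) in sorted(per_group.items()):
    print(f"{group}: accuracy {correct / total:.2f} on {total} samples")
```

With these invented numbers the overall accuracy looks respectable (about 0.73), but the smaller group scores only 0.33, which is the kind of imbalance the panellists and the commenter describe.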
