It’s the ethics research institute facing ethical questions of its own.
Last month, the Technical University of Munich, which bills itself as one of the world’s leading clusters of artificial intelligence expertise, unveiled a new centre to investigate how AI can be deployed in the 21st century without compromising safety or privacy.
The idea is that engineers and informatics experts will work alongside academics specialising in law, medical and digital ethics, governance and accountability to come up with guidance for policymakers and tech firms.
What has raised eyebrows, however, is that the institute itself is being funded by a $7.5 million (£5.7 million) grant from arguably the most controversial user of AI in the world: Facebook.
The social network has faced questions, for example, over the algorithm behind its newsfeed: the never-ending scroll of photos, comments and articles accused of being at best deliberately addictive, and at worst a vehicle for spreading genocidal propaganda in Myanmar.
Universities the world over are questioning how close they can get to big tech – which has money and data that universities can only dream of – without losing their independence, and TUM is no exception.
Christoph Lütge, a TUM professor in business ethics and leader of the new institute, told Times Higher Education that there were “clear statements” on paper between the institute and Facebook that it is “entirely free” to “use the money on any kind of research that we would like”.
“This is very important to me; otherwise I would not have done it,” he said.
The money from Facebook (an “initial” grant spread over five years) is not meant to fund new permanent professorships; rather, it will be used to hire doctoral students and postdoctoral researchers on temporary contracts. That means less pressure to secure repeat funding, he argued – and so less pressure to avoid antagonising Facebook.
The institute is also looking for other backers, and its advisory board will be free of Facebook employees, he said.
Facebook’s announcement of the institute says it will “share insights, tools, and industry expertise related to issues such as addressing algorithmic bias, in order to help institute researchers focus on real-world problems that manifest at scale”. Researchers will indeed share “opinions and views” with Facebook, said Professor Lütge, “but with others too”.
There are other dangers aside from direct corporate influence on research. Universities also need to be “very aware” of any kind of “ethics washing” initiative, according to Virginia Dignum, an associate professor at Delft University of Technology who has worked with Professor Lütge on developing guidelines for socially responsible AI.
“Ethics washing” is when companies try to stave off regulation by claiming to act ethically (Professor Dignum stressed she did not know enough about the TUM institute to judge its appropriateness).
It is “quite legitimate and positive” that industry encourages and invests in research into ethical issues, said Raja Chatila, director of the Institute of Intelligent Systems and Robotics at Pierre and Marie Curie University in Paris. But “funding an ethics institute should not be considered a kind of ‘absolution’ for industry of their possible responsibilities”, he added.
Sometimes companies do indeed fund academic guidance to help them avoid regulation, acknowledged Professor Lütge, although he argued that Facebook has made clear it is happy to accept government oversight.
More broadly, his work has led him to conclude that there is nothing wrong with companies funding research in order to burnish their reputation. “If there is an advantage, eventually, in terms of reputation [for Facebook], but we at the same time manage to come up with improvements that will work to the advantage of many people – improve the fairness of algorithms, improve safety, security – this is OK, and this should not be regarded as unethical,” he said.
However, Facebook cannot simply say “here’s the money” and carry on without acting on the institute’s findings, Professor Lütge warned.