
Biased AI poses a threat to academic freedom that must be confronted

How academics can manage and guide the use of generative artificial intelligence such as ChatGPT so that it enhances learning and independent thought rather than hampering academic freedom

Shweta Singh

29 Sep 2023

The conversation about academic freedom and free speech has reached fever pitch over the past five years. The cycle of controversy and “cancellation” seems to jump from one issue to the next at lightning speed. With academics, students and institutions clashing over polarising issues such as free speech, “cancel culture” and gender rights, all against a febrile political climate, academic freedom feels more topical now than it has for decades.

At the same time, the learning process itself faces disruption. Generative artificial intelligence (AI) such as ChatGPT has the potential to change the way people learn, read and write. Within a decade, it may well be judged as significant an advance as the transistor or the birth control pill. It is a revolutionary technology, and a double-edged sword.

AI is biased. Developers subconsciously impart their own societal views as they build these tools, and the language models behind programmes such as ChatGPT learn word associations from human-written text, reflecting our collective biases back at us. Microsoft’s short-lived chatbot Tay demonstrated this in 2016: it had to be taken down less than a day after launch when it began to deny the Holocaust. More recently, people in the US, particularly African Americans, have been wrongfully arrested because of misidentifications by AI facial recognition systems.
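To make the word-association point concrete, here is a minimal sketch in Python, an illustration of this piece's claim rather than anything described above. It assumes the open-source gensim library and an internet connection to download pretrained GloVe word vectors. The classic analogy probe below often completes “doctor is to man as ? is to woman” with “nurse”, surfacing occupational stereotypes absorbed from the training text.

```python
# A minimal sketch of how word embeddings mirror societal bias.
# Assumes the gensim library (pip install gensim) and a network
# connection: the first run downloads the pretrained vectors.
import gensim.downloader as api

# Load 100-dimensional GloVe vectors trained on Wikipedia and Gigaword.
model = api.load("glove-wiki-gigaword-100")

# Analogy probe: "doctor" relates to "man" as ? relates to "woman".
# Pretrained embeddings frequently rank "nurse" highly here, reflecting
# stereotypes in the training text rather than any ground truth.
print(model.most_similar(positive=["doctor", "woman"],
                         negative=["man"], topn=3))
```

Results vary with the embedding set chosen, but the pattern shows how bias enters through statistical association in the training data rather than through anyone’s deliberate choice.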

All this creates a real problem for higher education and academics. It’s vital that the next generation of students is taught how to think rather than what to think, a distinction that AI risks eroding. The ability of tools such as ChatGPT to provide instant answers and, in theory, tell us what’s true is hugely convenient, and must be incredibly tempting for students struggling with certain subjects.

But we must keep alive the importance of learning for its own sake, and of critical thinking. Students must know to question what AI tells them and to form their own opinions, rather than letting biased AI models cherry-pick facts that lead them to pre-determined conclusions.

As academics and as a sector, we need to consider carefully how to adapt to AI while maintaining our core values, chiefly academic freedom and free speech. AI tools such as ChatGPT feel like an inherent positive for education: people can find out almost anything they want in seconds, making education more accessible and understandable to many. This is not some evil we must try to dodge, even if we could, but a new way of learning that we must integrate with other methods without letting it overwhelm them.

First, we should be creative in how we protect academic freedom and the integrity of academic work. Moves towards in-person and oral exams or video personal statements allow individuality to be taken into account and lessen the chance of AI tools being misused. We must be innovative in how we measure knowledge, and ensure we encourage a plurality of opinions among our students.

On a practical level, this means ensuring that both academics and students are “AI literate”. We have to accept that students will use AI, much as there were concerns about the internet and Wikipedia 20 years ago. But by encouraging students to use AI to test arguments, check spelling and carry out other administrative tasks, we can integrate helpful ways of using it, while making it clear that relying on AI to form opinions is verboten.

Second, we must remember that, although AI is powerful, it’s not omnipotent. We need to continue to develop AI in ways that ensure it helps to protect and promote academic freedom, rather than becoming a tool used to suppress it.

Academics can work through their institutions to lobby business and government for AI development to proceed at a responsible, manageable pace, rather than as a race to build the most powerful programmes possible. We should also urge our institutions to create new courses and programmes in AI development and prompting, so that students understand in detail exactly how AI can be used.

Moreover, AI is not perfect. The true curiosity of learning for its own sake, of debate and argument, cannot be fully replicated by AI, however smart. The human touch matters, and we must not underestimate our own capacity to preserve academic freedom.

Practically, this means a few things. Academics must support any colleagues whose academic freedom is curtailed when they discuss AI. Consider the 2020 sacking of Timnit Gebru, co-lead of Google’s ethical AI team, who was forced out after co-authoring a paper that raised awkward questions about the large language models Google was developing.

Colleagues must rally round academics who are attacked for their research. We should go out of our way to offer platforms to academics who, like Timnit Gebru, have been censored. We should support colleagues actively researching AI in other ways, too, offering mentorship and giving space in academic journals to papers on the ethics of AI and academic freedom.

We should lobby representatives in government for financial support for such research wherever possible. The UK government’s recent £54 million of funding is welcome, but it is a drop in the ocean for a potentially era-defining issue. We can all, as academics and experts in our fields, lobby MPs for further support. Given the potential economic gains AI offers, this funding should be viewed as an investment.

Finally, we should remember that, although AI is revolutionary, the battle between authority and academic freedom is not new. From Galileo’s persecution for insisting that the Earth revolves around the sun, to the oppression of political opinion in many parts of the world today, such disputes have ebbed and flowed for centuries. Misinformation, suppression and the defence of free speech have a long history. AI is a new front in this dispute, one we can’t yet fully comprehend. But with the right attitude and incentives, we can make AI a revolutionary tool for academic freedom and education, rather than a hurdle to be confronted.

Shweta Singh is assistant professor in information systems and management at Warwick Business School, University of Warwick.

