In post-colonial Africa, the ethical neutrality of AI is pure fantasy

If we treat AI as a purely rational evolution of human intelligence, we risk repeating colonial erasure on a digital scale, says Agnieszka Piotrowska

Published on March 6, 2026

“Knowledge production, including the production of technology, is one of the most important political questions facing higher education globally.”

When Diana Jeater, professor of African history at the University of Liverpool, frames the issue this way, she is not speaking in metaphors. A few years ago, she persuaded the British Academy to fund a project investigating African spirits in Zimbabwe. This was not a literary trope or an investigation into oral history, but an inquiry into spirits as a lived experiential reality for many communities. In the lecture theatres of London or San Francisco, this can sound eccentric or even regressive. In Zimbabwe, it is simply part of how the world is understood and navigated.

What counts as knowledge, it turns out, depends entirely on where one stands.

I know something of this friction from my own experience in Zimbabwe. More than a decade ago, I staged a theatrical performance centred on Mbuya Nehanda, the Zimbabwean spirit medium who became a primary symbol of anti-colonial resistance. The production, first staged at the Harare International Festival of the Arts, treated spirits with irreverent humour. The authorities did not appreciate it. We came close to serious consequences for engaging publicly with spiritual realities that remain politically sensitive. That experience made one thing clear: knowledge is never abstract. It is entangled with power and memory.


For most of human history, and for many communities today, spirits are understood to have agency in everyday life. Yet European colonial projects imposed narrow definitions of knowledge, privileging Enlightenment rationality while dismissing other ontologies as superstition. This was not just a philosophical disagreement. It was a way of delegitimising the spirit of local resistance.

Today, as the global academy races to define the future of artificial intelligence, we are at risk of ignoring those entanglements once again. I recently returned to the University of Cape Town (UCT) after three years away, and I was reminded that conversations about AI sound fundamentally different in Africa than they do in the Global North. In Africa, debates about technocratic issues such as innovation, productivity, regulation and “safety” – the anxieties of the designer and the proprietor – are inseparable from histories of extraction and epistemic violence.


Historically, technology was never neutral. It was the primary tool of domination, from the military hardware of conquest to the communication systems that enabled colonial administration, surveillance and the categorisation of subjects. In the post-colonial context, the neutrality of an algorithm is a fiction. When we talk about AI in an African context, we are talking about who gets to define intelligence and whose data is harvested to feed it.

At Cape Town’s EthicsLab, I spoke with Jantina de Vries, a leading bioethicist, about this very issue. We discussed the way technology is often treated as a purely technical tool, with ethical concerns bolted on afterwards like an optional safety feature.

De Vries spoke of solidarity not as a slogan but as healing, a deliberate attempt to address the legacies of oppression embedded in institutions and systems. In the context of AI, solidarity means recognising that global technologies affect deprived communities, and that repair must be part of design. But big technology companies now operate as technological empires. They extract data rather than raw materials, but the logics are familiar: centralisation of power, asymmetrical benefit and governance from elsewhere.

When we discussed this, De Vries was careful not to reduce the story to moral binaries. She pointed out that debates about big tech companies and data workers in Kenya, for example, are often framed too simply. Of course workers require proper protection, psychological support and employment stability. But in many cases these roles also provide income and opportunity in contexts where alternatives are scarce. The situation is neither pure exploitation nor pure benevolence. It is structurally complicated. She emphasised that one has to be aware of these dangers, and that the task is not only to critique but to imagine new systems.


One of De Vries’ collaborators, Francis B. Nyamnjoh, has a concept of incompleteness: the idea that no person, culture or knowledge system is self-sufficient. This directly challenges the rhetoric surrounding AI as seamless completion – optimisation, automation, final answers. From my own psychoanalytic work, I would add the concept of techno-transference: the way people project desire, fear and authority on to conversational AI systems. In moments of institutional and cultural anxiety, those projections intensify. Machines do not contain meaning independently of context. They inherit it – and that inheritance is never neutral.

This brings us back to Jeater’s research. She is not a great fan of AI, arguing that its training data, linguistic norms and categories reflect the Global North’s historical dominance. That does not make AI inherently oppressive, but it does mean it reproduces inherited hierarchies unless consciously redesigned.

In other words, if we treat AI as a purely rational evolution of human intelligence, we risk repeating that colonial erasure on a digital scale. We risk creating a global educational future that has no room for the lived realities and the specific histories of the majority of the world’s population.

Higher education has a choice. We can continue to treat technology as a series of technical problems to be solved in a vacuum, or we can recognise that place and the solidarity found within it still matter. Knowledge is not a cloud-based commodity. It is a grounded, political and collective act.


If we want an AI ethics that is more than just a colonial fantasy, we must design it with solidarity, memory and incompleteness at its core.

Agnieszka Piotrowska is an academic, film-maker and psychoanalytic life coach. She supervises PhD students at Oxford Brookes University and the University of Staffordshire and is a TEDx speaker on AI intimacy.


