Qualitative researchers’ AI rejection is based on identity, not reason

The claim that AI can’t make meaning contradicts what researchers are finding when they put these tools to careful, critical use, says James Goh

Published on January 16, 2026
Last updated January 16, 2026
[Image: A robot working on a computer with "no" in big letters on the wall, illustrating the rejection of AI. Source: Donald Iain Smith/Getty Images]

The letter spread quickly, forwarded from colleague to colleague, often accompanied by notes of approval or unease. It was an open letter bearing 419 signatures from qualitative researchers across 38 countries. Its language left no room for negotiation: generative artificial intelligence is “inappropriate in all phases of reflexive qualitative analysis”. Not sometimes. Not with safeguards. Always.

That absolutism matters, because qualitative research is how institutions decide which human experiences count. Shut off tools at this level and you shape who gets heard in education, labour and international development.

Among the lead signatories of the letter, published in the Sage journal Qualitative Inquiry, are Virginia Braun and Victoria Clarke, whose framework for reflexive thematic analysis has shaped how a generation of scholars thinks about interpretation. Their foundational paper has been cited nearly 300,000 times. When they speak, people listen. And what they are saying now is that the conversation about AI is over.

The letter makes three claims. First, that AI cannot genuinely make meaning because it operates through statistical prediction rather than understanding. Second, that qualitative research is fundamentally human work that should be done by humans, about humans, for humans. Third, that AI carries serious ethical harms, both to the environment and to exploited workers in the Global South.

The third claim deserves serious attention. The data centres powering large language models consume staggering amounts of energy. The workers who train and moderate these systems in Kenya, the Philippines and elsewhere often labour under brutal conditions for poverty wages. Any responsible researcher must grapple with these realities.

But the letter does not ultimately rest its case on environmental ethics or labour exploitation. Beneath the methodological arguments lies something deeper: an ontological claim about what meaning is and who can make it. Reflexive qualitative analysis depends on meaning-making, an activity the authors argue is uniquely human. And since AI is not human, it is “fundamentally incapable of genuinely making meaning from language” and has no legitimate role in such research.

The problem, however, is that this position contradicts what researchers are actually finding when they put these tools to careful, critical use.

As the founder of an AI platform for qualitative research, over the past year I have worked with thousands of qualitative researchers who are doing precisely what the open letter declares impossible: using AI in reflexive research. These researchers are not cavalier. They are not seduced by technological novelty. And they are certainly not expecting AI to make meaning for them.

Consider a recent peer-reviewed study by researchers at Arizona State and Penn State universities, who used AI to assist in analysing 371 transcripts of elementary students’ small-group classroom discussions. This was not clean or easily codified data; it was precisely the terrain where qualitative interpretation has long been thought to require a human hand. Even so, the AI reached 96 per cent agreement with expert human analysts in identifying students’ reasoning during these discussions.

Yet the study’s significance lay not in this high level of agreement but in the moments of disagreement between human and AI. In one instance, the system classified a student’s personal anecdote as legitimate evidence in support of an argument, a judgement the human analysts had wrongly dismissed. Faced with this divergence, the researchers were compelled to engage in reflexive re-examination of the very interpretive assumptions at the core of their study: What, precisely, counts as reasoning?

Or consider a meta-synthesis conducted for several United Nations agencies, including Unicef, Unesco, the International Labour Organization and the UN Population Fund – institutions not generally accused of methodological laxity or casual engagement with evidence. Using AI to analyse 298 reports on youth programming across education and employment, the researchers achieved 92 per cent agreement with expert reviewers in identifying complex thematic patterns.

However, more revealing, again, were the points of friction. As the AI traced how key terms were used across hundreds of documents, it repeatedly raised important issues. What, exactly, did “leave no one behind” mean in practice, for instance? How did “gender-responsive” differ from “gender-transformative” programming? These phrases recurred across dozens of UN reports but their usage was inconsistent, elastic and often undefined. Confronted by the AI with this lack of conceptual clarity, the researchers were forced to reinterrogate their own frameworks and the assumptions underpinning them.

Seen this way, reflexivity is not something AI must possess but something it can provoke. Used critically, AI can challenge and strengthen rather than shortcut and subvert human interpretation. It helps researchers in stress-testing frameworks, challenging inherited assumptions, uncovering inconsistencies across data and rendering analytic decisions transparent and contestable.

So why does the letter engage none of this evidence? Because something more than methodology is being defended.

Qualitative research emerged against a positivist, quantitative orthodoxy that defined knowledge as what could be counted, replicated and rendered independent of human interpretation. Researchers in this tradition fought hard for recognition that human interpretation matters and won those fights by insisting on the rigour and irreplaceability of their craft: the slow, painstaking labour of sitting with qualitative data for hours, reading and rereading, feeling the weight of people’s words. Over time, this manual immersion became not merely a method but an identity.

Because AI bears the surface features of the old quantitative threat – computation, automation and scale – it triggers the same defensive reflex among qualitative researchers. An absolute ban protects identity, yet it does so by replacing enquiry with certainty – precisely what reflexivity was meant to guard against. And this absolutist rejection carries consequences the letter does not acknowledge.

The Arizona State study involved 371 transcripts and, as the authors noted, would have required “two research assistants the better part of an academic year” to analyse without AI, while still being “affected by limitations such as fatigue, overload, or selective attention when working with lengthy qualitative texts”. Few research teams have that luxury – especially at underfunded institutions, in the Global South and outside elite academic networks. As a result, vast amounts of lived experience, from patients, workers, migrants and communities, remain invisible. A letter that claims to protect human meaning-making ends up, in practice, ensuring that much of it never happens.

The letter’s greatest irony is that it warns of AI’s threat to reflexivity, the discipline’s foundational commitment to questioning one’s own assumptions, yet demands a categorical ban without subjecting that ban to the same scrutiny. The impulse to protect what qualitative research stands for is legitimate – but it must be underwritten by a clear-sighted sense of what that is.

James Goh is CEO of AILYZE, an AI platform for qualitative research.


Reader's comments (8)

Come on! It is not AI yes or no. It is AI used responsibly, carefully and ethically. AI, not AU or artificial unintelligence. Enough already
Where I have experimented with it, it seems AI is prone to flattery, defends neoliberalism and, of course, technology. It's rather conservative, ironically. It's amazing in some ways, but it is populist in effect, and I agree that we cannot come to rely on it.
I think they build "politeness" into the programmes, so AI tends to be like Mr Uriah Heep, in this, and other, respects. It does do the "on the one hand..." but "on the other hand..." response to everything, even the most terrible things, where there is no real other hand.
"The problem, however, is that this position contradicts what researchers are actually finding when they put these tools to careful, critical use." It seems that the argument is a conceptual one rather than an empirical one.
I am really bored with all this AI material now. It's funny how the discussion tends to peak just as colleagues are completing their end-of-first-semester marking?
Yes, THE is becoming a bit like New Scientist or some such publication.
The author seems to make some big assumptions about the motivations of the signatories based on vibes.
Author does not have a PhD and thus does not understand that the admission rules of the academy are threatened by deskilled AI users.
