Like most people working in higher education these days, I’m obsessed with artificial intelligence – especially the possibility that my students might be using it to produce their assignments. I have an all-consuming fear of failing to spot that an essay wasn’t written by the person who submitted it, and I make regular referrals of suspected cheats to the academic misconduct office.
You can imagine my shock, then, when the tables were turned on me recently.
I’m a visiting lecturer at Villa College, a tertiary-level institution in the Maldives. A colleague, Aishath Nasheeda, and I conducted some research into the implementation of a life skills programme in the islands’ schools and submitted the resulting paper to what seemed to be an appropriate educational research journal. While it isn’t high-impact, it has been publishing quarterly for a few years, it’s peer-reviewed and it’s fee-free – so the Netherlands-based publisher, which has a whole stable of journals, is not in it to fleece authors.
The executive editor emailed back to say that the article aligned with the scope of the journal but that some formatting amendments were required. Also, it lacked a statement on whether AI had been used in its production.
I duly made the amendments and included the factually correct line: “No generative AI or AI-supported technologies were used at any stage of this research.”
I was surprised, then, to get a reply from the editor saying an AI detection program had judged our paper to have been mainly written using AI. Even more oddly – and ironically – he referred to the paper by the title of an entirely unrelated study examining chatbots’ very limited ability to pass scientific tests.
So I asked him if he’d sent the warning to the wrong authors. He reaffirmed that it was our paper that was computer-generated and that it needed a major rewrite.
Specifically, his AI detection software had judged that the “manuscript exhibits unusually high fluency, uniformity, and consistency”. That was because “tonal shifts and syntactic irregularities…are absent” and there is “low variance in sentence length”. It also talked about some other sciencey-sounding things I didn’t understand, such as “near-perfect contextual continuity with minimal natural drift”, high “N-gram repetition” and “low token entropy”.
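For the similarly baffled: these are simple surface statistics, not tests of meaning. Here is a minimal sketch, in Python, of how such indicators might be computed – the word-splitting tokenisation and the toy sample below are my own illustrative assumptions, not whatever the journal’s detector actually does:

```python
import math
from collections import Counter

def token_entropy(text: str) -> float:
    """Shannon entropy (in bits) of the word distribution:
    a lower value means a smaller, more repetitive vocabulary."""
    tokens = text.lower().split()
    counts = Counter(tokens)
    total = len(tokens)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def ngram_repetition(text: str, n: int = 3) -> float:
    """Fraction of n-grams that occur more than once:
    a higher value means more repeated phrasing."""
    tokens = text.lower().split()
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    counts = Counter(ngrams)
    repeated = sum(c for c in counts.values() if c > 1)
    return repeated / len(ngrams) if ngrams else 0.0

# A deliberately repetitive toy sample (illustrative only).
sample = "the quick brown fox jumps over the lazy dog " * 3
print(f"token entropy: {token_entropy(sample):.2f} bits")
print(f"3-gram repetition: {ngram_repetition(sample):.2f}")
```

Nothing in such counts, of course, can tell a well-edited human from a machine; they measure only how repetitive and predictable the wording is.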
I told the editor that I’d attended a Scottish grammar school in the 1970s, where we were beaten if we didn’t write in a fluent, well-structured way. But he wasn’t having it and continued to insist that I rewrite the paper. How? To make it less well written?
However, when I ran the paper anonymously through ChatGPT, it took a very different view. It informed me that my paper was very likely written by a human because AI wouldn’t write something so “messy”. There were, among other things, “human-like imperfections in grammar and flow” and “uneven sentence rhythms”.
What a cheek! But I alerted the editor to this discrepant finding and asked again if he’d perhaps been referring to the wrong paper.
He ignored this and instead sent an email reminder to rewrite the paper. When I pushed him for his views on the contradictory analyses, he reasserted his claims and rejected our paper outright.
I then ran the journal’s AI detection analysis through ChatGPT to see what it made of it.
The headline judgement was that “their analysis is not valid. It contains methodological flaws, misunderstandings of stylometry, and several claims that demonstrably do NOT apply to your manuscript.”
It went on at length. “Their overall approach is not scientifically credible,” it said. “No validated method in 2024–2025 can reliably distinguish polished, well-edited human academic writing from AI-assisted writing.” The “computational indicators” section, on N-grams and entropy, was dismissed as “pseudoscientific”. But then, just as I was feeling triumphant, it went on to reiterate my “grammatical imperfections” and “long, meandering sentences”.
Still, I sent this verdict to the editor – but he replied only to thank us for our interest in his journal and to say that he looked forward to getting our valuable contributions in the future.
Assuming the editor really did check the right paper, this sort of misapplication of AI to check for AI use is worrying. Authors will end up being penalised for writing well, and there will be a perverse incentive to write worse or even use AI to “de-polish” articles in order to get them published.
We also risk undermining the peer-review process if human decisions about methodological soundness or intellectual contribution are replaced with automated gatekeeping by unvalidated software (what ChatGPT referred to as “black-box detectors [the editors] don’t understand”). The peril is all the greater if there is to be no right of appeal.
And while we’re trying hard to decolonise academia, blanket, unthinking application of AI detection will penalise authors for whom English is a second language, who may rely on legitimate editorial AI tools when writing for English-language journals. We’ll end up with inequity masquerading as rigour.
Such reflections have also made me re-evaluate my own approach to assessing students’ work. Maybe if I really want to weed out the AI cheats, I need to drop the illusion that I can spot AI-written text and, instead, devise assignments that AI can’t help with.
After all, perhaps the student who I was about to dob in relies on legitimate AI editorial help as reasonable accommodation for a language disability.
Or maybe their assignment is so well written because someone taught them to write well. Maybe even – just let me dream! – that someone was me.
David Mingay is a visiting lecturer at Villa College in the Maldives and an associate lecturer at the Open University. He would like to make clear that he wrote this article himself.