After being falsely branded an AI plagiarist, how can I accuse students?

Maybe I need to drop the illusion that I can spot AI-written text and, instead, devise assignments that AI can’t help with, says David Mingay

Published on January 9, 2026. Last updated January 9, 2026.

[Image: multiple fingers point at a robot, symbolising accusations of AI use. Source: nicoletaionescu/iStock]

Like most people working in higher education these days, I’m obsessed with artificial intelligence – especially the possibility that my students might be using it to produce their assignments. I have an all-consuming fear of failing to spot that an essay wasn’t written by the person who submitted it, and I make regular referrals of suspected cheats to the academic misconduct office.

You can imagine my shock, then, when the tables were turned on me recently.

I’m a visiting lecturer at Villa College, a tertiary-level institution in the Maldives. A colleague, Aishath Nasheeda, and I conducted some research into the implementation of a life skills programme in the islands’ schools and submitted the resulting paper to what seemed to be an appropriate educational research journal. While it isn’t high-impact, it has been publishing quarterly for a few years, it’s peer-reviewed and it’s fee-free – so the Netherlands-based publisher, which has a whole stable of journals, is not in it to fleece authors.

The executive editor emailed back to say that the article aligned with the scope of the journal but that some formatting amendments were required. Also, it lacked a statement on whether AI had been used in its production.


I duly made the amendments and included the factually correct line: “No generative AI or AI-supported technologies were used at any stage of this research.”

I was surprised, then, to get a reply from the editor saying an AI detection program had judged our paper to have been mainly written using AI. Even more oddly – and ironically – he referred to the paper by the title of an entirely unrelated study examining chatbots’ very limited ability to pass scientific tests.


So I asked him if he’d sent the warning to the wrong authors. He reaffirmed that it was our paper that was computer-generated and that it needed a major rewrite.

Specifically, his AI detection software had judged that the “manuscript exhibits unusually high fluency, uniformity, and consistency”. That was because “tonal shifts and syntactic irregularities…are absent” and that there is “low variance in sentence length”. It also talked about some other sciencey-sounding things I didn’t understand, such as “near-perfect contextual continuity with minimal natural drift”, high “N-gram repetition”, and “low token entropy”.
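For what it's worth, two of those terms are not as mysterious as they sound: "N-gram repetition" and "token entropy" are simple word-count statistics. The sketch below is purely illustrative (the definitions are common textbook ones, and the detector's actual formulas and thresholds are unknown); it shows how such numbers might be computed from a piece of text.

```python
import math
from collections import Counter

def token_entropy(tokens):
    """Shannon entropy (in bits) of the token frequency distribution.
    Lower values mean the text reuses a small vocabulary more heavily."""
    counts = Counter(tokens)
    total = len(tokens)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def ngram_repetition(tokens, n=3):
    """Fraction of n-grams that occur more than once in the text."""
    grams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    counts = Counter(grams)
    repeated = sum(c for c in counts.values() if c > 1)
    return repeated / len(grams) if grams else 0.0

# Toy example: a deliberately repetitive sentence
text = "the cat sat on the mat and the cat sat on the rug"
tokens = text.split()
print(round(token_entropy(tokens), 2))    # → 2.62
print(round(ngram_repetition(tokens), 2)) # → 0.55
```

On this toy sentence, the repeated phrase pushes the trigram repetition rate up and the entropy down. Real detectors presumably combine many such signals inside an opaque classifier, which is precisely why their verdicts are so hard to audit or appeal.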


I told the editor that I’d attended a Scottish grammar school in the 1970s, where we were beaten if we didn’t write in a fluent, well-structured way. But he wasn’t having it and continued to insist that I rewrite it. How? To make it less well written?

However, when I ran the paper anonymously through ChatGPT, it took a very different view. It informed me that my paper was very likely written by a human because AI wouldn’t write something so “messy”. There were, among other things, “human-like imperfections in grammar and flow” and “uneven sentence rhythms”.

What a cheek! But I alerted the editor to this discrepant finding and asked again if he’d perhaps been referring to the wrong paper.

He ignored this and instead sent an email reminder to rewrite the paper. When I pushed him for his views on the contradictory analyses, he reasserted his claims and rejected our paper outright.

I then ran the journal’s AI detection analysis through ChatGPT to see what it thought of it.


The headline judgement was that “their analysis is not valid. It contains methodological flaws, misunderstandings of stylometry, and several claims that demonstrably do NOT apply to your manuscript.”


It went on at length. “Their overall approach is not scientifically credible,” it said. “No validated method in 2024–2025 can reliably distinguish polished, well-edited human academic writing from AI-assisted writing…” The “computational indicators [N-grams and entropy]” section, it added, was “pseudoscientific”. But then, just as I was feeling triumphant, it went on to reiterate my “grammatical imperfections” and “long, meandering sentences”.

Still, I sent this verdict to the editor – but he replied only to thank us for our interest in his journal and to say that he looked forward to getting our valuable contributions in the future.

Assuming the editor really did check the right paper, this sort of misapplication of AI to check for AI use is worrying. Authors will end up being penalised for writing well, and there will be a perverse incentive to write worse or even use AI to “de-polish” articles in order to get them published.

We also risk undermining the peer-review process if human decisions about methodological soundness or intellectual contribution are replaced with automated gatekeeping by unvalidated software (what ChatGPT referred to as “black-box detectors [the editors] don’t understand”). The peril is all the greater if there is to be no right of appeal.

And while we’re trying hard to decolonise academia, blanket, unthinking application of AI detection will penalise authors for whom English is a second language, who may rely on legitimate editorial AI tools when writing for English-language journals. We’ll end up with inequity masquerading as rigour.

Such reflections have also made me re-evaluate my own approach to assessing students’ work. Maybe if I really want to weed out the AI cheats, I need to drop the illusion that I can spot AI-written text and, instead, devise assignments that AI can’t help with.

After all, perhaps the student whom I was about to dob in relies on legitimate AI editorial help as a reasonable accommodation for a language disability.

Or maybe their assignment is so well written because someone taught them to write well. Maybe even – just let me dream! – that someone was me.


David Mingay is a visiting lecturer at Villa College in the Maldives and an associate lecturer at the Open University. He would like to make clear that he wrote this article himself.


Reader's comments (10)

Splendid stuff! - thanks. The only effective answer might be unseen, closed-book, 3-hour, hand-written, closely invigilated exams? - being of the same Old Codger vintage as the author, I did 10 in succession, Monday a.m. to Friday p.m., over one week to earn my History degree back in 1975! Or go back even earlier to C18 oral public ‘disputations’?! Not that AI should not be incorporated into C21 pedagogy - the task of HE is to get the student to use this new tool in a critical-thinking way as preparation for working life (one key aspect of which is for the student to understand that some publishers ‘protect’ material from being accessed by the AI trawlers, and hence the prof’s magnum opus might not be cited in the AI essay - a clear fail!).
"hence the prof’s magnum opus might not be cited in the AI essay - a clear fail!)". Although a humorous remark I fancy, it does point to the absurdities and inconsistencies of our current practice. In this cautionary tale it is not the AI which is at fault but the character of the "Journal Editor" who is a poltroon if the anecdote is to be credited. Given this it's hard to draw any serious lessons about AI, and the fallibility of humans is all too familiar.
Very good.
So AI can't spot AI, and AI bots disagree over what is and is not AI? We really are in trouble! The solution is (tick one): (a) more AI (b) less AI.
Why would you expect AI to "spot" AI? That's central to the machinations (not "hallucinations") of the BOTS. We need to learn--and teach--HOW to use AI responsibly. The issue is not AI yes vs. AI no. I distinguish between AI and AU--Artificial Unintelligence.
I don't see why the author didn't name the journal in question. If their editors believe their policy is fair, it would allow them to publish a rebuttal. If (as I suspect) the policy is based on nonsense, they'll need to change it.
Fiction? Which BOT wrote this? I underscore #4's comment.
The main thrust of the essay is to question whether or not we are able to identify AI-generated work in students' essays and thus to penalise them appropriately. I know so many of my colleagues who boast that they are able to spot such work, even though they seem entirely incapable of proving misconduct. The charge by the journal editor is that an "AI detection program had judged our paper to have been mainly written using AI". Now this to my mind is misconduct rather than plagiarism (a form of misconduct), or is it? The research itself may not be AI-enhanced in any way, but just the writing of the essay, and, sad to say, many of our students whose first language is not English will use such tools for stylistic and grammatical accuracy. I think David is probably quite correct in that this may be a red herring: the editor simply made a mistake in returning the review for a different essay and then seems to be unwilling to admit his mistake (deliberately or via denial). This essay is very well written (for what it's worth), and the beatings at school David suffered seem to have paid off, but we don't recommend that pedagogy any more!!!
Yes indeed. I think another problem here is that the use of AI was detected (erroneously) even though the authors had confirmed AI was not used in the essay in any way. The Editor (assuming he had not made a mistake) is thus requiring revisions (and later rejecting) the essay for that reason. There seems to be no issue here about the quality of the research (as we would not expect there to be). If the author had credited the use of AI, for argument's sake, then this essay, in theory, could have been accepted as it was? The Editor thus does not believe the author's statements that AI was not used (which is a serious allegation), and requires the essay to be rewritten in the author's style. So we are not talking about plagiarism per se, if we define this as the theft of someone else's ideas, just alleged misconduct: the allegation (false) that AI has been used in the writing of the essay or report. In the case of our students we also face the issue that they have used AI to generate the content as well as the text of their essays, which is plagiarism. We must really be precise about what we are accusing the students of doing and what is and what is not acceptable. And we must be able to evidence and demonstrate any misconduct and not rely on the "this reads to me like…" and "I can tell this is not…" kind of accusation. We are not going to go back to 3-hour closed-book examinations, so get real, it simply ain't going to happen; and, when AI is fully normalised in professional work and personal life outside academia, we are not going to get less of it but it will become more pervasive. What is now becoming apparent is that AI-produced essays are the norm and our students do not regard this as problematic. How we deal with this is the issue, but we are deploying analogue methods and procedures for a digital age.
My own view, for what it's worth, is that AI will simply render so much of what we have done in the past redundant, in the same way the power loom replaced the skilled home weavers.
I think we do take ourselves a bit too seriously at times. We are simply not empowered and have little influence outside our narrow areas of expertise and often precious little in our own Faculties and Universities. Artificial intelligence (AI) is a massive and rapidly expanding global industry, with the global market valued at approximately $244-391 billion in 2025 and projected to grow to over $4.8 trillion by 2033. This growth is driven by substantial investment, rapid technological advances, and increasing adoption across nearly all professional sectors. It's going to be transformative of our experience, especially that of the young. We might not like it (though some of us will), but it doesn't really matter if we do or we don't. We will simply have to succumb to events which are beyond our control, or detach ourselves from the process entirely.
