Is mass cheating the inevitable result of AI’s rise?

The brief launch of an app promising to attend students’ lectures as well as write their assignments caused some academics to despair at a dystopian near-future in which learning becomes a pointless sham. But others believe the abyss can be bypassed. Juliette Rowsell reports

Published on April 7, 2026
Last updated April 4, 2026
Student in a lecture with a computer loading bar superimposed.
Source: Getty Images/iStock montage

The launch of Einstein AI in February was seen by many academics as the sudden and premature realisation of their worst nightmare.

The edtech tool promised to log into students’ virtual learning environments every day to watch their lectures, read essays, write papers, participate in discussions and submit their “homework”.

The tool prompted an online cry of horror from educators, who branded it “eduwashing” and an “AI cheating company”. In a Bluesky post that elicited more than 700 reposts, Aparna Nair, an assistant professor at the University of Toronto, voiced the question that many colleagues were asking themselves faced with the prospect of their students lazing in bed while a machine did all their studying for them: “What even is the fucking point?”

The tool has since been taken down, but only after it was threatened with legal action for intellectual property infringement by CMG Worldwide, which manages the licensing rights for the Einstein name on behalf of the Hebrew University of Jerusalem.


Of course, Einstein AI is far from the first online tool that has strained the boundaries of academic integrity. A Sydney court recently found that the homework help platform Chegg had facilitated cheating by students at Monash University, for instance. And the likes of ChatGPT and Google have introduced study modes in a bid to act as “personal AI tutors” and to “guide” students. But the companies insist these tools have been “guided by pedagogy” and developed alongside academics. Critics of Einstein AI claimed that its blatant promise to do all students’ work for them marked a watershed moment in edtech branding.

Dave Hitchcock, course director of the History Subject Suite at Canterbury Christ Church University, said the launch of Einstein AI was a “ripping the mask off” moment. Its launch showed that the edtech revolution “isn’t about education. This isn’t about learning. This isn’t about anything they say it’s about – it’s about money.”


So while Einstein AI’s existence was very brief, is it inevitable that it will reappear in another guise – or be copied by other firms? Is mass cheating the unavoidable destination of higher education, nullifying its whole purpose – and destroying both universities’ business model and that of the edtech firms, which would no longer have any students to sell products to?

A robot removing its mask to reveal dollars.
Source: iStock montage

A recent report by the UK’s Higher Education Policy Institute (Hepi) found that even though ChatGPT only launched in late 2022, AI use among students is now “near universal”, with 95 per cent of students using it in at least one way and 94 per cent using it to help with assessed work.

One consequence of this, according to Hitchcock, is that “less overall preparation is being done” for academic tasks. For instance, he is increasingly forced to rely on backup materials after students turn up to class not having done the required reading, relying instead on in-class ChatGPT summaries.

The widespread and partially accepted use of AI by students means that “what were once easily understood concepts – cheating and academic plagiarism – are now a lot harder to define”, Hitchcock added. “That basic process question of, ‘did you do it?’ has become fundamentally up for grabs and that really shatters a lot of basic assumptions we make about how education works.”

Michael Draper, professor in legal education at the University of Swansea, painted a similar picture, noting that student engagement is “decreasing year-on-year”. In seminars, the “pause” after he asks a question is “getting longer and longer – and you know it’s not because they’re searching within their own notes: it’s because they’re sticking it into a chatbot to get the answer,” he said.

“Everything’s just-in-time…Maybe before the pandemic, or even prior to that, you actually might have had a discussion with students because they’ve actually done work in advance…But now you go in and you ask a question and they’ve all got their laptops open…waiting for the [chatbot’s] answer to come back.”

For Draper – who is also the chair of Swansea University’s Academic Regulations and Student Cases Board – AI has caused a perfect storm, alongside the cost of living and universities’ own dwindling resources. While student-to-staff ratios in the UK can be as high as 25:1, the “ideal” is between 10:1 and 15:1, Draper said. It is “very difficult” to develop relationships with students in such large cohorts, he explained – and that is made all the harder by the need for students to undertake paid work, limiting the time they have available for studying.

On that point, a report by Hepi and Advance HE, published last June, found that the number of hours spent on independent study has fallen to 11.6 hours a week, down from 13.6 hours in 2024 and 15.7 hours in 2021 – a 26 per cent decrease in five years. But the report said this “is perhaps understandable given the large majority of students who work for pay”. It found the proportion of students in term-time employment has reached 68 per cent, rising from 56 per cent in 2024 and just 35 per cent in 2015.


All of this, of course, only heightens the temptations for students to use AI to take major shortcuts on their coursework.

Hitchcock also believes that AI has heightened a trend according to which “the process of education is being downplayed in favour of the outcome”, with students interested in a degree primarily for the advantage it confers in the job market. The process of learning itself has been reduced to “just a thing one needs to go through and [to minimise, in order] to be as efficient as possible…Obviously that’s an incredibly impoverished understanding of what learning is.”

A student waitress cleans a table while a robot works on a laptop.
Source: iStock montage

For Dan Sarofian-Butin, a professor at the School of Education and Social Policy at Merrimack College, Massachusetts, the impact of AI has become all-consuming. “I talk more about AI than I talk about my wife because I’m obsessed with it by now,” he said. “My wife’s pretty tired of it…My brain has been hurting for three years.”

Initially, he was positive about the potential for AI to improve learning if educators employed it wisely. But more recently he has become less optimistic. The traditional “transmission” model of learning, which sees academics “transmit” knowledge to students through lectures and seminars, has been “broken” by AI, said Sarofian-Butin – because “AI can do it better than all of us”.

Hence, “just about every single one of my college students cheats with AI and they do it – to get to the existential point – because they don’t want to put in the work because it’s so much easier not to think,” he said.

Academics are, naturally, on the back foot. Today’s students, born in the late 2000s, are generative AI natives, and the younger ones used such technologies in high school, too. They know how to “cheat” – and they know how to “mask” their use of it, Sarofian-Butin said. 

“It’s so hard to be a police officer. That’s not what I signed up for as a professor.”

Neither did Hitchcock. For him, rampant AI-powered cheating is breaking the bonds of trust between student and educator, which are “fundamental to education…There is no point at all in me being in a classroom with students I cannot trust to do the work.” Echoing Nair’s point in less profane terms, he added: “We may as well all pack up and go home. Nothing will happen. There will be a profound waste of everybody’s time.”

Swansea’s Draper agreed that there is now always a “degree of suspicion” about students’ work submitted under unsupervised exam conditions – a relic of the Covid era, when students were first permitted to complete their assessments at home. For that reason, Draper has returned to in-person handwritten exams. But even this may not be enough to ensure integrity as tech grows ever more sophisticated. Meta, for instance, sold more than 7 million pairs of its AI glasses last year; these allow internet access and include tiny speakers, microphones and a camera.


There is a tech “arms race”, Draper said, which is “just ratcheting up all the time”. And he questioned – only “partially” tongue-in-cheek – whether exams will have to be done in “Faraday cages” – enclosures that block wireless signals – to give universities the peace of mind that exams are not being undermined.

However, he noted that the need for extra checks on students before they go into exam halls will throw up ethical problems since ensuring that they do not have AI-powered tech hidden on them isn’t as simple as “checking for a calculator”.

Airport security measures at the entrance to an exam hall.
Source: Getty Images/iStock montage

Universities are increasingly trying to catch up with student AI use by embracing it – subject to certain usage rules. Some institutions, including the universities of Oxford and Edinburgh, have partnered with OpenAI, and in January the University of Manchester announced the first partnership between a university and Microsoft Copilot.

Duncan Ivison, Manchester’s president and vice-chancellor, said banning AI use among students “isn’t a genuine option” and is “just hopeless”.

“We can’t bury our heads in the sand,” Ivison said. “Our students are using it. Our colleagues are using it. And I think we have a responsibility to come up with responsible and ethical ways of deploying it, at the same time as trying to understand what the potential benefits could be [and] what the dangers are.”

Richard Watermeyer, professor of higher education at the University of Bristol, agreed that there needs to be a “de-escalation around the narrative” surrounding AI, particularly on the part of the AI abolitionists.

Last year, for instance, he co-led a major study into how AI might be incorporated into preparations and assessment in the UK’s Research Excellence Framework, which is used to allocate nearly £2 billion in research funding every year. Initially, people “didn’t [even] want to have a conversation” about AI, he said – obliging him and his colleagues to “force” that conversation.

Part of the aim of doing so, Watermeyer said, was to challenge what academics were “seeking to uphold” when they pushed back against AI, and to question why they believe “analogue was better”. In asking these questions, they sought to move the conversation beyond the “knee-jerk, ideologically politicised” position of “this is bad – which is a very binary way of looking at something”. And they succeeded: “people became much more open” to the idea of using AI in the REF.

The same ethos should be applied to students’ AI use, Watermeyer believes. He said the court cases being pursued by large numbers of UK students seeking compensation for the disruption to teaching during the pandemic – which have seen UCL forced to offer major payouts to thousands of students – could be seen “cynically”. But they could also be viewed as reflecting students’ genuine sorrow that “they were being deprived of the kinds of pedagogical experiences that universities are bloody brilliant at. I think that provides a really interesting window in terms of saying students want more. Let us not misassume the intentions of our students,” Watermeyer said.

But part of the challenge for academics warming to AI is “stigmatisation”, he conceded. People are nervous that “any positive comment” might see them pegged by colleagues as a “flag waver of AI – and if they’re that, what does that reveal about them?”. 

And while he shares concerns about the dangers of AI technologies and criticisms of edtech firms, his concerns centre more on the ethics of “digital capitalism, [and] less about the tools being themselves inherently evil”.

An example of that suspicion of AI firms can be seen in the open letter that has been submitted by Edinburgh academics, calling on the university to end its partnership with OpenAI on the grounds that it “conforms neither to the University’s stated AI Principles, nor our Responsible Procurement Policy” in areas such as carbon emissions, fair work practices and “the University’s human rights related policies”. The letter also raised concerns about OpenAI’s transparency and accountability.

But, for his part, Sarofian-Butin is more conflicted. AI makes him feel “like I’m bipolar. Some days I love it like it’s the best thing in the world. Some days I just want to rip all my hair out.” And the good days – when he believes that AI, if “done right”, “can be so powerful” and is “absolutely the future of education” – continue to outnumber the bad.

Moreover, he believes his overworked brain has finally come up with how to get closer to that bright future. He has redesigned 80 per cent of his assessments, so that now students submit short 200-400 word reflections after each seminar. In these, they summarise their understanding of what they discussed and comment on how they might have used AI to aid their understanding. 

However, while this approach is working for him, he believes that academics need to be given more opportunity and incentive to similarly experiment with AI, so that they can come to their own understandings about how it can be used to aid their work. Many academics feel that they lack the time to do this, but the problem runs “deeper”, according to Sarofian-Butin, hinging on the fact that most academics would rather use their spare time for research.

“None of us want to put in the work to become better teachers because we were never trained to be teachers. We were trained to be researchers, professors, [to do] PhDs,” he said.

“So when I stand in front of the classroom, I want to have that transmission model of education work…I want to do things the old way. I think, sadly…we’re going to change [only if the impetus] comes from the outside: when we’ve realised just how broken our system is.”

But, ultimately, he himself “got into teaching because I wanted to actually educate my students”. And AI “finally gives me the right tool to get them to actually learn – and not just the students in the middle, not just those honour students, but every student”, he believes.

“I have to know how to do it the right way. But if I can do it the right way, I can teach every single one of my students the stuff that I care about. That makes me unbelievably excited.”


