Academics are increasingly allowing artificial intelligence (AI) to be used for certain tasks rather than demanding outright bans, a study of more than 30,000 US courses has found.
Analysing the guidance provided in course materials at a large public university in Texas over a five-year period, Igor Chirikov, an education researcher at the University of California, Berkeley, found that the highly restrictive policies introduced after the release of ChatGPT in late 2022 have eased across all disciplines except the arts and humanities.
Using a large language model (LLM) to analyse 31,692 publicly available course syllabi between 2021 and 2025 – a task that would have taken 3,000 human hours with manual coding – Chirikov found academics had shifted towards more permissive use of AI by autumn 2025.
Academic integrity concerns were the main talking point regarding AI in 63 per cent of course materials in spring 2023, but this share fell to 49 per cent by autumn 2025.
Instead, policies shifted towards calls for students to attribute their AI use, which was cited in only 1 per cent of syllabi in early 2023. By the end of 2025, this figure was 29 per cent, according to the working paper, titled “How Instructors Regulate AI in College: Evidence from 31,000 Course Syllabi”, published in Berkeley’s open-access repository.
“References to AI as a learning tool remain relatively rare at 11 per cent by fall 2025, though this represents growth from near zero,” notes Chirikov.
Course materials that mention AI have instead moved towards policies that explicitly restrict or permit AI use depending on the specific task, the paper continues.
When policies mention drafting or revising, 79 per cent ban the use of AI, it explains. For reasoning and problem-solving, 65 per cent prohibit AI use, but for coding or technical work only 20 per cent do so, and for editing or proofreading the proportion is 17 per cent.
The shift in AI policies revealed academics had exercised “professional judgements…during a critical period of AI adoption”, argued Chirikov, who told Times Higher Education that it was clear scholars are “warming to” and “experimenting with” AI use rather than banning it.
“More importantly, instructors are actively redesigning courses by adding new assignments where students are expected to use AI, which does not really fit the idea that faculty have simply stopped trying [to police AI use],” he said.
“There is also a not insignificant number of courses that go further and treat AI as a tool students should use throughout the course, including on exams. That points to a group of instructors who are not just tolerating AI but trying to integrate it into assessment and learning in a deliberate way.”
The perception that scholars favour blanket bans on AI use in assessment is increasingly out of date, continued Chirikov.
“Outright bans tend not to persist, possibly because they are hard to enforce. Instructors increasingly specify when AI use is acceptable and for which tasks, which aligns better with how learning actually happens,” he said.
Chirikov said the shift in perceptions of AI, from its threat to academic integrity to how it can enhance learning, was the “right conversation” to have with students.
“Academic integrity still matters, but the bigger question is what students are practicing and what skills they are building. More instructors are starting to frame AI in terms of learning, including when it supports practice and when it might replace the practice students need,” he said.