
‘Students don’t have to prove authorship of every word; they show their supervision of AI tools’

When all students are required to use generative AI for every assignment, their practice can be more rigorous, transparent and deeply reflective. Here, Tiatemsu Longkumer explains a rubric
Royal Thimphu College
5 Jan 2026


From China’s imperial exams to the blue books of American universities, essays have been higher education’s preferred tool for testing knowledge and reasoning for centuries. Their appeal was evident: a good essay reveals not only what a student knows but how they think. Generative artificial intelligence unsettles this tradition. If a system can produce fluent prose in seconds, what are we assessing when we reward polished writing?

The temptation is to treat AI as a threat to integrity. I think that is a mistake. If we continue to prize surface-level polish, we incentivise students to outsource intellectual labour to machines. A better alternative is to design assessments that make thinking visible. That means embracing AI as a classroom collaborator and shifting the emphasis from what was written to how and why it was written.

In my courses, students must use AI for every assignment. They are graded on the essay itself, but also on the quality of their interaction with the tool and on the reflective evidence they produce about that process.

Why require AI?

Bans are both impractical and short-sighted. By requiring AI use, I normalise transparency: students document prompts, decisions and revisions in plain sight. More importantly, they build AI literacy: the ability to prompt effectively, evaluate outputs and integrate or reject suggestions with discernment.

This design also reframes integrity. Rather than demanding that students “prove” authorship of every word, I ask them to show their supervision of the tool: how they guided the model, where they exercised judgement and what critical decisions shaped the final text. Fabricating this trail often takes more effort than engaging honestly.

A rubric that rewards process

The backbone of this approach is a rubric that evaluates both product and process. It has two required artefacts: a system prompt and an AI reflection.

1. System prompt (not a chat prompt) 

Students craft a system prompt (an instruction manual that sets boundaries for how the AI should act). A strong prompt defines:

  • role (“You are an educational tutor in linguistic anthropology”)
  • mission (guide the student to analyse and reflect)
  • process (clarify, probe, request evidence, enforce structure)
  • tone and boundaries (supportive, Socratic, cites sources, avoids writing final answers unless asked).

For novice users, tutors may provide the initial prompt; as fluency grows, students design their own. In my classes, AI is cast as a Socratic partner, a questioner that elicits critical thinking, not a dispenser of answers.
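For illustration, a minimal system prompt along these lines might read as follows. The wording is my sketch, built from the four elements above and the linguistic-anthropology example; it is not a prompt taken from the article:

```text
You are an educational tutor in linguistic anthropology.

Mission: guide the student to analyse and reflect on the assigned
topic. Do not write the essay for them.

Process: begin by asking the student to state their thesis. Clarify
ambiguities, probe assumptions, request evidence for each claim and
enforce a clear argumentative structure before moving on.

Tone and boundaries: supportive and Socratic. Cite sources where
possible. Do not produce final answers or finished prose unless the
student explicitly asks.
```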

2. AI reflection 

Every submission includes a concise reflection in which students:

  • summarise the system prompt they used
  • summarise the prompts they issued during the dialogue
  • critically evaluate the AI’s output (what was useful, what was biased or limited, and what required fact-checking), documenting how they verified claims against authoritative sources (class notes or tools such as NotebookLM) to distinguish accurate synthesis from plausible hallucination
  • write a brief personal reflection on their learning and the editorial decisions they made. 

Because reflection requires synthesising specific interactions from their chat log, students cannot bypass the necessary metacognitive work. This turns the writing process into observable evidence. It makes critical thinking, ethical judgement and intellectual independence both visible and gradable.
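A reflection structured around these four elements can follow a simple template. The headings below are an illustrative sketch of such a template, not a form prescribed in the article:

```text
AI reflection (approx. one page)

1. System prompt: the system prompt I used, summarised in 2-3 sentences.
2. Dialogue summary: the main prompts I issued, and why.
3. Critical evaluation: what the AI got right, where it was biased,
   limited or wrong, and how I verified its claims (e.g. against class
   notes or NotebookLM sources).
4. Personal reflection: what I learned, and the editorial decisions
   that shaped the final essay.
```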

How the rubric works in practice

Six criteria, graded from “poor” to “excellent”, produce a detailed picture of each student’s engagement.

  1. Understanding of AI output: Does the student accurately paraphrase, critique or extend AI responses (rather than copy them)?
  2. Quality of responses to AI questions: Do their answers show depth, evidence and originality, not just agreement?
  3. Asking questions and seeking clarification: Do they demonstrate initiative, probing ambiguities, requesting sources and pushing for alternatives?
  4. Alignment with assignment objectives: Are they using AI to engage with the specific concepts and outcomes that the tutor is assessing?
  5. Directing and sustaining the conversation: Do they steer dialogue towards synthesis (rather than passively following suggestions)?
  6. Depth of critical reflection and analysis: Do they move beyond opinion to evaluate claims, counter-examples and trade-offs?

From dialogue to final essay

The assignment follows a distinct workflow. It begins with the “dialogue”: a sustained, back-and-forth exchange where the AI, governed by the system prompt, acts as a Socratic tutor. Instead of generating text for the student, the AI asks questions, challenges assumptions and probes for evidence. The student must articulate their arguments and defend their position. This exchange continues until the student is satisfied that they have sufficiently explored the topic and met the assignment objectives.

This approach sidesteps the arms race of AI-detection tools, which are unreliable and punitive. Because AI use is expected and documented, the integrity question changes from: “Did the machine write this?” to “How did the student supervise and critique the machine?” We reward intellectual labour that AI cannot automate: curiosity, judgement, ethical reflection and contextual insight.

What students (and staff) gain

Generative AI is not a passing disruption; it is the new backdrop of academic life. Pretending otherwise does our students a disservice. By requiring AI use and grading the process through a transparent rubric, we cultivate the skills graduates will need most: critical engagement with technology, ethical oversight and the ability to transform machine fluency into human insight.

The essay is not dead. It is evolving. And with the right rubric, assignments that mandate AI can be more demanding – and more rewarding – than ever.

Tiatemsu Longkumer is a senior lecturer teaching anthropology of religion, ethnographic writing, and society and technology at Royal Thimphu College, Bhutan. 
