Beyond bans: AI-resilient and creative HE assessment design

To sustain academic integrity in an AI-present learning environment, educators must redesign assessment to foreground judgement, context and creative ownership, says Jasmine Mohsen
SP Jain London School of Management
5 Feb 2026

Whether students will use artificial intelligence is not the question universities should be asking any more. They already do. The question is whether universities will continue to defend assessment models that were designed for a pre-AI world, or whether they are willing to redesign assessment so that learning remains meaningful, inclusive and intellectually demanding in an AI-present environment.

We cannot sustain academic integrity through prohibition and surveillance alone. Instead, we must shift towards AI-resilient assessment design, including a critical re-examination of the dominance of essays and a serious engagement with creative, multimodal assessment formats that foreground judgement, application and ownership.

What AI bans get right, and why they remain insufficient

Bans on AI rest on a fragile assumption: that removing these tools restores meaningful learning. Historically, this assumption has rarely held. Calculators did not eliminate mathematical reasoning; spellcheck did not erode literacy; reference managers did not weaken scholarship. Instead, assessment practices evolved to place greater emphasis on conceptual understanding, interpretation and judgement.

AI represents a similar inflexion point, but with higher stakes. Detection tools are probabilistic, opaque and prone to false positives, and they flag some groups of students more often than others. They shift academic labour from pedagogy to policing and recast students as potential violators rather than developing learners. More critically, they fail to confront a deeper pedagogical problem: if an assessment can be convincingly completed by AI, banning the tool does not make the task intellectually robust; it merely makes honest completion more difficult.

From detection to design: principles of AI-resilient assessment

A more productive response to generative AI is to redesign assessment so that learning remains central regardless of tool availability. AI-resilient assessment does not imply permissive or uncritical use of AI. It means designing tasks that cannot be completed by substituting automated text generation for learning, because the learning resides in judgement, contextual understanding and decision-making rather than in written output alone.

Across disciplines, AI-resilient assessments tend to share three interrelated features.

1. Contextual grounding
Tasks are embedded in local data, lived experience or evolving real-world contexts that AI cannot convincingly fabricate. For example, business students may be assessed on their analysis of a live organisational challenge faced by a local small or medium-sized enterprise (SME) partner. In industry-embedded modules, students can work directly alongside practitioners; in my AI and supply chain module, for instance, students collaborate with the CEO and co-founder of the non-profit organisation StreetBox London. Such assessments require students to engage critically and innovatively with contemporary constraints to propose context-specific solutions.

Similarly, education students might design pedagogical interventions for a classroom setting they have directly observed, while health or social care students may be required to respond to case scenarios that evolve as educators introduce new information and constraints. In these tasks, relevance and specificity are prioritised over generic coverage.

2. Process transparency
Students must make their reasoning visible through reflective commentaries, annotated drafts, design rationales or short oral explanations. This may include a brief decision log explaining why they selected particular theories, data sources or design elements; an oral defence in which students justify trade-offs and acknowledge limitations; or annotations demonstrating how they incorporated feedback across iterations. These mechanisms shift assessment away from polished final outputs and towards intellectual ownership and reflective learning.

3. Evaluative judgement
Rather than reproducing content, students must critique, prioritise and defend their choices. This may involve comparing multiple theoretical frameworks and justifying their selection for a given context; evaluating ethical trade-offs in marketing or policy decisions; or critically assessing the limitations of AI-generated suggestions and explaining where human judgement intervened. While AI may support aspects of exploration or drafting, it cannot meaningfully complete these evaluative tasks on the student's behalf.

Rethinking the essay’s dominance 

Central to this redesign is a critical reconsideration of the essay’s dominance in higher education. Essays have long been valued for their ability to assess argumentation and synthesis. Yet in practice, many essay-based assessments reward fluency, structure and surface coherence more than judgement, originality or contextual insight – precisely the areas where generative AI now performs well.

This does not mean we should abandon essays altogether. Rather, institutions should question whether they are being used by default rather than by pedagogical necessity. In many cases, learning outcomes related to creativity, persuasion, application and professional communication may be better assessed through non-traditional formats.

Creative assessment as an AI-resilient strategy

One effective response is to move beyond standard written submissions and ask students to produce creative artefacts that demonstrate applied understanding. In my teaching, for example, I’ve supplemented traditional essays with:

  • Strategic posters translating theory into visual argument
  • Advertising teasers that require audience segmentation and ethical judgement
  • Campaign concepts supported by theoretical justification
  • Brand storytelling artefacts aligned with psychological and cultural frameworks.

In these assessments, the artefact itself is only part of the submission. Students must also submit reflective commentaries or deliver short presentations explaining:

  • Why particular creative decisions were made
  • How theory informed design choices
  • What constraints were encountered and how they were managed.

This combination ensures that students evidence learning not through surface polish, but through reasoned justification.

Similar approaches are emerging across disciplines. Engineering students design prototypes rather than write reports. Law students draft policy briefs or advocacy documents. Social science students produce podcasts or digital exhibitions accompanied by methodological reflections. These formats mirror real-world professional practice while remaining academically rigorous.

Crucially, while AI can assist with ideation or drafting, it cannot replicate situated judgement, contextual sensitivity or the defence of creative and ethical choices.

Integrity through ownership, not prohibition

Reframing assessment in this way shifts the meaning of academic integrity. Rather than asking whether students have used AI, educators ask whether students can demonstrate understanding, ownership and judgement. Integrity becomes embedded in design rather than enforced through surveillance.

This approach also restores academic agency. Instead of acting as investigators reliant on imperfect detection tools, educators reclaim their role as designers of learning environments that demand intellectual accountability.

AI bans offer reassurance in uncertain times. They are simple, visible and administratively attractive. Yet they also preserve fragile assessment designs, widen inequality and push AI use underground.

AI-resilient and creative assessment is more demanding. It requires reflection, redesign and a willingness to move beyond inherited conventions such as the default essay. But it is also more honest about the world students already inhabit and more aligned with the deeper purposes of higher education.

Jasmine Mohsen is a lecturer (assistant professor) at SP Jain School of Global Management, London.

If you would like advice and insight from academics and university staff delivered direct to your inbox each week, sign up for the Campus newsletter.
