
AI governance is a duty of care, not a branding exercise
Universities are rapidly developing AI governance frameworks. Most are thoughtful, values-driven and ethically aware. They emphasise human-centred principles, integrity, innovation and responsibility.
Yet many educators report the same frustration: the frameworks are clear about why AI matters, but unclear about how to use it.
Institutions often hesitate to be specific, concerned that prescription might limit innovation. In practice, the opposite tends to happen. When boundaries are unclear, experimentation slows.
If universities want to support responsible AI adoption, governance needs to move from aspiration to application. Below are practical steps institutions can take to make that shift.
1. Translate principles into scenarios
Values such as “human oversight” or “responsible use” are too abstract to guide everyday practice.
Instead, governance should include short, discipline-sensitive scenarios. For example:
- Assessment design: is it acceptable for students to use AI to generate ideas? To improve grammar? To rewrite paragraphs?
- Feedback: can lecturers use AI to draft formative feedback, provided they review and amend it?
- Curriculum design: can lecturers use AI to generate reading lists or case studies?
Rather than issuing blanket permissions or prohibitions, provide example-based guidance that shows what acceptable practice looks like. This does not lock staff into rigid rules but gives them starting points.
International guidance, including UNESCO’s work on AI in education and the Beijing Consensus on Artificial Intelligence and Education, repeatedly emphasises the importance of defining roles and clarifying responsibilities. That clarity is practical, not philosophical. It should show up in real use cases.
2. Define levels of risk, not blanket rules
Not all AI uses are equal. Institutions can reduce anxiety by distinguishing between low-, medium- and high-risk applications.
For example:
- Low risk: using AI to summarise literature, generate quiz questions or draft internal documents
- Medium risk: AI-supported feedback or adaptive learning tools, requiring documented human oversight
- High risk: automated grading without review, surveillance-based monitoring or predictive analytics affecting progression decisions.
By mapping risk levels to required oversight, governance becomes proportionate rather than restrictive. Staff know when experimentation is encouraged and when additional review is required.
This approach reflects global recommendations that governance should be proportional and context-sensitive. It also prevents over-cautious blanket bans that stifle pedagogical development.
3. Make responsibility explicit
Liability is a common source of hesitation. If a lecturer uses AI in feedback and an error occurs, who is accountable?
Good governance answers this directly.
For example:
- The educator retains responsibility for all outputs shared with students
- The institution commits to supporting staff who follow approved guidance
- High-risk deployments require formal approval pathways.
When responsibility is shared and clearly articulated, educators are more willing to innovate. When it is ambiguous, they withdraw.
4. Provide tools, not just policies
Frameworks should be accompanied by usable resources:
- Template AI disclosure statements for assessment briefs
- Suggested wording for syllabus AI policies
- Decision trees for determining appropriate AI use
- Sample rubrics that integrate AI-aware criteria.
This transforms governance from a compliance exercise into a teaching support mechanism.
The Beijing Consensus explicitly calls for capacity-building and institutional support structures, not just high-level commitments. Institutions that take this seriously invest in practical enablement, not just documentation.
5. Pilot before scaling
Rather than issuing abstract policies across the entire institution, identify pilot departments willing to test structured AI integration.
Document:
- What worked
- What failed
- Where guidance was unclear
- What support staff needed.
Then revise governance accordingly.
This iterative approach aligns with international calls for monitoring and evaluation mechanisms. Governance becomes a living process rather than a static document.
6. Protect teacher agency explicitly
Many staff fear that AI governance is a prelude to automation or managerial surveillance. Institutions should address this directly.
For example:
- State explicitly that AI tools augment rather than replace academic judgement
- Prohibit the use of AI analytics for staff performance monitoring without consultation
- Reinforce that human decision-making remains central in assessment and progression.
This builds trust. Governance is about creating conditions in which experimentation is safe and values are protected.
7. Review annually, revise publicly
AI evolves rapidly. Governance should not aim for permanence.
Commit to:
- Annual review cycles
- Open consultation with staff and students
- Transparent revision processes.
When governance is visibly adaptive, educators feel permitted to adapt as well.
From vision to infrastructure
When educators know what is permitted, supported and expected, they experiment more confidently. When responsibility is defined and support is visible, innovation becomes collective rather than individual.
Global policy frameworks have already signalled this direction. They emphasise operational clarity, defined roles and institutional responsibility. The challenge now is local translation.
If AI governance remains at the level of values statements, institutions risk stagnation, inconsistency and quiet disengagement. If governance becomes practical, proportionate and iterative, universities can foster exactly what they claim to value: responsible, creative experimentation.
Garth Elzerman is a lecturer at Xi’an Jiaotong-Liverpool University, China.
If you would like advice and insight from academics and university staff delivered direct to your inbox each week, sign up for the Campus newsletter.