
Vibe coding belongs in your university’s GenAI literacy strategy
Most institutional artificial intelligence literacy programmes stop at the same point: teaching staff to use tools. Write better prompts. Try this chatbot. Here’s how to spot hallucinations. All useful – but it leaves educators as consumers of technology that someone else designed, with someone else’s students in mind. When the tools don’t quite fit the context, there is no recourse but to wait.
Unesco’s AI competency frameworks for teachers and students recognise this gap. Both place “create” at the top – above “acquire” and “deepen”. The intended apex of AI literacy is not using tools well. It is building with them. Yet almost no institution trains educators to reach that level, because until recently it wasn’t realistic: you needed to be able to write code.
Vibe coding changes that equation. The term describes building software by describing what you want in plain language to an AI coding assistant. Research shows it thrives in exactly the contexts where educators work: early-stage prototyping, iterative design and solving specific problems for specific learners. The bottleneck shifts from writing code to describing intent – and describing pedagogical intent is what educators already do. This is a new and genuinely accessible form of AI literacy, and it’s one the sector should be taking seriously.
What we built and why
We built SmartTextbook – an open-source tool that converts a chapter or URL into interactive quizzes, mind maps, glossaries, structured summaries and an embedded AI tutor that answers student questions in real time. Students don’t just read – they engage with content through multiple modes, and educators retain full control over what gets published.
We didn’t start from scratch, and that matters. In 2025, Google’s LearnLM team published research on an AI-augmented textbook – personalised, multimodal, with 11 per cent higher retention in randomised trials. Impressive, but built by 34 researchers on proprietary infrastructure. For most university lecturers, “read this paper and build something similar” has never been a realistic instruction – until now. We took their research-proven feature list as a blueprint, then used vibe coding and open-source tools to build our own version.
That practice – borrowing empirically validated ideas from well-resourced research teams and implementing them through plain-language AI conversation – is what makes vibe coding genuinely powerful for educators. You don’t need to invent the pedagogy. The research literature already tells you what works. What’s been missing is the ability to act on it yourself.
Four lessons from building with GenAI
Borrow from proven research, then build: Start with what the evidence says works, not with what the technology can do. We took Google’s published feature set and described each behaviour to an AI coding assistant in plain language.
The gap between reading a research paper and implementing its findings has always been wide in education – limited by funding, technical capacity and institutional pace. Too often, promising findings sit in journals while educators lack the means to translate them into practice. Vibe coding narrows that gap dramatically. If you can articulate what a piece of research recommends, you can start building it.
Describe intent precisely – then ask the tool what could fail: Vibe coding rewards the kind of clarity that good teaching already demands. Describe what should happen, for whom, under what conditions – as if briefing a capable but uninformed colleague. Then go further: ask the AI to anticipate what could go wrong.
When we did this for our quiz scoring feature, the tool predicted that asynchronous state updates might count a student’s final answer twice – then ran the test, found exactly that bug in our implementation and corrected it before a single student encountered it. The educator defines the intent; the AI stress-tests it. That division of labour is what makes vibe coding so well suited to people who already think carefully about how learning can break down.
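The fix for that class of bug is to make the score update idempotent. This is a minimal sketch in Python – not SmartTextbook’s actual code, and the function names are invented for illustration – showing the idea: key each score by question, so a duplicate asynchronous event overwrites rather than adds.

```python
def record_answer(scores: dict, question_id: str, correct: bool) -> dict:
    """Record a student's final answer exactly once per question.

    Keying by question_id makes the update idempotent: if an asynchronous
    state update fires twice for the same question, the second write
    overwrites the first instead of awarding a second point.
    """
    scores = dict(scores)  # copy, so state updates never mutate in place
    scores[question_id] = 1 if correct else 0
    return scores


def total_score(scores: dict) -> int:
    """Sum one point per correctly answered question."""
    return sum(scores.values())
```

With this shape, replaying the same event is harmless: two identical updates for the same question still yield a total of one point.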
Embrace the open-source ecosystem: You don’t have to build everything yourself, and you shouldn’t depend on any single company’s infrastructure. Open-source libraries solved problems for us in minutes that would have taken weeks to build from scratch – a diagramming library for mind maps, a repair tool for malformed GenAI outputs. We designed SmartTextbook to support 11 different AI providers, so no institution is locked into one vendor. And because the project itself is open-source, anything we built can be adapted, improved or reshaped by other educators for their own students. Building in the open means your work becomes a starting point for someone else, not a dead end.
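Supporting many providers without vendor lock-in usually comes down to a small adapter layer. The sketch below is a simplified illustration in Python – SmartTextbook’s real implementation will differ, and the names here are hypothetical – of the pattern: the rest of the application calls one function, and each vendor is just a registered adapter.

```python
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Provider:
    """One AI backend; `complete` maps a prompt to a model reply."""
    name: str
    complete: Callable[[str], str]


REGISTRY: Dict[str, Provider] = {}


def register(provider: Provider) -> None:
    """Add a backend to the registry; adding a vendor touches no app code."""
    REGISTRY[provider.name] = provider


def ask(provider_name: str, prompt: str) -> str:
    # Callers never touch a vendor SDK directly, so switching providers
    # is a configuration change rather than a rewrite.
    return REGISTRY[provider_name].complete(prompt)


# A stub "provider" for demonstration; a real adapter would wrap a vendor SDK.
register(Provider("echo", lambda prompt: f"echo: {prompt}"))
```

Because every provider sits behind the same interface, an institution can swap models by changing one configuration value.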
Never outsource academic judgement: AI-generated quiz questions occasionally contained factual errors – invisible to the model but obvious to a subject expert. We built an automated content-check into the publishing workflow as a first pass, but the final call on accuracy always belongs to the educator. This is perhaps the most important lesson of all: vibe coding scales what you can build. It does not replace what you know.
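An automated first pass like the one described can be as simple as flagging items that fail basic structural checks, leaving everything it flags for the educator to adjudicate. This is a hedged sketch, not SmartTextbook’s actual check – the field names (`question`, `options`, `answer`) are assumptions for illustration:

```python
def flag_for_review(items: list) -> list:
    """First-pass automated check on generated quiz items.

    Returns the indices of items an educator must review: malformed
    structure, an out-of-range answer index, or a duplicate question.
    The final call on factual accuracy always stays with the educator.
    """
    flagged = []
    seen = set()
    for i, item in enumerate(items):
        ok = (
            isinstance(item.get("question"), str) and item["question"].strip()
            and isinstance(item.get("options"), list) and len(item["options"]) == 4
            and item.get("answer") in range(4)
            and item["question"] not in seen
        )
        seen.add(item.get("question"))
        if not ok:
            flagged.append(i)
    return flagged
```

Note what this does and does not do: it catches mechanical failures cheaply, but a question can pass every check here and still be factually wrong – which is exactly why the publishing decision stays human.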
Where to start
Find a piece of research that describes what works for your students. Open an AI coding assistant – Replit, Cursor or similar. Describe what you want in plain language. Build one version. Test it with real content. Share what you make. You don’t need a development team or a budget line – just a clear idea and the willingness to describe it. That is what “create”-level AI literacy looks like in practice.
Simon Wang is a lecturer in English, Nancy Guo is a lecturer and Kaitai Zhang is a graduate student, all at Hong Kong Baptist University.

