
Designing assessments with generative AI in mind

A new era of AI requires a balance between thoughtfully deterring and responsibly promoting student use of new tools

Kate Crane
1 May 2024

Created in partnership with Dalhousie University


Just 18 months after the launch of ChatGPT, our world is now shot through with the use, results and implications of generative artificial intelligence (AI). On campus, all areas of teaching, learning and student life have felt its impact, though probably none quite so much as assessment. 

It might be tempting to simply ban generative AI use in courses to avoid having to make changes to assessments (and to avoid complicity in the use of dubiously ethical technological tools), but this approach would miss out on opportunities to prepare and empower students, as well as to create more effective ways of assessing student learning. Though one may, after consideration, ban its use (there are good reasons to do so, pedagogical and otherwise), the fact remains that generative AI has quickly permeated everyday life and most professional work contexts. Students and instructors must spend time carefully charting ethical pedagogical paths through this new terrain. Thoughtful deterrence and responsible promotion, pursued through non-punitive and open co-exploration, are the two sides of the design coin for effective and empowering AI-era assessments.

The four broad strategies outlined below can help faculty chart a path forward. Each strategy amplifies the effectiveness of the others, so implementing an idea or two from each will put you and your students in a strong position to continue meeting the changes and challenges generative AI brings.

Employ relational and transparent pedagogies

When education feels like a transaction, students will respond in kind by making transactionally motivated decisions, such as using AI to reduce time spent on required assessments, especially if they do not understand the rationale behind the assessment in the first place. In whatever ways suit your pedagogical style, employ strategies that build relationships and increase pedagogical transparency whenever you see an opportunity. This might look like:

  • Auditing your assignments against what you know about these tools, discerning which aspects would retain their integrity even with AI use. Is your main concern that an essay demonstrates a specific structure and linguistic clarity? If so, using AI for brainstorming might not detrimentally affect student achievement of that outcome. Share this audit with your students – a demonstration of careful pedagogical consideration shows them that you are not arbitrarily prohibiting a tool that they perceive as useful or important.
  • Seeking students’ thoughts and concerns, and sharing yours, in an early-term class session, showing that you want and appreciate their critical judgement. Incorporate some of these into course policies or expectations.

Get creative with assessment structure, environments and load

The prospect of four or five big, unwieldy essays or projects due for each course, all in the same month, will have students seeking shortcuts. Attend empathetically to the myriad challenges of contemporary students by altering the structure of assignments and where they might be done, and by questioning the impulse to do “what we’ve always done.” Consider these ideas:

  • Break down assignments by scaffolding or multi-staging, including detailed, step-by-step instructions. Removing the stress of managing an entire complex assignment can reduce the impulse to procrastinate or to seek AI shortcuts.
  • Assignment end-products, or parts of the assignment process, might be better matched with a medium other than writing. A one-paragraph research topic assignment could be delivered verbally during office hours, or via an audio clip on the learning management system.
  • Create collaborative and communal assessment environments, bringing what is usually done alone and outside the classroom into the classroom. Consider approaches such as in-class writing circles that aim to help students overcome typical obstacles to writing (this example takes only 15 minutes) and collaborative final exams, such as the “Tentarium”, which combines the written exam and the seminar formats: students write the exam, then discuss with peers and tutorial leaders, then revisit the written exam to revise (more information below). 

Emphasise process over product

Most instructors probably include verbs such as “interpret”, “distinguish”, “evaluate” and “investigate” in their assignment learning outcomes. The road to those intellectual actions is rocky and winding; students need to stumble around, experiment, engage in false starts, revise, start over, collaborate and discuss. So why do we so often solely require – and solely reward – a single, polished artefact (such as an essay) and, in doing so, prioritise product over process? Consider how your assessments might require and reward that messy process, not just the final product.

Incorporate a critical AI lens

Students need the time and support to critically examine and explore new technological tools and to imagine the implications (positive and negative) for their lives and the lives of others in their communities. They also need time to develop their ethical stances. Here are some ideas:

  • Co-create course components such as syllabus language, assignment instructions, rubrics or course etiquette. Tease out the specific implications of AI use.
  • Set assignment prompts that help students explore the implications of AI use for their disciplines, their lives or their learning. Include readings in your course that discuss the pros and cons that students can draw upon in their explorations.
  • Test out the AI together. Compare human and AI output (a piece of creative writing, a laboratory experiment) to understand the technology’s limits (and be aware of any institutional policies surrounding the use of third-party software, data security and privacy).

Faculty do not have to have AI (and their ethical and pedagogical stances towards it) completely figured out before redesigning their assessments. In fact, building the “figuring it out” into assessments is a big part of what makes them preparatory and empowering, developing students’ AI literacy and critical engagement.

Kate Crane is an educational developer (online pedagogies) at Dalhousie University.

Information on the Tentarium approach: page 94 in this resource.
