Teaching intelligence: the benefits of diversifying assessment

While teaching a course on Margaret Atwood at the University of Reading, Madeleine Davies was pleased to find that her students “were amazing – they had the most fantastic conversations and the most original ideas”.

However, during the course’s exam those exciting insights and ideas evaporated. “They just trotted out what they knew would be a safe answer and took no risks at all,” said the associate professor of women’s writing. “It meant all those potentially high first-class ideas were gone.”

Dr Davies knew something had to change, “and it had to be the exam…it wasn’t bringing out the best in them”.

Embarking on a collaborative research project on “diversifying assessment”, she began looking at ways of moving away from conventional assessment formats. Dr Davies decided to use a learning journal, where students “could go a bit mad” and submit in a variety of forms, responding personally, creatively and critically to their set texts.

The students would submit 10 entries of 500 words each but select only five for assessment. “They produced the most astonishing works. I could see how they were engaging with the texts; they were making connections that even I hadn’t identified. They were riveting,” she said.

Dr Davies believes the approach works because students like handrails, even though relying on them doesn’t necessarily engage their critical thinking. “At the same time, within a diverse cohort not every student has been drilled in the art of essay writing – but you need to find a way to reward those students, especially as the workplace is unlikely to demand an essay of you, but it may demand other forms of writing,” she said.

It was important to give students early feedback, so they didn’t feel they were working blindly, she said. “Tell them that you want their most innovative work – but also still want good grammar, spelling, etc – and you begin to see them have the confidence to be much more innovative.” She also retained the classic long-form essay, so as not to overhaul the entire assessment for that course.

Dr Davies admits it did increase her burden – each journal ends up at 5,000 words – “but what you get back is so worth it”. In fact, she was so impressed with the work that she has produced a staff-student collaborative book made up of some of the submissions.

Geoffrey Crisp, deputy vice-chancellor (academic) at the University of Canberra, is also a firm believer that inspiring students to engage with assessment is critical for learning.

At previous institutions, Professor Crisp has used “negotiated assessment”, in which students devise their own assessment and negotiate it with the academic or teacher; he is now hoping to implement the approach at Canberra.

Students are told the learning outcomes and rubrics, then are put into groups and asked to come up with a task that demonstrates they can achieve them. If their instructor doesn’t think it will work, they have to revise it.

“Involving the student in the process is a key motivation for moving away from conventional forms of assessment,” he said. “For most students, [assessment] is something that is done to them. It’s very rare you hear a student say they were inspired by the task, but this way they are brought along and get a lot more out of it.”

It also helps them become less dependent on the teacher. At the beginning, he said, they do need a lot of support, but by the end they should know what a good piece of work is. And this method does not make it easier. “Students have told me their assessment was harder than an exam!”

Professor Crisp believes every discipline can use negotiated assessment but cautions that it will not work for every module. Some colleagues may be sceptical, but he advises bringing them in slowly, adding that when it was introduced at RMIT University, where he previously worked, the overall average marks for the class went up across assessments.

Ian Curran, vice-dean of education at Duke-NUS Medical School in Singapore, also believes that assessment should do more than prove the institution is “a knowledge factory”.

For Professor Curran, the most pressing problem has been the coronavirus pandemic and how to assess students in such a practical discipline. When Singapore went into lockdown earlier this year, he still wanted medical exams to test not only knowledge but other skills, such as communication and how students interacted with patients and doctors.

“Some institutions graduated their medical students without formally assessing them, but high-stakes practical exams, which test skills that cannot be evaluated by traditional written examinations of knowledge, are too important,” he said.

So, while in many ways the medical sector had already recognised the need to diversify assessment, the pandemic gave Professor Curran the impetus to take action.

“We had to find very creative ways to make that happen; simulating real-life scenarios during the Covid-19 lockdowns was a challenge,” he said. Duke-NUS academics developed a strategy whereby the assessments of final-year medical students’ clinical and procedural skills could still be carried out safely and reliably under robust Covid-19 mitigation measures.

The exams took place in non-clinical facilities, while the students and examiners were separated into cohorts and were briefed on what was expected during the examinations via video conference calls.

“At its heart, medicine is a human-to-human profession, and the way doctors cause harm is so often because of their…communication or inability to work as a team,” he said. “It’s easy to say: ‘This is too hard,’ but there are things you can’t learn in a textbook that you need to know they have. If you don’t, you will never really know if they are capable in the real world.”
