Student evaluations of teaching ‘methodologically flawed’

Putting aside questions of sexism, racism and homophobia, Australian literature review finds that SETs are just poor science

April 8, 2021

Critics say student evaluations of teaching (SETs) are skewed by innate biases against minority groups, and their results should never be used for professional assessment purposes. But a new analysis has found that SETs are so susceptible to factors unrelated to teachers and courses that their results should be disregarded anyway.

A La Trobe University review of 183 SET-related studies has found that issues that have nothing to do with teachers’ identity – such as class size, website quality, university cleanliness and even food options in the canteen – also skew the results. Student characteristics such as gender, age and disciplinary area influence the evaluations as well.

“That student demographics alone impact on SET results demonstrates just how flawed the system is,” says the paper, published in the journal Assessment and Evaluation in Higher Education. “The existing literature makes it clear that SET results are strongly influenced by external factors unrelated to course content or teacher performance. This analysis raises the question of how any university [can] justify the continued use of SETs.”

Author Troy Heffernan said researchers had spent decades exploring how SETs disadvantaged academics on the grounds of gender, racial background, disability and sexual orientation, with women and academics from minority groups routinely given less favourable evaluations than white, able-bodied males.

But the focus had now turned to even more basic methodological shortcomings, with evaluations influenced not only by teachers' characteristics that are irrelevant to their performance but also by the background traits of the students.

An estimated 16,000 higher education institutions around the world regularly conduct SETs, the review found. Dr Heffernan said their administrators might not appreciate the fundamental weaknesses of data that appeared “sound”.

“On the surface, it seems like a great system. You have a class of 100. You ask them if they like the class or course. Over 100 students, you would think you’re getting some form of objective answer.”

Cost considerations also contribute to the continued use of SETs, he said. “The fact is, universities want this data – they want to understand how [to] improve classes – and student evaluations [are] a very quick, cheap way to get instant data.”

Dr Heffernan said none of the reviewed studies had reported favourable findings about SETs, although they had differed on “how damaging” evaluations were. SETs appeared less slanted against minority academics in the humanities than in science-based subjects, for example.

Some academics say they value feedback from SETs, both positive and negative. Dr Heffernan said some institutions conducted evaluations without using the results for career progression purposes. “The main problem is when a majority of universities use this information for hiring, firing and promotion.”

He said qualitative feedback sourced through student support teams would deliver more useful information than quantitative data from students. “Back and forth” dialogue about what “worked” in classes, and what students liked, would be better than “grading someone one to five”.

“But that takes time and money,” he noted. “In a post-Covid austerity-measure world, most universities probably aren’t prepared to do that right now.”

john.ross@timeshighereducation.com



Reader's comments (2)

The conclusions from this study are not new - MANY published studies (both reporting original data and systematic quantitative reviews) have reached the same conclusions as the paper reported in this article. The same conclusion about the poor validity of SETs has held for decades, yet instructors/lecturers/professors in 'education' and policy makers still claim SETs are valid. It makes you wonder about the hidden agendas, or the lack of evidence-based policy making, these people engage in.
deheuty, I fear the reason is all too clear. The SETs give a single number against which managers with no particular expertise can make a judgement. It's the same with research metrics. The consequences of a poor decision made on the basis of numbers that are of limited value often do not fall on those making the decisions.