Growing evidence of anti-female bias in student surveys

Dutch researchers find female academics 11 percentage points less likely to hit promotion threshold in course evaluations

August 14, 2016
Slighted: student evaluations reveal evidence of ‘gender bias against female teachers’ and ‘do not exclusively evaluate the quality of a course’

A new study provides further evidence that students rate female lecturers more harshly than male academics in course evaluations.

Researchers examined five years’ worth of evaluations from Erasmus University Rotterdam’s International Institute of Social Studies and found that female lecturers were 11 percentage points less likely to receive an average score of at least four out of five from their students.

The study raises questions about the reliability of questionnaires such as the UK’s National Student Survey, which, as part of the teaching excellence framework, will play a key role in determining the tuition fees that English universities will be allowed to charge.

There is growing evidence that gender bias is a problem in student surveys. The Erasmus University paper follows a 2014 study by Anne Boring, a postdoctoral researcher at Sciences Po in Paris, which found that male students at one university were 30 per cent more likely to rate male teachers as excellent than female teachers.

The Dutch paper also adds to doubts about the use of such ratings in hiring and promotion decisions. For Erasmus staff, a course rating of four or higher is vital because lecturers will be considered for promotion to assistant professor only if they have passed this threshold.

Researchers Natascha Wagner, Matthias Rieger and Katherine Voorvelt compared the evaluations of academics teaching 272 modules on a social studies master’s programme for their study, which has been published in the Economics of Education Review.

Once course-specific effects were controlled for, female academics received average scores that were 0.12 point lower than men’s on a five-point scale. While this sounds like a small difference, ratings were clustered very tightly around the overall average of 4.27, and gender was found to account for more than a quarter (27.6 per cent) of the variation in ratings.

Dr Wagner, an assistant professor in development economics, said that the results revealed evidence of “gender bias against female teachers” and confirmed that student evaluations “do not exclusively evaluate the quality of a course”.

She argued that student evaluations should not form part of hiring and promotion decisions because such a move “may put female lecturers at a disadvantage”.

“Employing student evaluations as a measure for teaching quality might be highly misleading,” Dr Wagner added.

Although previous studies have found evidence of bias against ethnic minority lecturers in student surveys, the Erasmus researchers found that any such effects were not statistically significant.

chris.havergal@tesglobal.com

Reader's comments (1)

The results of the Wagner et al. study confirm previous findings (Centra, 2009; Centra & Gaubatz, 2000; Feldman, 1993) that the effect of instructor gender on student ratings of instruction is small and should most likely not affect personnel decisions, as long as ratings are not the only measure of teacher effectiveness. We agree with Wagner et al. that “Cut-off points for excellence in teaching…are arbitrary and need to be complemented with qualitative feedback in order to get a holistic picture about teacher performance in class” (p. 92).

We are troubled, however, that virtually no information is provided in the article about the survey used to collect ratings, other than it “features questions about the course in general and one question about each specific teacher” (p. 83). Moreover, no evidence is presented to support the instrument’s validity and reliability. In fact, the measure of teacher effectiveness is based on a single item.

In order to get a complete picture of instruction, we must continue to insist that students’ voices be heard; we owe them the opportunity to provide input about their learning experiences. That feedback is valuable to the instructor, as it can help them improve their teaching, and it is valuable to the institution, as it provides another set of data that can be used to help evaluate, support and grow its faculty.


