Lecturers often fear that grudge-bearing students will take their revenge in end-of-course surveys. But a study that examines the state of research into teaching evaluations claims that academics could be the ones with axes to grind.
Michael Carlozzi, an independent researcher and public library director in Massachusetts, set out to explore why scholars’ opinions of student surveys seemed to be so divided: between “apologists”, who defend the value of such exercises as an improvement tool, and “deniers”, who warn of bias in, and the unreliability of, responses.
Comparing these researchers’ own scores on the Rate My Professors website in the US, where students post ratings of lecturers’ classes, he found that lead authors of papers critical of student evaluations – the so-called deniers – were 14 times more likely to have a below-average score than the “apologists” who had written positive papers.
Writing in Assessment & Evaluation in Higher Education, Mr Carlozzi says that “researchers’ personal attitudes” towards student surveys “might influence their research findings”, in a paper titled “Rate my attitude: research agendas and RateMyProfessor [sic] scores”.
The great diversity of opinion on the issue “may result not so much from a panoply of choice”, he says, “as from agendas to find the ‘right answer’”.
“Perhaps it is not so much retaliatory students as faculty who have an ‘axe to grind’,” concludes Mr Carlozzi, who looked at the output of 230 researchers in the field.
Possibly aware that such claims will do little to cool tempers in a lively academic debate, Mr Carlozzi acknowledges that his study has limitations, including that he had to fit researchers into the categories of “apologists” and “deniers”, when actually their arguments tend to be much more nuanced.
And, asked by Times Higher Education whether he thought academics might be purposefully – and vengefully – negative about student evaluations, he said that “your guess is as good as mine”.
“It’d be an interpretation outside of what the data can show,” Mr Carlozzi said. “I’d like to believe that researchers are not deliberately choosing the models or analyses that find ‘convenient truths’, as it were. Some deniers, after all, are prolific and very successful researchers in their primary fields.
“So I don’t have any reason to think these analysts are p-hacking [cherry-picking statistically significant data] or data dredging. Could some? Possibly – data dredging happens in all disciplines.”
Nevertheless, Mr Carlozzi said that his conclusions meant that scholarly contributions to the debate over student evaluations should be treated with a critical eye.
“We [just] have to be sceptical of a finding [in a study],” he said. “Just because it’s published, doesn’t mean we can axiomatically treat it as the truth.”