Is student course evaluation actually useful?

Virtually all modern university courses end with a request for feedback. But are students’ reactions even useful for improving future course design, never mind assessing lecturers? Seven academics discuss their experiences

April 16, 2020

‘I am always curious as to how students might react to what I have been trying to do’

I was somewhat sceptical when student feedback forms were first introduced, back in the Jurassic days before email. Not many students could be bothered to collect a physical form and fill it in, and some of those who did used it as an opportunity to say some pretty bizarre things. Several colleagues were propositioned, and a friend who is one of Italy’s most distinguished medics was enraged to be told that he should have his hair cut because he looked like a hippy.

But over time I have come to see student feedback as very useful – provided that it is taken seriously by all parties, and provided also that it is not used as a blunt instrument by management (a real danger now that teaching, as well as research, is a criterion for promotion).

Feedback is particularly useful when comments highlight the unanticipated needs of a contemporary student cohort. It is all too easy for academics to lose sight of generational differences – not merely regarding life experiences (current students see the Vietnam War as ancient history, whereas I vividly remember taking part in student protests against it) but also in terms of prior learning experience.

For instance, I recently ran a special seminar on how to read ambiguous texts. This was a direct response to the number of feedback forms saying that students needed more help with this. Close reading – taking a text to pieces – is something my generation learned to do at school, but it is a skill that not all literature students have today. Add to the mix the international dimension of today’s student body, and the gap between what students and lecturers share in terms of information, skills and interests can be very wide indeed.


Of course, there are always minor aberrations on evaluation forms, and evidence is lacking as to whether the feedback can actually improve teaching quality. Nevertheless, I believe that it can serve as a guideline to enable each of us to think harder about what we are doing. If feedback is positive then this can be very encouraging, particularly if a teacher is seeking to do something innovative and different. But when a whole group complains about the same thing and gives roughly the same low scores, we need to take note and act accordingly.

Perhaps we need to review the material content of what we are teaching, but it is more likely that we need to rethink what we actually do during a lecture, and reflect on why some students feel that we are not communicating effectively. Concerns expressed can include too much informality in presentation – such as wandering around in a seminar room – overcomplicated or illegible PowerPoint presentations, too much material crammed into one session, or unrealistic assumptions about basic student knowledge.

Several colleagues have told me that they have a moment of dread when the feedback forms come in, usually followed by a sense of relief. I can’t say that’s how I feel. Rather, I am always curious as to how students might react to what I have been trying to do. Sometimes I am affronted, as Beatrix Potter’s Mrs Tabitha Twitchett used to say, but student feedback is surely an important part of the process of self-appraisal in which all professionals should be engaged.

Feedback from students might sometimes offend our vanity, but it is a valuable resource that can help us to know ourselves better.

Susan Bassnett is professor of comparative literature at the University of Glasgow and professor emerita of comparative literature at the University of Warwick.

‘Do law firms let client reviews determine who becomes a full partner? Of course not’

Thirty years ago, student evaluations of teaching (SETs) were virtually non-existent. Today, many faculty live in fear of them, and their fears aren’t unfounded. Despite evidence that SETs are unreliable and biased, they continue to be weaponised against academics, especially those working on temporary and part-time contracts.

To clarify, I don’t think collecting student feedback is inherently bad. Nor do I long for the good old days when faculty could show up with no syllabus, grading scheme, or need to be remotely accountable to students. However, using SETs to make decisions on hiring, contract renewal and tenure and promotion is a misguided practice that needs to end.

First, you don’t need to be a quantitative researcher to question the validity of the data. SETs often yield a low response rate, making the results statistically unrepresentative. Also, since outliers are rarely eliminated, just one scathing rating can significantly skew the results. There are also a host of other variables that make course data unreliable, and these variables have only multiplied since SETs were first introduced in the 1990s.

Then, they were completed in person, usually on the final day of class. Today, anyone, including students who haven’t shown up for months, can complete an evaluation. Non-participants generally have little insight into a course’s content or a professor’s pedagogy, but in the neoliberal academy, they are treated like any other paying customer, and this gives them the right to leave a review.
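The statistical point above is simple arithmetic: with only a handful of respondents, a single hostile rating moves the average a long way. A back-of-the-envelope sketch, with invented numbers purely for illustration:

```python
# Hypothetical illustration: five respondents (out of a much larger
# enrolment) give broadly positive ratings on a 1-5 scale.
ratings = [4, 5, 4, 5, 4]
mean_before = sum(ratings) / len(ratings)   # 22 / 5 = 4.4

# One scathing outlier arrives; with so few responses it is never
# averaged out, and it drags the overall score down sharply.
ratings.append(1)
mean_after = sum(ratings) / len(ratings)    # 23 / 6 = 3.83 (to 2 d.p.)

print(round(mean_before, 2), round(mean_after, 2))
```

With a couple of hundred respondents the same single rating would barely register, which is why low response rates and untrimmed outliers compound each other.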


Out of context, SETs have come to be treated like fast-food chain ratings on Yelp, rather than serious academic exercises. They can be completed any time, anywhere and in any state of mind (in the past, my own students have gleefully told me that they completed theirs in the middle of the night while partying with classmates). Naturally, most students don’t realise what’s at stake, nor that some faculty have more at stake than others. 

Based on my own experience (as a department chair, I read more than 1,000 SETs), it is obvious that bias still abounds. Cisgender white men – whether young graduate students or elderly full professors exhibiting clear signs of geriatric decay – are nearly always perceived to be smarter, more authoritative, funnier, and just better educators than the rest of us. Students are so sexist that even women who skew masculine rather than feminine on the gender spectrum tend to be ranked higher.

I’m not alone in reaching these conclusions. Several recent studies, including a widely discussed 2018 study, “Gender bias in student evaluations”, published in Political Science & Politics, offer evidence that SETs are rife with gender bias. But racial and ethnic biases are also rampant. In anglophone countries, people from non-English-speaking backgrounds are also routinely ranked lower. 

Course evaluations may have been a good idea when introduced, but they are rarely used to promote outstanding teaching. They are generally taken seriously only when decisions are being made about contract renewals or tenure and promotion. Hence, the more precarious someone’s position, the more SETs matter.

Given their lack of validity and their well-documented bias, higher education needs to accept that ceasing to rely on SETs to make important decisions is a logical, not radical, proposal. Do hospitals let patient reviews determine which physicians are hired? Do law firms let client reviews determine who becomes a full partner? Of course not. Reviews may be used to attract new patients or clients, but no one thinks that patients or clients are best positioned to determine the value of highly trained professionals.

The same logic should hold true in higher education. Until it does, our hiring and promotion decisions will continue to be influenced by deeply entrenched biases.

Kate Eichhorn is associate professor and director of culture and media studies at The New School in New York City.

‘I pride myself on delivering engaging material and it’s encouraging to get feedback that reinforces this’

Feedback is something that, as academics, we are very used to receiving. Our grant applications and paper manuscripts are reviewed by editors, panels and peer reviewers – often harshly. Every time we deliver a talk at a conference, members of the audience ask awkward questions. Being critiqued is simply part of the job. However, for whatever reason, student feedback can have a particularly painful sting to it.

Course evaluations are a source of dread for some lecturers. Will I receive lavishly positive feedback that could form the basis of my future promotion application? Or will I be the victim of a student who simply didn’t like getting out of bed for early morning lectures?

Personally, I have had mainly positive experiences with student feedback about my lecturing and the courses I convene. I pride myself on delivering engaging and accessible material and it’s encouraging to get feedback that reinforces this.

For instance, I teach the plant part of a third-year biochemistry course that is divided into two clear halves: medical biochemistry and plant biochemistry. More than 90 per cent of the students taking the course aim to become either medical doctors or researchers, so I am teaching students outside their current area of interest. However, I take this as a challenge to get them interacting with the material, and comments such as “probably the first time in my life I ever thought plants might actually be interesting!” are testament to what I am trying to achieve.

Occasionally, I even recruit some of the medical students to undertake further postgraduate studies in plant sciences – a rare coup indeed!

However, not all feedback is positive, and nor should it be – we can always improve. Critical feedback comes in two forms: constructive and not. Constructive criticism can be valuable, and I happily take it on the chin. On several occasions in courses I convene, we have altered the content or assessment based on feedback. There is no doubt that the changes have had a positive impact on the course, and the students like to feel they are being listened to rather than talked at. Everyone wins.


However, some of the less constructive feedback that colleagues have received can be challenging to deal with. A comment such as “you honestly shouldn’t be lecturing” may be made in jest, but I doubt that the student who made it considered what impact it might have on a lecturer.

I think it is reasonable to suggest that most of us work hard to give good lectures – no one intentionally tries to give a bad lecture. Perhaps it is also fair to say that, despite working hard on it, lecturing doesn’t come naturally to everyone. Consequently, such feedback could be somewhat soul-destroying.

There is a significant emphasis at many institutions globally on student mental health, and rightly so. However, academics aren’t immune to the challenges of maintaining good mental health, and copping a verbal broadside from a student can hurt. Can you imagine the uproar if the situation was reversed and the lecturer was critical of a student without a constructive basis?

So how should we engage with student feedback? In my case, I strongly encourage students to share their honest experiences of the course, both positive and negative. However, I do also remind them that we are human and that we try. So if they have something to be critical about, that is absolutely fine, but they should be constructive and consider the impact of what they say.

Peter Solomon is a lab leader in wheat biosecurity at the Australian National University. 

‘My guerrilla questionnaires are closely geared to the structures and content of each course’

Without doubt, the most accurate, perceptive and honest student feedback is the kind that identifies Dr Michelson as a teacher so incisive and engaging that the entire class has gained lasting respect for early modern Italy and a measurable, permanent jump in wisdom. The rest is lies.

I’m obviously joking, but it’s true that even the best feedback will never meet that fantasy, and that we need all our courage when it’s time to read it.

It should be stated clearly, at the outset, that studies repeatedly confirm how harmful feedback forms are to many lecturers because they consistently replicate society’s biases against women and people of colour. Feedback is more useful for learning about the students themselves than about their instructors.

Students at my university are generally professional and polite in their feedback, avoiding the worst of the gender pitfalls and refraining from personal comments. Nonetheless, we’re all hyper-aware of our own mistakes and gaffes in class, and we expect the students to judge us as harshly as we generally judge ourselves. In that light, any positive comments shine extra bright, like a diamond prize in Minecraft, but the most negative comments feel truer.

The best remedy for this syndrome, if you can stomach it, is writing a paragraph summarising all feedback (also useful for job applications). That forces us to absorb the good news as well as the bad.

The literary genre that is student feedback takes two forms. The first of these is the official university-wide questionnaires featuring standard questions across all disciplines. These are most useful in large lecture surveys, which offer a decent sample size and a chance to put outliers in context. Sometimes, they reveal how much work students put in, how closely they read the handbook, and, perhaps, what topics they most enjoyed. But they often highlight how little students have noticed.


My university’s questionnaire used to ask students to rate each lecturer in a team-taught survey by name – only students didn’t know the lecturers’ names. My colleague jokes that before my arrival, she was the only lecturer to receive accurate feedback, because hers was the only female name on the roster. On feedback day, I would sit in tutorial groups reminding students which name belonged to which lecturer. I listed lecture topics, hair colour and, if necessary, age bracket and accent. Usually this worked, but on one occasion, the students drew a complete blank. Nothing I said about that brilliant scholar and revered teacher brought him back to them. Finally, a light-bulb moment: “Oh!” cried the student, “He’s that one who said ‘Fuck’ during the lecture!” All heads started nodding and they picked up their pencils.

These insta-forms are far less relevant to honours classes, where groups are small, students’ writing styles become familiar long before the term ends, and statements like “methods of assessment allowed me to demonstrate my learning” mean little. Here’s where the second feedback sub-genre comes in: the unofficial evaluation forms distributed on the sly by the lecturer, for their eyes only.

Unlike university-wide forms, these guerrilla questionnaires are closely geared to the structure and content of each course. Mine ask my students to identify readings they liked best and least, and what themes deserved more or less attention. They also probe what the students learned from giving presentations and hearing them, and solicit opinions on future tweaks to the class.

I use the responses to make real adjustments every year. And unlike the often harmful generic feedback forms, these show me where my students have engaged with the material, often surprising me in wonderful ways – and directly benefiting both future students and the future Dr Michelson.

Emily Michelson is senior lecturer in history at the University of St Andrews.

‘Course evaluations can confuse as often as they clarify’

Early in my career, I taught a course on Shakespeare. It was one of the best teaching experiences of my life. The students were clever and engaged. We had great discussions seemingly every class. I was confident about my lectures, and the students were producing smart, creative work.

The course went so well that I was excited to receive the feedback at the end of the semester. As an early career academic, I was hoping for good scores: something I could add to my teaching dossier. So when the big brown envelope arrived via campus mail, I tore it open eagerly.

However, to my surprise, my numbers were pretty mediocre. On the final question – “overall, how would you evaluate this course?” – I actually received a score slightly below the university average. I was disappointed and confused. What happened?

One possibility is that my perception of the course had just been wrong. Maybe the readings I chose were tedious. Maybe my lectures only made sense to me. Maybe I was boring.

Or maybe I just chose the wrong day to hand out the evaluations; I realised afterwards that I had done so on the same day that I explained the format for the final exam. Perhaps some anxiety about the exam spilled over into the feedback. 

To this day, I’m not sure what the truth was – and that says something about the limitations of course evaluations. They ought to help us understand and improve our pedagogical practice, but they can confuse as often as they clarify.

Hence, this past semester, I experimented with a short “learning reflection” assignment in my first-year class. Halfway through the term, I asked my students to respond in writing to two questions:

1) What is the most interesting or important thing you think you’ve learned so far in this course?

2) What’s one thing you’d like to learn more about as the course progresses?

Their answers were illuminating. Students who hadn’t spoken much in class revealed that they were very invested in the course content. Some students described specifically how their thinking had changed since the beginning of the course. Others identified which topics they found most compelling and which ones they still didn’t understand.


This helped me in a way that traditional student feedback forms have not. It enabled me to get to know my students better. As a middle-aged professor, I am increasingly out of touch. My pop culture references are getting stale. My jokes don’t always land. I need to bridge the growing distance between me and my students so that I can teach them better.

This kind of learning reflection also creates opportunities for students to communicate their needs and interests to professors while there is still time to respond pedagogically. If we only receive feedback after classes have ended and marks have been submitted, what are we supposed to do with that information? Will the next group of students have the same priorities and expectations?

Moreover, a short reflection that focuses attention on student learning, rather than professorial performance, is less likely to elicit unhelpful and toxic forms of feedback about a professor’s haircut, attire, race or gender.

All that said, it’s important to remember that student feedback isn’t just about us. It is also a mechanism for students to hold their professors accountable. And while it can be abused, it does seem necessary for students to have a means of expressing their discontent when things go badly wrong.

So while other models of soliciting student feedback might be more useful for improving our pedagogical practice, traditional course evaluations may still perform an essential function.

Andrew Moore is director of the great books programme at St Thomas University in Fredericton, New Brunswick, Canada.

‘My interactions with students are beginning to remind me of my days in retail sales’

At the end of each semester, I take time to talk with other women faculty of colour I know after we have been through the often-dreaded ritual of reading our course evaluations.

Each round of evaluations inevitably brings insulting comments that contain significant evidence of racial and gender bias. And although we know the data showing that women and faculty of colour are consistently rated lower than their white, male colleagues, it can still be demoralising.

My most challenging evaluations came when I left for maternity leave a month before the end of the semester and the class was taken over by my teaching assistant. Students used the course evaluations to complain about my maternity leave and to accuse the teaching assistant (also a woman) of “taking her job too seriously”. I received no helpful feedback on the course content, and the quantitative scores were the lowest of my teaching career.

Occasionally, course evaluations contain useful thoughts for future course planning. However, more often they are simply an opportunity for students to share subjective feelings about the course and its instructor. These can range from irrelevant to inappropriate. And that matters when, for most faculty, these evaluations play a central role in the promotion and tenure process.

Women and faculty of colour are already underrepresented at all faculty levels and teaching evaluations can have an impact on their sense of belonging, their confidence in the classroom and their ability to focus on other areas of their job.

Course evaluations can also have a negative impact on quality and rigour. Research has shown that courses that are viewed as more difficult consistently receive lower evaluations than courses that are perceived as easy. While most faculty would not admit that they consider course evaluations when developing their syllabuses, the fear of receiving negative evaluations often plays a role in determining course content and assignments.

For example, I have repeatedly received complaints from students that I assign too much reading, despite my trying to follow university-issued guidelines about the amount of work that is appropriate. As a result, each semester I find myself attempting to reduce the total number of pages I assign to avoid complaints.

This system inevitably contributes to what has been described as the “McDonaldisation of higher education”. My interactions with students are beginning to remind me more of my days in retail sales than a traditional professor/student relationship. There is constant pressure to deliver service that makes the students feel like happy customers.

To be sure, students should have the opportunity to give feedback on their experiences. As the cost of higher education skyrockets, many students are understandably more concerned about grades, the “usefulness” of courses and their future employment prospects. But given that it is impossible to eliminate sources of bias from their feedback, evaluations should play a limited role in faculty evaluation.

Department chairs and administrators should review course evaluations for evidence of serious problems, such as faculty negligence. However, continuing to use them as part of formal faculty evaluations is unfair – if not deeply unethical.

Jessica Welburn Paige is an assistant professor in sociology and African American studies at the University of Iowa.



‘An act with no immediate payoff for their own experience is not high on students’ agenda’

Student feedback? That would be nice. Or not. I don’t really know, because I rarely get any.

At the end of the winter term, for instance, I lectured to 300 students. At least, there were 300 students in theory: in practice, only half of them turned up. The other half evidently preferred me at double speed on YouTube, so I guess their feedback would be: “Dr Tregoning sounds like one of the chipmunks, which is a bit off-putting.”

Anyway, back to the point: of the 300 students who could have provided feedback, I got only one response. To which my response is to mark and give feedback on only one student paper – that seems reasonable to me!

I mostly attribute the lack of feedback to the brilliance and panache of my lecturing style (LOL). My UK Independence Party CD8 T cells as an analogy for MHC presentation is pretty much as funny as you can get in immunology. You’d laugh too if you knew what on earth I was talking about.

In all seriousness, I don’t know why I don’t get much feedback, but it is a problem. In its absence, the pressures of all the other stuff that makes up an academic job mean that revising lectures can fall off the to-do list. I have got better at updating slides immediately after giving them, but I have been caught out when Past Me decided that writing “change this” in huge red letters across one of the key slides would be helpful as Future Me would have time to obey the instruction. Future Me didn’t.

Lack of feedback can also be problematic career-wise. Student feedback is a criterion upon which the quality of our teaching is judged, so making that judgement on the basis of a very small sample size is deeply flawed. It probably also suffers from the TripAdvisor effect: only those people with really strong opinions are going to provide feedback.

The narrative we are fed is that in the age of fees, students are consumers rather than educational partners. This isn’t necessarily what I have seen. In my experience, students now are pretty much as students were. They may have a bit more awareness about the future and the pathways they need to take to get jobs. But they are still young adults finding their place in the world, who happen to have some lectures in the background. It isn’t surprising that an act with no immediate payoff for their own student experience (as opposed to that of students coming after them) is not high on their agenda.

The most frustrating thing about the lack of feedback is that when given, it is really helpful. Of course, over the past 10 years I have been on the receiving end of a few choice words, my favourite being that my lecture was “surprisingly enjoyable”.

When I started teaching medical students, I was unused to dealing with the large lecture theatres. It turned out that dimming the lights is the accepted signal that you are about to start, rather than shouting over gossip and YouTube videos played at double speed. I’d also inherited some slides and had not personalised them enough, which became quickly apparent – the good news for subsequent years was that the feedback made it better.

But if there is nothing else to object to – or, indeed, praise – in my lectures, then so be it. This certainly isn’t an invitation to offer negative feedback as an opportunity for personal growth. I get more than enough of that from my children and Twitter.

John Tregoning is a reader in respiratory infections at Imperial College London.


Print headline: At the sharp end



Reader's comments (12)

Everyone who is really interested in teaching and learning – that is, in teaching students on the basis of an understanding of the psychology, sociology and philosophy of education – knows that student feedback is always useful for improving our teaching of specific subjects and, consequently, what students learn. Indeed, this should be the case regardless of what any professor (lecturer) would like to hear. Those who do not like student feedback are simply not teachers, because they have not been trained to teach, i.e. exposed to courses in the psychology and sociology of childhood, adolescence and adulthood, lesson planning, rubric development, teaching practice, clinical supervision, assessment and evaluation, and teaching methodologies. The key issue driving such aversion to student feedback is the fact that most university professors are lecturers. Those who have taught at lower levels of the education system, or who underwent some teacher training, would never be put off by student feedback. They would use it as an opportunity to improve their lesson objectives, content sequencing, learning styles and teaching methods. They would welcome feedback during the course, knowing full well that end-of-semester examinations are only one form of assessment/feedback.
In an ideal world, you might be right. Unfortunately, most university feedback forms are so badly designed that they are worse than useless. The list of faults with standardised feedback forms is long, but suffice it to say that the forms I am forced to use (across three different universities):

  • Have statistically meaningless response rates. The vast majority of students don't fill them in, so how can the results reflect the cohort's experience?
  • Rely almost entirely on Likert scales, often with 5 being a high rating on one question and a low rating on another – the results then being summed to get an overall rating!
  • Offer no context or actionable feedback (the open text field is usually left blank). What do you do when some on the course rate a question 5 and others rate it 1? What do you change, if anything?
  • Are like TripAdvisor, in that they get filled in by people who either loved the course or hated it – usually, I suspect, determined by the mark they got (you can tell from the few comments that are left).
  • Are often factually incorrect in their description of the course and its delivery. I'm not talking about opinion or a different point of view, but objective facts, e.g. complaints about in-class tests when we don't do any.

I am afraid I have lost all faith in the validity and usefulness of course feedback. It's a good idea, and I wish I could get useful, actionable feedback, but the way it is most often implemented makes it a waste of time. This is a reflection of the simplistic, metrics-driven environment within universities and the obsession with 'student satisfaction' rather than the quality of learning and education. The feedback forms are really 'Do I like my lecturer?' forms, rather than 'Do I think I learnt something?' forms.
Not all kinds of feedback are legitimate and helpful. The categorical claim that 'if lecturers do not like student feedback, then they are not teachers' is simplistic. When looking at student feedback, there are questions about its truthfulness (not all of it is true; some of it is lies) and about the qualifications of the person providing it. As this article reports, students can give feedback on the teaching and learning experience, but they are not qualified to give feedback on how teaching or learning should be done, or on what they should be taught. It is analogous to a patient giving feedback on how they were treated by hospital staff (legitimate) versus how they would like the surgery to be done (illegitimate). There is simply no evidence that student feedback reflects reality. Rather than just making claims, show me the evidence for its veracity.
There is always one: one student who has to write something abusive; one student who gives a feedback grade of 1 out of 5 across all the questions because of a mark they felt was too harsh; one student who has to write that "the lecturer is past his sell-by date"; one student who gets their lecturer mixed up with someone else and hurls abuse at the wrong person; one head of department who sacks lecturers because their course evaluations are below the mean for the third year in a row; and one reader of commentaries like these who has no empathy for those lecturers for whom unjustified negative feedback from students can be both health- and career-destroying. Formal student evaluations are so generic in their design that they are incapable of producing much that positively helps quality improvement. On the other hand, they do give the customers a sense of power and an opportunity for payback for perceived wrongs – real, imagined and misunderstood – but is that why we use them? Informal student evaluations, such as are described in these commentaries, are a positive boon to any teacher. Quite how they morphed from informal to formal is beyond me, though I do remember vividly the introduction of formal student feedback in the early 90s: being denied permission at that point to continue using my own customised feedback instrument, one that had actually helped me greatly during the four years I used it. It is time to abandon this destructive formal feedback mechanism and replace it with an informal one that all staff must use, such as the two-question version presented in these commentaries. Then student feedback will mean something positive for all, which, as all "trained" teachers know, is why it was sought in the first place.
It's a shame that this article did not start with a good review of the research on SETs. Most universities do not use validated instruments, which is far from ideal; most questionnaires are knocked up by a committee, which results in a camel. But validated feedback instruments do exist, and they should be used and researched more in depth. They are not perfect, but then no measurement instrument is, particularly one that measures social interactions and their impact. I am, however, struck by how even the outcomes of rough-and-ready evaluation questionnaires tend to match what faculty already know about each other.
Can you please post some examples here?
Student evaluations replicate the infantilising scores or gold, silver, bronze, tin "stars" so commonplace in academia and beyond. While the comments can be really useful, I don't know anyone who believes that knowing they have scored 3.86 (or 4.24, for that matter) overall is in any way informative. The comments can shine a light on things one should rethink when they address what worked and what worked less well, but all too often some student is more interested in being spiteful, or in deploying their most personally offensive put-down as a chance to express their frustration, than in engaging with constructive criticism. If we gave feedback of the sort some of us have received (and yes, I have also received some gratifying comments!), we would probably be hauled up before some committee or other.

The idea that evaluations are anything other than management tools is disproved by the fact that few lecturers have ever been called in by their line manager to be commended for their teaching. But if the scores are poor, you'll be the first to know! If evaluations were designed to be helpful, the scores would be dropped, leaving only the comments, with someone having gone through beforehand to remove anything offensive or needlessly unpleasant.

Innovations in pedagogy jar with the ways of working that students have become comfortable with and, by definition, anything that jars risks producing a sense of disorientation. That can be incredibly transformative for students who throw themselves into things, but the unfamiliar can equally be a source of anxiety. Risk-taking for its own sake is foolish, but if one's approach to teaching is to transform one's own and students' relation to the world, then it is all-important.
Law firms do let clients determine who makes partner: not through reviews, but through the number of clients a lawyer brings in and how much those clients have been willing to pay. Quite simply, if you don't have clients and large billings, you do not make partner. I don't think this is a model most professors would support.
A lot of the flaws in SETs pointed out in the article and the comments can be designed out if there is concerted collaboration between SET administrators, academic leaders and representative teachers and students across each discipline. Some universities choose to make this investment; some do not. Here is a starting list of what can be 'fixed':

- shifting the emphasis of the questions from teacher 'performance' to the student learning experience;
- periodically updating and validating the instruments;
- not running the standard SET when a teacher is trialling a new approach;
- running some form of mid-semester feedback (either formal or informal);
- regularly reminding students of their obligation to provide constructive feedback, and of the potential consequences if they do not;
- providing students with a free-text field for each item, not just the overall satisfaction item;
- dealing with low response rates in the reporting of results (e.g. not reporting quantitative results unless a statistically valid response-rate threshold has been met; using moving averages over multiple semesters);
- analysing student feedback at your university to determine the actual level of gender or racial bias, so as to inform localised responses;
- taking the time to tell students what has been changed in response to their feedback and that of previous cohorts;
- inviting students to co-design improvements such as group tasks and assessments;
- having a clear policy distinguishing how feedback on the course/subject, as opposed to feedback on the teacher, will and will not be used.

While SETs will always be an imperfect instrument, with a bit of effort they can be turned into something approaching fit for purpose.
Two stories and one observation. In the days before standardised, university-issued student feedback forms, I issued my own short, free-response-only questionnaire to part-time students on a master's course, all of whom were employed senior managers. Given the block-release delivery and high price of the course, I used a local hotel. One individual on an early cohort consistently offered the comment that 'the coffee is awful' at the end of each residential workshop. I consistently responded that his subjective opinion, one out of 30-plus, was insufficient for me to take any action.

The second story relates to a similar course, this time designed and delivered in-house to a group of middle managers employed by a large corporation. I was asked by my then dean to replace a colleague teaching a particular module because the colleague had received negative feedback from the students, passed on to the dean by the corporation's management development manager. I asked my dean: 'Do you want me to get consistently high scores of 5 on the student evaluations, or do you want me to do my job of educating and developing these managers?' He thought for a moment, I think about whether to challenge my cynicism, and then replied: 'Get a positive evaluation; we need the business from this client.' I did just that.

My observation is hinted at in some of the above: what grounds are there for believing that changing something to satisfy this group of individuals will help satisfy a new and different group of individuals? As my first story indicates, I believe collecting feedback is useful and should be done, but it should always be tailored to the specific course. And student evaluation itself always has to be evaluated.
I encourage dialogue with students throughout the module, and am always ready to listen to (but not necessarily act upon) their comments... but this is a habit developed over some years teaching in FE before slithering into a university. I do sometimes wonder at the questions asked on the evaluation forms that students are sent, and at the over-reliance on Likert scales rather than getting students to say what they think. OK, it's easier to use metrics as an overview, but they are not very informative or helpful when you are looking to improve.

