What constitutes good university teaching? Some argue that nothing beats the inspiring lecture from a department’s star professor, while others will wax lyrical about the merits of a lively debate in a tutorial or the unrivalled experience of getting hands-on in the laboratory.
Yet none of these examples of outstanding teaching – which often take many hours of preparation, revision and fine-tuning to perfect – was captured this year by the UK’s teaching excellence framework, the results of which focused squarely on student outcomes, such as graduate earnings and satisfaction levels, judged against benchmarks allocated to each institution.
These outcome metrics – particularly taken at institution level – are poor indicators of teaching quality because they make no attempt to assess the time and resources poured into providing top-quality education, many academics argue. Teaching inputs are a better guide to quality than easily influenced student satisfaction polls or graduate earnings at the mercy of regional labour markets, they say.
Enter the subject-level TEF, a pilot of which was announced by universities minister Jo Johnson on 20 July. As part of the study, institutions will be invited to apply TEF metrics to some or all of the subjects they teach to help highlight pockets of excellence not shown up by the current institution-level TEF. In a more radical departure, however, the new Office for Students will also run an “exploratory pilot” to measure “teaching intensity” in five subject areas, in which both timetabled hours and class size will be assessed.
The pilot “will measure teaching intensity using a method that weights the number of hours taught by the student-staff ratio of each taught hour,” explains the pilot’s specification, published by the Department for Education.
“Put simply, this model would value each of these at the same level: two hours spent in a group of 10 students with one member of staff, two hours spent in a group of 20 with two members of staff, one hour spent in a group of five students with one member of staff,” it explains.
Once contact hours are weighted by class sizes, and aggregated up to subject level, those running the pilot will be able to calculate a “gross teaching quotient” score, which would be an “easily interpretable number” and used as a “supplementary metric” to inform subject-level assessments.
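The arithmetic behind that weighting can be sketched in a few lines. The sketch below assumes the weight for each taught hour is simply staff divided by students, which reproduces the three equal-valued examples in the specification; the `weighted_hours` function and the closing summation are illustrative only, not the DfE's published method.

```python
def weighted_hours(hours: float, students: int, staff: int) -> float:
    """Weight taught hours by the staff-student ratio of the session.

    Assumption: weight = staff / students, so a session counts for more
    when each student shares the staff member(s) with fewer classmates.
    This reproduces the three equal-valued examples in the DfE pilot
    specification.
    """
    return hours * staff / students

# The three sessions the specification says should score the same:
sessions = [
    (2, 10, 1),  # two hours, 10 students, one member of staff
    (2, 20, 2),  # two hours, 20 students, two members of staff
    (1, 5, 1),   # one hour, five students, one member of staff
]

scores = [weighted_hours(h, n, s) for h, n, s in sessions]
# Each session scores 0.2 weighted hours, as the specification implies.

# A hypothetical subject-level aggregate: "gross teaching quotient" is
# the pilot's term, but this plain summation is only an illustration.
gtq = sum(scores)
```

On this reading, halving a class size (or doubling the staff in the room) doubles a session's score, which is the trade-off the pilot's critics quoted below take issue with.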
Some university staff are frankly astonished that they might be asked to represent a department’s teaching with just a single number. In a further twist, contact hours and class sizes will be calculated by both an “institutional declaration and a student survey”, while scores for e-learning and work-based placements would be listed separately.
Michael Merrifield, professor of astronomy at the University of Nottingham, whose subject group of physics and astronomy will be one of the five disciplines assessed, said he was concerned that any measurement would only take into account scheduled contact hours, given the different modes of teaching used throughout different points of the learning cycle.
“The final year of our MSci degree is almost entirely taught in a student-led mode, where they use the staff in an open-door consultant capacity,” explained Professor Merrifield, who said that the “number of contact hours is very high, but difficult to capture”.
“I fear that we would be driven away from this very effective mode of teaching if we were being pushed by metrics to deliver more ‘normal’ teaching,” he added.
Others have wondered how a teaching intensity score would adequately reflect either field-based or work-based learning.
That approach might give a false impression of teaching intensity because of the way that many courses were delivered, said Johnny Rich, executive director of the Engineering Professors’ Council.
“Some [institutions] focus on the theoretical basis [of engineering] while others embed industry [experience] throughout every aspect of the curriculum,” said Mr Rich.
“Both approaches are valid, but may lead to different performance in the subject-level TEF, which would mislead students for whom one approach or another might be right for them.”
Some university leaders may also take issue with the very premise of “teaching intensity”, in which, for instance, a seminar with eight people is rated twice as good as one with 16 people in it – pointing out that world-leading Harvard University is renowned for having larger seminar classes of 10 or more.
Handing out lower scores to lectures with 100 people in the audience than to those with 10 or 20 is likewise nonsensical, argued Pam Tatlow, chief executive of MillionPlus, which represents modern universities in the UK.
“The number of people you sit alongside in a lecture does not reflect how much you learn,” said Ms Tatlow, who described the idea of teaching intensity scores as “hideously complicated and likely to give the wrong impressions about the different context in which students learn”.
Studies have shown that students generally perform better in smaller groups. The DfE pilot mentions how the influential 2010 report Dimensions of Quality by Graham Gibbs, former director of the Oxford Learning Institute at the University of Oxford, concluded that “large class sizes can have negative impacts on access to teaching staff, assessment and feedback, student engagement and depth of learning”.
The evidence is not clear-cut, however. A study of student evaluations at a single research-intensive Australian university found only a weak link between class size and satisfaction, and when the results were analysed at faculty level, that link became weaker still. There was considerable “variation as to whether small, medium, large or extra-large class sizes were rated most favourably in terms of student satisfaction”, says the report, titled “So how big is big? Investigating the impact of class size on ratings in student evaluation”, published in the journal Assessment and Evaluation in Higher Education. That raised the question of whether student satisfaction was “faculty-dependent”, rather than directly related to class size, say its authors, Deanne Gannaway, Teegan Green and Patricie Mertova.
Gervas Huxley, teaching fellow in economics at the University of Bristol, whose upcoming research paper in the journal Fiscal Studies has informed the new pilot, believed that the benefits of smaller class sizes were clear.
“If you are a student in a small class it is a much better learning experience,” said Mr Huxley.
You will get a lot more of your questions answered by a tutor, for instance, if you are in a group of two or three, he explained, saying that smaller classes also helped less confident students to speak up in class and join discussions.
A measure allowing students to make a meaningful comparison of teaching intensity was long overdue, he added, claiming that information for prospective students on class sizes in the UK was “non-existent”.
Harvard’s use of larger classes was not proof that larger group teaching was preferable to smaller tutorials, Mr Huxley added.
“Many US universities charge enormous fees and offer very little in the way of education – the Ivy League is getting away with murder,” he said.
“Some extremely expensive universities offer very little idea of their teaching to applicants and it’s a scandal,” added Mr Huxley, who said that he was motivated to undertake the research by claims from Lord Willetts, the former universities minister, in 2013 that university class sizes were the same as they were 50 years ago.
Mike Peacey, lecturer in economics at the New College of the Humanities, who co-authored the Fiscal Studies research paper with Mr Huxley, said that students deserved to know more about what they were paying for.
“When students apply to university at the moment they have some idea of contact hours but when it comes to class sizes their information is very ropey,” said Dr Peacey.
“Some universities may choose to offer teaching in small classes and others prefer larger ones, but if they are offering different amounts of teaching resources to different students, it’s a bit of a mystery why tuition fees are all exactly the same.”