UK universities have trialled a new tool that monitors what is said about them online, which could potentially be adopted across the sector as an early warning system for standards issues.
Ten institutions were involved in a pilot that provided them with a dashboard displaying near real-time feedback from social media platforms, as well as student review sites.
The tool, which collates data from online ratings and uses an algorithm to assess the sentiments expressed in anonymised online comments, was created by the Quality Assurance Agency and Alex Griffiths, director of data science at Statista Research and a fellow at the London School of Economics’ Centre for Analysis of Risk and Regulation.
The pilot was launched after a study by Dr Griffiths, published last year, revealed a close correlation between the views expressed about institutions on Facebook, Whatuni and StudentCrowd and performance in sector quality measures such as the teaching excellence framework and the National Student Survey.
Since then, Dr Griffiths has extended the research to include material from Twitter, Google and Student Hut, and he has found that the correlation extends to these platforms as well.
The dashboard tots up the number of positive, neutral and negative pieces of feedback an institution has received. It also calculates a moving average “collective judgement score” over a year.
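The article does not publish the tool's methodology, but the two calculations it describes — tallying feedback by sentiment and smoothing a score with a year-long moving average — can be sketched roughly as follows. The thresholds, function names and the assumption that each comment carries a sentiment score in [-1, 1] are all illustrative, not the QAA tool's actual design.

```python
from collections import deque
from statistics import mean

def tally(comments):
    """Count positive, neutral and negative comments.

    Each comment is a (text, score) pair, where score is a sentiment
    value in [-1, 1]. The 0.2 cut-offs are purely illustrative.
    """
    counts = {"positive": 0, "neutral": 0, "negative": 0}
    for _text, score in comments:
        if score > 0.2:
            counts["positive"] += 1
        elif score < -0.2:
            counts["negative"] += 1
        else:
            counts["neutral"] += 1
    return counts

def collective_judgement(daily_scores, window=365):
    """Moving average of daily mean sentiment over (up to) the past year."""
    recent = deque(maxlen=window)  # drops scores older than the window
    averages = []
    for day_score in daily_scores:
        recent.append(day_score)
        averages.append(mean(recent))
    return averages
```

For example, `tally([("great lectures", 0.9), ("it's fine", 0.0), ("poor feedback", -0.8)])` counts one comment in each bucket, and `collective_judgement([1.0, 0.0])` returns `[1.0, 0.5]` as the running average updates.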
The results of the pilot, presented at the QAA conference on 7 May, showed that institutions found the tool very useful, although often for different reasons, according to Dr Griffiths.
While many larger institutions already had teams monitoring Twitter and Facebook, the new tool allowed some of that work to be automated. Smaller institutions that lack the resources to scour social media found the tool to be “particularly beneficial”, Dr Griffiths said.
The pilot also confirmed that the majority of feedback from students was positive, with a sector-wide average in excess of four out of five.
Paul Hazell, the QAA’s evaluation and analytics manager, explained that some universities wanted to use the tool for continual course monitoring, rather than annual programme reviews, while others wanted to enhance their quality reports to their governors. “Some wanted to use the data to address particular concerns, some as part of their efforts to improve the student experience and one wanted to use it in the academic development reporting process,” he said.
Most universities said that they would prefer feedback at a more granular level: information about who was making the comments, such as past, present or potential students, staff or relatives, Mr Hazell added.
The QAA will finish assessing the pilots, gauge the reaction across the sector and weigh up the costs and benefits of rolling out the tool more widely, said Mr Hazell. “The pilot was very much about the potential for improving quality and the student experience … our intention was not for it to be used for regulatory purposes,” he added.
Dr Griffiths noted that if the QAA did not want to use it as an early warning tool, it could have other uses. “We’ve got some clear steer from providers that they find it useful. The good thing is we’ve got this wealth of data, there are so many ways you can use it,” he said.
“You could forget about individual providers and explore issues across the country, or we could have a chat with the Office for Students about using it in the regulatory sense: it was originally designed as a regulatory tool.”
Academics and university leaders will discuss how universities can encourage innovative teaching and learning practices at Times Higher Education’s Teaching Excellence Summit, which is taking place at Western University, in London, Ontario, Canada, from 4-6 June 2019.