Relying on ambiguous participation data raises ethical challenges and can often result in misleading conclusions around student engagement, according to a new paper.
Universities regularly use data tracking logins, class attendance and page views to make decisions about teaching quality and student support, with high rates often presented as evidence of successful learning environments.
New research argues that although such data can provide significant benefits when used responsibly – supporting monitoring and intervention and boosting student well-being – its use should also be seen as “complex and contested”.
Study author Elva Retnawati, a PhD student at the Huazhong University of Science and Technology, said participation data can present a number of issues, including challenges around defining it.
Although many would classify participation data as attending sessions, posting online or accessing material, the paper says it is an “elastic” concept that is rarely consistently defined.
It also overlooks offline study, peer learning and critical reflection, and could “disadvantage students whose engagement does not align with system assumptions”.
Students who engage through reflection, peer support or independent study might appear disengaged to university observers. Others might appear inactive because of limited digital access, timetable clashes, mental health challenges or resistance to being monitored.
And those who do appear engaged, according to such metrics, might only be doing so to appear involved, rather than because they think it will help their learning, the study highlights.
“When data becomes the primary indicator of performance, institutions may prioritise visible activity over depth, creativity, and autonomy,” says the study, published in Studies in Higher Education.
As such, the paper warns that most forms of participation data risk presenting a partial, and sometimes misleading, picture of student engagement and capture “only a narrow slice of learning activity”.
There are also ethical concerns: extensive monitoring often contributes to feelings of surveillance, discomfort or uncertainty, it says. And it cautions that, without the necessary context, data could “amplify existing inequalities”.
The study warns that there could be pedagogical danger in relying too heavily on participation data.
“Misinterpreting analytics may shift teaching to what is most easily measurable, such as post counts or login frequency, rather than learning quality.”
Participation data should not be used as a definitive measure of student experience or capability; it is just one limited component of a broader learning environment, Retnawati writes.
Instead, universities are urged to consider multiple or hybrid forms of evidence, including dialogue, reflection, peer interaction and self-reporting.