Data is a flawed guide to student preferences and performance

Agency and incomplete information imperil data-driven assumptions about how to personalise teaching and learning, say Kate Ames and Colin Beer

July 10, 2022

In the era of big data, universities are on a march towards “personalisation”. The aim is that students and staff have knowledge about student activity and how this influences academic results so that positive behaviours can be encouraged and student journeys can be supported in an individualised way.

This personalised data is powerful and important. It allows users and institutions to make informed decisions – but only up to a point.

For instance, anyone working in higher education will currently be feeling the pain of delivering lectures to empty halls, despite students indicating that they want a face-to-face experience. And while universities are delivering courses that data tells us should be popular, they aren’t filling quotas.

It’s a wicked problem – trying to plan for what people say they want, only to find that that’s not what they actually want – or, perhaps, need. It’s a known phenomenon associated with self-reporting.

A natural tension exists between data scientists (“We do because we can”) and educators (“We do because we should”). The former explore the possibilities of capturing and retrieving data, while the latter explore the potential for learning and teaching. There is, of course, an overlap, but, increasingly, there are questions that require broader perspectives.

Predictive analytics is a good example. Data can potentially predict success or failure in the classroom and can be used to inform students about their likelihood of success based on a range of indicators around engagement. It sounds like a great idea, but raise this with most educators and you’ll get a shocked look: “Why would you tell a student that they are likely to fail?”

An abundance of research tells us that students will rise (or fall) to the expectations we have of them (the Pygmalion and Golem effects come to mind). There are also different views on what constitutes engagement and the actual value of historical indicators, especially in a post-Covid hybrid learning world.

A second example is in planning. As higher education institutions, we base our decisions on data that reveals usage patterns for activities, resources, classes and learning management systems. In a disrupted environment, however, there is a problem.

We can present a student with a view on what we think they will need, use and access, based on past and current activity. It works to an extent, but we are increasingly challenged by personal silence or absence: those students who disappear, disengage, don’t turn up. We assume that they are not interested or engaged and are not learning. But this rests on an assumption that learning happens exclusively in the classrooms or learning environments that we’ve created. We can’t see what is happening outside those spaces.

In other words, we are ignoring agency, whereby people intentionally contribute to, or direct, their life circumstances where they have capacity to do so. This makes them unpredictable. However confident we may be in our predictions about future action, individuals can and do make different choices.

Or perhaps we simply don’t know what to do about such agency, because it relates to power. We are seeing its influence in the workplace, where increased agency over how, where and when to work is changing fundamental structures and processes. That preference would not have shown up in pre-pandemic workplace data – partly because no one was asking people whether they would prefer to work from home; it wasn’t considered a viable option.

In higher education, we design short online lectures because data tells us that students watch only the first few minutes of a long lecture – but then students complain about the short lectures. It can feel like a no-win position.

Data is a great thing. We need to be informed to make decisions. But we need to remind ourselves that we are at a tipping point when it comes to the relationship between personalisation and agency. We can predict all we like about what the future of higher education looks like, but more research is needed to fully understand the implications of agency in the classroom. This includes understanding the social and economic systems that influence student decisions, and how we can adapt our practices.

In the case of students who say in surveys that they want to come back to campus but then don’t turn up, it feels like the data has lied. But if we asked students directly about access to and cost of public transport, childcare availability or how they feel about walking on campus at night, we might account for the apparent discrepancy.

Anyone who has tried to drop children at school or daycare and get to somewhere by 9am knows that lectures scheduled at that hour are never likely to be full. And the mere idea of walking through a dark campus after class is enough to prompt some students to go home earlier, however much they may want to attend. In both cases, the cost of the effort is simply too high for them to act on the desire to go to class.

So the data hasn’t lied. We just don’t know enough about the influence of external factors on the motivation to act. Or perhaps we do, but we aren’t taking enough notice or moving quickly enough to adapt. Either way, it is clear that the path from data to effective personalisation cannot bypass agency.

Kate Ames is a professor and director of learning design, and Colin Beer is a curriculum/educational developer at CQUniversity, Australia.
