In "Life's an Ocean", The Verve sang of imagining the future and waking up with a scream because they were buying some feelings from a vending machine. I wouldn't say that I have woken up screaming, but we might imagine a future in which academic practices have reached a similar state of efficiency, ordering and control. Put starkly, the day cannot be far away when there is an "app" that tells us what articles to read. I'm imagining a simple application that builds up a personalised profile of the research articles we read, and then uses that profile to predict what we are likely to want to read next. Such devices already tell us what music to listen to, what films to watch and what books to buy, so it can't be long before they are doing our research for us, too. This will be the day when, as the feminist scholar Donna Haraway predicted years ago, we become more inert and our research devices become more lively.
Imagine the ease of researching in a world where the research materials "find" us; where we need only log in to see what we must read in order to complete a project. No more searching, no more time wasted reading the wrong things or looking in the wrong places, no more aimless flâneurs wandering around libraries or flicking through e-journals to see what they might find. None of this will be needed, because the power of algorithms, as the sociologist Scott Lash has put it, will reshape the academy. These algorithms will streamline, predict, make decisions for us and do work on our behalf, taking some of the agency from researchers and research processes and making it their own.
This might sound like futurism, but the reality is that algorithms are already sorting the academy in lots of ways.
I've been quite speculative in suggesting that research articles will come to find their readers, but in many ways this is already the case with books. We need only think of how Amazon's predictive algorithms already shape our encounters with academic books. We might question their predictive accuracy, but most of us have probably had moments when Amazon's recommendation system has suggested a title that we have gone on to buy, read and build into an article, book or lecture. Clearly, algorithmic processes have implications for the way that our research or teaching turns out.
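The logic behind such recommendations can be surprisingly simple. What follows is emphatically not Amazon's actual system, which is proprietary and far more sophisticated; it is a minimal sketch of item-based "customers who bought this also bought" filtering, using invented purchase histories (the titles are illustrative only):

```python
from collections import Counter
from itertools import combinations

# Hypothetical purchase histories: each inner list is one reader's books.
histories = [
    ["code_space", "society_of_the_query", "the_filter_bubble"],
    ["code_space", "the_filter_bubble"],
    ["society_of_the_query", "raw_data_is_an_oxymoron"],
    ["code_space", "society_of_the_query"],
]

# Count how often each pair of books appears in the same basket.
co_bought = Counter()
for basket in histories:
    for a, b in combinations(sorted(set(basket)), 2):
        co_bought[(a, b)] += 1

def recommend(book, k=2):
    """Suggest the k titles most often co-purchased with `book`."""
    scores = Counter()
    for (a, b), n in co_bought.items():
        if a == book:
            scores[b] += n
        elif b == book:
            scores[a] += n
    return [title for title, _ in scores.most_common(k)]

print(recommend("code_space"))
```

Even this toy version makes the point: what gets recommended depends entirely on what previous readers happened to buy together, so the algorithm quietly channels new readers down already-trodden paths.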
In their book Code/Space: Software and Everyday Life (2011), Rob Kitchin and Martin Dodge demonstrate the importance of software to the functioning of the social world, everywhere from the home to air travel. It would be remiss to think that higher education somehow sits outside these broader social developments. Kitchin and Dodge point out that even mundane technologies such as Microsoft Word or Adobe Photoshop come "loaded" with "algorithmic normalities" that "subtly ... direct users to certain solutions". We can immediately see that PowerPoint's algorithmic normalities are likely to be giving us subtle directions in how to lecture.
Algorithms are also implicit in our research practices. We can begin with Google's famous PageRank algorithm. Google is an interface that almost inevitably plays a part in social research, as we search for peers in our fields, look for background information, check on a speaker we spotted at a conference, and perhaps even unwittingly discover something that triggers an idea for a project. Add to this the growing use of Google Scholar as a means of finding materials, along with more automated resources such as the readings and news feeds on sites like academia.edu.
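At its core, PageRank treats a link from one page to another as a vote, and weights votes from highly ranked pages more heavily. A minimal sketch of the underlying power-iteration calculation (not Google's production system, which layers many further signals on top) might look like this:

```python
import numpy as np

def pagerank(adj, damping=0.85, tol=1e-9):
    """Rank the nodes of a link graph by iterating the PageRank update.

    adj[i][j] = 1 if page i links to page j.
    """
    n = len(adj)
    A = np.array(adj, dtype=float)
    # Normalise rows so each page splits its vote among its outlinks;
    # pages with no outlinks ("dangling" nodes) spread their vote evenly.
    out = A.sum(axis=1, keepdims=True)
    A = np.where(out > 0, A / np.where(out == 0, 1, out), 1.0 / n)
    rank = np.full(n, 1.0 / n)
    while True:
        new = (1 - damping) / n + damping * rank @ A
        if np.abs(new - rank).sum() < tol:
            return new
        rank = new

# A tiny three-page web: pages 0 and 2 both link to page 1,
# so page 1 ends up with the highest rank.
print(pagerank([[0, 1, 0], [0, 0, 1], [0, 1, 0]]))
```

The detail matters less than the principle: the ordering of our search results is the output of a calculation, with built-in assumptions about what counts as importance, and that ordering then shapes what we go on to read and cite.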
Algorithms are also increasingly part of the research process itself. I was recently trained in the use of the survey analysis software SPAD in order to carry out multiple correspondence analysis; here, algorithms perform the complex mathematics that allow survey data to be mapped on to geometric space. Elsewhere, SPSS is a widely used resource for quantitative analysis, and Atlas.ti's algorithmic functions are used to perform pattern recognition on qualitative interview datasets and to group content to see what the data are saying. The list goes on. Algorithmically aided analyses feed into findings, shaping knowledge and then perhaps playing out in material ways through policy, planning and the like.
All of this is before we even begin to think about how algorithmic processes converge with higher education's systems of measurement: in the distribution of funding, the production of league tables, the outcomes of the research excellence framework, the use of citations, the ordering and ranking of journals, and so on.
The outcome, as the geographer Stephen Graham has put it, is that we need to look into the "very guts" of these systems. Researchers are examining the social power of algorithms in various spheres, including finance, the military and bioinformatics, but higher education has largely been left out. It is important for us to begin to acknowledge and think through the power of algorithms as they come to order and shape our research and teaching practices, and play a part in the very formation and communication of knowledge. At the moment we are paying little attention to these developments; as a consequence, we are being reworked from the inside out with little reflection.