A research project aims to compile a truly reliable journals ranking list. Linda Bennett explains
Which journal articles should academic researchers and university students read? Which articles do they actually read? What has authority and what is relevant? These are questions that my colleagues and I at Manchester Metropolitan University have been asking in a research project that analyses the reading habits of staff and students at four leading UK universities.
One undergraduate said: "To be honest, it's difficult. I've had this conversation with my friends. If something has been published, we have to hope that it's the best. False information is bound to slip through the net, though."
One way of establishing authoritativeness is to name-spot. One postgraduate said she "looks for the names of established academics". Other students, and some academics, said they trust the library: if it has bought a journal, they assume the librarian has approved its quality. This puts a huge burden on the librarian, given that more than 20,000 peer-reviewed academic journals are available worldwide. And as their number and costs grow, it also puts a strain on university finances. Librarians said that their role is not to evaluate the content of journal articles but to make relevant information available and to help people find it, in itself a marathon task.
All these methods for establishing authority and relevance are demonstrably hit-and-miss. A more considered approach might involve consulting a journals ranking list. With the proliferation of academic journals in recent years, such lists have become useful, if not indispensable. They come in four basic forms: peer surveys of journal esteem; citation studies; hybrid lists, which produce rankings on the basis of a combination of measures of peer esteem and citations; and derived lists, which extrapolate journal rankings from research assessment exercise and related audit ratings.
These lists have limitations: they are usually biased in favour of the judgments of senior research academics, often concentrate on publications from the US and Anglophone world and do not take into account the requirements of lecturers or the reading habits of students and researchers. Put a little crudely, their guiding precepts are elitist. For the broader user groups, especially those working in business, accountancy, education, librarianship and similar professions, titles that concentrate on topical issues and developments may be more important than those that focus on theory and theory testing. As a senior academic at a prestigious UK university said: "The published stuff can be old and horribly theoretical... [it] rarely translates into a subject of interest in the classroom."
To overcome the possible weaknesses of some established ranking lists, we have been developing a list that combines measures of research rigour, as defined by academic peer review, with measures of readership relevance, as defined by the reading habits of university lecturers and students. Using the Association of Business Schools' list of academic journals in business and management as a starting point, we have collected detailed qualitative information from academics (teaching- and research-orientated), students at all levels and librarians at four universities about what kinds of resource they use for their work and how they find and use them. We have also analysed the usage statistics from the same four universities.
There is much work to do, but preliminary findings of the project are interesting and include the following:
- Journals with the highest rankings in lists such as the ABS's are also generally the journals that are read most frequently by academic researchers and students
- A few journals with low rankings have very high readership levels, suggesting that the value of these publications needs to be better reflected in the lists themselves
- The more assured and renowned users are, the less they care about citation ratings and journal rankings. The same finding applies generally to the institutions within which they operate
- Writers at all levels reference articles that they haven't read, either because they are referring to something that was actually referenced in another paper or because the literature review component of their study was undertaken by a colleague.
When it is completed, we believe this research will provide an essential tool for librarians, students and academics trying to assess which journals they should buy, use or recommend. Will it resolve all their problems of choice? Perhaps, but only in the journals field. As every librarian knows, commercially produced information is the tip of the information iceberg in this internet age.
Or, as a well-known practitioner/academic at an old university said: "I use Google, not Google Scholar. Google tells you what sort of things other people are looking at."
Linda Bennett is a visiting research fellow at Manchester Metropolitan University and principal of Gold Leaf Consulting.