Journal lists that rank scholarly journals by their perceived quality may be fundamentally flawed because of the bias among researchers who are keen to label their own work as top drawer.
Research into the compilation of journal lists concludes that the peer-review system used to build them is open to personal bias, raising questions about the lists' role as a measure of academic excellence.
It finds experts' ratings of journal quality are heavily influenced by concerns about their own identity as researchers.
Kim Peters, research fellow in the School of Psychology at Exeter, said: "We found that (academics) rated the quality of journals more favourably when they had personally published more papers in that journal, when they were a member of the journal's board, when the journal reflected their disciplinary affiliation and when it reflected their geographic affiliation.
"In summary, expert academics show strong, and predictable, self-favouring biases in their ratings of journal quality," she added.
The findings support predictions the researchers derived from social identity theory, which argues that people are motivated to judge themselves positively and will therefore evaluate favourably anything associated with themselves.
A paper penned by the research team, and due to be presented at a symposium hosted by the British Academy of Management in September, concludes: "These results suggest that attempts to publish a definitive list of journal quality are misguided in a context of weak paradigm development."
Journal lists are controversial as a measure of quality, and their use has been ruled out in the forthcoming Research Excellence Framework, which will replace the Research Assessment Exercise (RAE) as the means for distributing about £1.5 billion in annual quality-related research funding in England.
However, they are still a popular measure within universities, and some departments are believed to have selected papers for submission to RAE 2008 on the basis of journal list rankings.
Dr Peters said she found it interesting that academics displayed the same self-favouring biases that they study in other people.
"Being aware of these processes doesn't seem to stop people from behaving in these biased ways," she said.
The researchers said the academic community needed to think carefully about how the lists were constructed.
The paper, "Experts' judgments of academic journal quality in management and organisation studies", calls for new journal lists based on more objective criteria, but says that using citation rates alone is not the solution.
"Citation rates are also inherently biased in various ways, partly because to use them people need to make a number of 'subjective' decisions, such as which journals to include," Dr Peters said.