Altmetrics: what’s not to like?

1:AM forum on altmetrics weighs pros and cons of tracking research impact via blogs, ‘likes’ and social media traffic

October 2, 2014

Researchers’ creativity could be undermined by too many new metrical approaches to research evaluation, the director of the Wellcome Trust has warned.

Jeremy Farrar told a conference in London last week that so-called altmetrics - research metrics that go beyond traditional citation counts - offered the possibility of “driving change” by better connecting researchers with each other and with the wider community. But he said that the current burden on the research community was “massive”.

“We are in danger of overburdening it with ever more approaches, and it is on the edge of not being able to cope…such that we will destroy [its] creativity and innovation,” he told the 1:AM conference on altmetrics on 25 September.

Altmetrics often involve counting the number of mentions or “likes” of papers on social media. Some commentators have suggested that this could provide an early indication of how many citations a paper is likely to accrue ultimately. But Euan Adie, founder of altmetrics provider Altmetric, admitted that the correlations were “very weak”.

Altmetrics were better suited to assessing the broader social impact of papers, he said, adding that while analysing social media was useful, the movement had to go further.

He said that his firm was also assessing mentions on blogs and review websites such as Faculty of 1000, as well as policy documents - although he said it would take human intervention to ascertain whether the policy had been implemented.

Andrea Michalek, co-founder and president of Plum Analytics, said that her company was also counting comments and reviews, Wikipedia mentions and bookmarks in reference managers.

Adam Dinsmore, evaluation officer at the Wellcome Trust, said that the trust used altmetrics to flag up potentially interesting “narratives” about the influence of research it funded. However, he said, a high altmetric score did not necessarily imply that the research had made a crucial scientific impact, and it was important to consider whose attention papers had caught. For example, the trust searched for mentions of its papers in academic syllabuses and on websites with .ac and .gov domain names.

Liz Philpots, head of research at the Association of Medical Research Charities, said that she used altmetrics to monitor public discussion of funded research, but warned that assessing researchers on the basis of altmetrics would risk judging them “on the basis of their communication skills, not their research skills”.

James Wilsdon, professor of science and democracy at the University of Sussex, is chairing an independent review of the use of metrics in research assessment for the Higher Education Funding Council for England. It has received 152 written responses, which will soon be posted online. Preliminary conclusions will appear in March, followed by a final report in June.

He said that the “incredible dynamism and innovation” in altmetrics was “genuinely exciting for all of us who want to see a creative, outward facing, socially engaged and impactful scientific enterprise”. But he echoed Professor Farrar’s concern that it also had the potential to overburden researchers with “yet more complex systems and tools for them to worry about”.

Readers' comments (5)

I am glad THE covered the conference. However, the choice of featured image is unfortunate, as it does not really reflect what alternative metrics (or rather article-level metrics) are about. Furthermore, the image's caption is not only misleading, but false. Altmetrics do not "count the number of 'likes' on social media". Altmetrics track, measure and offer different ways of analysing how researchers and academics are *mentioning* (i.e., linking to) academic articles published online. Digital Science's Altmetric, for example, measures, amongst other social media engagement, links made to academic articles on Facebook public pages, but not merely 'likes'.

The image and caption do a serious disservice to the article. A web-savvy platform like THE will surely be aware that many readers will browse, come across only the image and caption, and move on. When covering these methods and technologies, as in the case of all academic research, it is essential to be rigorous, even if there is some kind of editorial guideline to make headlines catchy ("what's not to like?"). There is much more to social media than Facebook likes, and in 2014 THE of all platforms should be aware that academics are using social media seriously, critically and professionally, and do not necessarily look like the Reading festival students in the picture. There is general agreement in higher education that it is important to resist the dumbing down of everything. Higher education journalism has an important role to play in avoiding this increasingly common and frustrating reality.
Hang on, Ernesto! So you're saying that although this article mentions 'altmetrics' 19 times in all, it's not a very accurate account of altmetrics? And that "many readers will browse and mainly come across the image and caption--and move on" and hence that readers' browsing habits are not in fact a good way of measuring the intellectual content of a piece of writing. I think you'll find that those are precisely the arguments people use against metrics, 'alt' or otherwise, in the first place.
@gabriel sure. That doesn't make Ernesto's point any less valid though. :) Altmetrics isn't actually about the numbers as an end goal. It just has a misleading name. It covers anything that isn't citations, not just social media. Even if it *did* just cover social media, there's obviously value in looking there for supporting evidence of impact alongside 'traditional' outputs, as Ernesto says and as evidenced by things like the THE article from today I'm seeing being tweeted to the right. This kind of stuff came up at the conference fairly frequently, which is why the caption and image on this (otherwise good) piece are unfortunate. (disclosure: I'm the Euan quoted in the article)
The Twittersphere has hooked onto this article's lead: "Researchers' creativity could be undermined by too many new metrical approaches." But there's more to the story than 'metrics = no more academic freedom.' In fact, by measuring impacts in new ways--and by acknowledging impacts beyond those we've traditionally accepted--researchers can gain *more* creativity and flexibility. Here are some scenarios where academic freedom and creativity can be enhanced by tracking metrics beyond citations:

- A bioinformatician who's really great at (and passionate about) writing research software can finally start to get credit for the wildfire adoption of her software by others in her field, now that she's able to document the reuse her code receives, evidenced in GitHub forks, pull requests, etc. This gains her the freedom to spend more time focusing on software development, rather than publishing papers about the software she's written.
- Now that article-level metrics are more appreciated by his department chair, a chemist has the freedom to publish in journals other than the "glam mags"--journals like PeerJ and PLOS ONE--and even to post his preprints on arXiv.
- Researchers who do the world a service by communicating the results of their research well--appearing on TV, writing articles for magazines, etc.--say, in areas such as public health or climate change, can track the impact of their work beyond mere circulation counts or estimated television audiences.

The issue at the heart of everyone's worries? That the ability to measure will soon mean that minimum metrics will be required for job applications, tenure, grants, etc. ("Your readership on your latest paper is below 1,000 downloads and 2,500 pageviews--sorry, you don't have the kind of impact we're seeking here at University of XYZ.") It's a legitimate concern, but one that can't be remedied by rejecting metrics altogether.
Instead, it's up to all of us--the altmetrics aggregators, the department heads, the HEFCE panelists, the individual researchers who make up departments and universities--to make sure that metrics aren't misused. Farrar's full speech to the conference can be viewed online; I encourage readers to watch it to understand the context of his "creativity" remarks, and the larger issues we're wrestling with in the field of altmetrics, particularly in applying these new measures in ways that benefit, rather than restrict, academic freedom and creativity.
I too am not sure why THE would choose the image it has, as it can be misleading about the content and ideas behind the piece. For anyone wanting a more in-depth review of the conference, I blogged about it here.
