
The difference between good and bad impact management

How do academics and research managers really feel about impact management? Here’s how it can be done badly, and how to do it right
Bournemouth University, University of Plymouth
3 Feb 2026


Over the course of writing our book Exploring Research Impact in Academia and Why It Matters, we were repeatedly struck by the same contradiction: universities speak passionately about research impact, yet in practice they often manage it in ways that undermine its very purpose. Our interviews with academics and impact professionals reveal a sector that is trying to do the right thing but often ends up hampering rather than helping.

Here, we’ll reflect on what good and bad impact management looks like, based on our conversations with research managers and academics.

When impact is done badly

We should begin with the uncomfortable reality: much of what currently passes for impact management is neither supportive nor effective. Too often, those involved experience it as superficial, bureaucratic and disconnected from academic practice.

1. Over-bureaucratisation and management-by-checklist

The people we spoke to repeatedly described processes that prioritised procedural compliance over genuine engagement or support. Impact repositories, compulsory templates and constant requests for evidence became symbols of institutional control rather than support. Not a single academic we spoke to saw value in centrally mandated systems designed to monitor impact. Most felt these systems were tools for the institution to retain ownership of case studies rather than to add value to the work itself.

When management reduces impact to a series of checkboxes, academics quickly recognise that their work is being measured and scrutinised, rather than supported.

2. Leadership without knowledge

We were often told that there is a gap between senior leaders’ rhetoric about impact and their actual understanding of it. Many interviewees told us that their leaders’ engagement with impact was superficial, often limited to demanding more impact activity without offering any meaningful support or demonstrating academic literacy in the area.

Some were even instructed by leaders to write “higher-quality papers” to maximise funding returns, or to “do four-star impact” simply because that is where the greatest return has been in research evaluation exercises. When leadership lacks knowledge, impact becomes an administrative burden rather than an academic endeavour.

3. Short-termism and REF-driven behaviour

A recurring theme across interviews was the dominance of the Research Excellence Framework (REF) in shaping institutional responses. Impact was framed as something to be done for assessment cycles rather than as a long-term commitment to stakeholders. This created perverse incentives:

  • demanding activity from every academic (“everyone must do impact”), or
  • restricting impact to only one or two individuals (“we only need two case studies”).

Both myths were cited frequently and reflect management cultures that do not understand academic practice.

The result is predictable: shallow, last-minute attempts at “doing impact”, often retrofitted to existing research in ways that feel inauthentic.

4. Outsourcing impact to consultants

Our research revealed significant reliance on external consultants, sometimes at substantial cost. This trend reflects a belief among some leaders that impact is a commodity that can be purchased. While some interviewees said their experiences with consultants were supportive and nurturing, others questioned why support was being outsourced at all.

Interviewees sometimes expressed scepticism about the “impact industry”, highlighting formulaic training, generic process models and a tendency to prioritise surface-level performance over meaningful engagement. If institutions place more trust in consultants than in their own impactful academics, it is little wonder that impact fails to embed.

5. Impact treated as surveillance

In many institutions, monitoring veers into something resembling surveillance: endless requests for evidence, mock reviews by people with little knowledge of the field and scrutiny that feels more like control than support. Academics frequently interpreted this as a sign that their institution did not trust them, and that impact was being policed rather than facilitated.

Such practices demotivate the very people who are best placed to deliver the meaningful research impact that will return considerable value to the institution.

What good impact management looks like

Despite these challenges, our research also uncovered examples of practices that genuinely support impactful work – sometimes isolated, but sometimes systemic. Good impact management is neither mysterious nor burdensome. It simply aligns with academic identity, supports long-term relationships with stakeholders and treats impact as integral to scholarship rather than peripheral administration.

1. Respecting academic identity and expertise

The most impactful academics we interviewed did not produce impact because of REF incentives. They did so because impact was part of their scholarly identity – they believed their research could contribute to positive change, and their work with stakeholders grew naturally from that belief.

Good management recognises that impact grows from academic values, not managerial instruction.

2. Support, not control

Where institutions got it right, they provided:

  • time
  • recognition
  • flexible support, and
  • access to knowledgeable colleagues.

They did not dictate how impact should be done. They avoided rigid timelines and metrics, and instead offered encouragement, infrastructure or practical assistance when needed.

Leadership in these cases understood academic culture and could speak with credibility, rather than relying on slogans or compliance demands.

3. Investing in internal expertise

Several academics commented positively on impact professionals who understood research policy, had developed deep expertise and worked alongside researchers as partners, not auditors. These professionals understood both the research and the value of its impact. They helped academics articulate impact without reducing it to formulaic phrases, were embedded in institutional culture and available to offer support, and often served as valued sounding boards and advisers.

Institutions that build and trust internal expertise cultivate more authentic and sustainable impact.

4. Allowing impact to develop organically

Meaningful impact is rarely linear. It depends on relationships, timing, context, and often a level of serendipity. Our interviewees stressed that genuine impact requires long-term engagement and cannot be reverse-engineered from a process model or conjured up at the end of a project.

Good impact management recognises this uncertainty and supports academics in pursuing the work over time, even when it does not lead to immediate or measurable results.

5. Recognising impact as part of research culture

When institutions value impact as scholarship, rather than as an administrative obligation, academics feel empowered to pursue it. This means:

  • recognising impact in promotions
  • allocating time for it
  • reinvesting quality-related (QR) funding to support both embryonic and long-term activity, and
  • celebrating diverse forms of contribution.

These practices send a powerful message that impact is respected, understood and embedded in research culture.

Across all our evidence, the message is consistent: impact succeeds when it is co-created by academics and stakeholders, and supported by leadership, rather than imposed by management or outsourced to consultants. Bad impact management treats impact as compliance; good impact management recognises it as scholarship and supports it.

If institutions wish to foster transformative and meaningful research impact, they must stop managing impact at academics and start supporting impact with them.

Andy Phippen is professor of IT ethics and digital rights at Bournemouth University. Louise Rutt is senior research environment, culture and impact manager at the University of Plymouth. 

