The Australian government recently released the fourth national report on the Excellence in Research for Australia (ERA) assessment. The latest review, conducted in 2018 by the Australian Research Council, uses performance data for the period from 2014 to 2016. We now have 14 years of longitudinal research performance data on the Australian higher education research system, based on exercises in 2010, 2012, 2015 and 2018.
The government’s principal stated objective for undertaking these expensive exercises is to “establish an evaluation framework that gives government, industry, business and the wider community assurance of the excellence of research conducted in Australia’s higher education institutions”.
Arguably, this objective is being achieved, but at considerable cost to the undergraduate teaching and learning experience and to the development of the nation's higher-level technical skills base, as universities increasingly focus on their research agendas. More than 50 per cent of all the money universities spend on research and research training now comes from discretionary income sources.
The 2018 comprehensive stocktake involved 42 universities submitting more than 500,000 research outputs to be assessed in 2,600 units of evaluation (UoEs), grouped into 22 fields of research with 157 subgroups.
Each successive review has identified increased research income, research publication outputs and an improvement in overall quality of performance as assessed by the proportion of outputs that are rated above world standard. Some 69 per cent of the UoEs were rated above world standard (a 5* or 4* rating) in the 2018 exercise, compared with 63 per cent in 2015 and 46 per cent in 2012.
It seems to me that universities have progressively refined their expertise in submission preparation based on previous experience. These more professional administrative structures have somewhat masked genuine research improvement.
The Australian government does not directly link research funding to university ERA performances. Limited attempts to do so in the past have attracted much criticism. But funding allocation formulas that are acceptable to discipline groups and universities have not been satisfactorily developed.
Undoubtedly, the benefit to government from these reviews is to have a measure of where excellence for various disciplines resides in universities. The data are used by government policymakers and others to justify the near A$10 billion (£5.4 billion) expended annually by all parties on university research and research training.
And the main benefits to universities of the ERA initiative came from the 2012 and 2015 exercises. The ERA has been the enabler and motivator for universities to undertake more directed strategic research planning. Many universities have felt empowered to rationalise their research priorities for investment and their staffing profiles based on the ERA outcomes. For the first time, universities had discipline-specific, independently assessed comparative performance data for all their active fields of research.
One consequence, however, has been the realignment of academic staff duties. Since 2013, there has been a very significant system-wide increase in the number of teaching-only staff and no net growth in research-only and teaching-and-research staff, despite the rapid growth in postgraduate education.
Another strategy that has emerged has been for universities to place a higher priority on recruiting overseas doctoral research students than Australian students, because of their superior timely completion records and, presumably, their publication productivity. Australian doctoral students accounted for only 22 of every 100 additional places in doctoral student load growth between 2008 and 2017.
Still, an important ongoing benefit of the ERA rankings by discipline is that they provide a valuable marketing tool for student recruitment by identifying an institution's staff and research strengths. The improved international rankings of several Australian universities also owe something to the research reforms that ERA outcomes enabled.
We’ll soon see the results of the first ARC research impact and community engagement exercise for Australian universities, announced at the end of 2017. An important national debate will be whether the ERA and the Engagement and Impact Assessment should continue and at what frequency, considering that they are time-consuming and costly activities for universities with diminishing returns.
It is my view that another ERA exercise should not be conducted for at least five years. The role of the ERA in reforming university research practices has not been wholly in Australia’s long-term international competitive best interest because of the burdensome administrative costs, the strategic staffing realignments and the resulting over-reliance on overseas doctoral students. A review of the Australian higher education research system is warranted.