The research assessment exercise review panel wants to see radical change. The THES asks three experts to do some out-of-the-box thinking
My first suggestion for improving the research assessment exercise is that it provide clear descriptions of exactly what criteria are used to evaluate the quality of different kinds of publications, writes Eileen Gambrill. Take this example: "The panel (social policy administration and social work) have defined international excellence as... the extent and originality of the contribution to one or more of the following: the advancement of knowledge and of the discipline; the advance of theory; the advance of policy debate; the understanding or advance of good practice; the logical coherence of argument and the use of interpretation of evidence; and the clarity of research questions, appropriateness of research design, degree of methodological rigour and innovative approach."
This sounds good. But exactly what criteria are used to identify the degree to which "advances" are made? What criteria are used to evaluate "appropriateness of research designs"? How is "methodological rigour" defined? The manner in which research is carried out influences the degree to which biases compromise its conclusions. This is especially true of a profession such as social work, in which claims of knowledge are picked up from the media by practitioners, students and clients, too often with too little critical appraisal, and passed on to the public. Inflated claims may result in the use of ineffective or harmful services and the neglect of services found to be helpful.
If criteria that decrease the likelihood of biased results or faulty conclusions are not used, we should be told why not. Clear criteria and strong arguments for how they are used would enable more accurate descriptions of the quality of different kinds of studies, including accurate description of the kinds of biases that were controlled for.
The RAE should also make widely available the specific criteria used to appraise different kinds of publications and the related rating scales. Assessors should select the publications from each department that they regard as best reflecting these criteria, and send a copy of such articles, together with their ratings, to external reviewers. For example, research articles should be rated on a scale that describes the extent to which the method used can answer the questions raised, ranging from one (not at all) to five (ideal). In addition, rather than simply being given references from each staff member, external reviewers should be given an abstract of each piece as well as the individual rating given by the review panel to the piece. This would allow external reviewers to understand exactly how the RAE appraised publications. RAE assessors should also provide reliability figures for their ratings. Only if we examine the extent to which different raters give similar quality scores can we determine the degree of agreement between them. Lack of agreement may point to areas in need of clarification. Secrecy allows bias that may perpetuate stereotypes. Transparency is likely to encourage judicious use of taxpayers' money.
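To make the reliability suggestion concrete, a standard statistic for agreement between two raters is Cohen's kappa, which corrects raw agreement for the agreement expected by chance. The sketch below is purely illustrative: the one-to-five quality ratings are invented, not actual RAE figures.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: share of items given the same score.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    # Chance agreement: probability both raters pick the same category.
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical 1-5 quality ratings of ten publications by two assessors.
a = [5, 4, 4, 3, 5, 2, 4, 3, 5, 4]
b = [5, 4, 3, 3, 5, 2, 4, 2, 4, 4]
print(round(cohens_kappa(a, b), 2))  # → 0.58
```

A kappa near zero would mean the assessors agree no more often than chance, exactly the kind of figure that, if published, would point to criteria in need of clarification.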
Each description of research productivity by each department could also be rated on a propaganda index, ranging, say, from one (none) to six (saturated). We could develop an index representing the number of claims made that have survived critical tests, compared with the number of claims that have not been critically tested or that have been critically tested and found to be false. Propaganda includes inflated claims about what has been accomplished, hiding limitations of research methods and ignoring incisive critiques, misrepresenting alternative well-argued views and hiding deep conceptual problems with a popular view.
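One way such an index could be operationalised, sketched here with hypothetical figures rather than any department's actual record, is as the fraction of a department's claims that have survived critical tests, out of all claims made: survivors, untested claims, and claims tested and found false.

```python
def claims_index(survived, untested, failed):
    """Fraction of a department's claims that have survived critical tests."""
    total = survived + untested + failed
    if total == 0:
        return 0.0
    return survived / total

# Hypothetical department: 6 claims survived testing, 10 untested, 4 failed.
print(round(claims_index(6, 10, 4), 2))  # → 0.3
```

A low value would flag a description of productivity that rests mainly on untested or refuted claims, which is the pattern the propaganda index is meant to expose.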
These suggestions should be accompanied by the realisation that trying to create change is difficult. It is important not to punish innovative research programmes that may require a developmental agenda over many years. The principal question should be: "Are initial steps well thought out?" We should discourage quick results of minimal value in discovering what actually helps.
Eileen Gambrill is a professor in the School of Social Welfare, University of California, Berkeley. She was an external reviewer for the RAE assessment in social work, social work administration and social policy in 2000-01.