The RSS has argued that the use of metrics needs improvement and should support, but not replace, peer review when it comes to assessing university research excellence.
The RSS made the argument in a response to the Stern Review, which was commissioned to improve how the Research Excellence Framework (REF) is carried out. The REF has attracted criticism from some who believe that it costs too much (estimated at more than £230 million) and unwittingly encourages ‘gaming’ around journal impact factors, citation counts and the selection of researchers for university rankings.
The current president of the British Academy and former World Bank chief economist, Lord Nicholas Stern, who is leading the review, said: ‘Research assessment should not unwittingly introduce incentives for perverse behaviour, nor should it be overly burdensome.’
Evidence submitted by the RSS to the Stern Review (PDF) says that prominent REF metrics are ‘gamed’ and calls for investigation of a more accurate ‘quality per researcher’ metric. It also says that ‘very careful consideration should be given to any actions that increase the data collected for REF’, concurring with the findings of a 2015 HEFCE report, ‘The Metric Tide’, which said that using metrics alone would be no substitute for peer review.
The RSS also calls for changes to peer review arrangements, to ensure that the REF provides fair assessment of research quality regardless of the working arrangement that produced the research. It points out that important interdisciplinary research may miss out under previous arrangements. ‘Researchers might be driven away from working across subject boundaries if provision for peer review appears more administratively complex or carries a lower chance of recognition,’ the response notes.
Research assessment should more explicitly recognise ‘scholarly impact’, says the RSS, in order to sufficiently value ‘fundamental research’ which may not yet have a practical application. ‘Methodological advances very often begin to develop before a clear link to a specific area of application has been established,’ it points out. ‘If the REF does not recognise the importance of fundamental research, institutes will likely prioritise appointments with high prospects of short-term impact.’
The RSS was actively working on REF methodology before the Stern Review was commissioned; last year it set up a working group to consider how rankings based on REF results could be improved. The working group produced a report, summarised in a letter published in Times Higher Education from RSS president Peter Diggle, explaining some of the problems with the current system.
Evidence submitted by the Council for Mathematical Sciences (PDF) also contains contributions from the RSS, as well as further points such as the need to recognise that different disciplines should assess metrics differently, and that qualitative submissions about departments’ research environment are an administrative burden that seems to favour large departments.