When it comes to research evaluation, peer review and bibliometric analysis should be seen as complementary rather than substitutes, according to new research


Research reveals that peer review and bibliometric analysis should be considered complementary modes of evaluation when assessing research.

The researchers suggest that targeting peer review at publications whose quality cannot be unambiguously ranked by bibliometric analysis would make the assessment of research standards at UK universities more effective and a better use of public funds.

Research evaluation

The researchers used the UK’s 2014 REF exercise to investigate the attributes of the top-rated (four-star) publications in economics and econometrics. Although the official documents contain aggregate scores for each institution, the researchers show how these aggregates can be used to infer the score assigned by REF panelists to each publication.

The results demonstrate that this score responds to the prestige of the journal, as measured by the Thomson Reuters Article Influence Score. While the use of this particular metric is justified by its attractive properties and by previous research, the researchers also consider alternative measures of journal impact, such as the impact factor.

Several econometric analyses confirm that other publication attributes, such as citations, contribute little to the score assigned by REF panellists, and that publications in the top general-interest journals and the "top five" economics journals are unambiguously awarded four stars.

Implications for future assessments

The findings have important implications for the design of future research evaluations, beyond informing the ongoing discussion of the 2021 REF cycle. The use of bibliometrics in research evaluation has been intensely debated, and the REF rules rely on expert review, reflecting concerns about the responsible use of metrics.

The results show that, at least in economics and econometrics, a journal impact index closely approximates expert judgement, particularly for outputs published in top journals.

Dr Marco Ovidi, of Queen Mary’s School of Economics and Finance, who co-authored the study, said: “Our research reveals that in economics and econometrics, the experts’ ranking of REF 2014 research outputs is very close to the ranking that would be obtained using a bibliometric indicator of journal impact.

“The differences are particularly small for research outputs published in high-impact economics journals. Our results suggest that the costly peer-review process should focus on finding hidden gems in journals of relatively lower reputation, rather than on over-rated outputs in top-rated outlets.

“We believe our findings should be of interest to academic departments and research policymakers as the next research evaluation, REF 2021, gets underway.”
