Supporting Europe’s bold vision for responsible research assessment

Early career researchers need to be at the table when decisions about assessment systems are made. Credit: Getty

Concerns that research assessment systems are too narrow in what they measure are no longer new. Existing approaches favor individuals or teams who obtain large grants, publish in high-impact journals such as Nature, or file patents, to the detriment of quality research that does not meet these criteria.

According to a November 2020 report by the Research on Research Institute (RoRI), a network of experts who study how research is done, this method of assessment pressures the research community to perform against a narrow set of measures. It also increases the risk of breaches of research ethics and integrity. At the same time, it systematically biases against anyone who does not conduct, or chooses not to prioritize, research whose value can be captured in a number.

Concerns about the distorting effects of commonly used assessment procedures have already led to initiatives such as the San Francisco Declaration on Research Assessment (signed so far by more than 2,500 organizations, including Nature’s publisher, Springer Nature, and 19,000 individuals); the Leiden Manifesto for Research Metrics; the SCOPE principles established by the International Network of Research Management Societies; and The Metric Tide review, commissioned by UK funding agencies. There are, in fact, at least 15 separate efforts urging policymakers, funders and heads of institutions to ensure that assessment systems minimize harm.

Many of the architects of these projects are beginning to worry that each successive initiative amounts to more (undoubtedly worthy) words, but little practical action.

The Agreement on Reforming Research Assessment, announced on July 20 and opened for signatures on September 28, is perhaps the most promising sign yet of real change. More than 350 organizations have pooled their experience, ideas and evidence to produce a model agreement for creating more inclusive assessment systems. The four-year initiative is the work of the European University Association and Science Europe (a network of funders and science academies across the continent), building on the earlier initiatives. It has the blessing of the European Commission, but its ambition is to become global.

Signatories must commit to using metrics responsibly; this includes ending what the agreement calls “inappropriate” uses of journal- and publication-based metrics such as the journal impact factor and the h-index. They also agree to avoid using rankings of universities and research organizations or, where that is unavoidable, to recognize their statistical and methodological limitations.

Signatories must also commit to rewarding more qualitative factors, such as leadership and mentoring (including doctoral supervision), as well as open-science practices, including data sharing and collaboration. It is undoubtedly true that the final research paper is not the only indicator of research quality: other forms of output, such as datasets, new article formats such as registered reports (Nature 571, 447; 2019) and more transparent forms of peer review, are equally important.

What makes this more than just a statement of good intentions is that the signatories pledge to create an organization that will hold itself accountable. In October, they will meet in a UN-style general assembly to review progress and create a more permanent structure. At the heart of this structure will be the idea of giving researchers, especially early-career researchers, an influential voice. They need to be around the table with their institutions, senior colleagues and funders: those whose evaluation systems are currently the source of so much stress.

The agreement focuses on three types of research assessment: of organizations, such as universities and departments; of individual researchers and teams; and of specific research projects. Each type of assessment will almost certainly require different arrangements, and these, in turn, will vary from country to country.

But the purpose of this exercise is not to create a uniform method of evaluating research. It is about setting out principles that everyone can agree on before embarking on assessments. Assessments should be fair, the reasons for decisions should be transparent, and no researcher should be disadvantaged or discriminated against. If excellence is to be the criterion, it should not be limited to a restricted set of indicators (such as funding raised or publications in high-impact journals), as Nature has consistently argued (Nature 435, 1003–1004; 2005). There is excellence in mentorship, in data sharing, in the time spent training the next generation of scholars, and in identifying and providing opportunities for underrepresented groups.

As the authors of the RoRI report say, the time for declarations is over. Research evaluation must now begin to change, to measure what matters.
