
International solutions to the peer review burden

11 Sep 2017

Research funding agencies around the world face the same challenge: the number of applications keeps rising while the amount of funding available remains unchanged. For peer reviewers, this has meant a growing workload, while for researchers the intensifying competition has reduced the prospects of winning funding.

On 29–30 June 2017, the Netherlands Organisation for Scientific Research (NWO) hosted an international conference in Amsterdam aimed at addressing these challenges. It drew a large number of participants from European, American and Japanese research councils. The Academy of Finland was represented by Science Adviser Laura Kitti (Strategic Research Unit) and Senior Science Adviser Juha Latikka (Natural Sciences and Engineering Research Unit).

“The consensus was that peer review is still the best way to single out the highest-quality applications. The peer review process also enjoys the backing and confidence of the science community,” Laura says.

There is, however, a clear trend of funding agencies in different countries looking to support multidisciplinary research projects that can deliver a stronger social impact. Assessing social impact is far from simple and straightforward, though, and contributors to the conference discussed alternative ways of doing it.

“The Strategic Research Council at the Academy of Finland has systematic and transparent procedures in place for assessing the social impact of the projects it funds. This is almost unique in Europe and attracted a great deal of interest at the conference, including from the OECD representative.”

The SRC has separate panels to assess the scientific quality and the societal relevance of applications. The relevance panel gives its assessment of the proposed project’s social impact based on the items listed in the call for applications. All applications must include an interaction plan, which outlines how the project proposes to work with end-users to generate research knowledge and solutions that will pave the way to more informed decision-making. The two assessments carry the same weight, so successful applications must receive high ratings for both scientific quality and social impact.
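To make the equal-weight rule concrete, here is a minimal sketch in Python of how two equally weighted assessments might be combined. The 1–6 score scale and the threshold of 5 are illustrative assumptions, not the Academy of Finland’s actual parameters.

# A minimal sketch of the equal-weight decision rule described above.
# The 1-6 score scale and the threshold of 5 are illustrative
# assumptions, not the Academy of Finland's actual parameters.

def src_decision(scientific_quality, societal_relevance, threshold=5):
    # Both assessments carry the same weight: an application succeeds
    # only if it rates highly on quality AND on social impact.
    return scientific_quality >= threshold and societal_relevance >= threshold

# A scientifically excellent proposal with a weak interaction plan fails:
print(src_decision(scientific_quality=6, societal_relevance=3))  # False
print(src_decision(scientific_quality=5, societal_relevance=6))  # True

The point of the conjunction, rather than an average, is that a high score on one criterion cannot compensate for a low score on the other.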

Sharing best practices

The conference provided an overview of review processes in different countries and prompted discussion on why application processing times, for instance, vary from country to country and on the lessons that can be learned. “It was particularly interesting to see just how differently the same process can be organised in different countries,” Laura says.

The strategies used to curb the rising number of applications have included tightening the mobility criteria set for applicants and announcing calls focused on more closely defined research themes. The Amsterdam conference also featured discussion on the role of universities: it was suggested that they could conduct an initial screening of applications so that only those with the greatest potential are sent on to funding agencies.

“Perhaps the biggest difference between our process at the Academy of Finland and processes in other countries is that we tend to use overseas reviewers. In the UK and other larger countries, peer reviewers less often need to recuse themselves over conflicts of interest, so domestic panellists are used more often. Otherwise the Academy largely follows the European way, and there seem to be no major differences.”

ICT development opens up new opportunities to further improve review processes. AI can already be used to find suitable panellist candidates from databases and to identify potential grounds for disqualification. It is also possible to compare projects on a quantitative basis. But the actual assessment still requires human input, that is, outside experts. “Reliability and transparency are more important factors than speed,” Laura stresses.
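As an illustration only, the sketch below shows the kind of automated screening Laura describes: ranking panellist candidates by topical overlap with an application and excluding a candidate over one simple, hypothetical ground for disqualification, co-authorship with the applicant. The data model and rules are invented for the example and do not reflect any agency’s actual system.

# An invented illustration of automated panellist screening: rank
# candidates by keyword overlap with an application and drop anyone
# with a hypothetical ground for disqualification (co-authorship).

def rank_candidates(application_keywords, candidates, applicant):
    ranked = []
    for c in candidates:
        if applicant in c["coauthors"]:
            continue  # potential conflict of interest: exclude
        overlap = len(application_keywords & c["keywords"])
        ranked.append((c["name"], overlap))
    # Most topically relevant candidates first
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)

candidates = [
    {"name": "Reviewer A", "keywords": {"peer review", "bibliometrics"},
     "coauthors": {"Dr X"}},
    {"name": "Reviewer B", "keywords": {"peer review", "research policy"},
     "coauthors": set()},
]
print(rank_candidates({"peer review", "research policy"}, candidates,
                      applicant="Dr X"))  # [('Reviewer B', 2)]

Screening of this sort can shortlist candidates and flag conflicts, but, as Laura notes, the actual assessment remains with human experts.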

The peer learning process applied at the conference was considered extremely useful, and there was a broad sense that international cooperation in this area should be continued. “We decided to create an information sharing network that will meet every few years in this same set-up.”

Alternative methods put to the test

The conference introduced several alternative methods for reviewing applications that are currently being tested in different countries. “The intention is not to replace the peer review method but rather to complement it with alternative methods suited to each particular case,” Laura explains.

The double-blind review process is being tested mainly in high-risk technology projects, where the prospects of success are slim but the potential breakthroughs are hugely significant. The process involves sending in a short, anonymous, standardised application that includes a self-assessment of the project’s relevance. The most promising applications are picked by an anonymous panel, with each panel member selecting a personal favourite, which is awarded funding. The remaining projects are selected in accordance with normal procedures. As panel consensus is not required, exceptionally high-risk projects have better chances of being funded.
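A minimal sketch of that favourite-pick rule, assuming each panel member submits a ranked list of the applications; the member and project names are made up for illustration:

# Each panel member's top-ranked application is funded outright,
# with no consensus required. Names are invented for the example.

def favourite_picks(panel_rankings):
    # The union of every member's first choice gets funded.
    return {ranking[0] for ranking in panel_rankings.values()}

rankings = {
    "member_1": ["risky-quantum", "sensor-net", "bio-chip"],
    "member_2": ["bio-chip", "risky-quantum", "sensor-net"],
}
print(favourite_picks(rankings))  # {'risky-quantum', 'bio-chip'}

Because no agreement between members is needed, a project championed by a single enthusiast can still win funding, which is exactly what gives high-risk proposals a better chance.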

“Another interesting method that’s being tested is a lottery system. This involves first selecting the best 10 per cent of applications, which are given funding, and rejecting the poorest 10 per cent. The projects funded from the remaining pool of mid-ranking applications are then drawn by lottery. This method is being tested among others by funding agencies in Germany, Denmark and New Zealand, where it is used alongside traditional decision-making methods so that the results can later be compared.”
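The lottery model lends itself to a short sketch. The 10 per cent cut-offs come from the description above; the number of lottery slots, standing in for the remaining budget, is an illustrative assumption.

import random

# A minimal sketch of the lottery model: fund the best 10%, reject the
# poorest 10%, and draw the remaining funded projects by lot from the
# mid-ranking pool. Slot count and labels are illustrative only.

def lottery_selection(ranked_apps, lottery_slots, seed=None):
    n = len(ranked_apps)
    cut = max(1, n // 10)
    funded = list(ranked_apps[:cut])         # best 10%: funded outright
    middle = list(ranked_apps[cut:n - cut])  # poorest 10% is rejected
    random.Random(seed).shuffle(middle)      # the rest is drawn by lot
    return funded + middle[:lottery_slots]

apps = ["app%02d" % i for i in range(20)]    # pre-ranked, best first
print(lottery_selection(apps, lottery_slots=3, seed=42))

The fixed seed only makes the example reproducible; an actual draw would of course be random, which is what allows the later comparison with traditional decision-making.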

The sandpit method can be used in cases where researchers are called upon to find a solution to a specific problem. Researchers are invited to a brainstorming session where they develop a range of proposals. They then split up into teams to elaborate their ideas further, and each team pitches its solution to funding agencies. This method has mainly been applied to technology projects.

One of the methods being trialled in more socially oriented projects is to involve in the decision-making those people who are directly affected by the research. For instance, patients may be contacted to solicit their views on a proposed research project concerning their treatment.

Social media opens up new opportunities for crowdsourcing, which has been used in the Netherlands for purposes of assessing the quality of arts projects, for instance. “If, say, a theatre group is applying for funding, who should pass judgement on its artistic quality? Surely the members of the audience should have a say – and they can be reached through social media.”

Evaluation methods and the effects of changes to these methods have received only limited research attention, and indeed the conference called for more work to be done in this area. “We’ll be receiving results on these methods later, but it’s clear there’s a need for additional research. This knowledge is needed to ensure the continued high standard of the application review process in Finland and elsewhere.”

Original text in Finnish by Oona Riitala

Photo by Marjo Aaltomaa
