
New models for evaluating societal impact in strategic research programmes in Finland

10 Dec 2018

The first programmes funded by the Finnish Strategic Research Council (SRC) will end in August 2019 and their evaluation will be carried out in the spring of 2020 – also for the first time. The characteristics of SRC funding set new requirements for the evaluation.

Compared to other Finnish research funding instruments, the evaluation of SRC programmes must address new types of questions:

  • How can societal impact be measured?
  • When can societal impact be expected to arise?
  • How are the interaction and impact of SRC projects assessed?
  • Who performs the evaluation, and how can it be carried out so as to cover the entire field of strategic research, in terms of both research and interaction activities?
  • From whom should information be collected, given that stakeholders and interaction partners also play a key role in the impact of SRC projects?
  • How should we assess the programme directors’ activities that aim to establish links between researchers and knowledge users and to support co-creation between projects within programmes and between different programmes?

These issues are a hot topic in the Division of Strategic Research, whose staff are currently planning the programme evaluation concept. Views and ideas are gathered from the Academy of Finland’s own experiences, from SRC projects and programme directors, and from strategic research stakeholders. In addition, international expertise in impact assessment is utilised, for example by exploring research on the topic and by building impact assessment networks. Science advisers from the Division of Strategic Research have, for instance, visited the European Commission’s science and knowledge service, the Joint Research Centre (JRC), and exchanged ideas on how to evaluate the impact of research and interaction activities on the policy cycle and policy outcomes.

Lessons learned from REF impact assessments

As part of the planning of the programme evaluation, the Division of Strategic Research organised a seminar on impact assessment in Helsinki on 31 October 2018. The Division invited Director of Research Steven Hill of Research England and Senior Lecturer Gemma Derrick of Lancaster University to discuss how the societal impact of research can be assessed, both in general and in the context of strategic research. Hill talked about the Research Excellence Framework (REF), the UK system for assessing the quality of research in higher education institutions, one aspect of which is an assessment of the impact of research. Derrick in turn has studied REF peer review processes and how panellists’ views are formed (see Derrick’s new book “The Evaluator’s Eye: Impact Assessment and Academic Peer Review”).

The presentations by Hill and Derrick, and the discussion they inspired, raised many thoughts and new questions both in the Division of Strategic Research and among the seminar audience, which mainly consisted of representatives of the SRC consortia and programme directors. Hill’s ten lessons on impact assessment, drawn from REF experiences and results, were particularly useful:

  1. With the REF, higher education institutions have learned to think strategically and to record the impact they achieve, which previously may not have been recognised as well.
  2. The method of impact assessment developed by the REF works: panellists believe that the method is fair and credible.
  3. The ability of metrics to measure impact is limited; impact is more efficiently captured through narratives and case studies.
  4. When impact narratives are written in higher education institutions, evidencing impact is difficult and may attract more attention than the assessment of the impact itself (“What evidence can I use to verify my claim regarding impact?”).
  5. Higher education institutions have changed their operations and rewarding practices so that impact assessment is embedded in their daily activities and also functions as a dimension of career development.
  6. There is a wide spectrum of impacts; there is no one model that suits everyone.
  7. Multidisciplinary research is very important for impact; the most influential studies were largely multidisciplinary.
  8. There is a positive relationship between high scientific quality and impact.
  9. Impact emerges in different disciplines at different times (for example, faster in the natural sciences and engineering disciplines and more slowly in the social sciences).
  10. High-impact research has good national and international networks, and its geographic reach is wide.

Ideas for planning programme evaluation

From the point of view of evaluating SRC programmes, a particularly important message was that a set of criteria needs to be developed for assessing societal impact, so that the criteria measure the objectives of each programme and respond to the challenges of the societal problem that the Finnish Government has set for the theme concerned. For example, in assessing the activities and projects of the Equality in Society programme, it is necessary to assess whether the objectives set for the theme have been achieved: whether new information has been produced on the structures that create societal inequality and that would enable solutions, whether interaction activities have opened new pathways and forums for societal impact and ways to promote equality together with stakeholders, and whether the resulting information and actions have had an impact on Finnish society.

Another key lesson was that the programme evaluation should be planned and implemented with particular care, as what is measured begins to direct what gets done. According to the REF experiences, higher education institutions have, through the assessment process, learned to think strategically and to direct their activities towards producing highly valued activities and impact. A new dimension has also opened up in researchers’ career paths, as researchers’ merits are now also assessed from the point of view of the impact of their research. Strategic research programmes and consortia, for example, will follow with interest what the programme evaluation concept will look like, how impact will be defined and what criteria will be used to assess it.

The third lesson was that Finnish strategic research has already moved in the right direction in how impact is assessed. According to Steven Hill, impactful research and high-quality research are closely connected. The Act on the Academy of Finland stipulates that research funded by the Strategic Research Council should be of social significance, have an impact on Finnish society and be of high quality. The prerequisite of multidisciplinarity is also well in hand, as SRC consortia must be multidisciplinary and comprise more than one organisation. There is also a good starting point for the national and international networking of consortia, as the consortia must already in the application phase have extensive networks within and outside academia. In addition, in the Division of Strategic Research, we monitor metrics on the scientific quality, activity and scale of the consortia’s research and interaction activities (so-called output indicators) as well as impact narratives. Thus, in the light of the REF experience, we have a good starting point for a successful programme evaluation.

Metrics can only supplement peer review

What was most thought-provoking in Gemma Derrick’s presentation was the importance of peer review. Derrick’s key message for the planning of the SRC programme evaluation was that metrics are mechanics and can never replace peer review. According to Derrick, by overemphasising metrics, an assessment process may undermine the key objective of peer review: the formation of a shared understanding of what impactful research is. The mechanics of metrics may steer the evaluation too strongly and provide a basis for consensus-oriented groupthink, and thus trump the most valuable dimensions of peer review. Peer review should instead be based on a multidisciplinary debate among peers that draws on each panellist’s own expertise, tacit knowledge and experience. Peer review certainly has its challenges, which need to be tackled, and the evaluation processes need to be developed further, but it cannot be replaced by metrics. This is particularly important to keep in mind when considering the possibilities of data analytics and data mining as a means of assessment.

Derrick encouraged us to see peer review in impact assessment as a process, not only as an end result. Likewise, it is important that instead of asking what impact is, we ask how impact is valued. According to Derrick, the processual nature of impact assessment needs to be understood. For the assessment of strategic research programmes, this could mean, for example, that as much energy should be focused on understanding impact and its process nature as on measuring results and their impact. Instead of asking “what has changed”, we could ask “what kind of change process did the programme and its consortia manage to set in motion, and how can the impact of these processes be assessed”.

Hill and Derrick’s visit to the Academy of Finland and the seminar on impact assessment provided many good ideas and raised many important questions that need to be considered when planning and implementing the evaluation of strategic research programmes. Their visit also gave staff at the Academy of Finland, programme directors, consortium representatives and staff at the Division of Strategic Research time and room to reflect on impact. This is a good starting point for us.

Original text in Finnish by Milja Saari who works as a science adviser in the Division of Strategic Research.
