Narrative, not numbers, should drive researcher assessment

27.01.2022

Counting publications has long been discredited as a way of deciding how researchers are performing, but we hardly ever discuss the alternatives. That needs to change, according to Bert Overlaet, the author of LERU's new paper on the assessment of researchers.

Why does the assessment of researchers matter?

Assessment determines how researchers are rewarded and recognised, and how their careers develop, so to a certain extent it steers their behaviour. And since we need a range of contributions and practices in modern science, we have to make sure that these are recognised and valued in our assessments.

What is at stake if we get it wrong?

Getting assessment right is a precondition for a lot of things to happen, such as Open Science. The traditional high impact factor journals do not embrace Open Science, and if assessments privilege these journals we get a very closed system, with people keeping their data for themselves. Of course, researchers still want to be recognised if they are the first to find something or have good ideas, so it’s not about everybody being equal. But it is about having a broader perspective, and encouraging a more open form of scientific activity.

Criticism of journal impact factors goes back a long way...

The San Francisco Declaration on Research Assessment (DORA), which focused opposition to journal impact factors, was published nearly a decade ago. Since then, there has been a lot of debate, and research showing that they are totally irrelevant criteria for assessment, but nobody discusses what the alternatives could be. And that is what we have tried to do with this paper.

The multidimensional approach you propose will be familiar to older researchers.

Yes, this is what I experienced in the first phase of my career. There were hardly any metrics at that time, so when I applied for a professorship in 1991 I was asked to fill in a lengthy questionnaire about what I had done in research, in education, and in service to society and the institution. And I think it’s not a bad thing that we go back to that practice, and try to systemise it in the present scientific context.

How has that context changed?

To reflect the increased importance of team science, for example, we need to enlarge the range of contributions that are valued and recognised in an assessment. So, we have a senior researcher at KU Leuven who is designing the steering software for the next major European satellite. He is leading a consortium of 11 universities, and he is the guy who will make sure that the satellite can make observations at the right time, in the right place. This is a tremendous contribution, but he will probably never be the first author on an academic paper.

Your paper also suggests including a development perspective.

In an academic career you are expected to develop into broader and more important roles. That could be by becoming a dean or a rector, spinning off a company, holding office in a scientific society, or communicating with the public. So, it’s very important that we also assess how a researcher has developed over time, what leadership, collaboration and innovation capabilities they have shown, and what this tells us about their potential for the future.

What do these qualities involve?

Leadership is about how well you treat your younger researchers, for example, and your ability to build a sustainable research group. Collaboration involves a different kind of relationship, based on equality and good will rather than hierarchy; a researcher who does not have good people skills does not do well in a collaboration. Innovation involves taking risks, so you need to ask how people deal with failure, for example, and learn lessons from it, because that then leads to success.

You also suggest assessing the personal and professional context of a researcher’s career.

People don’t work in the same circumstances, and if you don’t take that into account you are not being fair. Ignoring context also means that, when researchers have a choice to make, they will go for the safest option, and we don’t want that to happen. We want people who take risks, including with their personal careers. For example, when I was thinking about becoming director of human resources at KU Leuven, I was only an associate professor and risked never becoming a full professor if I made the move. Luckily, the rector was wise enough to recognise my contribution in that role, and I did become a full professor, but it shouldn’t depend on the rector. We need to deal with this on a more systematic basis.

Is there a risk that a broader assessment puts additional pressure on researchers?

Yes, and we have to make clear, both to researchers and assessment panels, that the idea is not to tick boxes. Just because we are asking for a broad perspective on their contributions, it doesn’t mean that they have to be good in everything. What we do at KU Leuven, and what is becoming more common in other universities, is ask for a narrative CV, in which a researcher can show what they have done already and what they plan to do, and that becomes a starting point for discussion.

What are the main hurdles to introducing this new approach to assessment?

Time is one issue. If you can exclude 80% of the candidates for a position just by looking at a couple of numbers, then go into more depth with the remaining 20%, that’s very efficient, but it is also exclusionary. So, you have to take more time, read more, and judge more for yourself. It’s also easy to compare people when you are only looking at one dimension. With more dimensions, comparisons become more difficult, and assessment panels will have to be more explicit about their criteria. There’s also a risk that broader assessment criteria leave more room for bias, so you have to be very alert to that.

What can policymakers and research funders do to help?

The European Commission is preparing an action plan for research assessment, and we are happy to see that a number of topics addressed in our paper are being taken seriously in that exercise. One important aspect of that is the idea of coherence between research policy, which promotes certain areas of science through funding, and research assessment policy. If the two don’t line up, then what you get is just more misery.

Professor Bert Overlaet is a former Director of Human Resources at KU Leuven and Chair of the LERU Careers of Researchers & HR Policy Group.

©LERU: Text by Ian Mundell.