15 June 2023 – A hybrid panel discussion hosted by EMBO and EMBL on 12 May marked the anniversary of DORA, a worldwide initiative aiming to advance approaches to the assessment of scholarly research. The panel discussed issues with current methods of research assessment, as well as solutions and actions for improvement in which panel members have been involved.
Wolfgang Huber, group leader at EMBL and co-Chair of the EMBL responsible research assessment working group, opened the discussion by recalling the original intention behind research assessment: to quantify research outcomes objectively, and why that matters. “Research assessment is an integral core aspect of doing science as it helps decide on the recruitment of the next generation of scientists and the allocation of funding,” he said. Journal impact factors and H-indexes have been widely used to assess research. “But publications are actually not the scholarship itself. They are more of an advertisement of scholarship, as the actual scholarship often consists of the complete set of reagents or data analysis code, for instance in my field, that generates a paper. We should think about research outputs in a broader way than just the papers,” Huber said. He cited what is known as Goodhart’s law: whenever a measure becomes a target, it ceases to be a good measure. “We are in a world where a focus on simplistic quantitative measures has become an obstacle to good science,” he said. Clear messaging that EMBL assesses research on its value was important and ongoing work, he commented. While there were no simple measures for the success of the initiative, the real value would lie in a culture change, Huber said.
Bernd Pulverer, Head of EMBO Scientific Publishing and DORA co-founder, talked about how DORA developed from its spontaneous inception, largely by editors who met during the 2012 meeting of the American Society for Cell Biology to discuss the distorting focus of research assessment on publications in a small subset of journals. “At the meeting we quickly converged on the point that actually the journal impact factor as a single metric is at the heart of some of these issues,” he said. “The impact factor is beguiling as a metric as it appears to be predictive of the likelihood of future performance, but it is inherently limited as it is a poor measure of the impact of a specific article and thus of the research performance of its authors.” Today, DORA has become a global initiative endorsed by more than 23,000 individuals and institutions, with many funders and publishers among them: “DORA has turned into an advocacy group and is now developing tools for more balanced research assessment. In particular, it is a platform to showcase best practice.” When asked about practices and policies at EMBO, Pulverer listed a number of examples: applicants for EMBO Postdoctoral Fellowships and other EMBO grants are not allowed to indicate the journal impact factor or any other metrics in their applications; application guidelines instruct reviewers not to use the journal impact factor in the evaluation of candidates; for transparency, guidelines for reviewers and applicants are published on the EMBO website; clear-cut conflict of interest policies are applied throughout all EMBO selection committees; journal names are removed from the files of candidates shortlisted for the EMBO Gold Medal; and reviewed preprints make it possible to assess research outputs at a much earlier stage than journal publications. “And we dropped impact factors for the promotion of the EMBO Press journals,” he said. When asked what role DORA could have in regulating the diversity of journals, Pulverer answered: “DORA is not a licensing agency or regulator. It’s really down to the community itself. Scientists have a responsibility to publish in journals that are not predatory, and research assessors have a responsibility to filter out journals that are not adding value.”
Guillermina Lopez-Bendito, a group leader at the Institute of Neuroscience in Alicante, Spain, an EMBO Member, and Chair of the EMBO Young Investigator Committee, emphasized that the lack of standardized and comprehensive methods for evaluating the quality and impact of scientific work was a major obstacle to advancing research assessment. “Contributions to science go beyond research papers alone. We need to also consider mentoring, outreach activities, peer review, and evaluation. We should also assess whether researchers have translated their results and discoveries in ways other than publishing,” Lopez-Bendito said. She highlighted the importance of candidates’ narratives in evaluations, as they give candidates an opportunity to explain the impact of their research. “DORA is refocusing the attention of reviewers and evaluators on what truly matters, which is the quality of the work.” In her view, the future of research assessment lies in improving the transparency, openness, and accessibility of review processes. She also advocated for including interviews in every evaluation, as they allow candidates to present their research output more fully.
Karim Labib, from the MRC Protein Phosphorylation and Ubiquitylation Unit at the University of Dundee, Scotland, and Chair of the EMBO Installation Grant Committee, addressed the assessment of research outside one’s own field. “A key challenge is how best to assess research in areas that one is not extremely familiar with, without relying solely on simple metrics,” he said. Labib supported the idea of interviewing all shortlisted candidates, as it provides a common method and equal opportunities for candidates. Interviews have long been a common practice in the selection of EMBO Young Investigators and have now been introduced in the selection of EMBO Installation Grantees. Labib also emphasized the role of the lead reviewer in interview panels. He advised that lead reviewers should wait until after the interview before sharing their views with the panel, so that the panel members can assess the candidate’s performance in a less biased manner. Labib concluded by stating that the emergence of preprint servers has sparked a revolution in the publishing landscape, significantly impacting how science is assessed. The panellists agreed with Labib’s view that another bottleneck in research assessment is time. “Scientists are generally interested in participating in research assessment, but lack of time is the primary constraint. Reviewers are often too busy or under too much pressure.” Valuing the work of reviewers and providing financial incentives may help increase the pool of good peer reviewers.
Brenda Andrews, from the Donnelly Centre at the University of Toronto, Canada, and Vice Chair of the EMBL Scientific Advisory Committee, is actively involved in research assessment. As founding editor of the open-access journal G3 Genes Genomes Genetics, she aims to publish valuable research findings without considering the impact factor or subjective opinions about the importance of the work. “Senior colleagues bear a significant responsibility in leading by example and changing how we think about research assessment,” Andrews stated. However, she acknowledged that impact factors are still discussed in evaluations. Review committees have become increasingly aware of this issue in recent years. Andrews explained that there is now a clear emphasis on the description of the work and the progress made in setting up labs and training people, rather than solely on the publication venue.
Cecilia Perez, a postdoctoral researcher at EMBL, became interested in social justice in research assessment during her PhD studies. She shared her experiences of applying for scholarships, through which she became aware of the biases present in assessment processes, which sometimes resulted in good applications being overlooked. “Research assessment is a subject that is very important to me. I would like to highlight the arbitrary nature of assessment processes and emphasize the need for improvement and fairness to achieve greater diversity and equality,” Perez said. She also has experience in reviewing manuscripts. Pulverer picked up on this, noting that “encouraging early career researchers to participate in the peer review process is exciting because currently there are few incentives for them to engage.” Perez reflected on the challenges of author order and attribution on papers, particularly in collaborative projects, which are becoming more common. The panellists agreed that it is necessary to “move away from the simplistic notion that only the first and corresponding author matter,” as Pulverer pointed out. Labib mentioned that a more nuanced assessment of author contributions has become easier with the inclusion of author contribution sections in most journals. Additionally, Pulverer explained that attributions at the level of individual experiments within a paper are possible with the EMBO SourceData platform.
The panel discussion on the occasion of the DORA anniversary was co-organized by EMBO (Sandra Bendiscioli, Senior Policy Officer) and EMBL (Katherine Silkaitis, Strategy Officer) and chaired by Sandra Bendiscioli.
The quotes from the panel discussion were edited.