
Inside scientific publishing

The San Francisco Declaration on Research Assessment


More than 7000 scientists and 250 science organizations have now put their names to a joint statement called the San Francisco Declaration on Research Assessment (DORA). The declaration calls on the world’s scientific community to avoid misusing the Journal Impact Factor in evaluating research for funding, hiring, promotion, or institutional effectiveness. Here, EMBO Director Maria Leptin discusses some of the concerns and provides her personal perspective on the use of Journal Impact Factors and the significance of the recommendations.


On May 16, scientists, editors, publishers, societies and funders from many disciplines took action. By publicly signing the San Francisco Declaration on Research Assessment, they officially declared that the Journal Impact Factor is being misused. EMBO actively participated in drafting the declaration and was one of the early signatories of the document. I also supported the DORA position statement as a scientist. Inappropriate use of the impact factor has spread through the scientific community, turning it into an unwanted and pernicious mark of quality for science and scientists. It is pernicious in the sense that the simplistic use of metrics can give a skewed view of a researcher’s achievements. The problem is not restricted to individuals. Misuse of impact factors can also influence decisions on the fates of university departments and institutes as well as national decisions on the future of scientific programmes. Many scientists are worried about the way in which the impact factor is used in the assessment of research, and this is one of the first occasions on which there has been a broad call to action.

DORA makes 18 recommendations addressed to researchers, funding agencies, institutions, publishers and organizations that supply metrics. The shared message for all audiences is that journal-based metrics, such as impact factors, should not be used as a surrogate measure of the quality of individual research articles, to assess an individual scientist’s contributions, or in hiring, promotion, or funding decisions. Several excellent editorials and commentaries have already been published, and I encourage readers to consult these articles, which focus on different aspects of the declaration [1-4].

DORA encourages publishers of scientific journals, including EMBO, to reduce the emphasis on the impact factor as a promotional tool, either by ceasing to use it or by presenting it alongside a range of metrics that offer a more differentiated view of journal performance. But we as scientists must also be aware that our own behaviour as authors, referees and members of evaluation panels and search committees has contributed to the role the impact factor now plays.


The problem does not lie primarily with the impact factor itself. Metrics are not inherently bad, but they become counterproductive when used incorrectly. The impact factor was developed to compare journals and to assist librarians in making decisions on subscriptions. There is nothing wrong with this, but it does not make sense to use the impact factor to compare the quality of research or researchers. If one wanted to rely on metrics at all to judge individual scientists, it would be more appropriate to use metrics that assess the performance of individual researchers and differentiate between work in different areas of research, such as the h-index or article-level, rather than journal-level, metrics such as citations; but these too have their limits and need to be used critically and fairly.
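
If one wants a concrete sense of how article-level metrics differ from a journal-level average, the following sketch may help. It uses the standard definition of the h-index and entirely hypothetical citation counts; it is an illustration of the distinction, not an endorsement of any particular metric.

```python
# Illustrative sketch: journal-level average versus article-level metrics.
# All citation counts below are hypothetical.

def h_index(citations):
    """h-index: the largest h such that at least h papers have >= h citations each."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, cites in enumerate(ranked, start=1) if cites >= rank)

# One researcher's papers (article-level view).
researcher_papers = [45, 22, 18, 9, 7, 3, 1, 0]

# All papers published by one journal in the same window (journal-level view).
journal_papers = [300, 120, 80, 40, 15, 8, 5, 2, 1, 0]

journal_average = sum(journal_papers) / len(journal_papers)  # crude impact-factor-like average
print(f"Journal-level average citations:  {journal_average:.1f}")
print(f"Researcher's h-index:             {h_index(researcher_papers)}")
print(f"Researcher's citations per paper: {researcher_papers}")
```

The point of the contrast: the journal-level average says nothing about any individual paper, while the article-level counts and the h-index, whatever their own limitations, at least describe the researcher's actual output.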

Another problem with using the impact factor to compare research is that it is a single metric for the whole range of the life sciences. Important distinctions between research fields have to be taken into consideration. Scientific communities differ in size and focus. Accordingly, papers from smaller fields of research are likely to be cited less frequently. As a consequence, a journal that is highly regarded in one community, and in which it may be very difficult to publish, may have a lower impact factor than a mediocre journal in another, larger discipline. For generalist journals that represent multiple communities, the balance becomes increasingly difficult to strike: they have to attract highly cited papers from ‘hot’ areas while remaining fair and publishing excellent research from low-citing fields, at the risk of accumulating a ‘long tail’ of low-citing papers that pulls the impact factor below a critical threshold (Figure 1).
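
The ‘long tail’ effect is easy to see with a toy calculation. The citation counts below are invented, and the fixed two-year citation window of the real impact factor is ignored; the sketch only shows how a journal-level average moves when low-citing papers are added.

```python
# Toy illustration of the 'long tail' effect on a journal-level average.
# Citation counts are invented; a real impact factor uses a fixed two-year window.

def average_citations(papers):
    return sum(papers) / len(papers)

hot_field_papers = [60, 45, 40, 30, 25]       # highly cited papers from a 'hot', large field
small_field_papers = [6, 5, 4, 3, 2, 2, 1]    # excellent work from a smaller, low-citing field

print(f"Hot-field papers only:     {average_citations(hot_field_papers):.1f}")
print(f"Including the 'long tail': {average_citations(hot_field_papers + small_field_papers):.1f}")
```

In this toy example, publishing the low-citing papers cuts the average by more than half, even though nothing about their quality has changed; that is the dilemma faced by the generalist journals described above.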

It would not be sensible to take the extreme position that journals do not matter. Ideally, a scientist should be evaluated on the content of their publications, not on the names or impact factors of the journals in which they publish. But when search committees or grant panels have to evaluate more than a hundred applicants to establish a short list for in-depth assessment, they cannot be expected to form their views by reading the original publications of all of the applicants.

I believe that the quality of the journal in which research is published can, in principle, be used for assessment, because it reflects the judgement of the expert community that is most competent to assess the science. A prestige factor was associated with publication in certain journals even before the impact factor existed, and in many cases this prestige is deserved. Publication in the best journals matters not because of the association with the impact factor, but because of the high standards of the journals and their staff, and the value that comes from the peer review process. It is generally accepted that a consistent publication record in quality journals – however quality may be judged or measured – does reflect excellence in research.

When the impact factor is misused everyone suffers: the scientist who has to try to publish in high impact journals that may not even exist in his or her field, the journal that is prestigious but has an impact factor that is insufficient to attract articles from scientists who are increasingly under pressure to collect credit points in the form of high impact factors, and the institutions and university departments which may find it difficult to achieve or retain diversity and breadth of scientific scope. We, the scientists, both as authors and as evaluators, should stop being obsessed with the impact factor, but we should not throw out the baby with the bath water and assume that the quality of journals is irrelevant.

Fig. 1 | Papers are ranked by number of citations. The shaded boxes show the average number of citations for the papers in each box, which would correspond to the impact factor of the journal if only those papers had been published.

REFERENCES

  1. Pulverer B, EMBO J 32, 1651–1652 (2013). www.nature.com/emboj/journal/vaop/ncurrent/full/emboj2013126a.html
  2. Alberts B, Science 340, 787 (2013). www.sciencemag.org/content/340/6134/787.full
  3. Misteli T, J Cell Biol 201, 651–652 (2013). http://jcb.rupress.org/content/201/5/651.full
  4. Schekman R, Patterson M, eLife 2, e00855 (2013). doi: 10.7554/eLife.00855. http://elife.elifesciences.org/content/2/e00855