The journal impact factor (IF) is the most widely used metric for evaluating research today. It measures the average number of citations that papers published in a given journal received over a two-year period. It is calculated by Clarivate Analytics (formerly Thomson Reuters) and published yearly in the Journal Citation Reports.
The IF was developed in the 1970s as a tool to help librarians decide which journals to keep in their collections. Over time, however, its use has expanded well beyond that original purpose. Today, many scientists use it to judge the quality of a journal, and of the papers it contains. Funding institutions and universities often turn to the IF to rank researchers and to inform decisions such as grant allocation and promotions.
Many in the scientific community have expressed their outrage at such misuse of the IF. Several Nobel Laureates have been outspoken about their dislike for the metric. Martin Chalfie, winner of the 2008 Nobel Prize in Chemistry, said in an interview, “I can categorically say I hate IFs.” Another Nobel Laureate, Peter Doherty, said that the importance given to the IF is skewing science.
At the 2012 Annual Meeting of the American Society for Cell Biology, a group of researchers, editors, and publishers met to discuss their concerns about the misuse of the IF to judge the quality of research. The resulting document, the San Francisco Declaration on Research Assessment (DORA), has since garnered the support of many individual researchers and institutions throughout the world. Among the institutions that support DORA are EMBO, the PLOS journals, and Hindawi.
What the IF Is
The IF of a journal in a given year Y is calculated with the following formula:

IF(Y) = (citations received in year Y by items the journal published in years Y−1 and Y−2) ÷ (number of citable items the journal published in years Y−1 and Y−2)
For example, if Journal X has an impact factor of 11 in 2018, it means that the articles published in X during 2016 and 2017 received on average 11 citations each in 2018.
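To make the calculation concrete, here is a minimal Python sketch of the formula above; the numbers are hypothetical, chosen only to reproduce the Journal X example, not real data for any journal:

```python
def impact_factor(citations, citable_items):
    """Two-year impact factor: citations received this year by items
    published in the previous two years, divided by the number of
    citable items published in those two years."""
    return citations / citable_items

# Hypothetical numbers for "Journal X" in 2018: 2,200 citations in 2018
# to items published in 2016-2017, and 200 citable items in those years.
print(impact_factor(2200, 200))  # 11.0
```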
The journal IF can be a useful measure of the attention a journal gets, but it has important flaws:
- Citation distributions are skewed. A handful of highly cited papers can inflate a journal's score, so most papers in a journal receive far fewer citations than its IF suggests, and being published in a high-impact-factor journal doesn't guarantee that a paper will be well cited. To counter this, some researchers have suggested publishing citation distributions along with the IF. (A short sketch below illustrates the skew.)
- Citable documents vs. published documents. The denominator of the formula above counts only citable items, such as research papers and short communications. The numerator, however, counts citations to every document type the journal publishes, including non-citable ones such as letters and editorials. A journal's IF can therefore shift depending on which document types are classified as citable.
Certain editorial practices can also inflate a journal's IF. For example, editors can choose to publish more highly cited document types, such as reviews or papers on trendy topics, to boost their score.
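The skew in citation distributions mentioned above is easy to demonstrate with a made-up example. In the sketch below, one heavily cited paper pulls the mean (the quantity an IF-style average reflects) far above what a typical paper in the journal actually receives:

```python
import statistics

# Invented citation counts for ten papers from one journal: a single
# blockbuster paper and nine papers that are rarely or never cited.
citations = [500, 4, 3, 2, 2, 1, 1, 0, 0, 0]

mean = statistics.mean(citations)      # what an IF-style average reflects
median = statistics.median(citations)  # what a typical paper receives

print(f"mean: {mean}, median: {median}")  # mean: 51.3, median: 1.5
```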
Other Citation-Based Metrics
Although the journal IF remains the most popular metric, other citation-based metrics exist. Some, like the h-index, measure the impact of an individual researcher, while others, like the Eigenfactor and Article Influence Score, are journal-level measures that weight citations by the influence of the citing journal.
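As an illustration, a researcher's h-index is the largest number h such that h of their papers have at least h citations each. Below is a minimal Python sketch; the citation counts are invented for the example:

```python
def h_index(citations):
    """Return the h-index: the largest h such that the author has
    at least h papers with h or more citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(counts, start=1):
        if count >= rank:
            h = rank  # at least `rank` papers have >= `rank` citations
        else:
            break
    return h

# Five papers with these citation counts give an h-index of 3,
# because three papers have at least 3 citations each.
print(h_index([10, 8, 5, 2, 1]))  # 3
```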
However, these metrics still rely on citation counts to estimate impact. In theory, the more citations a paper has, the more it contributes to the work of others. The problem is that citation counts are affected by outside forces and are easily manipulated.
The number of researchers working in a field affects the number of citations a paper gets. Researchers in “hot” fields will therefore accumulate more citations than researchers working on more obscure topics, but this says nothing about the relative quality of their research. Citation counts can also be inflated by self-citations, and distorted by negative citations, in which a paper is cited only to be criticized or refuted.
It can be very misleading to judge an article or a researcher based on the IF of the journal they publish in, or even solely on the number of citations their papers have accumulated. In the words of Randy Schekman, the 2013 Nobel Laureate in Physiology or Medicine, “A paper can become highly cited because it is good science – or because it is eye-catching, provocative, or wrong.”
Citation-based metrics are useful, but they are not enough on their own. It is like looking through a keyhole. You get a glimpse of what is on the other side, but you shouldn’t mistake it for the whole picture.
Alternative Metrics
Realizing the limitations of the IF, but recognizing the usefulness of measuring the impact of research, many scientists have turned to alternative metrics, or altmetrics. Altmetrics is a broad term that encompasses all the information found online about a piece of research: activity in online discussion groups and forums, citations in policy documents, mentions in social media and blogs, article views and downloads, and citations in Wikipedia, among others.
Altmetric, a provider of alternative metrics, lists three main advantages of altmetrics compared to citation-based metrics (a short sketch of retrieving altmetrics data follows the list):
- Speed. The fast rate at which information is shared on the web makes altmetrics accumulate much faster than journal- or book-based metrics.
- Scope. Altmetrics don’t focus on a single impact indicator (citations); instead, they paint a broader picture of the kinds of impact a paper, and a researcher, can have.
- Application. Altmetrics can be applied to evaluate journals, articles, researchers, departments, universities, and even countries.
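As a rough sketch of how such data can be gathered programmatically, the snippet below queries Altmetric's free, rate-limited public v1 API for a single article. The DOI is a placeholder, and the response field names are illustrative assumptions based on the documented response format, so check Altmetric's API documentation before relying on them:

```python
import json
import urllib.request

# Hypothetical DOI; substitute the DOI of a real published article.
doi = "10.1234/example.doi"
# Altmetric's free v1 endpoint (no API key needed for basic lookups,
# as documented at the time of writing).
url = f"https://api.altmetric.com/v1/doi/{doi}"

with urllib.request.urlopen(url) as response:
    data = json.load(response)

# Field names below are assumptions about the response format; verify
# them against Altmetric's API documentation for your use case.
print(data.get("title"))
print("Altmetric score:", data.get("score"))
print("Mentioned by tweeters:", data.get("cited_by_tweeters_count"))
```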
Qualitative data should always be considered along with the quantitative scores. It is not enough to count how many people are talking about the research online. To avoid misleading conclusions, it is also important to consider who is talking about it and in what context.
Altmetrics should be thought of as a complement to traditional metrics, and can be a valuable addition to a CV or a grant application. They are good for measuring engagement and, in some cases, potential impact.
Conclusion
The journal IF, especially when accompanied by citation distributions, can provide some useful information about the output of a journal as compared to others. However, it should never be used to evaluate the quality of an article or the impact of a researcher.
There is no perfect metric, but considering a host of metrics together can help create a more complete picture. With the speed at which things are communicated online, gathering alternative metrics is now easier than ever. Having ways to estimate the impact of someone’s research is useful, but in the end nothing can substitute for actually reading their work.
What is your experience with altmetrics? Do you find them useful, or do you prefer traditional metrics?
– Written by Marisa Granados, Research Medics Editorial Desk –