All About Impact Factors

“Impact” by Dru! is licensed under CC BY-NC 2.0.

This week, Clarivate Analytics released its annual Journal Citation Report, which includes new and updated Journal Impact Factors (JIF) for almost 12,000 academic journals. In case you're not familiar, the JIF measures the average number of times a journal's articles from the previous two years were cited in a given year.
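To make that concrete: a journal's 2018 JIF is the number of 2018 citations to items it published in 2016 and 2017, divided by the number of citable items it published in those two years. So a hypothetical journal that published 200 citable items in 2016–2017, and whose articles from those years picked up 500 citations in 2018, would have a 2018 JIF of 500 / 200 = 2.5 (the journal and figures here are invented purely for illustration).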

Impact factors are a relatively recent phenomenon. The idea came about in the 1960s, when University of Pennsylvania linguist Eugene Garfield started compiling his Science Citation Index (now known as the Web of Science) and needed to decide which journals to include. He eventually published the citation numbers he had collected in a separate publication, the Journal Citation Report (JCR), as a way for librarians to compare journals (the JCR is now owned by Clarivate Analytics). Today, impact factors are so important that it is very difficult for new journals to attract submissions before they have one. And the number is used not just to compare journals, but also to assess individual scholars. The JIF is the most prominent impact factor, but it is not the only one: in 2016, Elsevier launched CiteScore, which is based on citations from the past three years.

Academics have long taken issue with how impact factors are used to evaluate scholarship. They argue that administrators, and even scholars themselves, incorrectly assume that the higher a journal's impact factor, the better the research it publishes. Many point out that publishing in a journal with a high impact factor does not mean that one's own work will be highly cited: because citation counts are highly skewed, a handful of heavily cited papers can pull a journal's average well above what a typical article receives. One recent study, for example, found that 75% of articles receive fewer citations than their journal's average.

Critics also note that impact factors can be manipulated. Indeed, every year Clarivate Analytics suppresses the impact factors of journals that have tried to game the system. This year it suppressed the impact factors of 20 journals, including journals that cited themselves too often and journals that engaged in citation stacking, in which authors are asked to cite papers from cooperating journals (which band together to form "citation cartels"). The 20 journals come from a number of different publishers, including major companies such as Elsevier and Taylor & Francis.

In response to these criticisms, some journals and publishers have started to emphasize article-level metrics or alternative metrics instead. Others, such as the open-access publisher eLife, state openly on their website that they do not support the impact factor. eLife is one of thousands of organizations and individuals who have signed the San Francisco Declaration on Research Assessment (DORA), which advocates for research assessment measures that do not rely on impact factors. Another recent project, HuMetricsHSS, is trying to get academic departments, particularly those in the humanities and social sciences, to measure scholars by how much they embody five core values: collegiality, quality, equity, openness, and community. While these developments are promising, it seems unlikely that the journal impact factor will go away anytime soon.

What do you think about the use of impact factors to measure academic performance? Let us know in the comments.
