
All About Impact Factors

“Impact” by Dru! is licensed under CC BY-NC 2.0.

This week, Clarivate Analytics released its annual Journal Citation Reports, which includes new and updated Journal Impact Factors (JIFs) for almost 12,000 academic journals. In case you're not familiar, a journal's JIF is, roughly, the average number of citations its articles from the previous two years received in the past year.
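For readers who want to see the arithmetic, the standard two-year calculation looks roughly like this (a sketch of the published definition; Clarivate's exact implementation depends on decisions about what counts as a "citable item"):

\[
\mathrm{JIF}_{Y} = \frac{C_{Y}}{N_{Y-1} + N_{Y-2}}
\]

where \(C_Y\) is the number of citations received in year \(Y\) by items the journal published in years \(Y-1\) and \(Y-2\), and \(N_{Y-1}\) and \(N_{Y-2}\) are the counts of citable items published in those two years. For example, a journal whose 100 citable items from the past two years drew 250 citations this year would have a JIF of 2.5.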

Impact factors are a relatively recent phenomenon. The idea came about in the 1960s, when University of Pennsylvania linguist Eugene Garfield started compiling his Science Citation Index (now known as the Web of Science) and needed to decide which journals to include. He eventually published the numbers he had collected in a separate publication, Journal Citation Reports (JCR), as a way for librarians to compare journals. (JCR is now owned by Clarivate Analytics.) Today, impact factors are so important that it is very difficult for new journals to attract submissions before they have one, and the number is being used not just to compare journals but to assess individual scholars. The JIF is the most prominent impact factor, but it is not the only one: in 2016, Elsevier launched CiteScore, which is based on citations from the past three years.

Academics have long taken issue with how impact factors are used to evaluate scholarship. They argue that administrators and even scholars themselves incorrectly believe that the higher the impact factor, the better the research. Many point out that publishing in a journal with a high impact factor does not mean that one's own work will be highly cited. One recent study, for example, showed that 75% of articles receive fewer citations than their journal's average.

Critics also note that impact factors can be manipulated. Indeed, every year Clarivate Analytics suppresses the impact factors of journals that have tried to game the system. This year it suppressed the impact factors of 20 journals, including some that cited themselves too often and some that engaged in citation stacking, in which authors are asked to cite papers from cooperating journals (which band together to form "citation cartels"). The 20 journals come from a number of different publishers, including major companies such as Elsevier and Taylor & Francis.

As a result of these criticisms, some journals and publishers have also started to emphasize article-level metrics or alternative metrics instead. Others, such as the open access publisher eLife, openly state on their website that they do not support the impact factor. eLife is one of thousands of organizations and individuals who have signed the San Francisco Declaration on Research Assessment (DORA), which advocates for research assessment measures that do not include impact factors. Another recent project, HuMetricsHSS, is trying to get academic departments, particularly those in the humanities and social sciences, to measure scholars by how much they embody five core values: collegiality, quality, equity, openness, and community. While these developments are promising, it seems unlikely that the journal impact factor will go away anytime soon.

What do you think about the use of impact factors to measure academic performance? Let us know in the comments.

ORCID iDs @ Temple

Last year on the blog, we introduced ORCID, a non-profit organization that provides persistent, unique identifiers to researchers across the globe. ORCID iDs help ensure that researchers get credit for all their scholarly contributions.

While there are a number of different researcher identifiers out there (including ResearcherID and Scopus Author ID), we recommend that all Temple researchers register for an ORCID iD. It's free and it takes less than a minute to sign up.

There are currently 3,364,764 live ORCID iDs. Sixteen publishers, including the American Chemical Society, PLOS, and Wiley, now require that authors submit ORCID iDs at some point in the publication process. And if you think ORCID is just for scientists, you’re wrong. Cambridge University Press has begun integrating ORCID iDs into their book publishing workflows, and Taylor & Francis is currently undertaking a pilot project to integrate ORCID iDs into their humanities journals.

Researchers can use their ORCID iD profile to highlight their education, employment, publications, and grants. They can even add peer review activities. The American Geophysical Union, F1000, and IEEE are just three of the organizations that currently connect with ORCID to recognize the work of their peer reviewers.

In order to get a better sense of who is using ORCID at Temple, we looked for researchers with publicly available ORCID profiles who list "Temple University" as their current place of employment. We found 205 ORCID iDs that met this criterion. Of those, the largest number belong to researchers at the Lewis Katz School of Medicine. The College of Science and Technology has the second-highest number, with faculty from Physics, Chemistry, and Biology particularly well represented. The College of Liberal Arts has the third-highest number, thanks in large part to the Psychology department. A handful of researchers in the Fox School of Business, the College of Engineering, and the College of Education have also signed up. The overwhelming majority of Temple researchers with ORCID iDs are faculty members; some postdoctoral fellows have them, but very few graduate students do.
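Curious how a search like this can be run? ORCID exposes a public search API. The sketch below shows one way to count matching records; it assumes the current v3.0 endpoint and is an illustration rather than the exact method we used. Note that the affiliation-org-name field matches past as well as current affiliations, so results still need manual filtering for current employment.

```python
# Sketch: counting public ORCID records that name an organization
# as an affiliation, via ORCID's public search API (v3.0 assumed).
import requests

ORCID_SEARCH_URL = "https://pub.orcid.org/v3.0/search"

def count_affiliated_ids(org_name: str) -> int:
    """Return how many public ORCID records list org_name as an affiliation."""
    resp = requests.get(
        ORCID_SEARCH_URL,
        # Solr-style field query; rows=0 asks only for the total count.
        params={"q": f'affiliation-org-name:"{org_name}"', "rows": 0},
        headers={"Accept": "application/json"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["num-found"]

print(count_affiliated_ids("Temple University"))
```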

Because filling out one’s ORCID iD profile is optional, and profiles can also be set to private, our data is incomplete, and probably underestimates the true number of individuals at Temple with ORCID iDs. Nonetheless, it is exciting to see that researchers in almost all of Temple’s schools and colleges have signed up for ORCID iDs. We’re confident that this number will continue to grow in the future.

Temple Libraries is proud to be an institutional member of ORCID.

Owning Your Impact


“Measure a thousand times, cut once” by Sonny Abesamis is licensed under CC BY 2.0.

Scholars are routinely called upon to demonstrate the impact of their research, whether for tenure and promotion or for grant and fellowship applications. Traditionally, citation counts and journal impact factors were used to determine research impact. Today, there is widespread acknowledgement that both of these metrics are seriously flawed. One recent study, for example, showed that men cite their own papers 56% more often than women do (which means some men may have inflated citation counts). Another study pointed out that publishing in a journal with a high impact factor does not necessarily mean that your own work will be highly cited. Still, it's clear that traditional metrics are not going away: just last December, the publisher Elsevier launched its own journal-level metric, CiteScore.

Metrics can cause significant anxiety among scholars. While such anxiety is understandable (no one wants to be reduced to a number), the more proactive a scholar is when it comes to documenting their impact, the better off they will be. Scholars should learn to take control of their metrics, and use them to craft their own story about their research. There are four main ways to do this:

First, build and maintain your online presence. Make sure your faculty profile on your department page is up to date. Register for an ORCID iD and use it. Create profiles on various academic social networks, and on Google Scholar. Consider having your own website. Join Twitter and connect with colleagues around the world.

Second, make as much of your work openly available as you can. People can’t cite or talk about your work if they can’t read it. Publish your research in open access journals, or post a preprint or postprint to a disciplinary repository. Can’t do either of these things? Figure out alternative ways of sharing your scholarship. Blog or tweet about research in progress. Use Figshare or Slideshare to call attention to unpublished scholarly output. Or, ask your publisher if you can share a small part of your monograph (such as the table of contents) on an academic social networking site (such as Academia.edu or ResearchGate).

Third, keep track of your citations. Regularly check your citations on Web of Science, Scopus, and Google Scholar, but be skeptical of the results, particularly if you are a humanities scholar: edited collections and new journals are often not indexed. Ultimately, your best bet is to do your own research on who is citing you by searching the appropriate databases for your field (such as JSTOR, ProQuest, or Google Books). (If you like to script this kind of tracking, see the sketch after the fourth tip below.) Once you have a list of all your citations, dig a little deeper. Quantity matters, but so does quality: if you can show that you are being cited by leaders in your field, or by scholars from outside your discipline, you can make a stronger case for your impact.

Fourth, pay attention to altmetrics. Altmetrics can provide you with more details about your research impact than citation counts alone. Most sharing platforms make it easy for you to see how many people are viewing or downloading your scholarship (disciplinary repositories usually offer scholars similar analytics). Sign up for ImpactStory to help keep track of what people are saying about your work on social media. Try searching the Open Syllabus Explorer to see the frequency with which your book or article is taught and what related work is taught alongside it. Here again, however, it’s important to remember that existing tools can only tell you so much. You must do additional research to really find out all the ways in which your work is being used.
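As promised in the third tip: scholars comfortable with a little scripting can partially automate citation tracking. One freely available signal is the citation count Crossref records for a DOI. This is an illustration using Crossref's public REST API, not one of the databases named above, and because Crossref only counts references deposited by its member publishers, the number should be read as a floor, not a complete count.

```python
# Sketch: fetch the citation count Crossref records for a DOI.
# Crossref only sees references deposited by member publishers,
# so treat the result as a lower bound on your true citation count.
import requests

def crossref_citation_count(doi: str) -> int:
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30)
    resp.raise_for_status()
    return resp.json()["message"]["is-referenced-by-count"]

# Substitute one of your own DOIs here.
print(crossref_citation_count("10.1038/171737a0"))
```

A count like this won't replace the manual digging described above, but it is a quick way to spot new citations to your DOI-bearing work.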

For more information on enhancing your impact, check out the Library’s guide to the topic.