
This post was guest authored by Scholarly Communication and Publishing Graduate Assistant Paige Kuester. This is the third part of a three-part series. Read Part 1 and Part 2.
We’re almost there! We’ve just got to go over some of the newest ways of measuring impact, and then we can all go home with that win.
Altmetrics
This is one of the newer types of bibliometrics. Rather than relying on citations alone to gauge the impact of an author's work, this "alternative metric" takes into account other ways that people interact with articles, such as bookmarking them, blogging about them, and tweeting about them, alongside traditional citations.
Different companies establish their scores in different ways; a few providers of this service are Altmetric, ImpactStory, and Plum Analytics.
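To give a rough sense of how such a score might work, here is a minimal sketch of a weighted attention count. The event types and weights below are illustrative assumptions only, not any provider's actual formula; each company draws on its own sources and (often proprietary) weighting.

```python
# A toy, illustrative attention score in the spirit of altmetrics.
# The event types and weights are made up for demonstration; real
# providers (Altmetric, ImpactStory, Plum Analytics) each use their
# own sources and weightings.

ILLUSTRATIVE_WEIGHTS = {
    "news_story": 8.0,
    "blog_post": 5.0,
    "tweet": 1.0,
    "bookmark": 0.5,
}

def attention_score(mentions: dict[str, int]) -> float:
    """Sum each mention count times its weight; unknown types count as 0."""
    return sum(ILLUSTRATIVE_WEIGHTS.get(kind, 0.0) * count
               for kind, count in mentions.items())

# Example: an article mentioned in 2 news stories, 1 blog post, and 40 tweets.
print(attention_score({"news_story": 2, "blog_post": 1, "tweet": 40}))  # 61.0
```

The point of the sketch is simply that attention of different kinds gets rolled up into one number, which is exactly why the criticisms below apply.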
Of course, there is criticism of this method as well, because it does not fall strictly within the scholarly realm, and some article topics are not inherently tweetable. The scores should not be used as the sole judge of an article's impact, because there are always other factors involved. Obviously, you can also game this metric and the h-index by talking about and citing your own articles, and by getting your friends to do it, too. Still, altmetrics are one way these measures are trying to keep up with the times: as papers become more widely available online, they may reach a wider audience than before.
I suppose this is a bit like picking your favorite baseball player. While altmetrics are clearly less subjective than that, there are still similarities. Other bibliometric measures only look at how other scholars use the work, which would be like only letting other players on your team pick your favorite player. They probably have good judgment about the player's ability, but they are not looking at outside factors, like how the player behaves on Twitter.
This points to another downside of altmetrics: a high score indicates only a high level of attention toward an article, and that attention could be negative press, just like the news coverage players get when they commit crimes. Nevertheless, altmetrics do not seem to be just a trend, because scholars do want to see the impact of their articles outside of strict academia; there may simply be some wrinkles left to iron out.
We have covered almost all of the bases now.
Now we enter the realm of proposed metrics.
S-index
The s-index, otherwise known as the self-citation index, is exactly what it sounds like. Flatt, Blasimme, and Vayena proposed it in a 2017 paper, not to replace other indices, but to be reported alongside the h-index so that others can tell whether an author's h-index is self-inflated. The authors cite evidence that each self-citation tends to bring an article an average of two to three more citations than it would have otherwise received. Phil Davis of the Scholarly Kitchen argues against the proposal, pointing out that not all self-citations are bad: some are necessary when a field is small and/or an author is building on their own previous research. It is not yet clear whether the s-index will catch on.
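To make the proposal concrete, here is a minimal sketch, assuming the s-index is defined the same way as the h-index but counted over an author's self-citations only (the reading suggested by Flatt, Blasimme, and Vayena's paper). All of the citation numbers are invented for illustration.

```python
# A minimal sketch of computing an h-index alongside an s-index,
# assuming the s-index is the h-index formula applied to
# self-citations only. Citation counts below are invented.

def index_from_counts(counts: list[int]) -> int:
    """Largest n such that at least n papers each have at least n citations."""
    counts = sorted(counts, reverse=True)
    n = 0
    while n < len(counts) and counts[n] >= n + 1:
        n += 1
    return n

# One (total_citations, self_citations) pair per paper for a hypothetical author.
papers = [(25, 6), (18, 5), (12, 4), (9, 1), (4, 3), (2, 0)]

h_index = index_from_counts([total for total, _ in papers])
s_index = index_from_counts([sc for _, sc in papers])

# Reported together, per the proposal: h-index = 4, s-index = 3
print(f"h-index = {h_index}, s-index = {s_index}")
```

Reporting the two numbers side by side lets a reader see at a glance how much of an author's h-index rests on citing themselves.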
One way to look at this would be if a baseball player only hit the ball when teammates were on base and about to score. It could be a deliberate strategy to have a successful at-bat only when there is a chance to improve their RBI count. But who wouldn't take that? It is good for the batter, and it is good for the teammates who only get to cross home plate because of that hit. Or it could simply be the situation the player is in.
Ok, not the best example, but I promise that’s it. We made it to the end!
None of the metrics covered here is perfect. They analyze different aspects of an article's or an author's impact, and so they should be used in conjunction with one another. As technology opens new doors to sharing work and assessing impact, we will likely see new measures and new ways of measuring old metrics. Just like in baseball, there is no single number that can tell you everything you need to know, but things are improving and changing every day.
Now you know enough about the metrics of scholarly publishing to hit a home run.
Or something like that.
Sources:
Allard, R. J. (2017). Measuring Your Research Impact. Uniformed Services University. Retrieved September 21, 2017, from http://usuhs.libguides.com/c.php?g=184957&p=2506307
Davis, P. (2017, September 13). Do We Need a Self-Citation Index? Scholarly Kitchen. Retrieved September 21, 2017, from https://scholarlykitchen.sspnet.org/2017/09/13/need-self-citation-index/
De Groot, S. (2017). Measuring Your Impact: Impact Factor, Citation Analysis, and other Metrics. UIC University Library. Retrieved September 21, 2017, from http://researchguides.uic.edu/if/impact
Flatt, J. W., Blasimme, A., & Vayena, E. (2017). Improving the Measurement of Scientific Success by Reporting a Self-Citation Index. Publications 5(3). Retrieved September 21, 2017, from http://www.mdpi.com/2304-6775/5/3/20/htm
Garfield, E. (2006). The History and Meaning of the Journal Impact Factor. JAMA 295(1). Retrieved September 21, 2017, from http://garfield.library.upenn.edu/papers/jamajif2006.pdf
Reuter, A. (2017, September 11). Baseball Standings Explained. Livestrong.com. Retrieved September 21, 2017, from http://www.livestrong.com/article/337911-baseball-standings-explained/