Self-citation is a sensitive topic in some circles, particularly those known disdainfully as "citation farms" or "citation cartels": groups of authors who routinely and massively cite themselves or one another in order to boost the impact of their publications. While citation cartels are widely considered a hallmark of bad science, most researchers agree there is room for self-citation; it is simply good practice to limit it where possible.
You’re one of a kind
Self-citing may be necessary – for instance, if you are the only person ever to have conducted a certain type of research. In that case, citing your own work or that of your co-authors may be the appropriate course of action. "Everyone self-cites because sooner or later, everyone builds upon previous findings," says Hadas Shema on the Scientific American blog. Shema quotes R. Costas et al., in their 2010 article "Self-citations at the meso and individual levels," as writing: "Given the cumulative nature of the production of new knowledge, self-citations constitute a natural part of the communication process."
But what if the self-citing is just blatant self-aggrandizing and self-promotion?
H-Index vs. S-Index
There's a simple way to measure the impact of a researcher's work: the h-index (not to be confused with the Journal Impact Factor, or JIF, which measures journals rather than individual researchers). An h-index of 20 indicates that a researcher has published 20 papers that have each received at least 20 citations. After 20 years of research, an h-index of 20 is good, 40 is outstanding, and 60 is exceptional. The advantage of the h-index is that it combines productivity (the number of publications) and impact (the number of citations) in a single number.
According to Nature, in 2017 a professor, now at the University of Helsinki, proposed a way to account more thoroughly for self-citations: a self-citation index, or s-index, along the lines of the h-index productivity indicator.
With this new tool, an s-index of 10 would mean a researcher had published 10 papers that each had received at least 10 self-citations.
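Both indices use the same calculation, applied to different counts: the h-index runs over each paper's total citations, the s-index over each paper's self-citations. A minimal sketch in Python (the function name and the sample citation counts are hypothetical, for illustration only):

```python
def citation_index(counts):
    """Return the largest n such that n of the papers each have at
    least n of the counted citations. Applied to total citation
    counts this is the h-index; applied to self-citation counts
    it is the s-index."""
    ranked = sorted(counts, reverse=True)  # most-cited papers first
    n = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:  # this paper still clears the threshold
            n = rank
        else:              # sorted descending, so no later paper can
            break
    return n

# Hypothetical per-paper counts for one researcher:
total_citations = [25, 8, 5, 3, 3]
self_citations = [4, 3, 1, 1, 0]

print(citation_index(total_citations))  # h-index: 3
print(citation_index(self_citations))   # s-index: 2
```

Because the list is sorted in descending order, the loop can stop at the first paper that falls short of its rank; every later paper has fewer citations and a higher rank, so it must fall short too.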
The professor who envisioned this scoring system says his aim was not to establish thresholds for what counts as an appropriately low s-index or, for that matter, to shame those who self-cite heavily. The s-index is simply another data-driven tool for measuring the impact of particular theories and articles.
According to John Ioannidis, a physician at Stanford University in California who specializes in meta-science: “This [study] should not lead to the vilification of researchers for their self-citation rates, not least because these can vary between disciplines and career stages.” He goes on to add: “It just offers complete, transparent information. It should not be used for verdicts such as deciding that too high self-citation equates to a bad scientist.”
Does self-citation reduce the probability of publication? It can, say some authors. "Thomson Reuters monitors self-citations when calculating a journal's impact factor, and may delist a journal when self-citation rates become too high or change the relative ranking of a journal within its field," says Phil Davis, Ph.D., an expert in science communication, on the blog The Scholarly Kitchen. "No editor wants to be known as the one who put the journal in 'time out.'"
A balancing act
"Self-citation is necessary to inform the reader about the author's prior work and provide background information. Low self-citation rates can lead a reviewer to believe the author's background is inadequate, while high rates might indicate that he/she is ignoring the work of colleagues. A balance is recommended," says Paul W. Sammarco in Ethics in Science and Environmental Politics. So, good luck balancing, especially for those in disciplines with novel experimentation. As long as you are not purposefully citing friends and colleagues in an effort to boost their h-indexes, it should all even out in the end.