Academia’s emphasis on citation rates is “mixed news” for philosophy: it can bring attention to high-quality work, but tends to make philosophy and other humanities fields look bad in comparison with other areas, says Eric Schwitzgebel (UC Riverside), in the following guest post.
(A version of this post first appeared at The Splintered Mind.)
Citation Rates by Academic Field: Philosophy Is Near the Bottom
by Eric Schwitzgebel
Citation rates increasingly matter. Administrators look at them as evidence of scholarly impact. Researchers familiarizing themselves with a new topic notice which articles are highly cited, and they are more likely to read and cite those articles. The measures are also easy to track, making them apt targets for gamification and value capture: Researchers enjoy, perhaps a bit too much, tracking their rising h-indices.
This is mixed news for philosophy. Noticing citation rates can be good if it calls attention to high-quality work that would otherwise be ignored, written by scholars at less prestigious universities or published in less prestigious journals. And there’s value in having more objective indicators of impact than what someone with a named chair at Oxford says about you. However, the attentional advantage of high-citation articles amplifies the already toxic rich-get-richer dynamic of academia; there’s a temptation to exploit the system in ways that are counterproductive to good research (e.g., salami-slicing articles, loading up co-authorships, and excessive self-citation); and it can lead to the devaluation of important research that isn’t highly cited.
Furthermore, focus on citation rates tends to make philosophy, and the humanities in general, look bad. We simply don’t cite each other as much as do scientists, engineers, and medical researchers. There are several reasons.
One reason is the centrality of books to the humanities. Citations in and of books are often not captured by citation indices. And even when a citation to a book is captured, a single book represents a huge amount of scholarly work per citation earned, compared with the same work split across a dozen or more short articles.
Another reason is the relative paucity of co-authorship in philosophy and the other humanities. In the humanities, books and articles are generally solo-authored; in the sciences, engineering, and medicine, author lists commonly run to three or five names, and sometimes dozens, with each author earning a citation any time the article is cited.
Overall publication rates are probably also higher in the sciences, engineering, and medicine, where short articles are common. Reference lists might also be longer on average. And in those fields the cited works are rarely historical. Combined, these factors create a much larger pool of citations to be spread among current researchers.
Perhaps there are other factors as well. In all, even excellent and influential philosophers often end up with citation numbers that would be embarrassing for most scientists at a comparable career stage. I recently looked at a case for promotion to full professor in philosophy, where the candidate and one letter writer both touted the candidate’s Google Scholar h-index of 8 — which is actually good for someone at that career stage in philosophy, but could be achieved straight out of grad school by someone in a high-citation field if their advisor is generous about co-authorship.
To quantify this, I looked at the September 2022 update of Ioannidis, Boyack, and Baas’s “Updated science-wide author databases of standardized citation indicators”. Ioannidis, Boyack, and Baas analyze the citation data of almost 200,000 researchers in the Scopus database (which consists mostly of citations of journal articles by other journal articles) from 1996 through 2021. Each researcher is assigned one primary subfield, out of 159 subfields, and is ranked according to several criteria. One subfield is “philosophy”.
Before I get to the comparison of subfields, you might be curious to see the top 100 ranked philosophers, by the composite citation measure c(ns) that Ioannidis, Boyack, and Baas seem to like best:
1. Nussbaum, Martha C.
2. Clark, Andy
3. Lewis, David
4. Gallagher, Shaun
5. Searle, John R.
6. Habermas, Jürgen
7. Pettit, Philip
8. Buchanan, Allen
9. Goldman, Alvin I.
10. Williamson, Timothy
11. Thagard, Paul
12. Lefebvre, Henri
13. Chalmers, David
14. Fine, Kit
15. Anderson, Elizabeth
16. Walton, Douglas
17. Pogge, Thomas
18. Hansson, Sven Ove
19. Schaffer, Jonathan
20. Block, Ned
21. Sober, Elliott
22. Woodward, James
23. Priest, Graham
24. Stalnaker, Robert
25. Bechtel, William
26. Pritchard, Duncan
27. Arneson, Richard
28. McMahan, Jeff
29. Zahavi, Dan
30. Carruthers, Peter
31. List, Christian
32. Mele, Alfred R.
33. Hardin, Russell
34. O’Neill, Onora
35. Broome, John
36. Griffiths, Paul E.
37. Davidson, Donald
38. Levy, Neil
39. Sosa, Ernest
40. Hacking, Ian
41. Craver, Carl F.
42. Burge, Tyler
43. Skyrms, Brian
44. Strawson, Galen
45. Prinz, Jesse
46. Fricker, Miranda
47. Honneth, Axel
48. Machery, Edouard
49. Stanley, Jason
50. Thompson, Evan
51. Schatzki, Theodore R.
52. Bohman, James
53. Norton, John D.
54. Bach, Kent
55. Recanati, François
56. Sider, Theodore
57. Lowe, E. J.
58. Hawthorne, John
59. Dreyfus, Hubert L.
60. Godfrey-Smith, Peter
61. Wright, Crispin
62. Cartwright, Nancy
63. Bunge, Mario
64. Raz, Joseph
65. Bostrom, Nick
66. Schwitzgebel, Eric
67. Nagel, Thomas
68. Okasha, Samir
69. Velleman, J. David
70. Putnam, Hilary
71. Schroeder, Mark
72. Ladyman, James
73. van Fraassen, Bas C.
74. Hutto, Daniel D.
75. Annas, Julia
76. Bird, Alexander
77. Bicchieri, Cristina
78. Audi, Robert
79. Enoch, David
80. McDowell, John
81. Noë, Alva
82. Carroll, Noël
83. Williams, Bernard
84. Pollock, John L.
85. Jackson, Frank
86. Gardiner, Stephen M.
87. Roskies, Adina
88. Sagoff, Mark
89. Kim, Jaegwon
90. Parfit, Derek
91. Jamieson, Dale
92. Makinson, David
93. Kriegel, Uriah
94. Horgan, Terry
95. Earman, John
96. Stich, Stephen P.
97. O’Neill, John
98. Popper, Karl R.
99. Bratman, Michael E.
100. Harman, Gilbert
All, or almost all, of these researchers are influential philosophers. But there are some strange features of this ranking. Some people are ranked clearly higher than their impact warrants; others lower. So as not to pick on any philosopher who might feel slighted by my saying that they are too highly ranked, I’ll just note that on this list I am definitely over-ranked (at #66) — beating out Thomas Nagel (#67), among others. Other philosophers are missing because they are classified under a different subfield. For example, Daniel C. Dennett is classified under “Artificial Intelligence and Image Processing”. Saul Kripke doesn’t make the list at all — presumably because his impact came through books not included in the Scopus database.
Readers who are familiar with mainstream Anglophone academic philosophy will, I think, find my ranking based on citation rates in the Stanford Encyclopedia of Philosophy more plausible, at least as a measure of impact within mainstream Anglophone philosophy. (On the SEP list, Nagel is #11 and I am #251.)
To compare subfields, I decided to capture the #1, #25, and #100 ranked researchers in each subfield, excluding subfields with fewer than 100 ranked researchers. (Ioannidis et al. don’t list all researchers, aiming to include only the top 100,000 ranked researchers overall, plus at least the top 2% in each subfield for smaller or less-cited subfields.)
A disadvantage of my approach to comparing subfields by looking at the 1st, 25th, and 100th ranked researchers is that being #100 in a relatively large subfield presumably indicates more impact than being #100 in a relatively small subfield. But the most obvious alternative method — percentile ranking by subfield — plausibly invites even worse trouble, since there are huge numbers of researchers in subfields with high rates of student co-authorship, making it too comparatively easy to get into the top 2%. (For example, decades ago my wife was published as a co-author on a chemistry article after a not-too-demanding high school internship.) We can at least in principle try to correct for subfield size by looking at comparative faculty sizes at leading research universities or attendance numbers at major disciplinary conferences.
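For concreteness, here is a minimal sketch in Python (with pandas) of how one might extract the #1, #25, and #100 researchers per subfield from the Ioannidis, Boyack, and Baas spreadsheet and then rank subfields by the citations of the 25th-ranked researcher, as in the list further below. The filename and column names here are hypothetical stand-ins of my own and should be checked against the actual data file:

```python
import pandas as pd

# Hypothetical filename and column names: check them against the actual
# Ioannidis, Boyack, and Baas data file before running.
DATA_FILE = "career_2022_september.xlsx"
SUBFIELD_COL = "subfield"          # researcher's primary subfield
CITATIONS_COL = "total_citations"  # total citations, 1996-2021

df = pd.read_excel(DATA_FILE)

rows = []
for subfield, group in df.groupby(SUBFIELD_COL):
    if len(group) < 100:
        continue  # exclude subfields with fewer than 100 ranked researchers
    ranked = group.sort_values(CITATIONS_COL, ascending=False)
    rows.append({
        "subfield": subfield,
        "rank_1": ranked[CITATIONS_COL].iloc[0],
        "rank_25": ranked[CITATIONS_COL].iloc[24],
        "rank_100": ranked[CITATIONS_COL].iloc[99],
    })

# Rank subfields by the total citations of the 25th most-cited researcher.
table = pd.DataFrame(rows).sort_values("rank_25", ascending=False)
print(table.to_string(index=False))
```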
The preferred Ioannidis, Boyack, and Baas c(ns) ranking is complex, and maybe better than simpler ranking systems. But for present purposes I think it’s most interesting to consider the easiest, most visible citation measures, total citations and h-index (with no exclusion of self-citation), since that’s what administrators and other researchers see most easily. H-index, if you don’t know it, is the largest number h such that h of the author’s articles have at least h citations each. (For example, if your top 20 most-cited articles are each cited at least 20 times, but your 21st most-cited article is cited less than 21 times, your h-index is 20.)
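If it helps, here is that definition as a minimal Python sketch; the function name and the example citation counts are mine, purely for illustration:

```python
def h_index(citations):
    """Return the largest h such that h articles have at least h citations each."""
    # Sort citation counts from highest to lowest.
    sorted_counts = sorted(citations, reverse=True)
    h = 0
    # Walking down the list, the (i+1)-th article supports h = i+1
    # only if it has at least i+1 citations.
    for i, count in enumerate(sorted_counts):
        if count >= i + 1:
            h = i + 1
        else:
            break
    return h

# Example: five articles cited 10, 8, 5, 4, and 3 times.
# Four articles have at least 4 citations, but not five with at least 5.
print(h_index([10, 8, 5, 4, 3]))  # 4
```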
Drumroll, please…. Scan far, far down the list to find philosophy. This list is ranked by the total citations of each subfield’s 25th most-cited researcher, which I think is probably a more stable benchmark than the 1st or the 100th.
Philosophy ranks 126th of the 131 subfields. The 25th-most-cited scholar in philosophy, Alva Noë, has 3,600 citations in the Scopus database. In the top field, developmental biology, the 25th-most-cited scholar has 142,418 citations—a ratio of almost 40:1. Even the 100th-most-cited scholar in developmental biology has more than five times as many citations as the single most-cited philosopher in the database.
The other humanities also fare poorly: History at 129th and Literary Studies at 130th, for example. (I’m not sure what to make of the relatively low showing of some scientific subfields, such as Zoology. One possibility is that it is a relatively small subfield, with most biologists classified in other categories instead.)
Here’s the chart for h-index:
Again, philosophy is 126th out of 131. The 25th-ranked philosopher by h-index, Alfred Mele, has an h of only 27, compared to an h of 157 for the 25th-ranked researcher in Cardiovascular System & Hematology.
(Note: if you’re accustomed to Google Scholar, Scopus h-indices tend to run lower. Alfred Mele, for example, has an h-index twice as high in Google Scholar as in Scopus: 54 vs. 27. Google Scholar h-indices are also higher for non-philosophers. The 25th-ranked scholar in Cardiovascular System & Hematology doesn’t have a Google Scholar profile, but the 26th-ranked does: Bruce M. Psaty, with an h-index of 156 in Scopus vs. 207 in Google Scholar.)
Does this mean that we should be doubling or tripling the h-indices of philosophers when comparing their impact with that of typical scientists, to account for the metrical disadvantages they face as a result of having fewer co-authors, longer articles on average, books that are poorly captured by these metrics, slower overall publication rates, and so on? Well, it’s probably not that simple. As mentioned, we would want at least to take field size into account. Also, a case might be made that some fields are just generally more impactful than others, for example through interdisciplinary or public influence, even after correcting for field size. But one thing is clear: straightforward citation-count and h-index comparisons between the humanities and the sciences will inevitably put humanists at a stark, and probably unfair, disadvantage.
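Purely to make vivid how large such a correction factor would be, here is the naive rescaling in Python, using the Scopus figures quoted above (and with the caveats just given: it ignores field size and any genuine differences in impact between fields):

```python
# A deliberately naive adjustment, for illustration only: rescale a
# philosopher's Scopus h-index by the ratio of h-indices at a fixed
# rank (rank 25) in the two fields. As discussed above, this ignores
# field size and any real differences in impact between fields.
H25_CARDIOLOGY = 157  # 25th-ranked researcher, Cardiovascular System & Hematology
H25_PHILOSOPHY = 27   # 25th-ranked philosopher by h-index (Alfred Mele)

def naively_adjusted_h(h):
    """Rescale a philosopher's Scopus h-index to a 'cardiology-equivalent' value."""
    return h * H25_CARDIOLOGY / H25_PHILOSOPHY

print(naively_adjusted_h(27))  # 157.0, by construction: a factor of nearly 6
```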