Norms for writing, research, and citations

In our newest “how can we help you?” thread, a reader writes:

This is probably a basic question about academic norms. When you write a paper, concerning any train of thought (claims, arguments, etc.), should you simply think it through on your own and write it out as your own, adding a reference to similar work if you happen to find it, *or* search the literature to see whether someone else has made similar claims or arguments, and then explain that person's idea in your paper, not claiming it as your own even if you came up with it yourself? I intend this as a very general question; the choice can arise any number of times when writing a paper. Thanks in advance.

This is a great query, and it echoes a set of questions raised by N.K. in this thread a few months ago:

I wonder if it might be worth starting a more general discussion here about the *point* of citation. I, for one, would be interested to hear others’ views on when, and why, one *ought* to cite a given work.

One suggestion made above is that you ought to (try to) cite everything relevant. Another (to which I take Helen to have objected, though the following is an attempt to describe it a bit more charitably) would be that you ought to cite relevant work that you think is good/worth reading and engaging with. These obviously differ in at least one important way: on the latter proposal, citations would be prestige-markers; on the former, they wouldn't.

Are there other options? And what is there to be said for and against each option?

I’m also curious what others think of the following sociological hypothesis: when you cite “low-prestige” work (whatever exactly that means), you thereby lower the perceived prestige of your own (i.e., the citing) work, in the sense that others are going to be less likely to read your work and/or take it seriously (if/when they take note of the character of your citations).

N.K.'s last point (that citing "low-prestige" work can lower the perceived prestige of your own) is one of the main reasons I think authors are under an obligation to do some basic research before writing, via simple keyword searches on PhilPapers and Google Scholar, and to cite and/or discuss explicitly up front any relevant precursors (including similar arguments) to the argument they intend to make.

After all, suppose you know that citing low-prestige work is likely to undermine the prestige of your own work (and, say, your claim to originality), and you operate according to the rule of only citing work that you think is "good" or "worth reading." In that case, here is something you might very well do: you might systematically avoid reading the work of "low-prestige scholars" because "their work is not good or worth reading," but then publish ideas similar to theirs, taking credit for those ideas, because you never bothered to read their work. That's messed up, and it is (I think) a very real conflict of interest that researchers can face. We are not just thinkers: we are supposed to be scholars, and sound scholarship requires being aware of and engaging with what has been published on your topic before you publish. For example, imagine that Einstein (a lowly patent clerk) published his 1905 paper on relativity in Annalen der Physik, but a more established German physicist didn't bother to read physics journals and published basically the same paper in 1908, taking credit for the discovery. Then suppose that, because of the latter's more prominent stature, everyone cited him rather than Einstein. I hope we can all agree that this would be pretty messed up! It's something that norms of scholarship and publication should aim to avoid.

Of course, this raises the more difficult question of what sound scholarship requires. As N.K.'s query suggests, how much is one supposed to read and be aware of? It's impossible (and undesirable) to read and cite "everything," as that would result in reference lists of unmanageable length. So, how much is one obligated to read, be aware of, and cite?

Here, I go back to my first rule: at a minimum, you should run simple PhilPapers and Google Scholar searches for key words relating to your argument, and if there is anything like a Stanford Encyclopedia of Philosophy entry on the topic, you should be aware of the material there too. Beyond that, my attitude is: we should do our best, making a good-faith effort to find (and cite) relevant material that predates our own, including material we come across while writing. If, for example, you discover only after writing a rough draft that someone defended a similar idea before you, you should go back and cite them explicitly in your introduction, giving them credit rather than claiming full originality, and then distinguish your argument from theirs (as arguments are rarely identical in all respects).

Finally, I would just add this. None of us is perfect. We all make errors, and accidental omissions are probably inevitable. Yet this is, I think, just one more reason why academic disciplines should move toward an open, online, crowdsourced approach to peer review. It's hard for a single author to do sound scholarship all on their own, and referees are often helpful in pointing out omissions. The more "referees" a paper has (as in an open, online form of review), the more likely errors of omission are to be recognized and corrected before journal publication.

But these are just my thoughts. What are yours?

Originally appeared on The Philosophers' Cocoon