Utilitarianism and Reflective Equilibrium
In ‘Why I Am Not a Utilitarian’, Michael Huemer objects that “there are so many counter-examples, and the intuitions about these examples are strong and widespread, it’s hard to see how utilitarianism could be justified overall.”  But I think it’s actually much easier to bring utilitarianism (or something close to it) into reflective equilibrium with common sense intuitions than it would be for any competing deontological view.  That’s because I think the clash between utilitarianism and intuition is shallow, whereas the intuitive problems with non-consequentialism are deep and irresolvable.

To fully make this case would probably require a book or three.  But let’s see how far I can get sketching the rough case in a mere blog post.

Firstly, and most importantly, the standard counterexamples to utilitarianism only work if you think our intuitive responses exclusively concern ‘wrongness’ and not closely related moral properties like viciousness or moral recklessness:

They generally start by describing a harmful act, done for the sake of some greater immediate benefit, but that we would normally expect to have further bad effects in the long term (esp. the erosion of trust in vital social institutions). The case then stipulates that the immediate goal is indeed achieved, with none of the long-run consequences that we would expect. In other words, this typically disastrous act type happened, in this particular instance, to work out for the best. So, the argument goes, consequentialism must endorse it; but doesn’t that typically-disastrous act type just seem clearly wrong? (The organ harvesting case is perhaps the paradigm in this style.)

To that objection, the appropriate response seems to me to be something like this: (1) You’ve described a morally reckless agent, who was almost certainly not warranted in thinking that their particular performance of a typically-disastrous act would avoid being disastrous. Consequentialists can certainly criticize that. (2) If we imagine that somehow the voice of God reassured the agent that no-one would ever find out, so no long-run harm would be done, then that changes matters. There’s a big difference between your typical case of “harvesting organs from the innocent” and the particular case of “harvesting organs from the innocent when you have 100% reliable testimony that this will save the most innocent lives on net, and have no unintended long-run consequences.” The salience of the harm done to the first innocent still makes it a bitter pill to swallow. But when one carefully reflects on the whole situation, vividly imagining the lives of the five innocents who would otherwise die, and cautioning oneself against any unjustifiable status-quo bias, then I ultimately find I have no trouble at all endorsing this particular action, in this very unusual situation.

Utilitarianism clearly endorses our being strongly reluctant to murder innocent people (and respecting commonsense moral norms more generally).  While it’s possible to imagine hypothetical cases in which an agent ought (by utilitarian lights) to override this general disposition, it’s an open question what lesson we should draw from our intuitive resistance to such overriding.  If someone insists that they not only endorse the utilitarian-compatible claims in this vicinity, but additionally judge that the act itself “clearly” ought not to be done (even in the “100% reliable” version of the case), then I’ll grant that they find utilitarianism counterintuitive in this respect.  But then the question still remains whether they might find further implications of deontology to be even more counterintuitive.

Consider the poverty of the alternatives:

* Deontology prioritizes those who are privileged by default; but this violates the strong theoretical intuition that status quo privilege is morally arbitrary. (Why should the five have to die rather than the one, just because organ failure happened to occur in their bodies rather than his?)

* It rests on a distinction between doing and allowing that doesn’t seem capable of carrying the weight that deontologists place upon it. 

* It implies that we should often hope/prefer that others act wrongly; since, after all, impartial observers should want and hope for the best outcome.

* Worse, according to my new paradox of deontology, deontic constraints are self-undermining in the strong sense of being incompatible with taking their violations (e.g. the killing of an innocent person) to be particularly important.

* Most importantly, deontology makes incredible claims about what fundamentally matters.  It seems completely wild to claim that keeping a deathbed promise (to borrow one of Huemer’s examples) is seriously more important, in principle, than the entire lives of many innocent people.  So either deontologists are stuck making completely wild claims of this sort, or their normative prescriptions (concerning what we allegedly ought to do) bear no relation to what really matters.

Now, I think our deepest intuitions about what really matters are much more methodologically significant, and should play a greater role in determining our ethical theory, than superficial verdicts about the extension of the word ‘wrong’ in various highly-specified cases.  So that’s why I think (something close to) utilitarianism is actually the most intuitive moral theory.

Originally appeared on Good Thoughts