The Normativity Objection to Deontology
‘Morality’ is ambiguous. It might be used to pick out either of the following (which are often presumed to co-refer, but conceptually could come apart):

(1) A certain social practice, with norms involving certain prohibitions, prerogatives, and practices of praise and blame.

(2) The fundamental, authoritative normative truths concerning what we really ought to care about and to do.

The question “Why be moral?” gets a grip on us due to meaning (1). You can’t sensibly ask why we ought to do what we really ought to do. But you can sensibly ask whether ordinarily accepted norms are actually authoritative, or worth (non-instrumentally) caring about. Indeed, it’s very important to ask this question, rather than uncritically accepting cultural norms and practices that may or may not be justified.

To anticipate: I think most ordinary morality is actually pretty good, on instrumental grounds, though there’s certainly room for improvement in places. But I think people go wrong when they attribute non-instrumental significance to deontic constraints. We get much more plausible verdicts overall when we appreciate that those norms have purely instrumental value, and that what we should care about non-instrumentally is just well-being. Put another way, we should place beneficence at the center of ethics, and see everything else as derivative of that. Competing norms cannot plausibly claim to be more important, in principle, than people’s lives and well-being.1

In what follows, I’ll briefly introduce the traditional (selfish) amoralist before expanding upon a new challenge specifically to deontological ethics: the challenge of the beneficent amoralist.

“The rules say you’ve got to stop helping people!”

Why be Moral: The Selfish Amoralist

Hume’s “sensible knave” has long haunted moral philosophers. It’d be nice to have an argument that would rationally compel the selfish amoralist to care about others. That’s probably not possible, but I do think it is more rationally coherent to have a broader circle of concern. After all, we generally take ourselves (and our loved ones) to matter. But we’re not unique. Whatever property makes our interests normatively considerable (most plausibly, our sentience) is a property shared by many others too. So our overall patterns of concern are more unified and coherent if expanded widely and systematized in this way, rather than making exceptions for ourselves. I think this line of argument goes a fair way towards addressing the traditional “Why be moral?” challenge in a non-question-begging way.

But we might also be comfortable enough with some question-begging answers. For example, I think it is just obviously true that other people’s interests are genuinely worth caring about. This seems self-evident—not in the colloquial sense that everyone will necessarily agree with it, but in the philosopher’s sense that it doesn’t need to be justified by reference to anything else: simply understanding the intrinsic content of the claim provides sufficient justification for believing it. Crucially, it doesn’t seem mysterious in any way that others’ interests are worth caring about. We can comfortably take this as bedrock; it doesn’t call out for further explanation.

Why be Moral: The Amoral Saint

But I think there’s a new version of the challenge that specifically afflicts deontologists. Imagine a selfless and beneficent individual, who cares deeply (perhaps even equally) about all sentient beings, but has no independent (non-instrumental) concern for other norms of “morality”. From the perspective of this saintly amoralist, deontological “morality”—like conservative sexual morality—looks like a potentially harmful practice, fetishizing arbitrary and objectively irrelevant properties to the detriment of people’s real interests. “Why do you give moral weight to things other than making people’s lives go well?” she asks you. “How do you justify giving those other features so much weight that you insist upon letting vast numbers die unnecessarily, or otherwise have people’s lives go significantly worse?”

These seem like good questions! Indeed, they seem like morally pressing questions. And they suggest that there is something mysterious about deontological distinctions. They can’t just be taken as bedrock; they really do call out for further explanation and justification.

Hume’s sensible knave is a selfish jerk, who fails to care about much of what really matters (namely: other people). But our beneficent amoralist isn’t like that at all. She cares deeply about others. So much so that she’d be willing to suffer the psychological trauma of pushing a guy in front of a trolley if that would truly help others even more. What a saint! The rest of us feel free to disregard the suffering that results from our “permissible” (in)actions; but not her. If she’d let the five die, their screams would have haunted her dreams forever, just as the death of the one she killed now will. She sees them all in their full humanity, and never turns away.

What can you say against this saint? What deficit of character or moral motivation does she display in her extreme beneficence? “She violated the rights of the one!” you say. But she just looks at you confused, like you’d started speaking in tongues. “I’m very regretful that the one was harmed at all,” she assures you, “but why aren’t you comparably concerned about the five?” Why indeed? When rights lack instrumental value, or fail to promote the overall good, they are in effect a mechanism for prioritizing some people (specifically, those with a certain kind of status quo privilege) and disregarding others (those in a less advantageous default position). Why would you endorse such an invidious social practice, when instrumentally harmful?

Narrow vs Wide Reflective Equilibrium

The standard justification for deontological moral theory is that it meshes with “commonsense intuitions” about morality. But this implicitly draws upon our first—more sociological—sense of ‘morality’. Narrow reflective equilibrium is the project of systematizing our first-order moral intuitions: addressing how to most intuitively apply the words ‘right’ and ‘wrong’ across different cases. Deontology may be a plausible solution to the project of narrow reflective equilibrium. But this narrow project misses the central point of morality, that it is supposed to be genuinely normatively authoritative. This is a higher-order fact about morality that arguably clashes with many first-order intuitions.

Ordinary moral intuitions are often influenced by what we find disgusting or disturbing, for example. “Yuk factor” thought experiments involving incest, eating roadkilled pets, etc., describe acts that many intuitively consider to be “wrong” even in distant possible worlds where it’s stipulated to be harmless. Previous generations might have added gay and interracial relationships to the list. Clearly, we cannot just take moral intuitions at face value. We need to reflect more deeply on whether a candidate moral norm has rational support that makes its putative significance intelligible as something that’s genuinely worthy of non-instrumental concern, and not just something that systematizes our (possibly arbitrary) cultural norms. Even when a norm is worth endorsing, theorists need to understand whether this is for instrumental or non-instrumental reasons.2

The Normativity Objection

Consider the inconsistent triad:

(i) Morality generates genuinely normative reasons.
(ii) Morality enshrines deontological distinctions.
(iii) Deontological distinctions are arbitrary, and lack genuine normative significance.

Arguments for (ii) tend to undermine (i), because of (iii). Deontologists may indeed be accurately describing a coherent system of norms that people are accustomed to talking about and using to guide their behaviour. People may have strong intuitions about what’s permitted or required by that familiar system of norms. But I’m not really interested in raising a merely “internal” challenge about whether that system is better described as having utilitarian roots. I’m suggesting that we need to question the system itself. Maybe still endorse it for instrumental reasons, insofar as it happens to be conducive to overall well-being. But don’t pretend that the system itself generates authoritative normative reasons, if it rests on indefensible foundations.

Why think the deontological system rests on indefensible foundations? Well, just look at it. The Doctrine of Double Effect claims that there’s more (intrinsic) reason to kill people as “collateral damage” than as a direct means to doing good. Isn’t that plainly arbitrary? Nobody on the receiving end could sensibly share this concern about the precise causal means by which you kill them. (If anything, I’d prefer for my death to serve some useful purpose.) DDE may yield intuitive verdicts about what to do, but as a matter of principle, it’s an absurd thing to care about.

Or consider Thomson’s famous distinction between redirecting an existing threat vs initiating a new threat. We’re told it’s OK for a president to redirect a foreign nuke away from a big city onto a small town, but not OK to nuke the small town himself (while the foreign nuke is overhead) so as to destroy the incoming nuke in the blast.3 Supposing there were no instrumental differences between the cases (no risk of missing the foreign nuke, etc.), how could the remaining difference possibly merit intrinsic concern?

It may be that some rules along these lines are instrumentally useful norms for fallible people to follow. If so, that’s fine. (“Don’t nuke your own towns” seems like a pretty good rule for presidents to follow in general.) But it’s at least clear that these sorts of distinctions cannot carry any non-instrumental weight, right? They’re not things that can credibly compete with people’s lives and well-being as matters of intrinsic concern.

Or return to any classic “killing one to save five” case. Some of these are thought to constitute intuitive “counterexamples” to utilitarianism, but it’s actually very obscure why anyone would endorse the deontologist’s verdict upon reflection:

However terrible it is for Chuck to die prematurely, is it not—upon reflection—equally terrible for any one of the five potential beneficiaries to die prematurely? Why do we find it so much easier to ignore their interests in this situation, and what could possibly justify such neglect? There are practical reasons why instituting rights against being killed may typically do more good than rights to have one’s life be saved, and the utilitarian’s recommended “public code” of morality may reflect this. But when we consider a specific case, there’s no obvious reason why the one right should be more important (let alone five times more important) than the other, as a matter of principle. So attending more to the moral claims of the five who will otherwise die may serve to weaken our initial intuition that what matters most is just that Chuck not be killed…

If you asked all six people from behind the veil of ignorance whether you should kill one of them to save the other five, they’d all agree that you should. A 5/6 chance of survival is far better than 1/6, after all. And it’s morally arbitrary that the one happens to have healthy organs while the other five do not. There’s no moral reason to privilege this antecedent state of affairs, just because it’s the status quo. Yet that’s just what it is to grant the one a right not to be killed while refusing the five any rights to be saved. It is to arbitrarily uphold the status quo distribution of health and well-being as morally privileged, no matter that we could improve upon it (as established by the impartial mechanism of the veil of ignorance).


There are good reasons to be wary of naïve utilitarian decision procedures. Robust norms against harming others plausibly have higher expected value than blindly following naïve calculations, in which case following those more reliably-good norms is precisely what prudent utilitarianism entails.

Endorsing such norms does not require embracing deontology as a moral theory. (I think the theory gains a lot of unearned credibility from this conflation.) Deontologists theorize that those norms have non-instrumental significance. But this is very implausible, when you examine them more closely. It’s far more substantively plausible (i) that we should ultimately care more about people’s well-being than about subtle causal distinctions, and (ii) that we should ultimately prefer what everyone affected would prefer from behind a veil of ignorance rather than arbitrarily privileging status quo beneficiaries. Insofar as we have reason to embrace deontic constraints (despite their intrinsic absurdity), this must be for extrinsic, purely instrumental reasons: that doing so will ultimately help us to better achieve what really matters, namely, saving and improving lives.


1. I always want to ask deontologists, “Do you really think this is more important than people’s lives and well-being?” But few seem willing to give a straight answer. (My sense is that it isn’t a question they’re used to even considering. I hope to change that!)


2. I actually think most apparently deontological norms, including anti-incest ones, are best understood in this purely instrumental, utilitarian-compatible way. I suspect many people become deontologists by mistakenly imbuing instrumentally-good rules with intrinsic significance. I argue against this intrinsic significance. But I often enough agree with their rules, just on purely instrumental/utilitarian grounds. It’s a difference in interpretation, not practice (for the most part).


3. Thomson (1976), p. 208.

Originally appeared on Good Thoughts.