Don’t Valorize the Void




One of my most fundamental ethical commitments is to the pro-humanity, anti-nihilist idea that sentient lives can be better than nothing. Utopia, for example, is much better than a barren world utterly devoid of life.

For a normative theory to deny this datum is, IMO, instantly disqualifying—akin to claiming that torture is the only good. There’s just no way that that could be right. Yet falling into nihilism-adjacent views along these lines seems a surprisingly common error-mode in ethical theory. In this post, I’ll step through a few examples, and suggest better alternatives. I argue that the impetus behind common harm/benefit asymmetries needs to be reframed around a positive goal (securing sufficiently good lives) rather than a purely negative one (avoiding bad lives). Otherwise, you’re apt to end up embracing the void.

Don’t go there!

Negative Views are Insufficient

“Negative” ethical theories are ones that specify bads to be minimized, but no goods worth positively promoting. Negative utilitarianism (minimizing suffering) is perhaps the paradigmatic example, but narrowly person-affecting views in population ethics can have similar implications (as Benatar notoriously exploits in his arguments for anti-natalism).

Now, I don’t know how to argue for a claim as basic as that utopia is better than a barren rock against someone who is determined to deny this.1 I think they’ve basically stepped outside of the space of reasons at that point—like someone with a strong prior in solipsism, radical skepticism, or natural law theory. Saying this risks coming off as insulting, but I don’t mean it that way (I’m sure many solipsists, radical skeptics, and negative ethicists are perfectly lovely people!); I’m just explaining my flat-footed dialectical approach here. When you have a philosophical disagreement this basic, you can’t necessarily expect to be able to argue about it. Sometimes, the best you can hope for is to try to set out your perspective in a way that others might share, and might thereby come to understand why the other view is so misguided in its fundamental rationale. Negative ethicists are neglecting the entire category of positive intrinsic value: things that make life worthwhile, and thereby make worlds containing such worthwhile lives better than nothing.2

So, to be clear, I’m just taking it as a premise that positive intrinsic value is possible (utopia is better than a barren rock), that it’s insane to deny this premise, and hence that purely negative ethical views are insane. While I’m aware that others contest this, I nonetheless regard these as important moral insights, essential to forming a reasonable moral theory.

One important implication of this starting point is that we can instantly know that any theory formulated in purely negative terms (e.g. harm-minimization) cannot possibly be correct. After all, such a theory implies that empty worlds are the very best possible. (You can’t get any less than nothing.) And that’s just crazy.

Two quick examples:

(1) DALYs are a negative measure, i.e. of years of perfect health lost, which implicitly directs medical interventions and public health policy towards the goal of averting DALY costs. This implies anti-natalism: with no more lives, there would be no more life-years lost. That’s messed up! Lives that contain some imperfect health are not thereby bad; they’re simply less good than one might have liked. Instead of minimizing DALYs, we should want to promote a positive measure like QALYs gained (where negative values are restricted to those rare conditions so bad that the life really isn’t worth living).
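To make the contrast concrete, here is a toy calculation (all quality weights and life-spans are made up for illustration, not real burden-of-disease data). A DALY-minimizing objective scores the empty world best, while a QALY-promoting objective correctly credits imperfect-but-worthwhile lives:

```python
# Toy contrast between DALY-minimization and QALY-promotion.
# Each life is (health_quality, years), with quality on a 0-1 scale
# (1 = perfect health). All numbers here are stipulated for illustration.

def dalys_lost(lives):
    """Years of perfect health lost across all lives."""
    return sum((1 - q) * years for q, years in lives)

def qalys_gained(lives):
    """Quality-adjusted life-years gained across all lives."""
    return sum(q * years for q, years in lives)

# 100 imperfect-but-worthwhile lives (quality 0.8, 70 years each):
world = [(0.8, 70)] * 100
empty = []  # the barren world: no lives at all

# DALY-minimization ranks the empty world best (0 years lost vs ~1400):
assert dalys_lost(empty) < dalys_lost(world)
# QALY-promotion correctly ranks the populated world better (~5600 vs 0):
assert qalys_gained(world) > qalys_gained(empty)
```

The only structural difference between the two functions is whether they tally shortfall from perfection or quality actually enjoyed—yet that difference flips the ranking of the empty world.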

(2) Or consider Christopher Meacham’s (2012) ‘Person-affecting views and saturating counterpart relations’. It’s a cool paper! Hilary Greaves has described the theory as “the best [she’s] seen” for a person-affecting approach. But arguably the most important feature of the view is that it is structured as a Harm Minimization View, and so (despite its other neat intricacies, nicely summarized here) it again implies that empty worlds are the very best possible. So that immediately rules out the view as a non-starter.

Replace Risk-Aversion with Adequacy-Enticement

Some views flirt with nihilism in a different way: they recognize intrinsic goods to some extent, but just give the category so little weight (in comparison to intrinsic bads) that they still end up getting sucked into the void.

An interesting example of this is risk aversion. Suppose we face a choice between (i) immediate extinction, and (ii) a tiny chance of dystopia, a somewhat larger chance of a correspondingly-good utopia, and an overwhelming likelihood of a middling-decent future. Option (ii) has extremely positive expected value, as well as a positive “median”-likelihood outcome. It’s very unlikely to end up dystopian, and that slight risk is more than balanced by the corresponding utopian possibilities. Nonetheless, as Richard Pettigrew argues in ‘Longtermism, Risk, and Extinction’, risk-averse decision theories could easily imply that the risk of dystopia should dominate our decision-making, and lead us to prefer extinction.
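The divergence can be sketched numerically. The probabilities and values below are my own stipulations (not from Pettigrew’s paper), and maximin stands in as an extreme, simple form of risk aversion rather than his actual decision theory:

```python
# Toy model of the choice between certain extinction and the mostly-good
# gamble. All probabilities and values are stipulated for illustration;
# value 0 represents the empty world.

extinction = [(1.0, 0)]
gamble = [
    (0.01, -100),  # tiny chance of dystopia
    (0.09, 100),   # somewhat larger chance of a correspondingly-good utopia
    (0.90, 10),    # overwhelming likelihood of a middling-decent future
]

def expected_value(lottery):
    return sum(p * v for p, v in lottery)

def maximin(lottery):
    """Worst-case evaluation: an extreme stand-in for risk-averse theories."""
    return min(v for p, v in lottery if p > 0)

# Expected value strongly favors the gamble (~17 vs 0)...
assert expected_value(gamble) > expected_value(extinction)
# ...but worst-case reasoning prefers guaranteed extinction (0 vs -100):
assert maximin(extinction) > maximin(gamble)
```

Less extreme risk-averse rules than maximin soften the effect, but any rule that gives sufficient extra weight to the worst outcome will reproduce the same preference for the void.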

Unlike Pettigrew, I take this to constitute a decisive objection to those risk-averse decision theories. Whatever “commonsense” intuitions motivate the move to risk-averse decision theories in the first place clearly do not support the particular systematization that leads to this crazy result.

Commonsense intuitions hold that it isn’t worth gambling away a decent state for a 50/50 chance of utopia vs dystopia. But there are two very different ways that one might try to generalize from this:

(1) One might infer that bad-states count for more than good-states, such that immense weight and priority must be given to avoiding (even the slightest risk of) extremely bad states.


(2) One might infer that good states above a sufficient level have diminishing marginal value. On this view, rather than merely avoiding bad states, we should really want to be in a sufficiently good state.3

I don’t think others have sufficiently distinguished these two, perhaps because they coincide in a wide range of “ordinary” cases. Both can explain why we shouldn’t risk everything just for the chance of even more well-being (above the sufficient level). But they have very different implications in cases where non-existence is on the table.
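The second generalization can be sketched with a toy value function. The shape below is my own illustration (not a formal proposal from this post): value counts at full weight up to a sufficiency threshold, with sharply diminishing marginal value above it:

```python
import math

# Toy "adequacy-enticement" value function. The specific shape and the
# threshold of 10 are stipulated for illustration; 0 is the empty world.

SUFFICIENT = 10

def adequacy_value(w):
    if w <= SUFFICIENT:
        return w  # full weight below sufficiency, including bad states
    # diminishing marginal value above the sufficient level:
    return SUFFICIENT + math.log1p(w - SUFFICIENT)

decent, utopia, dystopia, void = 10, 100, -100, 0

# In "ordinary" cases this coincides with risk aversion: it declines a
# 50/50 utopia/dystopia gamble from an already-decent state...
assert adequacy_value(decent) > 0.5 * (adequacy_value(utopia) + adequacy_value(dystopia))
# ...but it diverges when non-existence is on the table: the void falls
# well short of adequacy, so extinction is no safe fallback.
assert adequacy_value(void) < adequacy_value(decent)
```

Note that no extra weighting of bad states is needed to decline the ordinary gamble; the concavity above sufficiency does that work, while the void still scores far below an adequately good state.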

Standard risk-aversion embraces the void in order to avoid (even tiny) risks of dystopian outcomes. Non-existence is seen as an adequate alternative, with no strong reason to want anything better than that. My alternative view, by contrast, pulls us towards securing an adequately good state. Dystopia is not that, and certainly warrants strong aversion. But likewise, to a lesser extent, for non-existence: we should also be very averse to extinction. So option (ii) above, leading to good results in all likelihood, and overall positive EV, is a bet well worth taking—far better than fearfully resorting to certain extinction as in option (i).

I think this verdict provides decisive reason to opt for the second of our two possible generalizations from “commonsense intuitions” about risk—that is, to opt for adequacy-enticement rather than risk-aversion. When the two come apart, it turns out that risk aversion seems rather pathological.

Promoting Sufficiency vs Avoiding Harm

For similar reasons, I think moral theorists have misgeneralized their intuitions when positing a harm/benefit asymmetry, according to which avoiding harms (non-comparatively bad states) carries extra weight compared to providing “pure benefits”. The problem, once again, is that such theorists have failed to distinguish between pure benefits above the sufficient level (what we might call “luxury benefits”) vs pure benefits that bring you up (or closer) to sufficiency (what we might call “basic benefits”).

Consider death. Being dead cannot be a bad state to be in, because it isn’t a state you’re in at all (you no longer exist at that point). Even if we posit some intrinsic harms to (unwanted) death, the overwhelming bulk of the harm is comparative: the loss of the goods of life. So life-saving is, or is very close to, a pure benefit: something that (strictly speaking) provides goods, rather than averts bads.

This means that, if we are to generally discount pure benefits, we would have to discount saving lives. But that’s crazy. Saving lives is one of the most important things we can do. So we should not generally discount pure benefits: that was a philosophical mistake. Proponents of the harm/benefit asymmetry should instead embrace a below/above sufficiency asymmetry, discounting only luxury benefits. Maybe it’s more important to ensure that people are in adequately good states than to bring others into even better states. That seems a decent view (even if I might ultimately disagree with it). But death is not an adequate outcome. Some “pure benefits” are morally essential, and cannot be decently discounted.


Even for those who want to discount luxury benefits as less important than relieving ordinary harms, it’s crucial to formulate moral views as aiming at something positive (e.g. a sufficiently good state), rather than in purely negative terms (e.g. avoiding harm). Because the void avoids all, yet does not warrant our embrace.


1. Though if one were reluctantly led to this bad view in order to solve or avoid some other problem, we might turn a critical eye on this background reasoning, and suggest better alternatives. See, e.g., my discussion of how to avoid certain problems for total utilitarianism without resorting to narrow person-affecting views.


2. At most, person-affecting theorists are apt to recognize conditional value: something one has reason to want for a person conditional on their existence. Better to have happy lives than miserable ones, they will allow. But to be indifferent between wonderful lives existing vs not existing at all is to fail to appreciate the respects in which a wonderful life is intrinsically valuable: strictly better than nothing.


3. A further question: should we be so enticed by the prospect of being in an adequately good state that we should be willing to take negative expected value bets (e.g. bad gambles from a negative starting point) that give us a shot of reaching this goal? I don’t think I’d go that way myself, but it seems an interesting idea to explore.

Originally appeared on Good Thoughts.