In ‘Against Longtermism’, Eric Schwitzgebel writes: “I accept much of Ord’s practical advice. I object only to justifying this caution by appeal to expectations about events a million years from now.” He offers four objections, which are interesting and well worth considering, but which I think are ultimately unpersuasive. Let’s consider them in turn.

(1) There’s no chance humanity will survive long-term:

All or most or at least many future generations with technological capabilities matching or exceeding our own will face substantial existential risk — perhaps 1/100 per century or more. If so, that risk will eventually catch up with us. Humanity can’t survive existential risks of 1/100 per century for a million years.

If this reasoning is correct, it’s very unlikely that there will be a million-plus year future for humanity that is worth worrying about and sacrificing for.

This seems excessively pessimistic. (The arithmetic behind the quoted premise is spelled out in a brief note at the end of this post.) Granted, there’s certainly some risk that we will never acquire resilience against x-risk. But it’s hardly certain. Two possible routes to resilience are: (i) fragmentation, e.g. via interstellar diaspora, so that different pockets of humanity could be expected to escape any given threat; or (ii) universal surveillance and control, e.g. via a “friendly AI” with effectively god-like powers relative to humans, to prevent us from doing grave harm.

Maybe there are other possibilities. At any rate, I think it’s clear that we should not be too quick to dismiss the possibility of long-term survival for our species. (And note that any non-trivial probability is enough to get the astronomical expected-value arguments off the ground.)

(2) “The future is hard to see.” This is certainly true, but doesn’t undermine expected value reasoning.

Schwitzgebel writes:

It could be that the single best thing we could do to reduce the risk of completely destroying humanity in the next two hundred years is to almost destroy humanity right…
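A brief note on objection (1): Schwitzgebel’s compounding claim is easy to make vivid. On the simplifying (and, as argued above, contestable) assumption of a constant, independent 1/100 chance of extinction each century, the probability of humanity surviving a million years, i.e. 10,000 centuries, is

$$\Pr(\text{survival}) = (1 - 0.01)^{10{,}000} \approx 2 \times 10^{-44},$$

which is effectively zero. The reply above targets precisely that assumption of constancy: routes to resilience such as fragmentation or robust safeguards would drive the per-century risk toward zero rather than holding it fixed, and any non-trivial chance of reaching such resilience is enough for the expected-value argument.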