Summary: The Case for Strong Longtermism

This is a summary of the GPI Working Paper "The case for strong longtermism" by Hilary Greaves and William MacAskill. The summary was written by Elliott Thornley.

In this paper, Greaves and MacAskill make the case for strong longtermism: the view that the most important feature of our actions today is their impact on the far future. They claim that strong longtermism is of the utmost significance: that if the view were widely adopted, much of what we prioritise would change.

The paper defends two versions of strong longtermism. The first version is axiological, making a claim about the value of our actions. The second version is deontic, making a claim about what we should do. According to axiological strong longtermism (ASL), far-future effects are the most important determinant of the value of our actions. According to deontic strong longtermism (DSL), far-future effects are the most important determinant of what we should do. The paper argues that both claims are true even when we draw the line between the near and far future a surprisingly long time from now: say, a hundred years.

Axiological strong longtermism

The argument for ASL is founded on two key premises. The first is that the expected number of future lives is vast. If there is even a 0.1% probability that humanity survives until the Earth becomes uninhabitable – one billion years from now – with at least ten billion lives per century, the expected future population is at least 100 trillion (10¹⁴). And if there is any non-negligible probability that humanity spreads into space or creates digital sentience, the expected number of future lives is larger still. These kinds of considerations lead Greaves and MacAskill to conclude that any reasonable estimate of the expected future population is at least 10²⁴.
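
As a rough back-of-the-envelope check (not taken from the paper itself), the sketch below reproduces the arithmetic behind this lower bound. The survival probability, time horizon, and lives-per-century figures are the illustrative ones quoted above, and the variable names are ours.

```python
# Back-of-the-envelope check of the expected-future-population lower bound.
# Inputs are the illustrative figures quoted in the summary, not precise estimates.

survival_probability = 0.001            # 0.1% chance humanity survives until Earth is uninhabitable
years_of_habitability = 1_000_000_000   # roughly one billion years from now
lives_per_century = 10_000_000_000      # at least ten billion lives per century

centuries = years_of_habitability / 100
expected_future_population = survival_probability * centuries * lives_per_century

print(f"{expected_future_population:.0e}")  # -> 1e+14, i.e. at least 100 trillion lives in expectation
```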

The second key premise of the argument for ASL is that we can predictably and effectively improve the far future. We can have a lasting impact on the future in at least two ways: by reducing the risk of premature human extinction and by guiding the development of artificial superintelligence.

Take extinction first. Both human survival and human extinction are persistent states. They are states which – upon coming about – tend to persist for a long time. These states also differ in their long-run value. Our survival through the next century and beyond is, plausibly, better than our extinction in the near future. Therefore, we can have a lasting impact on the future by reducing the risk of premature human extinction.

Funding asteroid detection is one way to reduce this risk. Newberry (2021) estimates that spending $1.2 billion to detect all remaining asteroids with a diameter greater than 10 kilometres would decrease the chance that we go extinct within the next hundred years by 1-in-300-billion. Given an expected future population of 10²⁴, the result would be approximately 300,000 additional lives in expectation for each $100 spent. Preventing future pandemics is another way to reduce the risk of premature human extinction. Drawing on Millett and Snyder-Beattie (2017), Greaves and MacAskill estimate that spending $250 billion strengthening our healthcare systems would reduce the risk of extinction within the next hundred years by about 1-in-2,200,000, leading to around 200 million extra lives in expectation for each $100 spent. By contrast, the best available near-term-focused interventions save approximately 0.025 lives per $100 spent (GiveWell 2020). Further investigation may reveal more opportunities to improve the near future, but it seems unlikely that any near-term-focused intervention will match the long-term cost-effectiveness of pandemic prevention.
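
The per-$100 figures follow from the same expected-value arithmetic. The sketch below is an illustrative reconstruction using the numbers quoted above; the helper function and constant names are ours, not the paper's.

```python
# Illustrative reconstruction of the cost-effectiveness figures quoted above.
# EXPECTED_FUTURE_POPULATION is the lower-bound estimate from the previous section.

EXPECTED_FUTURE_POPULATION = 1e24

def expected_lives_per_100_dollars(cost_dollars: float, risk_reduction: float) -> float:
    """Expected future lives saved per $100, given a reduction in extinction risk."""
    expected_lives_saved = EXPECTED_FUTURE_POPULATION * risk_reduction
    return expected_lives_saved / (cost_dollars / 100)

# Asteroid detection (Newberry 2021): $1.2 billion for a 1-in-300-billion risk reduction.
print(f"{expected_lives_per_100_dollars(1.2e9, 1 / 300e9):.1e}")  # ~2.8e5, roughly 300,000 lives per $100

# Pandemic prevention (Millett and Snyder-Beattie 2017): $250 billion for about 1-in-2,200,000.
print(f"{expected_lives_per_100_dollars(250e9, 1 / 2.2e6):.1e}")  # ~1.8e8, roughly 200 million lives per $100
```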

Of course, the case for reducing extinction risk hangs on our moral view. If we embrace a person-affecting approach to future generations (see Greaves 2017, section 5) – where we care about making lives good but not about making good lives – then a lack of future lives would not be such a loss, and extinction would not be so bad. Alternatively, if we expect humanity’s long-term survival to be bad on balance, we might judge that extinction in the near-term is the lesser evil. 

Nevertheless, the case for strong longtermism holds up even on these views. That is because reducing the risk of premature human extinction is not the only way that we can affect the far future. We can also affect the far future by (for example) guiding the development of artificial superintelligence (ASI). Since ASI is likely to be influential and long-lasting, any effects that we have on its development are unlikely to wash out. By helping to ensure that ASI is aligned with the right values, we can decrease the chance that the far future contains a large number of bad lives. That is important on all plausible moral views.

While there is a lot of uncertainty in the above estimates of cost-effectiveness, this uncertainty does not undermine the case for ASL because we also have ‘meta’ options for improving the far future. For example, we can conduct further research into the cost-effectiveness of various longtermist initiatives and we can invest resources for use at some later time.

Greaves and MacAskill then address two objections to their argument. The first is that we are clueless about the far-future effects of our actions. They explore five ways of making this objection precise – by appeal to simple cluelessness, conscious unawareness, arbitrariness, imprecision, and ambiguity aversion – and conclude that none undermines their argument. The second objection is that the case for ASL hinges on tiny probabilities of enormous values, and that chasing these tiny probabilities is fanatical. For example, it might seem fanatical to spend $1 billion on ASI alignment for the sake of a 1-in-100,000 chance of preventing a catastrophe, when one could instead use that money to help many people with near-certainty in the near term. Greaves and MacAskill take this to be one of the most pressing objections to strong longtermism, but they offer two responses. First, rejecting fanaticism has implausible consequences (see Beckstead and Thomas 2021; Wilkinson 2022), so perhaps we should accept fanaticism on balance. Second, the probabilities in the argument for strong longtermism might not be so small that fanaticism becomes an issue. They thus tentatively conclude that the fanaticism objection does not undermine the case for strong longtermism.
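
To see why the objection has force, the illustrative calculation below compares the two options in the example above; the numbers are those quoted in this summary, and the comparison is ours rather than the paper's.

```python
# Illustrative comparison behind the fanaticism objection, using the figures quoted above.

EXPECTED_FUTURE_POPULATION = 1e24

# Longtermist option: $1 billion on ASI alignment with a 1-in-100,000 chance of preventing catastrophe.
asi_expected_lives_per_100 = EXPECTED_FUTURE_POPULATION * (1 / 100_000) / (1e9 / 100)

# Near-term option: roughly 0.025 lives saved per $100 with near-certainty (GiveWell 2020).
near_term_lives_per_100 = 0.025

print(f"{asi_expected_lives_per_100:.0e}")  # -> 1e+12 expected lives per $100, resting on a tiny probability
print(near_term_lives_per_100)              # -> 0.025 lives per $100, delivered with near-certainty
```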

Deontic strong longtermism

Greaves and MacAskill then argue for deontic strong longtermism: the claim that far-future effects are the most important determinant of what we should do. Their ‘stakes-sensitivity argument’ employs the following premise:

In situations where (1) some actions have effects much better than all others, (2) the personal cost of performing these actions is comparatively small, and (3) these actions do not violate any serious moral constraints, we should perform one of these actions.

Greaves and MacAskill argue that each of (1)-(3) is true in the most important decision situations facing us today. Actions like donating to pandemic prevention or to guiding the development of ASI meet all three conditions: their effects are much better than those of the alternatives, their personal costs are comparatively small, and they violate no serious moral constraints. Therefore, we should perform these actions. Since axiological strong longtermism is true, it is the far-future effects of these actions that make their overall effects best, and deontic strong longtermism follows.

The paper concludes with a summary of the argument and its practical implications. Humanity’s future could be vast, and we can influence its course. That suggests the truth of strong longtermism: impact on the far future is the most important feature of our actions today.

References

Nicholas Beckstead and Teruji Thomas (2021). A paradox for tiny probabilities and enormous values. GPI Working Paper No. 7-2021.

GiveWell (2020). GiveWell’s Cost-Effectiveness Analyses. Accessed 26 January 2021.

Hilary Greaves (2017). Population axiology. Philosophy Compass 12(11):e12442.

Piers Millett and Andrew Snyder-Beattie (2017). Existential Risk and Cost-Effective Biosecurity. Health Security 15(4):373–383.

Toby Newberry (2021). How cost-effective are efforts to detect near-Earth-objects? Global Priorities Institute Technical Report T1-2021.

Hayden Wilkinson (2022). In defense of fanaticism. Ethics 132(2):445–477.
