Summary: The Epistemic Challenge to Longtermism

This is a summary of the GPI Working Paper "The epistemic challenge to longtermism" by Christian Tarsney. The summary was written by Elliott Thornley.

According to longtermism, what we should do mainly depends on how our actions might affect the long-term future. This claim faces a challenge: the course of the long-term future is difficult to predict, and the effects of our actions on the long-term future might be so unpredictable as to make longtermism false. In “The epistemic challenge to longtermism”, Christian Tarsney evaluates one version of this epistemic challenge and comes to a mixed conclusion. On some plausible worldviews, longtermism stands up to the epistemic challenge. On others, longtermism’s status depends on whether we should take certain high-stakes, long-shot gambles.

Tarsney begins by assuming expectational utilitarianism: roughly, the view that we should assign precise probabilities to all decision-relevant possibilities, value possible futures in line with their total welfare, and maximise expected value. This assumption sets aside ethical challenges to longtermism and focuses the discussion on the epistemic challenge.
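
In decision-theoretic terms, expectational utilitarianism is just expected-value maximisation with total welfare as the value function. Here is a minimal Python sketch of that decision rule; the options and numbers are illustrative inventions, not anything from the paper:

```python
# Expectational utilitarianism as a decision rule: an option is a list of
# (probability, total_welfare) pairs over possible futures; choose the
# option with the greatest probability-weighted welfare.

def expected_value(lottery):
    """Expected total welfare of an option, given precise probabilities."""
    return sum(p * welfare for p, welfare in lottery)

options = {
    "safe bet": [(0.9, 100.0), (0.1, 0.0)],        # likely modest payoff
    "long shot": [(0.01, 20_000.0), (0.99, 0.0)],  # unlikely large payoff
}

best = max(options, key=lambda name: expected_value(options[name]))
print(best)  # "long shot": 0.01 * 20,000 = 200 beats 0.9 * 100 = 90
```

Note that the long shot wins purely on expected value. This willingness to back low-probability, high-payoff options is exactly what comes under pressure in the paper’s discussion of Pascalian probabilities below.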

Persistent-difference strategies

Tarsney outlines one broad class of strategies for improving the long-term future: persistent-difference strategies. These strategies aim to put the world into some valuable state S when it would otherwise have been in some less valuable state ¬S, in the hope that this difference will persist for a long time. Epistemic persistence skepticism is the view that identifying interventions likely to make a persistent difference is prohibitively difficult — so difficult that the actions with the greatest expected value do most of their good in the near term. It is this version of the epistemic challenge that Tarsney focuses on in this paper.

To assess the truth of epistemic persistence skepticism, Tarsney compares the expected value of a neartermist benchmark intervention N to the expected value of a longtermist intervention L. In his example, N is spending $1 million on public health programmes in the developing world, leading to 10,000 extra quality-adjusted life years in expectation. L is spending $1 million on pandemic-prevention research, with the aim of preventing an existential catastrophe and thereby making a persistent difference.

Exogenous nullifying events

Persistent-difference strategies are threatened by what Tarsney calls exogenous nullifying events (ENEs), which come in two types. Negative ENEs are far-future events that put the world into the less valuable state ¬S. In the context of the longtermist intervention L, in which the valuable target state S is the existence of an intelligent civilization in the accessible universe, negative ENEs are existential catastrophes that might befall such a civilization. Examples include self-destructive wars, lethal pathogens, and vacuum decay. Positive ENEs, on the other hand, are far-future events that put the world into the more valuable state S. In the context of L, these are events that give rise to an intelligent civilization in the accessible universe where none existed previously. This might happen via evolution, or via the arrival of a civilization from outside the accessible universe. What unites negative and positive ENEs is that they both nullify the effects of interventions intended to make a persistent difference. Once the first ENE has occurred, the state of the world no longer depends on the state that our intervention put it in. Therefore, our intervention stops accruing value at that point.

Tarsney assumes that the annual probability r of ENEs is constant in the far future, defined as more than a thousand years from now. The assumption is thus compatible with the time of perils hypothesis, according to which the risk of existential catastrophe is likely to decline in the near future. Tarsney makes the assumption of constant r partly for simplicity, but it is also in line with his policy of making empirical assumptions that err towards being unfavourable to longtermism. Other such assumptions concern the tractability of reducing existential risk, the speed of interstellar travel, and the potential number and quality of future lives. Making these conservative assumptions lets us see how longtermism fares against the strongest available version of the epistemic challenge.
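
The constant-r assumption gives persistence a simple mathematical shape: the probability that no ENE has occurred within t years is (1 − r)^t ≈ e^(−rt), so a persistent difference lasts about 1/r years in expectation. Here is a schematic Python sketch of how this feeds into expected value; the success probability and value-density function are placeholders, not Tarsney’s calibrated parameters:

```python
import math

def ev_persistent_difference(p_success, value_density, r, horizon=10**7, step=100.0):
    """Approximate p_success * integral of value_density(t) * exp(-r*t) dt.
    exp(-r*t) is the probability that no ENE has yet nullified the
    intervention; once one occurs, the intervention stops accruing value."""
    total, t = 0.0, 0.0
    while t < horizon:
        total += value_density(t) * math.exp(-r * t) * step
        t += step
    return p_success * total

# With a constant value density of 1 unit/year, the integral is ~1/r:
# the intervention accrues value for about 1/r years in expectation.
print(ev_persistent_difference(1e-6, lambda t: 1.0, r=1e-4))  # ~0.01 = 1e-6 / 1e-4
```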

Models to assess epistemic persistence skepticism

To compare the longtermist intervention L to the neartermist benchmark intervention N, Tarsney constructs two models: the cubic growth model and the steady state model. The characteristic feature of the cubic growth model is its assumption that humanity will eventually begin to settle other star systems, so that the potential value of human-originating civilization grows as a cubic function of time. The steady state model, by contrast, assumes that humanity will remain Earth-bound and eventually reach a state of zero growth. 

The headline result of the cubic growth model is that the longtermist intervention L has greater expected value than the neartermist benchmark intervention N just so long as r is less than approximately 0.000135 (a little over one-in-ten-thousand) per year. Since, in Tarsney’s estimation, this probability is towards the higher end of plausible values of r, the cubic growth model suggests (but does not conclusively establish) that longtermism stands up to the epistemic challenge. If we make our assumptions about tractability and the potential size of the future population a little less conservative, the case for choosing L over N becomes much more robust.
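
The shape of this result can be reproduced schematically. If the value at stake grows cubically, v(t) = c·t³, while the probability that no ENE has yet struck decays as e^(−rt), the expected value of averting catastrophe is p·∫c·t³·e^(−rt)dt = 6pc/r⁴, which is extremely sensitive to r. The sketch below solves for the break-even rate at which L and N are tied; p and c are illustrative placeholders, and Tarsney’s actual model includes details (such as when settlement begins) that this ignores:

```python
# Cubic growth, schematically: value density v(t) = c * t**3 once space
# settlement makes value grow cubically; survival probability exp(-r * t).
# Then EV_L = p * 6 * c / r**4. All parameter values below are placeholders.

EV_N = 10_000   # benchmark: QALYs per $1 million of near-term health spending
p = 1e-12       # placeholder: probability that L averts existential catastrophe
c = 1.0         # placeholder: scale of the cubic value density

def ev_cubic(r):
    return p * 6 * c / r**4

# Break-even rate r* where EV_L = EV_N:
r_star = (6 * p * c / EV_N) ** 0.25
print(f"{r_star:.1e}")  # ~1.6e-4 per year with these placeholders
# Below r*, EV_L grows like 1/r**4, so the verdict hinges on the ENE rate.
```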

The headline result of the steady state model is less favourable to longtermism. The expected value of L exceeds the expected value of N only when r is less than approximately 0.000000012 (a little over one-in-a-hundred-million) per year, and it seems likely that an Earth-bound civilization would face risks of negative ENEs that push r over this threshold. Relaxing the model’s conservative assumptions, however, makes longtermism more plausible. If L would reduce near-term existential risk by at least one-in-ten-billion and any far-future steady-state civilization would support at least a hundred times as much value as Earth does today, then r need only fall below about 0.006 (six-in-one-thousand) to push the expected value of L above that of N.
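
The relaxed steady-state threshold is recoverable by back-of-envelope arithmetic. In a steady state, annual value V accrues until the first ENE, which arrives after about 1/r years in expectation, so EV_L ≈ p·V/r; setting this equal to the benchmark’s 10,000 QALYs and solving for r gives the threshold. The proxy used below for the value Earth supports today is my assumption, chosen only to show that the arithmetic lands near the summary’s 0.006 figure:

```python
EV_N = 10_000      # benchmark: QALYs from $1 million of near-term health spending
p = 1e-10          # L reduces near-term existential risk by one-in-ten-billion
V_earth = 6e9      # assumption: roughly one QALY per person-year across Earth
V = 100 * V_earth  # steady-state civilization supports 100x Earth's value

# EV_L = p * V * (expected years until the first ENE) = p * V / r,
# so EV_L > EV_N exactly when r < p * V / EV_N:
r_threshold = p * V / EV_N
print(f"{r_threshold:.3f}")  # 0.006 per year
```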

The case for longtermism is also strengthened once we account for uncertainty, both about the values of various parameters and about which model to adopt. Consider an example. Suppose that we assign a probability of at least one-in-a-thousand to the cubic growth model. Suppose also that we assign probabilities – conditional on the cubic growth model – of at least one-in-a-thousand to values of r no higher than 0.000001 per year, and at least one-in-a-million to a ‘Dyson spheres’ scenario in which the average star supports at least 10²⁵ lives at a time. In that case, the expected value of the longtermist intervention L is over a hundred billion times the expected value of the neartermist benchmark intervention N. It is worth noting, however, that in this case L’s greater expected value is driven by possibly minuscule probabilities of astronomical payoffs. Many people suspect that expected value theory goes wrong when its verdicts hinge on these so-called Pascalian probabilities (Bostrom 2009, Monton 2019, Russell 2021), so perhaps we should be wary of taking the above calculation as a vindication of longtermism.
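
The structure of this lower-bound argument is bare multiplication: the joint probability of the favourable scenario times the conditional payoff. The sketch below uses only the probabilities given above, multiplying the two conditional credences as if independent (a simplification), and solves for how large the conditional payoff must be to deliver the hundred-billion-fold advantage; it is the size of that required payoff, paired with the 10⁻¹² joint probability, that makes the calculation Pascalian:

```python
EV_N = 10_000    # benchmark QALYs
p_cubic = 1e-3   # credence in the cubic growth model
p_low_r = 1e-3   # credence, given cubic growth, that r <= 0.000001 per year
p_dyson = 1e-6   # credence, given cubic growth, in the Dyson-spheres scenario
joint = p_cubic * p_low_r * p_dyson  # = 1e-12

target_ratio = 1e11  # "over a hundred billion times" EV_N
# EV_L >= joint * conditional_payoff, so the payoff needed for the bound is:
required_payoff = target_ratio * EV_N / joint
print(f"{required_payoff:.0e} QALYs")  # 1e+27
```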

Tarsney concludes that the epistemic challenge to longtermism is serious but not fatal. If we are steadfast in our commitment to expected value theory, longtermism overcomes the epistemic challenge. If we are wary of relying on Pascalian probabilities, the result is less clear.

References

Bostrom, N. (2009). Pascal’s mugging. Analysis 69 (3), 443–445.

Monton, B. (2019). How to avoid maximizing expected utility. Philosophers’ Imprint 19 (18), 1–25. 

Russell, J. S. (2021). On two arguments for fanaticism. Global Priorities Institute Working Paper Series. GPI Working Paper No. 17-2021.
