Summary: The paralysis argument

This is a summary of the GPI Working Paper “The paralysis argument” by William MacAskill and Andreas Mogensen. The summary was written by Rhys Southan.

For consequentialists, the outcomes that follow from our actions fully determine the moral value of our actions. Actions are right to the extent they bring about good outcomes and wrong to the extent they bring about bad outcomes. If, as many philosophers believe (Greaves & MacAskill, 2021), the best outcomes we can bring about involve improving the long-run future for sentient life, this is what consequentialists are morally required to do. 

Non-consequentialists think the outcomes of our actions matter morally but are not all that matter morally. Respecting moral constraints such as not lying is also important. Many assume this gives non-consequentialists moral flexibility. Non-consequentialists who think improving the long-run future is the best thing they could do need not believe it is what they are morally obligated to do. They can devote their lives to improving the long-run future if they want, but they do not have to.

In “The Paralysis Argument,” MacAskill and Mogensen dispute this alleged moral flexibility of non-consequentialism. They argue that non-consequentialists must either stay paralysed, alter some core aspects of their understanding of morality, or join consequentialists in improving the long-run future.

Six assumptions that lead to the paralysis requirement

How do non-consequentialists find themselves choosing between death and essentially behaving as if they were consequentialists? Let us first consider why non-consequentialism might entail the impermissibility of almost any action we might take. This results from a combination of non-consequentialist beliefs, inconvenient moral claims that non-consequentialists may not be able to avoid, and empirical claims:

1. Doctrine of Doing and Allowing

Non-consequentialists believe it is morally worse to do harm than to merely allow harm of the same magnitude. For instance, it is morally worse to knock someone into a lake, causing them to drown, than it is to stand by while someone who fell in on their own drowns.

2. Harm-benefit asymmetry

Non-consequentialists believe it is more wrong to do harm of a certain magnitude than it is right to bring about benefits of the same or even somewhat greater magnitude. For instance, it is wrong for an ambulance driver to run over one person in order to save two others. This means non-consequentialists cannot offset the wrong of a harm they do by bringing about a benefit of similar magnitude.

3. Symmetry of doing and allowing benefits

Unlike with harms, allowing benefits is as morally good as making it the case that someone benefits. For instance, if we could allow Ava to save Bree by using the last vial of a drug, or we could use the vial ourselves to save Zoe, we do not have a strong moral reason to save Zoe ourselves rather than allow Ava to save Bree. This is not necessarily an ‘official’ non-consequentialist stance, but it is compatible with non-consequentialism and more intuitive than an asymmetry in doing and allowing benefits.

4. The moral importance of our actions is primarily determined by their indirect long-run effects

We bring about an inconceivably large number of indirect effects through our everyday actions. For instance, driving to the grocery store can significantly alter the next billion years (Greaves 2016). Most obviously, by changing traffic conditions and affecting the schedules and thoughts of strangers, driving affects when other people have sex and conceive children. Since our identities are determined by which particular sperm fertilises which particular egg, and the sperm available for conception change from moment to moment, changing when people have sex alters the identity of the children people have, which determines which grandchildren, great-grandchildren and so on will populate the future (Parfit 1984). The particular people who exist in the far future would not exist if we hadn't acted precisely as we did, and each of them will create numerous harms and benefits, all traceable back to our actions, which were necessary conditions of their existence; their effects are therefore our effects. We have "done" this outrageously large cascade of harms and benefits and are morally complicit in it. Furthermore, these indirect effects are far more morally significant than our direct or immediate effects, by sheer numbers alone.

Non-consequentialists might want to resist that having some causal responsibility for distant indirect effects equates to moral liability for those effects, and so deny that our indirect effects are morally important. After all, the processes by which these effects come about are complicated, inscrutable, and facilitated by the voluntary actions of other agents. MacAskill and Mogensen propose different ways of raising such objections and conclude they all fail. Non-consequentialists must either accept the vast moral importance of our indirect effects or offer a stronger counter-argument than those that MacAskill and Mogensen consider.

5. We cannot assume our everyday actions cause significantly more benefit than harm

We cannot predict the long-run value of the indirect effects our actions bring about. Sometimes we can foresee some of the more immediate indirect effects, but we quickly lose track of even these as they ripple on, and since all our actions potentially echo through the end of time, most indirect effects are unforeseeable. It is safe to assume these include both benefits and harms, but we have no reason to believe the indirect effects of our everyday actions produce more benefit than harm.

6. We allow far-future outcomes rather than do them when we enter a state of voluntary “paralysis”

We can avoid causal and thus moral responsibility for the value of the far future if we freeze in place until we die from a lack of sustenance. Non-consequentialists might counter that going into voluntary paralysis does not allow us to fully dodge causal chains that extend throughout the far future, and so does not count as merely allowing future outcomes. MacAskill and Mogensen are sceptical of this objection but concede there may be some better strategy for allowing future outcomes other than paralysis. However, they suspect any such strategy will be about as undesirable as voluntary paralysis.

Why these assumptions require non-consequentialists to stop moving

By moving around, we make ourselves morally liable for the value of the far future by actively contributing to causal chains linked to a vast number of morally significant harms and benefits (4). We have no basis for thinking the benefits of our everyday activities significantly outweigh their harms, since we do not know what most of our indirect effects are (5). Even if we arbitrarily assumed the indirect effects of our everyday actions were a bit more beneficial than harmful overall, this would not help, because causing slightly more benefits than harms is morally worse than causing nothing at all (2). Nor does it help that we cannot know whether a person's voluntary paralysis would reduce or increase future harms compared with their going about everyday life. The problem is that there is no moral upside to doing rather than allowing benefits (3), while there is a significant moral downside to doing rather than allowing harms (1). So, even if future harms and benefits are identical in expectation whether or not we participate in causal chains, non-consequentialists are obliged to allow the harms and benefits of the far future rather than help cause them. The surest way to accomplish this is to move as little as possible until we succumb (6).
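The structure of this comparison can be sketched in a toy formalisation. This is our illustration, not the paper's: the symbols B, H and w are assumptions introduced here. Let B and H be the expected far-future benefits and harms traceable to an agent, and let w > 1 be the extra moral weight placed on harms done rather than merely allowed, reflecting both the Doctrine of Doing and Allowing (1) and the harm-benefit asymmetry (2).

```latex
% Toy formalisation (illustrative notation, not from the paper).
% B, H: expected far-future benefits and harms; w > 1: extra weight on harms done.
V_{\mathrm{act}} = B - wH, \qquad V_{\mathrm{paralysis}} = B - H.

% Even if acting and paralysis produce the same B and H in expectation,
% benefits count equally whether done or allowed (3), while harms done do not:
w > 1 \;\Longrightarrow\; V_{\mathrm{act}} < V_{\mathrm{paralysis}}.

% A slight surplus of benefits does not rescue acting (2): if B = H + \epsilon, then
V_{\mathrm{act}} = \epsilon - (w - 1)H < 0 \quad \text{whenever } H > \tfrac{\epsilon}{w - 1}.
```

On this toy model, paralysis dominates ordinary activity whenever the weight on harms done exceeds the weight on harms allowed, regardless of how the far-future balance of B and H actually turns out.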

Escaping paralysis by improving the long-run future

MacAskill and Mogensen offer non-consequentialists an alternative to voluntary paralysis if they accept two final claims.

         A. Doing both harms and benefits can be at least as moral as doing nothing when the benefits outweigh the harms by enough

The harm-benefit asymmetry does not ban non-consequentialists from doing any amount of harm whatsoever for any amount of benefit whatsoever. It is not immoral to do things that bring about some harms when this also brings about disproportionately huge benefits.

         B. Working tirelessly towards improving the long-run future is our only reliable method for creating a landslide of benefits that overwhelmingly outweigh the harms of our indirect effects

If we do little other than work to extend the lifespan of humanity or increase the value of existence for vast numbers of future people, the harms we also inadvertently bring about can be excused.

So, non-consequentialists can avoid paralysis by working tirelessly to benefit the long-run future. This may be little consolation to those who liked non-consequentialism in part for its moral flexibility.

References

Greaves, Hilary (2016) "Cluelessness". Proceedings of the Aristotelian Society 116: 311–339.

Greaves, Hilary, and William MacAskill (2021) "The Case for Strong Longtermism". GPI Working Paper No. 5-2021.

Parfit, Derek (1984) Reasons and Persons. Oxford: Oxford University Press.
