Summary: Moral demands and the far future
This is a summary of the GPI Working Paper “Moral demands and the far future” by Andreas Mogensen. The summary was written by Rhys Southan.
Consequentialism is the view that the good and the right coincide: right actions are those which maximise good and minimise bad. The best-known form of consequentialism is utilitarianism. Because there is almost always more good to be done, utilitarianism invites morality to override all else in our lives, inspiring what is known as the demandingness objection: that utilitarianism asks far too much of us and is therefore unacceptable as a moral theory.
In “Moral demands and the far future”, Mogensen argues that discussions of demandingness in moral philosophy have either misunderstood the problem or failed to recognise important dimensions of it. The potential value of the far future brings these oversights into focus. Once that value is properly taken into account, several aspects of the moral demandingness debate may need to be revised. Arguments that fall apart under this consideration include: allowing latitude for self-consideration, utilitarianism’s supposedly reduced demandingness in “morally normal worlds”, “fair share” arguments, and appeals to the passive burdens borne by those who suffer in the absence of aid.
The value of the future
Since at least Singer’s 1972 paper “Famine, Affluence, and Morality”, morality’s most excessive alleged demands have been thought to come from the power of the relatively privileged to improve the lives of poorer people across the world. Recently, however, moral philosophers have started to argue that the huge number of possible future people imposes an even greater moral burden on us (Beckstead 2013; Ord 2020; Greaves & MacAskill 2021). In short, so many beings could exist throughout the future that their aggregate interests outweigh ours through sheer force of numbers.
Moral demands and the far future in philosophy and economics
The idea that any given generation may need to prioritise future generations above all else is new to philosophy, but economists have debated it for around a century. Economists frame the question as one of optimal saving: how much of its income must each generation set aside for its successors? Their models suggest that if we do not discount the value of future generations at all, every generation should save anywhere from 50% to 97.5% of its net income for optimal intergenerational growth. This seems excessive.
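To see why undiscounted optimisation demands so much saving, consider a minimal two-generation sketch (our illustration, not one of the models the paper surveys). Suppose the present generation has income $y$, saves $s$ at gross return $R$, and both generations have logarithmic utility weighted equally:

$$\max_{s}\;\ln(y - s) + \ln(Rs)\quad\Longrightarrow\quad \frac{1}{y - s} = \frac{1}{s}\quad\Longrightarrow\quad s^{*} = \frac{y}{2}.$$

Even a single equally weighted future generation already claims half of current income, and with a long sequence of such generations the optimal savings rate climbs higher still.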
Some economists therefore suggest “pure time discounting”: treating the good and bad things in the lives of future people as less significant solely because those people are born later. However, it is hard to find a principled justification for this, and even if there were one, it would not really help, since future goods and bads would have to be discounted to unbelievably low levels to avoid excessive savings demands.
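Some back-of-the-envelope arithmetic (our illustration, with illustrative numbers) shows how steep the discounting must be. With an annual rate of pure time preference $\rho$, a person born $t$ years from now receives weight

$$w(t) = (1 + \rho)^{-t},$$

so even a seemingly modest $\rho = 1\%$ gives $w(10{,}000) \approx 10^{-43}$: a person ten millennia hence would count for almost nothing at all. Anything much milder, however, leaves the aggregate claims of a vast future population largely intact.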
A second approach is to reject inequality more strongly, giving extra moral weight to benefits for the worse off. Since future generations are expected to be generally richer than earlier ones, this might allow us to favour ourselves over them. However, the aversion to inequality must be wildly disproportionate to cancel the demands of improving the far future, and that extremity just shifts the excessive demands back to our own time. Everyone now would be required to give up most of their resources to help worse-off contemporaries by the tiniest amounts, demanding even more than global-poverty-centric utilitarianism was originally accused of doing.
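A rough sketch of the dilemma (our notation, not the paper’s): with isoelastic utility, the marginal value of resources given to someone consuming $c$ scales as $c^{-\eta}$, where $\eta$ measures inequality aversion. If future people are $k$ times richer than us but vastly more numerous, outweighing their aggregate claims requires

$$k^{-\eta}\cdot\frac{N_{\text{future}}}{N_{\text{present}}} \ll 1, \qquad \text{e.g. } k = 10,\ \frac{N_{\text{future}}}{N_{\text{present}}} = 10^{8}\ \Longrightarrow\ \eta \gg 8.$$

But with $\eta = 8$, a contemporary living on one tenth of your consumption has marginal claims $10^{8}$ times stronger than yours, so nearly everything you have should go to them: the excessive demands simply relocate to the present.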
Is this just economics being overly demanding by aiming for optimal rather than sufficiently decent outcomes? We might hope philosophy could help us here. Unfortunately, much of what philosophers have done to address moral demandingness falls apart once we recognise the far future of sentient life as the source of our moral demands.
Allowing self-consideration
One well-known suggestion for reducing moral demandingness is to abandon utilitarianism’s impartiality and allow everyone to weigh their own personal interests more heavily. This implies, for instance, that if someone is mildly hungry, it is not morally wrong for them to eat even though there is someone else who is hungrier.
This does appear to reduce demands on the rich to help their poorer contemporaries. But it parallels the problem with pure time discounting: it cannot put a dent in the demands of the far future unless we tilt the balance obscenely in our own favour. To justify prioritising our own interests over those of future people, we would need to think it acceptable to weigh our own lives roughly a hundred million times more heavily than theirs. If we suppose future beings will be much like us, just with different cultures and more advanced technologies, such a weighting is very hard to justify.
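The hundred-million figure can be motivated by a simple comparison (the numbers here are illustrative, not the paper’s). A personal prerogative that multiplies our own interests by a factor $m$ only lets them outweigh the future’s if

$$m \gtrsim \frac{N_{\text{future}}}{N_{\text{present}}} \sim \frac{10^{18}}{10^{10}} = 10^{8},$$

where $N_{\text{future}}$ is the expected number of future people affected by our choices and $N_{\text{present}}$ the number of people now alive. No plausible prerogative licenses valuing one’s own life a hundred million times more than a relevantly similar life.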
Fair shares
Utilitarianism assumes that the demands of beneficence depend entirely on how much good can be done. In this way, it incentivises moral freeloading: if some people do very little good, others who have already done a lot are still morally obliged to pick up their slack. It would seemingly be less demanding, and fairer, if moral obligations were set by how much each of us would need to do if everyone contributed, and then held fixed despite the reality of widespread moral laziness and thus much more good left to be done. But this backfires when moral demands are beamed back to us from the far future.
Full moral compliance includes the compliance of future generations. If we expect upcoming generations to let the world implode, there would not be much value in the future no matter what we do, so we might as well focus on ourselves. If we instead imagine that all generations from now on will work devoutly to extend and improve sentient existence, it is much more likely that this goal will be achieved, which ironically increases the overwhelming obligation to help achieve it. Rather than relieving us of obligations, imagining full moral compliance increases the expected value of the far future, and thus our obligations regarding it.
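The irony can be put schematically (our notation): let $V$ be the value of a long, flourishing future and $p$ the probability that succeeding generations comply well enough to secure it, so that the expected value at stake is roughly

$$\mathbb{E}[V] \approx p \cdot V.$$

A fair-share view computes our obligations under the assumption of full compliance, which pushes $p$ towards 1 and so maximises, rather than shrinks, the expected value against which our share is assessed.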
Morally normal worlds
Some philosophers argue that utilitarianism only seems demanding because we are in an unusually bad world. If we were in a “morally normal” world, one with a more equitable distribution of wealth, less oppressive institutions, and no constant state of emergency, maximising the good and minimising the bad would not be so hard.
Again, having to think about the far future undermines this argument. An overwhelmingly high value in improving the far future need not imply moral dysfunction in the present. Perhaps the future would be good, just, and equitable even without our interventions, yet even more glorious and long-lasting if we devoted ourselves to its betterment. If so, the demand that we do so remains.
Passive burdens
Another way of questioning utilitarianism’s demandingness is to point out that while it may seem to place stifling burdens on relatively privileged people to help the worse-off, these “active” burdens are minor compared to “passive” burdens on the less fortunate who are left to suffer in poverty. In practice, then, utilitarianism should reduce burdens overall by compelling the rich to relieve the burdens of the poor.
This argument plainly has global wealth disparity in mind, and reckoning with the far future upends it in at least two ways. One is that utilitarians are now expected to ignore the suffering of their poorer contemporaries in order to focus their attention on the not-yet-existent. A second is that improving the value of the far future by extending the lifespan of sentient existence could have the unintended consequence of increasing future burdens as well. Even if average wellbeing rises dramatically in the future, we should expect some future lives to be miserable out of sheer bad luck. Increasing the number of future individuals by reducing existential risks would therefore also increase the absolute number of harms and bad lives there will be.
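The second point is simple arithmetic (a sketch in our own notation): if a fraction $q$ of future lives go badly despite high average wellbeing, then out of $N$ future lives the number of bad ones is

$$B = qN,$$

which grows in lockstep with $N$. Reducing existential risk raises $N$, and thereby raises $B$, even as the total and average good both improve.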
Conclusion
Re-examining the demandingness objection to utilitarianism in light of the future’s vast potential undermines some previous arguments in defence of utilitarianism. Then again, rejecting utilitarianism does not seem to help us either. At this point, philosophers have only just begun to recognise some of the problems that arise when we take the interests of future people into account. There is clearly a lot more work to be done.
References
Nicholas Beckstead (2013). On the Overwhelming Importance of Shaping the Far Future. PhD thesis, Rutgers University.
Paul Christiano (2014). We can probably influence the far future. Rational Altruist.
Hilary Greaves & William MacAskill (2021). The case for strong longtermism. GPI Working Paper (No. 5-2021).
Andreas Mogensen (2020). Moral demands and the far future. Philosophy and Phenomenological Research, 1–19. doi:10.1111/phpr.12729
Toby Ord (2020). The Precipice: Existential Risk and the Future of Humanity. London: Bloomsbury.
Peter Singer (1972). Famine, affluence, and morality. Philosophy and Public Affairs, 1(3), 229–243.