Andreas Mogensen | What does the future ask of us?
Presentation given at the Global Priorities Institute
December 2019
ANDREAS MOGENSEN: (00:06)
So a back-of-the-envelope calculation due to the philosopher Nick Bostrom suggests that if the Earth remains habitable for another one billion years and can sustainably support a population of at least a billion people at any point in time, then there can exist at least 10^16 lives of normal duration in our future. And that's only counting human beings, of course. So given the total number of individuals with moral status who could potentially populate the long-run future, the value at stake in choosing among actions that impact on the long-term trajectory of Earth-originating civilization seems to be astronomical. So what demands are placed on us as a result?
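The arithmetic behind Bostrom's figure, taking a normal life to last on the order of a century, is:

$$\underbrace{10^{9}}_{\text{people alive at a time}} \times \frac{10^{9}\ \text{years}}{10^{2}\ \text{years per life}} = 10^{16}\ \text{lives}.$$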
Now moral theorists have typically discussed the demands of beneficence under the assumption that these represent obligations for those of us who are wealthy to transfer our resources to our poorer contemporaries. Insofar as moral philosophers have discussed obligations related to future generations, they have tended to focus on the question of what, if anything, we owe to future people as a matter of justice. They have largely neglected the question of how the value of the long-term future shapes the demands of beneficence. As a result, I'm going to argue, moral philosophers have to a large extent misunderstood the problem of moral demandingness, and much of the discussion of the demandingness objection to utilitarianism has been based on false presuppositions.
Now in claiming that the value at stake in choosing among actions that impact on the long-run future is astronomical, we are forced to make value comparisons among outcomes in which the size and/or composition of the population varies. Now making these comparisons is notoriously difficult. Fortunately for us, the claim that astronomical value is at stake when choosing among actions that impact on the long-run future turns out to be robust across a range of minimally (02:00) plausible population axiologies. Now for the sake of brevity, in this talk I'm going to restrict myself to arguing that the claim is supported by just two otherwise quite different axiologies, namely Total Utilitarianism and the Axiological Asymmetry.
So we'll start with Total Utilitarianism. So on this view the value of an outcome is just the sum of the welfare of every individual existing in that outcome. Stated a little bit more formally, if you have an outcome O which contains n people and u_i denotes the interpersonally comparable and cardinally measurable welfare of person i, then the total utilitarian value function V_T is given by this formula on this slide here.
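Written out, the formula on the slide is presumably the simple sum:

$$V_T(O) = \sum_{i=1}^{n} u_i.$$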
Now the number of people who could exist in the future is obviously much much greater than the number of people who currently exist. Because of this it seems that total utilitarianism should lead us to believe that differences in the expected value between the actions available to us are going to be far more sensitive to differences in the probabilities of significant long-term outcomes than to even the most significant short-term events. So that's Total Utilitarianism.
Let's instead consider the Axiological Asymmetry. So according to the Axiological Asymmetry, between two outcomes O and O* differing only in that there exists an additional person in O* but not in O, O* is said to be neither better nor worse than O if this person has a life worth living, but worse than O if this person does not have a life worth living. So the Axiological Asymmetry is supposed to reflect the widely-held intuition that while there is no moral reason in favor of making happy people, there are strong moral reasons not to bring into existence people who will experience lives that are not worth living.
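One natural way to symbolize this, writing ≻ for "is better than" and glossing "a life worth living" as positive lifetime welfare u_p for the added person p, is:

$$u_p > 0 \;\Rightarrow\; \neg(O^* \succ O) \text{ and } \neg(O \succ O^*); \qquad u_p < 0 \;\Rightarrow\; O \succ O^*.$$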
Now we are able to argue for the overwhelming importance of the long-run future given the Axiological Asymmetry so long as we are permitted to (04:00) assume that the badness of additional suffering lives does not diminish. Given this assumption, we can argue that astronomical value is at stake when it comes to choosing between actions that impact the long-run future, because it's reasonable to expect that there will, even in spite of our best efforts, exist astronomically many future individuals who will be unlucky and have lives that are worse than nothing, and the disvalue of these lives, we are assuming, is an increasing linear function of the total amount of suffering that they contain. Now we could talk about factory-farmed non-human animals here. There are also some human lives which seem to be sufficiently bad that they may be deemed not worth living. An example might be children who are killed by infantile Tay-Sachs disease. Now fortunately Tay-Sachs disease is quite rare. It strikes only one in 320,000 newborns in the US general population. However, if the total future population is going to be at least 10^16 people, then an incidence of a mere one in 320,000 is going to mean an aggregate future population of infantile Tay-Sachs sufferers that runs into the tens of billions, and it will turn out that reducing the incidence of infantile Tay-Sachs by mere hundredths of thousandths of a percentage point would spare more future children from a life marred by suffering than there are people currently living.
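To check the arithmetic behind those claims: an incidence of one in 320,000 among 10^16 future people gives

$$\frac{10^{16}}{320{,}000} \approx 3.1 \times 10^{10}$$

future sufferers, i.e. tens of billions. And taking the current world population to be a little under 8 × 10^9, sparing that many future children requires lowering the incidence by only

$$\frac{8 \times 10^{9}}{10^{16}} = 8 \times 10^{-7},$$

which is about eight hundredths of a thousandth of a percentage point.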
Now, while I quite obviously have not canvassed every possible theory, I hope I have said enough here to render plausible the claim that a range of minimally plausible population axiologies will support the verdict that astronomical value is at stake when choosing among actions that impact on the long-run future. Different theories might yield different verdicts about how best to go about improving the value of the future: Total Utilitarianism is probably going to assign greater importance to reducing the risk of human extinction, while the (06:00) Axiological Asymmetry is going to prioritize actions that reduce the risk of astronomical suffering. But nonetheless, both of these approaches agree on the overwhelming importance of posterity.
Now for concreteness and simplicity, I will from now on proceed under the assumption that Total Utilitarianism is the correct population axiology unless I state otherwise. That's just because, of the theories discussed in the literature, this one struck me as the most plausible, which is not to say that its implications are always easy to swallow. But given what I've just discussed, much of what I say in the remainder of this talk can be generalized to other theories given suitable modifications.
Okay. So one of the stock objections to utilitarianism is that it is too demanding in requiring us to subordinate our own lives to promoting the impartial good. And the demandingness of utilitarianism is typically illustrated by appeal to the obligation to support charitable causes that help badly off people in the here and now: typically, the many millions of people who live in extreme poverty in developing countries.
However, if the argument that I've just given is on the right track, then utilitarianism may be thought to rule out any obligation to help people who are currently badly off in favor of actions that are directly targeted at improving the long-run trajectory of Earth-originating civilization. So we therefore have some grounds to believe that moral philosophers may have misunderstood perhaps quite radically what utilitarianism concretely demands of us.
Now, by contrast, the demands placed on the current generation in maximizing an intergenerational utilitarian social welfare function are a well-worn subject of debate among economists, especially in the theory of optimal growth and optimal saving pioneered by Ramsey. In illustrating the problem, I'm going to follow the presentation in (08:00) Partha Dasgupta’s paper “Discounting Climate Change”.
So we're going to assume that there is an indefinite sequence of generations, and each generation, we will assume, is of the same size as the previous one. We'll also assume that each generation is perfectly homogeneous, and this means that the generational welfare at time t can be summarized by the utility of consumption of a representative agent, which we denote u(c_t). We're going to assume in this model that there is a constant exogenous per-period probability of extinction given by the parameter δ, and if this extinction event does occur, then the value in each subsequent period is going to drop to zero. We also assume for simplicity that there's no other uncertainty in our model. Then from the perspective of the current 0th generation, the expected value of an indefinite consumption stream {c_t} is given by the following formula:
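Given the stated assumptions, survival up to period t has probability (1 − δ)^t and extinction yields zero value thereafter, so the formula is presumably:

$$V_0 = \sum_{t=0}^{\infty} (1-\delta)^t \, u(c_t).$$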
Now, as is standard, we are going to assume that the utility function here has the so-called isoelastic form. So it's governed by this parameter η, which denotes the elasticity of marginal utility of consumption. If we follow the Stern Review and say η = 1, this means that the same proportional increase in consumption is equally desirable regardless of a person's status quo consumption.
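One common normalization of the isoelastic family is:

$$u(c) = \frac{c^{1-\eta} - 1}{1-\eta} \quad (\eta \neq 1), \qquad u(c) = \ln c \quad (\eta = 1),$$

with the logarithmic case arising as the limit as η approaches 1.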
So we're going to operate with a very simple pure capital model where, at the beginning of each period, some portion of the inherited capital stock is consumed and the remaining portion then earns a rate of return given by a positive constant r. It turns out that the optimal savings-output ratio within this model is then approximated by the formula:
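On the standard derivation for this kind of pure capital model, where consumption grows at rate (r − δ)/η along the optimal path, the approximation is presumably:

$$s \approx \frac{r - \delta}{\eta r} = \frac{1}{\eta}\left(1 - \frac{\delta}{r}\right),$$

a reconstruction consistent with the numbers quoted next.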
So, if η = 1 and if δ is very small in comparison to r (δ ≪ r), this means that nearly the (10:00) entirety of total output must be saved. For example if we say r = 4% and δ = 0.1%, then we should save about 97% of total output.
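Substituting the quoted values into the reconstructed formula above:

$$s \approx \frac{0.04 - 0.001}{1 \times 0.04} = \frac{0.039}{0.04} = 0.975,$$

i.e. roughly 97% of total output saved.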
Now, a standard reaction among many economists to this sort of result is to insist that we ought to adopt a practice of pure time discounting. That is, we ought to down-weight the utility of future people over and above the adjustments that we've already incorporated within our model to take into account the probability that some exogenous extinction event might simply wipe out the human race.
On the other hand, many other economists and philosophers feel very strongly that discounting future utility merely due to the passage of time is morally indefensible. We can instead try to avoid imposing excessive sacrifices on the current generation by increasing our aversion to consumption inequality as represented within the model. Future generations are expected to be better off, or richer, on average than current people, and therefore greater aversion to consumption inequality can serve to rein in the extent to which the current generation ought to save so as to augment future consumption. Within this Ramsey model, the level of aversion to consumption inequality is governed by the parameter η. So Dasgupta has suggested that if we want to avoid demands for excessive accumulation, then maybe we ought to reject Stern's assumption that η = 1 in favor of a value for η in the range of 2 to 4.
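On the reconstructed formula above, raising η does rein in the required saving:

$$\eta = 2: \quad s \approx \frac{0.039}{2 \times 0.04} \approx 0.49; \qquad \eta = 4: \quad s \approx \frac{0.039}{4 \times 0.04} \approx 0.24,$$

bringing the optimal saving rate down from about 97% of output to somewhere between a quarter and a half.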
Unfortunately, this has the effect of just sort of shifting the proverbial bump in the carpet. As Dasgupta himself acknowledges, if we do increase our aversion to consumption inequality in this way, this seems to have the effect of making the requirements on people who are currently well-off to benefit people who are currently badly off even more extreme than they are generally thought to be (12:00) in utilitarianism.
So as Derek Parfit suggested in “Reasons and Persons”, a more sensible response to these problems may be to simply reject the assumption that we are morally required to maximize the good. We should instead try to impose some kind of limit on the sacrifices that we can each be asked to make for the sake of promoting the good by benefiting future generations. By analogy, philosophers who have thought about the traditional demandingness objection to utilitarianism, the one that focuses on what those who are wealthy owe to those who are poor, generally do not suppose that that objection can be satisfactorily answered by revising our conception of what human welfare consists in, nor even by revising our conception of what constitutes the impartial good, looking to goods beyond welfare, so long as we remain wedded to the maximizing Act-Consequentialist Theory of Right Action. Instead, what we need is some alternative criterion of right action, one that avoids the unpalatable implications that seem to follow whenever the maximizing Act-Consequentialist Theory of Right Action is married to any minimally plausible axiology.
But if we reject the assumption that maximally promoting the good is morally required whenever it is morally permissible, is there any other plausible theoretical account of the demands of beneficence that we could put in its place? Well that's been a crucial question in normative ethics over the past 50 odd years. But that discussion of how, if at all, to limit the demands of beneficence has been conducted largely against the background of the assumption that the demands of beneficence are obligations for those who are wealthy to transfer resources to their poorer contemporaries. Recognizing instead that an impartially benevolent agent would be principally concerned about maximizing the probability of a good long-run future for humanity and the other sentient individuals with whom we share this Earth forces us to view key (14:00) aspects of the problem of moral demandingness in a new light as I will now argue.
So perhaps the most natural solution to the demandingness of utilitarianism is that proposed by Samuel Scheffler in “The Rejection of Consequentialism”, namely, to adopt a revised theory which incorporates a so-called agent-centred prerogative, which allows the agent to weight her own interests, or her own welfare, more heavily than those of others.
So if the theory that we are revising starts out incorporating a total utilitarian axiology alongside the maximizing act-consequentialist criterion of right action, then the most straightforward revision of that theory that incorporates an agent-centred prerogative is going to be one which says that an act is morally required for agent i only if it maximizes a function in which the agent's own welfare is weighted by some constant k greater than one. And so the basic thought is that by choosing a value for k that is suitably great, we can limit the sacrifices that each person is asked to make on behalf of others.
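Written out, the revised objective for agent i is presumably:

$$k \cdot u_i + \sum_{j \neq i} u_j, \qquad k > 1.$$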
Now, a key concern that has attached itself to Scheffler’s view is whether it is possible to specify a value for k that suitably limits the demands of beneficence without putting what appears to be extreme weight on the agent's interests. And I think that when we take into account our ability to impact the astronomical value that's at stake over the long-run future, this concern becomes even more pressing. And that's because the sheer size of the future is liable to overwhelm even values for k that strike us as beyond obscene.
So for example suppose there could be 10^16 people in our future but for a range of extinction risks such as nuclear war or bioengineered pandemics. Given these assumptions, Bostrom has calculated that in a choice between reducing the risk of extinction by ever so slightly more than one millionth of one percentage (16:00) point and saving a hundred million human lives without altering the risk of extinction, Act-Consequentialism in conjunction with Total Utilitarianism entails that we ought to choose the former. We ought to reduce the risk of extinction ever so slightly and let the hundred million die. It follows that the postulate of an agent-centred prerogative is going to acquit you of an obligation to sacrifice your own life in order to reduce the risk of extinction by ever so slightly more than one millionth of one percentage point only if you value your own life at more than 100 million times that of a stranger. And to make matters worse, this result depends on adopting a reasonably conservative projection of the potential future population.
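The arithmetic here: a reduction of one millionth of one percentage point is a probability shift of 10^-8, so the expected number of lives gained is

$$10^{16} \times 10^{-8} = 10^{8} = 100 \text{ million},$$

which is why the prerogative acquits you only if k exceeds roughly 10^8.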
So giving a mere 1% credence to less conservative estimates which take into account the potential for humanity to spread to the stars and for future minds to be implemented in computational hardware, Bostrom goes on to calculate that the expected value of reducing the risk of extinction by as little as one billionth of one billionth of one percentage point is about 100 billion times the value of a billion human lives. It follows that to acquit yourself of an obligation to stand ready to sacrifice your life in order to achieve a minute reduction in the risk of human extinction you would need to assign astronomically greater agent relative importance to your own welfare.
Okay. So I want us now to reflect on the significance of the observation sometimes made that it is contingent whether any given moral theory is highly demanding. So it presumably could have been the case that utilitarianism asked very little of us. If other theories turn out to make only modest demands of us, this too may be thought to be contingent. So you might think that deontological theories would impose very serious costs on us if securing our basic needs required violating deontological constraints. Utilitarianism might be less demanding on us (18:00) in those circumstances.
Now I assume that foundational moral theories aspire to be necessarily true and so attach no special significance to which world happens to be the actual world. Therefore, I think the plausibility of a foundational moral theory like utilitarianism can't depend on whether the world just so happens to be the kind where the theory is highly demanding. So I don't think it can be a serious objection to utilitarianism that it is more demanding than other theories given the way the world actually is.
A natural alternative is to interpret the demandingness objection as insisting that a theory should not be extremely demanding in worlds that are, in some suitable sense, morally normal. So for example in explaining his own moderate conception of the demands of beneficence, Scheffler states his conviction that “under favorable conditions morality permits people to do as they please within certain broad limits.” Now this of course shifts the discussion to the question of how to characterize morally normal worlds or favorable conditions and invites the obvious response that the demandingness of utilitarianism in the actual world is not to be blamed on the theory, but on the fact that the circumstances handed to us are morally deficient.
So this is exactly the view taken by Elizabeth Ashford in her defense of utilitarianism against the criticisms leveled against it by Bernard Williams. According to Ashford, “The source of the extreme demandingness of morality is that the current state of the world is a constant emergency situation; there are continually persons whose vital interests are threatened, and given modern communications, the relatively well-off are continually able to help them.” If not for this constant state of emergency represented by widespread extreme poverty, Ashford argues, there would be no incompatibility between the demands of impartial beneficence and Williams's integrity. (20:00) Similarly, Peter Railton blames the demandingness of consequentialism on “how bad the state of the world is,” noting that the theory would not be nearly so disruptive to our personal projects and commitments if wealth were more equitably distributed and/or political systems were less repressive and more responsive to the needs of their citizens.
Now I think that once we recognize the future-oriented character of utilitarian beneficence, it becomes very plausible that Ashford and Railton are mistaken to suppose that the demandingness of utilitarianism depends on the existence of extremes of wealth and poverty existing side by side. So recall the Ramsey growth model that we looked at earlier. Within that model we assumed that wealth was exactly equally distributed within each generation, but nonetheless we were able to derive results concerning the required rate of saving that most people regard as excessively demanding on the current generation. The world represented by that model is not one in which there exists some kind of constant emergency situation, and the ability of agents within that model to generate very significant benefits for distant others arises not from extreme disparities in consumption within a generation, but rather from the productivity of capital and the resulting possibilities that are afforded by economic growth across generations. So the optimal rate of saving within the model was not derived by assuming any kind of short-lived, historically unique circumstances, and the high rates of saving that were derived within the model are in fact time-invariant.
So the lesson here is that in reflecting on whether utilitarianism is extremely demanding in so-called morally normal worlds, we have to avoid assuming that utilitarianism is highly demanding of us only because there exist extreme disparities of wealth. We need to address whether the (22:00) conditions that are faithfully modeled by the Ramsey growth model that we looked at earlier are to count as morally normal or as favorable. Now because the essential features of the model that were used to derive a very high rate of optimal savings seem so innocuous, even optimistic, I think we're under much greater pressure to answer ‘yes’ than when we reflect on worlds that are marred by intragenerational extremes of poverty and wealth.
Now, as has been pointed out, a demand to maximize intergenerational aggregate utility wouldn't be nearly so heavy a burden on those who heed its call if everyone else could be expected to comply with this demand as well, even granting that there are many millions of people who live in extreme poverty. The thought is that if everyone who could help did their bit, then each of us might be required to make only relatively modest sacrifices.
And so a number of philosophers argue that a key failing of utilitarianism in specifying the demands of beneficence is its failure to moderate its demands in the face of others’ non-compliance. So when others who could help refuse to do so, utilitarianism requires us to pick up the slack.
A number of philosophers have suggested that a more plausible conception of beneficence would instead index what is required of us under conditions of imperfect compliance to our fair share of the total effort as defined under conditions of perfect compliance. And the most sophisticated development of this idea is due to Liam Murphy. Now, as is standard, Murphy in his discussion foregrounds global poverty in considering the demands of beneficence. If we reflect instead on the Ramsey growth model that we analyzed earlier, everything seems to turn on its head. Here it is in fact the assumption of perfect compliance that imposes extreme demands on us. The extreme demands (24:00) that are derivable within the model depend on the assumption that the savings behavior of every subsequent generation is also going to conform to the utilitarian criterion of rightness. This allows current savings to keep on paying returns indefinitely, yielding extraordinary benefits over the long run. If instead some future generation is expected to defect and consume everything that we have saved, then there wouldn't be a similarly stringent demand for accumulation imposed on the current generation.
So when it comes to the theory of optimal saving, the extreme demandingness of utilitarianism apparently cannot be blamed on the failure of the theory to moderate its demands in the face of others’ non-compliance, and a theory of beneficence that indexes what is required of us under conditions of imperfect compliance to our fair share of the total effort defined under conditions of perfect compliance would, it seems, be no less demanding.
However, the discussion so far has only taken into account the compliance or non-compliance of future people. We should consider whether taking into account the compliance behavior of previous generations could perhaps serve to mitigate the demands on the current generation to save for the future, and I think there's a prima facie plausible case for thinking that it would. We clearly do not observe perfect compliance with an impartial welfarist principle of accumulation over the course of human history, and we might think that if previous generations had all complied with such a principle, then we would be much, much richer now than we actually are.
Suppose then in addition, that any plausible principle of beneficence must satisfy a “compliance condition”, which says that “the demands on a complying person should not exceed what they would be under full compliance with the principle”. We may interpret this to mean that we cannot (26:00) be required to reduce our own expected well-being under conditions of imperfect compliance to a level below what it would be under perfect compliance. Therefore, we may argue, we are not required to save nearly so great a percentage of total output as we would have been required to save under conditions of full adherence to the utilitarian principle of accumulation throughout all prior history because in doing so we would render ourselves worse off than we would have been had all previous generations and the current generation adhered to that principle.
However, this argument rests on mischaracterizing the implications of Murphy's compliance condition. When Murphy spells out the compliance condition fully, it takes on an explicitly forward-looking character. Spelling out the condition, Murphy says: “A person's maximum level of required sacrifice is that which will reduce her expected well-being to the level it would be, all other aspects of her situation remaining the same, if there were to be full compliance from that point on.”
So in his conception, the compliance condition is motivated by the thought that when I know that you will not fulfill your obligation, it nonetheless remains yours alone and does not become mine. But that presumes that your responsibility is a live one. It must be one that you could fulfill and which I should not be asked to take on in your stead. For this reason Murphy understands the compliance condition as not governing irrevocable failures to fulfill past obligations.
So I conclude that when it comes to the significance of non-compliance, our initial assessment seems to be on track. When thinking about the demands of beneficence in respect of obligations to save for the benefit of future generations, the significance of non-compliance may well be the inverse (28:00) of what it's generally thought to be, with these demands becoming more and more burdensome the nearer we approach conditions of perfect intergenerational compliance starting from the present time.
So last but not least in my discussion, I want to focus specifically on the role played by passive effects in assessing the demandingness of moral theories. And by passive effects I mean the benefits and costs that fall on individuals not as a result of their own compliance with a moral theory's demands, but as a result of other people's compliance with those demands.
So as Murphy and David Sobel have emphasized, discussions of moral demandingness tend to neglect these passive effects, focusing almost exclusively on the active demands of a theory. Once we do take into account what a moral theory asks some people to bear passively, utilitarianism may not seem so demanding compared to other moral theories. In permitting us to spend money on luxuries while other people's basic needs go unmet, these other theories ask people who are living in poverty to shoulder very heavy burdens. The active demands that may be placed on wealthy Westerners by utilitarianism are not nearly so harsh, we might think.
From this observation David Sobel infers that the demandingness objection to utilitarianism must presuppose that people have a stronger claim against being required to aid others than they have to receive aid from others. Sobel argues that this kind of presupposition begs the question against consequentialists who reject the Doing-Allowing distinction. He therefore concludes that the demandingness objection is ineffective as a complaint against utilitarian moral theories.
Now I think that Sobel's argument is on shakier ground when we take on board the idea that agents who comply with the utilitarian principle of beneficence will focus their efforts on improving the long-term future. That's because the Non-Identity Problem (30:00) makes it hard in many cases to see other theories as being more demanding on the intended beneficiaries of acts that comply with the demands of utilitarianism, namely future people. In fact, it makes it hard to speak of such people as beneficiaries at all. If we do succeed in positively affecting the long-term future of Earth-originating civilization, our actions are almost certainly going to change the size and/or composition of the future population. So if we don't perform these actions, the outcome may be worse, but there may be no one for whom it is worse. Future people may have a lower quality of life, but those same people would not have existed with a higher quality of life had we chosen otherwise. They simply would not exist.
It seems misplaced to speak of a theory that permits us to bring about these suboptimal futures as imposing heavy burdens on the people who exist in those outcomes, unless they in fact have lives that are not worth living. So once we keep in mind that utilitarianism orients the demands of beneficence toward improving the long-term future, the claim that the heavy active demands of the theory need to be weighed alongside the heavier passive demands of other theories starts to seem dubious.
In fact it's hard to escape the impression that utilitarianism is extremely demanding relative to other theories not only in terms of its active demands but also in terms of its passive demands. After all, these other theories at least permit us to help people who are living right now who are very badly off even if this is not what's best as considered from the perspective of the expected value of all future history. Utilitarianism instead seems to impose very heavy burdens on those people by requiring those of us who would help them to instead direct our energies elsewhere. So the theory now appears extremely demanding not only in terms of its active but also in its passive demands.
Okay. (32:00) So I'll wrap up. I've argued that moral philosophers have, to a large extent, misunderstood the problem of moral demandingness and that much of the discussion of the demandingness objection to utilitarianism has been based on false presuppositions.
I've argued that once we take on board the assumption that an impartially benevolent agent would be principally concerned with maximizing the probability of a good long-run future, key aspects of the problem of moral demandingness take on a different character and have to be rethought. So, simply allowing the agent to weight their own welfare more heavily than the welfare of other people is not going to avoid imposing extreme demands unless we introduce weights that are beyond obscene. The demandingness of utilitarianism in the actual world can no longer be so easily blamed on unfavorable circumstances. The significance of imperfect compliance may perhaps turn out to be the opposite of what we have thought and taking account of passive effects in estimating the demandingness of a theory no longer seems to favor utilitarianism.
So it seems that if we want to gain a concrete understanding of the demands of beneficence and their moral significance for how we conduct our lives then we're going to need to think again.