Orri Stefánsson | Should welfare equality be a global priority?
Parfit Memorial Lecture 2021
14 June 2021
The working paper that was the basis for the lecture can be found here.
HILARY GREAVES: (00:06) Hi, everyone. We're very pleased to welcome you to the second Parfit Memorial Lecture. This is an annual distinguished lecture series established by the Global Priorities Institute in memory of the late Professor Derek Parfit. The aim of the lecture series is to encourage research among academic philosophers on topics related to global priorities research, that is, research that speaks especially directly to, and is especially crucial for, the decision problems faced by an impartial agent trying to be appropriately responsive to facts about what would do the most good.
(00:38) Orri Stefánsson is Associate Professor (Docent) of Practical Philosophy at Stockholm University, Pro Futura Scientia Fellow at the Swedish Collegium for Advanced Study and Advisor for the Institute for Futures Studies where he's part of a project on climate ethics. Orri's current research concerns decision making under extreme uncertainty, distributive ethics, population ethics and catastrophic risk. He has published extensively in journals such as Mind, the Australasian Journal of Philosophy, Synthese, Erkenntnis and Economics and Philosophy, and his excellent book, co-authored with Katie Steele and entitled Beyond Uncertainty: Reasoning with Unknown Possibilities, is forthcoming with Cambridge University Press. We're delighted to have Orri here today to present this second Parfit Memorial Lecture and his title is Should Welfare Equality be a Global Priority? Over to you, Orri.
ORRI STEFÁNSSON: (01:29) Thanks so much, Hilary, and thank you very much for inviting me. I just started sharing my screen. I hope you can…Can you see that now? Can I just check that you actually do see my screen? Is that…? Yeah.
HILARY GREAVES: Yes, we do.
ORRI STEFÁNSSON: Great. Yeah. Thanks so much for inviting me. Thanks for the introduction. It's a great pleasure to be here and it's a great honor to be able to honor Derek Parfit by giving this talk.
(02:06) So I'm going to start my talk with an example that illustrates the types of trade-offs that I'll be concerned with in this talk. In this example, I'm going to ask you to imagine some near-future generation, maybe the next generation from now. Let's call it F1. Imagine that this generation can make some investment that would hugely benefit some generation further into the future, maybe 100 years from now. Let's call that second generation F2. Now, just to make things as simple as possible, I'm going to assume that there's perfect equality within the two generations and, to make things even more manageable, I'm just going to assume that the two generations are of the same size. In addition, if F1 does not make the investment, then there will also be perfect equality between the two generations. If F1 does make the investment, then that would hugely increase the welfare of F2. In fact, it will increase it by an infinite magnitude or, if you like, the largest metaphysically possible magnitude. The investment, however, will be costly to F1, so it will have a significant effect on F1’s welfare, but I'm going to assume for now that it makes sense to talk about percentages of welfare and that the cost to the welfare of F1 will be less than 10%. This assumption that it makes sense to talk about percentages of welfare is an assumption that I'll come back to later on.
(03:46) Okay. So we have this possibility that F1 makes the investment. It will be significantly costly to F1, but nothing in comparison to the huge gain to F2. Now, I suspect that most people will think that, at least if F1 is sufficiently… or the people at F1 are sufficiently well-off anyway, then it would be better if F1 made the investment than if they don't make the investment. By the way, I won't be talking here about whether F1 should make the investment. I'll just be talking about which distributions are better and worse, and I'm going to leave it as an open question how, or whether, deontic claims derive from the betterness claims.
(04:36) But when it comes to betterness, I think that most people would agree that, at least if F1 is sufficiently well-off, then it would be better that they made the investment than that they don't make the investment. However, it turns out that if a prioritarian thinks so, in other words, if someone who holds what Parfit called the priority view thinks that it would be better that F1 made the investment despite the increased intergenerational inequality that that creates, then that particular judgment constrains the types of attitudes that the prioritarian can take to inequalities when less is at stake, or what I will call small-scale inequalities. In particular, a prioritarian who thinks that it would be better if F1 makes the investment has to give up what seems to be reasonable aversion to inequality when less is at stake. In other words, the seemingly plausible judgment about the extreme example, where there's a possibility of an infinite welfare gain, makes it impossible for the prioritarian to have seemingly reasonable attitudes to inequality when less is at stake.
(05:52) So to take an example of the type of judgment that a prioritarian cannot make if they hold the seemingly most plausible judgment on the first example, consider the following example involving two individuals, Ann and Bob. To make things very simple, I'm just going to assume that Ann and Bob's welfare is linear in their lifetime income. Now presumably, for most of us, that's not true, but it's an assumption that might be true of people at the bottom end of the income distribution. In addition, I'm just going to assume that they lead equally long lives, or at least have equally long working lives. Given those assumptions, what seems to be the most plausible judgment on the first example, in other words, the judgment that it would be better if F1 made the investment rather than not making it, commits the prioritarian to the following. When comparing, on the one hand, an unequal income distribution where Ann has an average annual income of £20,200 while Bob has one of £19,820 with, on the other hand, the equal distribution where they both have an average annual income of £20,000, the prioritarian who makes the non-extreme, seemingly plausible judgment on the first example cannot say that the equal distribution is better than the unequal one. And I think that's somewhat counterintuitive, because although the unequal distribution has a slightly higher total income, the difference is only £20. That difference is so small that, I think, many people who have prioritarian intuitions will think that, intuitively, the equal distribution is better than the unequal one. But they cannot say that if they are willing to accept the inequality in the previous large-stakes example.
(08:04) And so the general point that this particular example illustrates, and the general point that I'll be making over and over in this talk, is that what seems to be moderate inequality acceptance when huge welfare gains are at stake, like accepting the inequality between the two generations in the first example, commits the prioritarian to giving up what seems to be reasonable and moderate inequality aversion when smaller welfare gains are at stake. So the prioritarian cannot be what I'll call reasonably inequality averse when small welfare gains are at stake without being what I would call extremely, and maybe even absurdly, inequality averse when more is at stake.
(08:54) And the same holds for what I'll call the Gini egalitarian or the generalised Gini egalitarian. Some of you might be familiar with this view as the rank-dependent egalitarian view, which is, by the way, an instance of what Parfit called “moderate” egalitarianism because it doesn't accept leveling down; in other words, it implies the Pareto principle. The same kind of dilemmas that I'll be giving for the prioritarian hold for the Gini egalitarian, and I'll explain that a bit later on. I'll also, in a moment, explain these two views in more detail.
(09:32) But before we talk about these dilemmas, let me just clarify some of the terms that I'll be using. So first of all, I'll be talking a lot about welfare and I assume that most people will be familiar with roughly what that means. But basically when I say welfare I simply mean a measure of how well a person's life is going or how good her life is for her. I will assume that welfare can be measured in a pretty precise way. Now I will make different assumptions about the structure of the welfare measure and in fact, in the paper on which this talk is based, we prove the results that I'm going to be presenting for different assumptions about the structure of welfare. And the reason why that's important is that the views I'm going to be discussing, they have different implications about the structure of welfare. I'll get back to that, but for now, just think of welfare as a somewhat precise measure of how good a person's life is, how good it is for her.
(10:35) Now, when it comes to the question of how welfare should be distributed, there are basically three main views, at least within a broadly consequentialist framework. First of all is the Utilitarian view, which basically says that the distribution as such doesn't matter. What matters is simply the sum total of welfare and in fact, you should maximize this sum total welfare, according to the utilitarian. Now the Prioritarian or the priority view, as Parfit called it, says that we should maximize the sum total of “priority weighted” welfare. So I’ll make that a bit precise in a moment but essentially, the priority weighting ensures that when we're thinking about whom to benefit, which person to benefit, then the priority view, all things being equal, favors us benefiting the worse off rather than the better off. Now, Egalitarianism has a similar implication. It says that we should maximize the sum total of “rank weighted” welfare, but unlike the prioritarian, the egalitarian says that it's bad that people are worse off than others, whereas the prioritarian says it's bad that people are badly off. So the badness of people being badly off according to the prioritarian does not depend on them being worse off than others.
(12:01) So we'll get back to these views in a moment. But there's an assumption that we'll be making, and unfortunately I won't defend this particular assumption, though I'll get back to it at the end of the talk. This assumption is that rational global priority setting is based on one of these views, and that is why I take the results that I'm going to be discussing to have implications for global priority setting.
(12:34) Okay. So I think that many of us, maybe most of us, have pre-theoretic intuitions that could be seen as being evidence in favor of Egalitarianism or Prioritarianism over and above Utilitarianism. And the example that I have here on the slide now illustrates that intuition. So what we have here are two possible distributions of welfare. The numbers here should be interpreted as units of well-being. And these two distributions, or the choice here, affect two individuals. So if we go for the equal distribution, then two individuals Ann and Bob will have an equal amount of welfare, 100 units minus some δ (100 – δ); if we go for the unequal distribution, then Ann will have 105 units of welfare, whereas Bob will only have 95 units of welfare. So no matter what δ is, as long as it's positive, the unequal distribution has a higher sum total of welfare and because of that, the utilitarian says that the unequal distribution is better than the equal distribution, and that holds no matter how small δ is.
(13:58) Now, the prioritarian and the moderate egalitarian say that the answer here depends on the size of δ. In particular, if δ is sufficiently small, as I assume it can be because I assume that welfare can be measured on a sufficiently fine-grained scale, then the equal distribution is better than the unequal distribution, say moderate egalitarians and prioritarians, and I think common sense… So many people's intuitions support that judgement. Many people, I think, would think that the cost to total welfare, if it's very, very small (in other words if δ is very, very small), is worth accepting for the sake of perfect equality.
(14:49) Now, the message of this talk is that the seeming intuitive advantage of Egalitarianism and Prioritarianism is actually limited. It's limited in the sense that these views can only accommodate moderate inequality aversion when little is at stake, in other words, when what we're looking at are relatively small differences in welfare, if they are extremely inequality averse when stakes are larger, like, for instance, when an infinite amount of welfare is at stake. So for instance, in the example on the last slide, preferring equality over inequality, even if you prefer it only for very, very small δ, implies extreme attitudes when larger welfare differences are at stake. This in fact follows from a set of calibration theorems that Jake Nebel and I present and prove in a paper called “Calibration Dilemmas in the Ethics of Distribution”. One of these theorems is very, very closely based on a similar result of Matthew Rabin’s for expected utility theory. Some of the theorems go a bit beyond Rabin’s results, some of them because they have different functional forms from the functions that Rabin’s result was about, others because we need to make different assumptions about the measurability of the quantity that is of interest. In our case, that quantity is welfare; in Rabin’s case, it was money. But in any case, those of you who are familiar with the logic behind Matthew Rabin’s results will understand the logic behind the type of results that I'm going to be discussing here, because the logic is pretty much the same. So the upshot is that either egalitarians and prioritarians are almost utilitarians when stakes are low, in the sense that in most cases they will make the same judgments when stakes are low, or they will be extremists when stakes are high, that is, extremely inequality averse when stakes are high.
(17:29) Okay. So let me now state Prioritarianism, or what's called the priority view, a bit more formally. So what we have here are two distributions of well-being. Bold font w is one distribution, where w1 is the welfare level of person 1 and so on, up to the welfare level of person n, denoted wn. So we have two welfare distributions involving the same number of individuals. And a prioritarian says that in order to compare these two welfare distributions, you do the following: you apply a priority weighting function ϕ, which is strictly increasing and strictly concave (I'll explain what that means in a moment), to each well-being level, and what you get is priority weighted well-being levels. You sum up all of these, and the better population is the one that has the higher sum total of priority weighted well-being. So you simply sum up these priority weighted well-being levels and you choose the distribution with the higher sum, according to prioritarianism. An implication of this is that increasing a person's welfare is always a good thing, always makes the population better, because the function is strictly increasing. But given a fixed welfare increase, the moral value is higher, or the moral gain is greater, if you give that benefit to the worse off rather than the better off. That follows from ϕ being strictly concave.
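[To make the aggregation rule concrete, here is a minimal sketch in Python. The square-root weighting and the welfare numbers from the earlier Ann and Bob slide are purely illustrative assumptions, not the particular function the lecture or the paper commits to.]

```python
import math

def prioritarian_value(welfares, phi=math.sqrt):
    """Sum of priority-weighted welfare; phi must be strictly increasing
    and strictly concave (the square root is just one illustrative choice)."""
    return sum(phi(w) for w in welfares)

def utilitarian_value(welfares):
    """Plain sum of welfare: only the total matters, not its distribution."""
    return sum(welfares)

delta = 0.05
equal   = [100 - delta, 100 - delta]   # Ann and Bob from the earlier slide
unequal = [105, 95]

# The utilitarian prefers the unequal distribution for any positive delta...
print(utilitarian_value(unequal) > utilitarian_value(equal))      # True
# ...but with a concave weighting, the equal distribution comes out better
# when delta is small enough, because the gain to Ann counts for less than
# the equally sized loss to Bob.
print(prioritarian_value(equal) > prioritarian_value(unequal))    # True
```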
(19:26) Now, this family of views is actually consistent with an infinite number of more precise views, or to put it differently, this general view will say very little about how to compare most distributions, because that will depend on the precise shape of this priority weighting function. So what we have here on the next slide is a representation of, on the one hand, a typical priority weighting function and, on the other hand, something that's not quite a priority weighting function but something more like a [inaudible 20:13] critical Sufficientarianism. So focus first on the Strict Prioritarianism, or what’s typically called simply Prioritarianism. This is a strictly concave function and the important fact about that function is that if you look at what happens to priority weighted welfare when you go to the right here on the welfare axis, you see that as you go further and further to the right, the increase in priority weighted welfare becomes smaller and smaller, even given a fixed increase in welfare. So for instance, going from zero to 20 units of welfare results in a pretty large increase in priority weighted welfare. Going from 20 to 40 units results in a smaller jump in priority weighted welfare and going from 40 to 60 results in a smaller jump still.
(21:12) Now, what I call here Sufficientarianism corresponds to Prioritarianism up to a particular point, in this case up to 20 units of welfare. So the function here is strictly concave on this particular interval and then it becomes linear. And so the intuition here is that you should benefit the worse off when they are sufficiently badly off, or to put it differently, you don't have to prioritize the worse off when they are sufficiently well off. And the reason I mention this sufficientarian view as well is that, although I will from now on talk about Prioritarianism, the results I'm going to be presenting also hold for the sufficientarian view, which we might call “weakly” prioritarian because the function here is only weakly concave: it's strictly concave up to a point and then it's linear after that.
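[A minimal sketch of the kinked weighting function just described, again with an illustrative concave segment (a square root) and the threshold of 20 units taken from the slide:]

```python
import math

def sufficientarian_phi(w, threshold=20.0):
    """'Weak' prioritarian weighting: strictly concave below the threshold,
    then linear, so extra welfare above the threshold is not discounted
    further.  The square root and the threshold of 20 are illustrative."""
    if w <= threshold:
        return math.sqrt(w)
    # Continue linearly with the slope the concave part has at the threshold,
    # so the function stays increasing and weakly concave overall.
    slope = 1 / (2 * math.sqrt(threshold))
    return math.sqrt(threshold) + slope * (w - threshold)

# Below the threshold a fixed gain counts for more the worse off you are;
# above it, every further unit counts the same.
print(sufficientarian_phi(10), sufficientarian_phi(20), sufficientarian_phi(40))
```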
(22:15) Okay. So in fact, I'm going to be focused in this talk on just one particular version of Prioritarianism, which is called the Atkinson Social Welfare Function, of course, named after Anthony Atkinson, but has more recently been defended at some length by Matthew Adler. So the important point about this particular version of Prioritarianism is that it satisfies what's called “ratio scale invariance”, which informally just implies that it's meaningful to talk about ratios between welfare levels and percentages of welfare. Some other prioritarian views don't have that implication. Some imply only that it's meaningful to talk about differences between welfare levels, not ratios between welfare levels. So in the paper, we have results for these different types of prioritarian views, but here I'm just going to focus on the Atkinson view.
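[For reference, here is a short sketch of the standard Atkinson form; the inequality-aversion parameter and the welfare numbers below are illustrative assumptions only. Because the transformation is a power (or logarithm) of welfare, rescaling every welfare level by a common factor never changes the ranking, which is the ratio-scale invariance just mentioned.]

```python
import math

def atkinson_value(welfares, gamma):
    """Atkinson social welfare function with inequality-aversion parameter
    gamma > 0.  gamma near 0 approaches utilitarianism; larger gamma means
    stronger priority to the worse off."""
    if gamma == 1:
        return sum(math.log(w) for w in welfares)
    return sum(w ** (1 - gamma) / (1 - gamma) for w in welfares)

# Ratio-scale invariance: doubling everyone's welfare preserves the ranking.
a, b = [100.0, 100.0], [105.0, 95.0]
print(atkinson_value(a, 2) > atkinson_value(b, 2))                                # True
print(atkinson_value([2 * w for w in a], 2) > atkinson_value([2 * w for w in b], 2))  # True
```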
(23:16) And so to see what the calibration results say about the Atkinson view, consider the following small-scale trade-off. In the opening example that I gave, I started out with a large-scale trade-off, you remember, where there was the possibility of an infinite gain, and I asked what your judgment on the large-scale trade-off implies, or what it constrains you to say, about small-scale trade-offs.
(23:52) Here, I'm going to do the opposite, or the contrapositive: I'm starting with the small-scale trade-off and asking what it implies for your attitudes to the large-scale trade-off, if you are a prioritarian of the Atkinson kind. And so the small-scale example concerns a choice between two distributions like before, where one distribution is perfectly equal. So in one distribution, both Ann and Bob have the same well-being level of w. In the unequal distribution, Ann has lost slightly in comparison to the equal distribution. She has lost 0.1% whereas Bob has gained 1%. So there's a tiny gain in aggregate welfare when you go for the unequal distribution. But of course, it's not equal, so in some sense, or as some people would put it, in one respect, it is worse than the equal distribution.
(24:54) Now, I think that since the gain in total welfare is so small, many people who have prioritarian intuitions will think that the equal distribution in this small-scale case is better than the unequal one. And so now the question is, what does that imply for the following example, given that the people making the judgment are Atkinson prioritarians? What we're considering now is what I call a large-stakes trade-off. So we have here not only two people, but a whole population consisting of n times two individuals, where n is some integer. And again, we have a choice between an equal distribution, where everyone, the n times two people, has a well-being level of w, and an unequal distribution, where half the population has lost 8% compared to what they have in the equal distribution, while the other half has a welfare level of Gw, for some G. Now the question is, given the inequality averse judgment in the first case, in other words, given that the Atkinson prioritarian prefers that both Ann and Bob are at w in the previous example, what does that imply for the largest n, in other words, the largest number of people, and the largest G such that they would have to prefer equality to inequality in this second case?
(26:45) Well, maybe somewhat surprisingly, the answer is that the value for G is infinite, and maybe less surprisingly, that this holds for any number n. In other words, someone who holds the egalitarian judgment on the first case, or the inequality averse judgment on the small-scale example on the previous slide, has to say that the equal distribution is better than the unequal one, no matter how large the population is and even when the gain to half the population under inequality is infinite, or any arbitrarily large gain: if you think that infinite welfare gains are impossible, just replace “infinite” with the largest metaphysically possible gain.
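[The logic behind this answer can be illustrated numerically. The sketch below is not the theorem from the paper, just an illustration under assumed numbers: it searches, on a rough grid, for roughly the smallest Atkinson parameter gamma that rationalises the small-stakes judgment, and then checks that with any such gamma an 8% loss to half the population cannot be outweighed by any gain to the other half, since for gamma greater than 1 the Atkinson value of a welfare level is bounded above no matter how high that level is.]

```python
def atkinson(ws, gamma):
    """Atkinson value for gamma > 1 (the range that matters below)."""
    return sum(w ** (1 - gamma) / (1 - gamma) for w in ws)

# By ratio-scale invariance the comparison does not depend on the level w,
# so set w = 1 (this also keeps the floating-point arithmetic well behaved).
w = 1.0

# Roughly the smallest gamma at which (w, w) is at least as good as
# (0.999 w, 1.01 w), i.e. at which the 0.1% loss outweighs the 1% gain.
gamma = 1.5
while atkinson([w, w], gamma) < atkinson([0.999 * w, 1.01 * w], gamma):
    gamma += 0.5
print(gamma)   # several hundred: already an extreme degree of priority

# With such a gamma, the contribution of an arbitrarily high welfare level
# G*w is bounded above by 0, so even in the limit G -> infinity the unequal
# distribution (half at 0.92 w, half at G*w) cannot beat full equality at w.
equal_per_pair      = 2 * w ** (1 - gamma) / (1 - gamma)
unequal_limit_bound = (0.92 * w) ** (1 - gamma) / (1 - gamma)  # + 0 in the limit
print(equal_per_pair > unequal_limit_bound)   # True, for any population size
```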
(27:35) And so that's the prioritarian dilemma. The prioritarian, in this case the Atkinson prioritarian, cannot be what seems to be reasonably inequality averse for small stakes without being what seems to be extremely inequality averse for large stakes. So they can't have a seemingly moderately inequality averse attitude to the first case without having an extreme attitude when it comes to this latter case.
(28:05) And so, here we have a table that illustrates some of these results, some of these implications of one of our prioritarian calibration theorems. So, what we have here is that we know that some Atkinson prioritarians prefer a distribution with two people at w rather than one person having lost l percent, the small l, which is the orange numbers here, while the other person has gained 1%. Now, if we know that for some w, the implication is that for any w and for any number of people, we must prefer that the whole population is at w rather than one half having lost the large L, the red percentages here, while the other half has gained G, the percentages in the blue cells here.
(29:30) So to take an example, a different one from the one that we had on the previous slide: suppose that the small l is 0.95%. Then we know that the prioritarian prefers both people being at w rather than one person having lost 0.95% compared to w while the other one has gained 1% compared to w. That implies that they have to prefer that the whole population is at w, rather than one half of the population having lost 20%, so having only 80% of w, while the other half has an infinite amount of well-being, or the largest metaphysically possible amount. So, this is the prioritarian dilemma. You cannot be what seems to be moderately inequality averse for small stakes without being extremely inequality averse for larger stakes.
(30:34) Now, I mentioned previously that we get similar results for Moderate Egalitarianism, which I'm going to go through now in slightly less detail than I gave for the prioritarian view. But to state more formally the type of Egalitarianism that we have in mind, we need some additional notation. So, the bold font w with the square-bracket subscripts is a re-ordering of the original distribution w, where the welfare levels have been ordered according to how well off the people are. In particular, w[1] is the welfare level of the person that's best off and w[n] is the welfare level of the person that's worst off, and more generally, the welfare levels have been ordered from the highest to the lowest.
(31:39) In addition, this particular view applies a sequence of non-decreasing rank weights. So we have as many weights as welfare levels, and the highest welfare level gets assigned the lowest weight and the lowest welfare level gets assigned the highest weight. Or, more precisely, a person will never get assigned a lower weight than someone who is better off than her. And this is what will imply that a fixed welfare gain corresponds to a higher value when it's given to someone who is worse off rather than someone who is better off. In addition, we assume that these weights are all positive, which means that everyone's welfare matters to some extent, but the welfare of those worse off than others matters to a greater extent, or gets a higher weight, when it comes to the aggregation.
(32:51) And then what the Generalised Gini egalitarian says is that when we compare two distributions of well-being, or two populations, we weight each welfare level by the weight that corresponds to its rank, and then we add up all of these rank weighted well-being levels and we simply choose the one that has the highest sum; a higher sum of rank weighted well-being is better.
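[A minimal sketch of this rank-weighted aggregation, with made-up weights purely for illustration:]

```python
def gini_value(welfares, weights):
    """Generalised Gini value: sort welfare levels from highest to lowest and
    weight them by a positive, non-decreasing sequence, so that worse-off
    people receive weights at least as large as better-off people."""
    ranked = sorted(welfares, reverse=True)   # w[1] best off ... w[n] worst off
    return sum(a * w for a, w in zip(weights, ranked))

weights = [1, 2]   # illustrative rank weights: the worse off counts double
print(gini_value([100, 100], weights))   # 300
print(gini_value([105, 95], weights))    # 295: equality preferred despite equal totals
```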
(33:31) And so an implication of this, as I said before, is that it's always a good thing to increase people's welfare, no matter how well off they already are. But increasing the welfare of those who are worse off than others, corresponds to a greater jump than increasing the well-being of someone who is better off than others. In other words, this view also prioritizes the worse off not because they are badly off in absolute terms, but because they're worse off than others. And so the formal implication of this is that this view is not separable across persons or across well-being levels, unlike the priority view.
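[To see the non-separability concretely, here is a small sketch with invented numbers and weights: whether one change to Ann's and Bob's welfare is better than another can depend on the welfare of a third person, Carol, even though her welfare is the same in both distributions being compared.]

```python
def gini_value(welfares, weights=(1, 2, 3)):
    """Rank-weighted sum with illustrative non-decreasing weights,
    applied from the best off (weight 1) to the worst off (weight 3)."""
    ranked = sorted(welfares, reverse=True)
    return sum(a * w for a, w in zip(weights, ranked))

# Ann and Bob: (100, 10) versus the more equal but lower-total (72, 28).
for carol in (0.0, 1000.0):
    x = gini_value([100, 10, carol])
    y = gini_value([72, 28, carol])
    print(carol, x > y)
# Carol at 0:    the more equal (72, 28) pair is better  -> prints False
# Carol at 1000: the higher-total (100, 10) pair is better -> prints True
```

[Under the priority view, by contrast, the Ann and Bob comparison comes out the same whatever Carol's level is, because the prioritarian sum is separable across persons.]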
(34:19) Now, the dilemmas that we get for this type of view are a bit more complicated to state than the dilemmas we get from the prioritarian view, partly because how extreme the dilemmas are depends on how large the population is. We get more extreme results for larger populations. But to illustrate an example or a dilemma that the Gini egalitarian faces, consider the following example.
(34:48) So imagine that you are the social planner of some society and you expect that the population size of your society will be one billion in the year 2100. At some previous time, time t, however, the population size was only 10,000 people and at that time, everyone was equally well off at some level w – 2. Now, given this situation, we consider two possibilities. One possibility is that the 10,000 people at time t are brought up to w. So each person gets two extra units of well-being. And one way to think of this more concretely is that they decide to use some huge amount of resources to make themselves better off rather than saving for the future. An effect of that is that the people in year 2100 will also be at level w. Alternatively, there's the possibility that the 10,000 people at time t stay at w – 2, so they don't increase their welfare, and you can think of this more concretely as them deciding to save for the future, which has the effect that the people at 2100 are also all equally well off at this huge welfare level, namely w + 6.76 × 10^449. So this is basically w plus almost seven with 449 zeros behind it, so it's a huge number, and the question is, which one of these would be better?
(36:43) Now, again, I suppose, or assume, that even people with egalitarian intuitions, or at least many of them, will think that there is some level w such that they will prefer that the people at t save for the future, despite that not making them better off. In other words, I think that many egalitarians will think that as long as the people at time t are sufficiently… or are not too badly off anyway, then it would be better if they saved for the future than if they do not save for the future. But it turns out that if someone who is a generalised Gini egalitarian says that, then that constrains the attitudes that they can take towards smaller-scale inequalities, for instance, within a generation.
(37:38) So to take an example of what the generalised Gini egalitarian cannot say if they think that it would be better if the people at t saved for the future, consider again a population of 1 billion people, and let's focus on the 10,000 people that are worst off within that population, so the bottom 0.001%. Now, someone who says that it would be better for the people at t to save for the future cannot say the following: that when we look at two adjacently ranked individuals, in other words, when we look at pairs within this bottom 10,000 people in the distribution, then for each of these pairs there's some level w such that it would be better if both individuals of that pair are at w rather than one being at w + 1 while the other one is at w – 0.9.
(38:53) I think that's counterintuitive, because I think that a lot of egalitarians would want to say that when we look at these bottom 10,000 people, the 10,000 worst off, then for each pair of adjacently ranked individuals there has to be some w such that the equal distribution is better than the unequal distribution here, where one has gained one unit and the other one has lost 0.9 units. But that they cannot say if they hold the seemingly reasonable attitude towards the first example, namely, if they think that it would be better that the people at t save for the future rather than not save for the future.
(39:43) Okay. So here's a table that again illustrates the dilemmas, the calibration dilemmas, but in this case for the egalitarian. I won't go through this table like I did with the prioritarian one, but maybe one thing that you will immediately see is that in this case, we never get to infinity. Intuitively, we get to more extreme results more quickly, but we never get to quite as extreme results, namely to giving up on infinite welfare gains.
(40:25) Okay. So far, it might seem that what I've been saying is good news for utilitarianism, at least when compared to these two alternatives of prioritarianism and egalitarianism. So, these calibration results that I have been discussing may seem to diminish the intuitive appeal that egalitarianism and prioritarianism have over utilitarianism. And I think one reason why that would be the case is that the Gini egalitarian and the prioritarian view are meant to be non-extreme alternatives to what Parfit called “strong egalitarianism”, in other words, egalitarianism of the leveling-down kind. But now we've seen that actually, they cannot be, or they aren't, non-extreme unless they give up what seems to be reasonable aversion to small-scale inequality. And then one might think that, if we cannot accommodate our aversion to small-scale inequality even when we go for these egalitarian and prioritarian views, then their advantage over utilitarianism isn't what we thought it was.
(41:46) But actually, these results also bring some bad news for utilitarianism. In particular, they seem to undermine, or at least weaken, the utilitarian's favorite explanation of the badness of resource inequality, namely, the explanation that appeals to the “decreasing marginal utility” of resources.
(42:07) And so I suppose that most people will be familiar with this general idea of decreasing marginal utility. To explain it very briefly, it is, I guess, an unquestionable psychological fact that, at least within some ranges, some goods, such as money, do have decreasing marginal utility, as seen, for instance, by the fact that if you give $100 to, let's say, a poor college student, then it might make their life a little bit better. It might allow them to have a nice evening. Whereas if you give the same amount to Bill Gates, it won't make his life better at all, or maybe only by some very, very small degree. In other words, a fixed monetary amount given to a poor person will, in general, result in a greater welfare gain than giving the same amount to a better off, richer person.
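[As a toy numerical illustration, assuming, purely for the sake of the example, that welfare is logarithmic in wealth (the wealth figures are invented):]

```python
import math

def welfare_gain(wealth, amount, u=math.log):
    """Welfare gained from receiving `amount`, under an illustrative
    assumption that welfare is logarithmic in wealth."""
    return u(wealth + amount) - u(wealth)

print(welfare_gain(5_000, 100))             # ~0.0198: a noticeable gain for a poor student
print(welfare_gain(100_000_000_000, 100))   # ~0.000000001: essentially nothing for a billionaire
```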
(43:12) And because of this, most or many utilitarians tend to prefer an equal distribution of money over an unequal distribution. And some utilitarians go further and say that this phenomenon, decreasing marginal utility, completely explains our intuitive aversion to inequality. Now Pigou didn't go that far, maybe, but here's a quote from him, where he says that:
“it is evident that any transfer of income from a relatively rich man to a relatively poor man of similar temperament, since it enables more intense wants to be satisfied at the expense of less intense wants, must increase the aggregate sum of satisfaction.”
And so, this would be an explanation for why a utilitarian should prefer an equal distribution of income rather than an unequal distribution, given a fixed level of total income. But of course, many people think that if there's a choice between an unequal distribution and an equal distribution, and even if the equal distribution results in a slightly lower total income, one might still prefer the equal distribution. But if a utilitarian does that and tries to explain that by appealing to decreasing marginal utility, then they face a version of the prioritarian dilemma, and in fact, they face Rabin’s original dilemma slightly re-interpreted.
(44:53) And to take an example, consider the following (this will be a small-stakes trade-off). Let's just assume that these monetary amounts represent Ann and Bob's wealth. And so what we have here is a choice between, on the one hand, an equal distribution, where Ann and Bob both have a wealth of £100,000, and, on the other hand, the unequal distribution, which has a slightly higher total, but which also has this inequality, because Ann has £101,000 and Bob has £99,500.
(45:38) Now, if utilitarians say that the equal distribution in the first case is better than the unequal distribution in the first case due to decreasing marginal utility, then they will have to say that the same holds in the second case for the same reason. And it's the ‘for the same reason’ part that I'm going to be focusing on. So in the second case, again, we have a choice between two distributions, let's say wealth distributions. In the equal distribution, Ann and Bob both have £75,000. In the unequal distribution, Ann has £17 trillion and Bob has £67,000. So I think that it could well be that the equal distribution in the second case is better than the unequal distribution, but I don't think that that can be the case because of decreasing marginal utility. And if it's not the case that decreasing marginal utility makes the equal distribution better than the unequal one in the second case, in other words, if, as I’ll argue or assume, looking only at decreasing marginal utility we cannot say that the equal distribution is better than the unequal one there, then we cannot say in the first case either that the equal distribution is better than the unequal one because of decreasing marginal utility. So, in this second example, what we have in the unequal distribution is that Ann’s wealth has been multiplied by 227 million from £75,000. Bob, on the other hand, has only lost £8,000.
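[The same calibration logic as before can be run for a utilitarian who explains the small-stakes judgment by decreasing marginal utility alone. The sketch below assumes an isoelastic (constant relative risk aversion) utility of wealth, which is only one possible shape, but it shows the pattern: the curvature needed to prefer the equal £100,000 distribution over the £101,000 and £99,500 one is so strong that it then also ranks the equal £75,000 distribution above Ann having £17 trillion and Bob £67,000.]

```python
def total_utility(wealths, rho):
    """Sum of isoelastic utilities of wealth, with curvature parameter rho > 1."""
    return sum(x ** (1 - rho) / (1 - rho) for x in wealths)

# Work in units of GBP 100,000; with isoelastic utility the ranking is unchanged.
small_equal, small_unequal = [1.0, 1.0], [1.01, 0.995]
large_equal, large_unequal = [0.75, 0.75], [170_000_000.0, 0.67]   # GBP 17 trillion and 67,000

# Roughly the smallest curvature that makes the equal small-stakes
# distribution at least as good as the unequal one.
rho = 1.5
while total_utility(small_equal, rho) < total_utility(small_unequal, rho):
    rho += 0.5
print(rho)   # roughly 100: an implausibly extreme degree of curvature

# With that curvature, the utility of GBP 17 trillion is bounded above, so the
# equal distribution is also ranked better in the large-stakes case, which is
# the implication the decreasing-marginal-utility explanation is stuck with.
print(total_utility(large_equal, rho) > total_utility(large_unequal, rho))   # True
```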
(47:39) Now, I think that if it is the case in the large-stakes example, or the large-scale trade-off, that the equal distribution is better than the unequal one, which I think it could be, then that will have to be because of something like the social or political or structural effects of the great differences involved. So for instance, if we are imagining that Ann and Bob are part of a population where other people, let's say, also have a wealth of roughly £75,000, then if Ann becomes that much richer than everyone else, she will hold a lot of power that might be detrimental to the welfare of everyone else. And for that reason, it might be that the equal distribution is better than the unequal distribution in this case.
(48:32) But note that this is an explanation in terms of the relationship between Ann and Bob’s wealth. It's not an explanation in terms of the decreasing marginal utility of money, which would have to look at the well-being that each pound buys for Ann and for Bob separately. So an explanation in terms of decreasing marginal utility would have to look first, say focusing on Ann, at how much welfare each additional pound buys her, and then, looking at Bob, at how much his welfare diminishes when he loses each additional pound. In other words, an explanation in terms of decreasing marginal utility is an explanation in terms of Ann's and Bob's welfare, looking at them separately, whereas the social and political explanation of why the equal distribution is better does not work by looking at them separately; it is about looking at the relationship between them.
(49:38) So I think that the upshot of these calibration dilemmas for the utilitarian is that the utilitarian can no longer appeal to decreasing marginal utility in order to explain the inequality aversion that many of us have when looking at these small scale inequalities in, let's say, income or wealth. Okay. So I promised that I would come back to this assumption that I claim connects all of this to the global priority setting and so here is the connection:
(50:14) So I think that when we try to rationally decide what global problems to prioritise, we need some sort of framework within which we can compare these priorities. Otherwise, we are left with just brute intuition by looking at each potential priority, and so on. Of course, our moral intuitions are always going to play a large role, but I think that in order to rationally make these comparisons, we need some sort of framework in order to make these intuitions more concrete and more precise.
(50:59) And what I think is the most sophisticated framework for doing that is the social welfare framework, a framework based on social welfare functions. And these social welfare functions are, or at least, tend to be prioritarian, egalitarian or utilitarian. Maybe there are other contenders, but at least the social welfare functions that are traditionally used when comparing policies have either the prioritarian form or the Gini egalitarian form or the utilitarian form. Now, of course, even if you agree with all that, we can still ask a lot of questions about the social welfare function framework.
(51:45) So you might ask, “Well, shouldn't we assume some sort of discounting? After all, many social welfare functions assume that we discount the well-being of future generations.” I'm not sure if we should do that, but for the purposes of this talk, the important thing to say is just that the dilemmas become even more extreme if we discount the well-being of future generations. So unless someone wants to ask about that, I won’t explain why. But that's just to say that discounting is not a way of getting out of the dilemmas. If anything, it makes the dilemmas worse.
(52:24) Now, another question that you might ask about the social welfare function framework, which I find harder to reply to, is, “Shouldn't we apply different social welfare functions to different societies?” So you might think that… Remember, I opened with an example of an intergenerational trade-off, and you might think that you should apply different social welfare functions to different generations – so one welfare function for each generation, or maybe one welfare function for each society.
(53:05) Now, of course, this is going to be a hard case to make for the prioritarians, at least those who have been motivated by the arguments of Parfit, who defended the priority view by appealing to the intuition that it's not bad that people are badly off because others are better off; it's simply bad that people are badly off, period. If one holds that kind of view, then I think it's hard to motivate applying different social welfare functions to different societies. But maybe it's less ad hoc for an egalitarian to respond to these dilemmas in that way. Of course, if they do that, then they won't avoid the formal dilemmas; the dilemmas as such, or the calibration theorems, don't say anything about comparing different societies. But an egalitarian might say that the extreme trade-offs that I have been talking about only arise when we are looking at trade-offs between generations or between societies. And they might argue that the attitudes that we take when it comes to trade-offs within a society don't constrain the attitudes we should take when looking at trade-offs between societies, because we should apply different social welfare functions to different societies. Maybe that's a potential response by the egalitarian to these dilemmas, but I'm just going to leave that as a possibility. I'm not sure exactly how to respond to that.
(54:47) But in any case, I think that because rational global priority setting is, or should be, based on one of these views, global priority setting does face these dilemmas, and it has to either find a way to live with these dilemmas or respond to them in some way.
(55:10) And so in light of these dilemmas, or in light of these calibration theorems on which these dilemmas are based, there are basically three options, all of which are perfectly possible. One is to decide to give almost no priority to the worse off when little is at stake – so that's one horn of the dilemmas. Or one could decide to simply accept that we should give extreme priority to the worse off when more is at stake. Maybe one might think that the demands of justice are actually just that much stricter than what we intuitively thought. Or one might simply reject prioritarianism and Gini egalitarianism.
(55:52) Now, I suspect that most people in the global priorities or effective altruism community would not be very happy with option two. And the reason I think that is that within these communities many people seem to find it important that we make it possible for humanity to reach its full potential, as it were, by, for instance, making sure that there will be future generations, not only because the future generations will be so many, but partly, or at least to some extent, because the future generations are likely to experience such extremely high well-being that it will be worth it, just for that reason, to keep them around. So in other words, I think that many people, in particular within these communities, don't think that the only important thing is to increase the well-being of the badly off. It's also important to make sure that some people get the chance to experience extremely high well-being.
(57:08) Now of course, one could, as I imagine, go for the first option, go for that [inaudible 57:14] dilemma. But then I think that one might wonder how much of the intuitive appeal of prioritarianism and egalitarianism, in comparison to utilitarianism, is really left. If we cannot even accommodate what seem to be reasonable inequality averse attitudes for small stakes, then we might wonder: are egalitarianism and prioritarianism really so much better than utilitarianism?
(57:46) Okay. That's all I have to say for now, so thanks a lot for listening. It's gone on for almost an hour. So I look forward to hearing your thoughts or questions. I guess I will stop screen sharing.
HILARY GREAVES: (58:07) All right. Thanks so much, Orri, for an excellent talk.