23 May 2019, Global Priorities Institute
HILARY GREAVES: (00:05) Thanks so much everyone for coming. This talk is run through the offices of the Global Priorities Institute, a relatively new research institute that was established here in Oxford in 2018, just last year. GPI exists to conduct and to promote academic research into issues that arise in the context of thinking about the question of what we should do with a fixed amount of resources insofar as our aim is to do the most possible good with those resources. The fact that the resources are finite is of course crucial, because there are many problems in the world. If we had infinite resources we could solve all of them, but since we have finite resources we have to think carefully about how to prioritize between the different problems that we could try to take on. This enterprise draws naturally on central themes both in economics and in philosophy, so GPI is an interdisciplinary institute involving at the moment mostly those two academic disciplines. The aim of this Parfit Memorial Lecture Series is to focus specifically on the philosophy side and to exhibit and to encourage academic philosophy research into this enterprise that we call global priorities research. We also run a counterpart memorial lecture series in economics. That's the Atkinson Memorial Lecture Series, and this year's Atkinson Lecture, for those of you who are interested in that, will also be in this building on June the 11th.
The name of Derek Parfit will of course be familiar to just about everybody in this room. He was a towering figure in moral philosophy, both in Oxford and globally. Parfit, as you all know, sadly passed away just a couple of years ago, in fact just about one year before GPI was formed. So sadly the life of Parfit and the life of GPI didn't overlap. If he had lived a little bit longer he would have been a very natural collaborator for GPI, since he himself was very sympathetic to the enterprise of effective altruism. But in any case we have the honor (02:00) of presenting this memorial lecture series in his name.
Before I introduce today's speaker I'll just say a couple of quick things about logistics. The lecture will run until about six o'clock, and then we have a half-hour Q&A session following that. You're very welcome to join us for drinks, which will be served just around the corner in the common room area. And then tomorrow we have a follow-up, more extended discussion session from 3:00 p.m. until 4:00 p.m. That will be in Seminar Room E, also in this building, one floor from here. In particular, if there are graduate students here who would like to explore the issues at a more leisurely pace with Lara, you are strongly encouraged to come to that session tomorrow as well as today's Q&A session.
Okay. So without further ado then, this year we're delighted to welcome Professor Lara Buchak to deliver the first annual Parfit Memorial Lecture. Lara is an associate professor of philosophy at the University of California, Berkeley. Her primary research interests are in decision theory, game theory and rational choice theory. Her 2013 book, “Risk and Rationality,” concerns how an individual ought to take risk into account when making decisions. Unlike standard Bayesian accounts, which tend to draw a sharp extensional separation between what the ideally rational person does when dealing with risk on the one hand and what real people, with all their foibles, do on the other, the approach that Lara prefers tends to vindicate the ordinary decision maker even from the point of view of ideal rationality, and I think we'll hear a little bit about that in today's talk. Lara also has quite a diverse array of research interests beyond that. These include the philosophy of religion, ethics and epistemology. Among other things, she's written on distributive justice, on the relationship between partial and full belief, on issues of optimal stopping and on the nature of faith. Her title today will be “Should Effective Altruism Focus on Global Health or Existential Threats?” Please join me in welcoming her. (04:00)
[applause]
LARA BUCHAK: (04:06) Thank you. Can everyone hear me? Raise your hand if you can't hear me. So, idiotic instruction, but if you think someone can't hear me, raise your hand. So thanks so much for having me. Thanks to both the Global Priorities Institute and especially to Derek Parfit, in whose memory this lecture is given. Unfortunately I never had the honor to meet him or talk with him, but his work has been incredibly influential within my discipline, within ethics more generally, and I think the sort of questions he was asking, the things he cared about, “What should we do?” “What should our life be about?”, are the kinds of questions that really animate me as well. So, hopefully this lecture will somewhat do him justice. Okay.
So what I'm going to talk about today is the general question of, “How should risk and uncertainty affect our charitable giving?” This lecture is going to have four parts. First, I'm going to talk both conceptually and formally about how to think about risk-aversion and its counterpart risk-seeking, and uncertainty-aversion and its counterpart uncertainty-seeking. Next, I'm going to apply that framework to two different questions that effective altruists have been interested in. First the question of, “Should we diversify in our charitable giving?” So if we have a fixed amount of money to give, should I give it all to one charity, maybe whatever charity I think is the best in some respect? Or should I try to divide it among a number of charities? That’s the second part. Third part, another question that animates effective altruists, namely, “When I'm choosing between (06:00) giving to causes or programs that seek to reduce existential risk, roughly programs that have a very small and sometimes unknown probability of making a really big difference, and on the other hand giving to causes like health amelioration programs that generally have a really high and known probability of doing some amount of good, an amount that, important as it is, sort of pales in comparison to the amount of good done by an existential risk reduction program if it's successful, which should I choose?” So on both of these questions I'm going to talk about how our attitudes towards risk and uncertainty affect how we should answer them. And then finally, so far I will have just given a sort of conditional answer: if we have these attitudes, here's what we should do. But as ethicists we care about something else, namely, which attitudes should we have when it comes to moral decision-making. So I'm going to argue that when we're making our moral decisions we should have a particular attitude towards risk and a particular attitude towards uncertainty, and these are the attitudes that should guide our charitable giving.
And I should say I realize there's somewhat of a mixed audience here. So there are ethicists who are maybe a little bit familiar with formal material but don't love it, and then on the other hand there are economists for whom all the formal material might be completely old hat. So I'm going to try to split the difference and explain things in such a way that if you don't have a background in this you should be able to follow. I think what's important here today is more the conceptual points than the particular mathematical formalisms that are going to be behind them. So, you know, if you look at the handout and you're like, “Uh, equations! I hate that!” (08:00) I'm going to try to explain things in a more informal way. Okay.
So just to start, there's a standard theory of decision-making and then there are two variants on it, one allowing for different attitudes towards risk (this is the one that Hilary mentioned I've been arguing for) and one allowing for a range of attitudes towards uncertainty. Okay. So I'll just briefly explain the standard theory and then the modifications we might make to it. So let's say we have… It's working? Okay. So we have some gamble, and a gamble just says what happens, and how good that thing is, under certain states of the world or possibilities. So here we have three possibilities, E, F and G, and here we have the gamble that yields some consequence of utility value one if E happens, some consequence of utility value two if F happens, and then some consequence of utility value three if G happens. We can put this on a graph, and the expected utility of this gamble is just going to be the area under the curve. And according to the standard theory, rational people maximize expected utility. So they pick the gamble that has the highest area under the curve. And the way to conceptually think about this is… Okay, look. This gamble has three possibilities. Possibility one yields something of utility one. Possibility two yields something of utility two. Possibility three yields something of utility three. We want to figure out how these possibilities come together to yield a single value for the gamble, and we say, (10:00) “How valuable is each of these possibilities? Its utility value.” “How much does that utility value count towards the overall value of the gamble?” Well, it counts by exactly as much as the probability that you'll get it. So, if you're going to get something of utility value two with probability one-third, the contribution that makes to the value of the gamble is one-third times that total value. Okay. So, that's the standard theory.
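For those following along in text, here is a minimal sketch of that standard calculation, using the three-outcome gamble from the example above (utilities 1, 2 and 3, each with probability one-third). The function name and the numbers are purely illustrative.

```python
# A minimal sketch of the standard (expected utility) calculation for the
# lecture's example gamble: utilities 1, 2 and 3, each with probability 1/3.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

gamble = [(1/3, 1), (1/3, 2), (1/3, 3)]
print(round(expected_utility(gamble), 3))  # 2.0 -- the "area under the curve"
```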
Now, there's another way to think about what this theory says, and that will set the stage for why I think this theory doesn't actually fully describe rationality. So, as I've been talking about, there are three considerations in this gamble: you might get one, you might get two, you might get three. But another way to think about what this gamble says is that there are three considerations of the following form. For sure you get one util. There is some probability, in this case two-thirds, that you'll get another util, and finally there is some probability, namely one-third, that you'll get an additional util beyond the other two. And again, what expected utility maximization says is that the weight each of these considerations gets in your decision-making has got to be its probability. So: you get one, probability one; I might get an additional one with probability two-thirds; I might get an additional one with probability one-third. But I say that doesn't follow from what instrumental rationality is supposed to be. So instrumental rationality is supposed to tell us how to take the means to our ends. If we care about (12:00) things to a certain degree and we know various actions make those things happen with certain probabilities, what should we do? And I say, just from knowing those two facts, we haven't got the answer yet. So just knowing what utility you assign to consequences and what probability you assign to getting each of these consequences doesn't tell us what you should do. Why? Because there are a variety of rational ways to take these three considerations into account in your decision-making. For example, you might say, “Gosh, I'm just really concerned with how well things go if things turn out pretty poorly. So it matters a lot to me that I'm going to get one of these utils here. Now this two-thirds chance that I'll get an additional util, that matters to me, just not as much.” So we can think of the weight I give to this: it's not as much weight as its probability value would suggest. And the chance of getting this final util with probability one-third, you know, that matters to me too. It makes the gamble much better, but things that only happen in some of the best states, I just don't care about that that much. So the weight of this consideration shrinks considerably, and we can look at the new area under the curve. Call this the risk-weighted expected utility of the gamble for a person that's averse to risk. Or conversely you might say, “I really love risk. In fact the chance of getting three from this gamble is a really good-making feature of that gamble. That means a lot to me. So what (14:00) I'm going to do is say this chance is worth proportionately more than its probability value would suggest. So maybe I'll weight it this much, and this one is worth proportionately more too, so maybe I'll weight it this much.” So the basic idea here is that, just as your utility and probability functions are things you need to know when figuring out what to do, you also have to know the weight of these boxes, which is to say you have to know how important the top p-portion of outcomes is to you when you make decisions. So are you the kind of person that places more value on worst cases and not a lot of value on, say, what happens in the top 10% of states, or are you the kind of person that places not that much value on things that only happen in worst states but a lot of value on things that happen in better states?
So those two are the basic conceptual ideas behind the theory here. There are three things involved in choice: a utility function, which measures how much you like various consequences; a probability function, which measures how likely these consequences are to obtain; and a risk function, which measures how much you care about the top p-portion of outcomes in decision-making. And we can think of this risk function in one of two ways. We can think of it as the way you make trade-offs between your worst-off future self and your best-off future self. So maybe you care a lot more about what happens to your worst-off future self than what happens to your best-off future self, or vice versa. My claim is: however you make these trade-offs is totally fine, but we need to know how you do it in order to figure out what decision you should make.
Another way to think about your risk attitudes here (16:00), what I call the risk function, is just in terms of how much these boxes get stretched or shrunk relative to their probability; that's just given by r(p). Yet another way to think about the risk function is as a trade-off between two virtues of practical rationality. So on the one hand we have the virtue of prudence, making sure that no matter what, things don't go too badly. On the other hand we have the virtue of venturesomeness, making sure there's some chance that things go really really well. And I say it's up to you how to trade these two things off, or at the very least there's a wide range of ways you can do it. Okay. And so we can characterize what it is to be risk-avoidant, risk-inclined or globally-neutral. So the risk-avoidant person has a convex risk function. That means these boxes shrink more and more as they concern smaller and smaller amounts of probability. Or another way to put that: as the good thing happens in a smaller and smaller portion of states, you care about it less [inaudible 17:11]. And a risk-inclined person on the other hand has a concave risk function, which is to say these boxes get stretched, and they get stretched proportionally more and more as probabilities get smaller and smaller. So as the good thing happens in a smaller and smaller portion of states, you care about it more and more. Or of course we have the globally-neutral person; that's just the expected utility maximizer, for whom the width of these boxes is just their probability.
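As a rough numerical illustration of that weighting, here is a sketch of the risk-weighted value of the same three-outcome gamble, in which each increment of utility is weighted by r applied to the probability of reaching it. The particular risk functions used (r(p) = p squared, p, and the square root of p) are illustrative choices, not ones endorsed in the lecture.

```python
# A sketch of risk-weighted expected utility for the 1/2/3 gamble: take the
# worst utility for sure, then add each further increment weighted by r of
# the probability of doing at least that well.

def reu(outcomes, r):
    """outcomes: list of (probability, utility) pairs; r: risk function on [0, 1]."""
    outcomes = sorted(outcomes, key=lambda x: x[1])          # worst to best
    total = outcomes[0][1]                                   # the guaranteed minimum
    for i in range(1, len(outcomes)):
        p_at_least = sum(p for p, _ in outcomes[i:])         # chance of reaching this level
        total += r(p_at_least) * (outcomes[i][1] - outcomes[i - 1][1])
    return total

gamble = [(1/3, 1), (1/3, 2), (1/3, 3)]
print(round(reu(gamble, r=lambda p: p), 3))        # 2.0   (risk-neutral: plain expected utility)
print(round(reu(gamble, r=lambda p: p ** 2), 3))   # 1.556 (convex r: the gamble is worth less)
print(round(reu(gamble, r=lambda p: p ** 0.5), 3)) # 2.394 (concave r: the gamble is worth more)
```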
Another way to think about what the risk-avoidant person is doing is with the notion of a mean-preserving spread. Let me just explain what this means. So here we have some (18:00) “gamble” that just gives you utility two no matter what. We could spread the probability out while preserving the mean. So for example we could take equal sizes of this probability mass and move some to utility one and some to utility three, so that we get this gamble that's an equal chance of one, two and three. That's called a mean-preserving spread: it preserves the mean while spreading the utility out. If you're averse to risk, you don't like that. You'd rather utility be less spread out around the mean, more concentrated towards the mean. So having a convex risk function is just equivalent to not liking mean-preserving spreads. Okay. So whichever way is conceptually easier for you to think about it: you don't like spreading utility out, or you don't care about utility that’s only obtained in the best states as much as you care about what happens in the worst states. Both are equivalent ways of thinking about it. Okay.
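Here is a quick numerical check of that point, again with the purely illustrative convex risk function r(p) = p squared: the sure utility of two and the spread-out gamble have the same mean, but the risk-weighted value of the spread is lower.

```python
# The sure thing (utility 2 for certain) versus its mean-preserving spread
# (1/3 each on utilities 1, 2, 3), evaluated with an illustrative convex r.

r = lambda p: p ** 2                  # an illustrative risk-avoidant risk function

sure_thing = 2                        # utility 2 with certainty
spread = 1 + r(2/3) * 1 + r(1/3) * 1  # worst level 1, plus risk-weighted increments

print(sure_thing, round(spread, 3))   # 2 vs 1.556: the spread-out gamble is dispreferred
```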
Finally I want to mention the distinction that philosophers make between rational decisions and reasonable decisions. So according to expected utility maximization and my modification, risk-weighted expected utility maximization, any values of these functions are permissible as long as they're consistent with each other. So that's what decision theorists say. They say you can have any utility function you want as long as you don't simultaneously, say, like chocolate ice cream better than vanilla and also like vanilla better than chocolate. The only constraints are constraints of consistency. Similarly you can have any probability or risk function you want as (20:00) long as they're consistent. So that’s what decision theorists say. Ethicists and epistemologists don't see it this way. They want to think there's something wrong with some of these assignments. So one way to put this is in terms of the distinction between what's rational in the sense of consistency and what's reasonable in the sense of actually mapping on to the entities in the world we care about. So ethicists are going to want to say, “No. Not all utility functions are okay, because it's not okay to prefer that somebody else dies to scratching your finger.” That's just not a moral preference. So the utility function is going to be constrained by what the good is. Similarly the probability function is going to be constrained by, say, what scientists actually discover about the world, and the risk function, importantly for us, is going to be constrained by which trade-offs between the relevant virtues are not just consistent but are also reasonable. So we might think, “Yeah, it's just unreasonable to put all the weight on prudence and none on venturesomeness,” so that for example you're never willing to cross the street because of the possibility of getting hit by a car. We also might think it's not that reasonable to jump out of an airplane with a homemade parachute; this is putting a little too much weight on venturesomeness, not enough on prudence. But there is going to be a range of allowable attitudes, such that some people who are more prudent and some people who are more venturesome are both going to count as reasonable. Okay. So that's risk-aversion and risk-inclination.
There's another phenomenon that's going to be important to us. So far we've been talking about (22:00) cases in which you know or can reasonably assign probabilities to the various possibilities, but the world out there isn't always like this. So for example, if I'm offering you this gamble that has three possible outcomes, you might be sure that E, F and G each have probability one-third, but you also might not know. You might think, ”Uh! There's a range of probabilities I could assign.” So, you know, maybe E actually has a much lower probability, maybe it has a much higher probability. So the boundary here between E and F is going to be a range. We kind of don't know what the true probability is within the range, and similarly the boundary between F and G is going to be a range. Okay. So that means that this might be the right possible graph, this might be the right possible graph, and so on. So what should I do when I'm uncertain? Well, I actually don't have a view about rationality here, as you'll see later; I give a view about morality. But let's assume for the minute that lots of ways of approaching this gamble are rational and lots of them are actually reasonable as well. How should you think about this? Well, one popular way to think about this is the thing called the alpha, or Hurwicz, criterion, and what that says is just… Let's think about two possibilities. There's the pessimistic possibility, where this gamble has the worst expected utility it might have. So in this case it's the possibility where the probability of the worst case is as big as possible (24:00) and the probability of the best case is as small as possible. Okay. That's one possibility, the pessimistic possibility. Now think about the optimistic possibility. That's the possibility in which this gamble has the most expected utility it might have. That's the one in which the probability of G is largest and the probability of E is smallest. So we have the optimistic possibility and the pessimistic possibility. Now we assign some weight to each of these possibilities and average them together. So if you assign more weight to the pessimistic possibility, you're going to count as uncertainty-averse. You don't like uncertainty, because you think, in cases that have uncertainty, I'm going to go with pessimism: I'm going to treat the probability distribution that's worse for me as a little more likely than the one that’s best for me. On the other hand, if you assign more weight to the best distribution, you're going to count as uncertainty-seeking. So you're going to be a little optimistic. You're going to think, “Look, of these distributions, the one which has the worst overall value for me and the one which has the best overall value for me, I'm going to assign a little more weight to the one that's best.” If you're uncertainty-neutral then you're going to give each of these distributions equal weight.
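Here is a minimal sketch of that alpha (Hurwicz) style of evaluation; the expected-utility range [1.6, 2.4] and the alpha values are made-up numbers, used only for illustration.

```python
# The alpha (Hurwicz) criterion: average the pessimistic and optimistic
# evaluations of a gamble, with a weight alpha on the pessimistic one.

def hurwicz(pessimistic_value, optimistic_value, alpha):
    """alpha is the weight on the pessimistic value; alpha > 0.5 is uncertainty-averse."""
    return alpha * pessimistic_value + (1 - alpha) * optimistic_value

# Suppose the gamble's expected utility could be anywhere from 1.6 to 2.4.
print(round(hurwicz(1.6, 2.4, alpha=0.7), 2))  # 1.84 -- uncertainty-averse
print(round(hurwicz(1.6, 2.4, alpha=0.5), 2))  # 2.0  -- uncertainty-neutral
print(round(hurwicz(1.6, 2.4, alpha=0.3), 2))  # 2.16 -- uncertainty-seeking
```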
Now I should mention that for our purposes there's an additional concept that's slightly different from being uncertainty-neutral, and that's there being no uncertainty, or rather resolving the uncertainty before you make the decision. So you might make the decision, for example, by first picking the middle probability and then just treating it like it's a gamble with known (26:00) probabilities. In our examples, because they only involve two possible outcomes and the probability distributions aren’t that complex, being uncertainty-neutral and there being no uncertainty are going to come to the same thing, but they might not always. So I just thought I’d give you a heads up that those things are slightly different. Okay.
We can of course also take both these things into account at once. So here we just think about the uncertainty, so these probability distributions form a range. We apply the risk attitude to each member of the range, or at least to each end of the range, and then we take the worst possible risk-weighted expected utility, give it some weight, and the best possible risk-weighted expected utility, give it some weight. So we can have any combination of these two attitudes. We can be risk-avoidant but uncertainty-seeking. We can be risk-avoidant and uncertainty-averse, or globally-neutral in the one and seeking in the other. They’re independent: one is about what you do with the probabilities when you have a range of probabilities and you're not sure where the true probability falls, and the other is about how much to take a possibility with a certain probability into account. So the first is the epistemic question, “What probability should I use for decision-making?” And the second is the practical question, “How should this probability factor into my decision-making?” Okay. So that is, I think, the theoretical part (28:00) of the talk. That’s all you need to know about these two different concepts, risk-avoidance and uncertainty-aversion.
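Putting the two together, here is a sketch of one way that combination can be computed: evaluate the risk-weighted value at each end of the probability range, then mix the worst and best of those with an uncertainty weight. The convex risk function, the probability range [0.25, 0.75] and alpha = 0.7 are all illustrative assumptions, not values from the lecture.

```python
# Combining a risk attitude with an uncertainty attitude for a two-outcome gamble.

r = lambda p: p ** 2          # illustrative risk-avoidant (convex) risk function
alpha = 0.7                   # illustrative uncertainty-averse weight on the worst case

def reu_two_outcomes(p_good, u_bad, u_good, r):
    """Risk-weighted value of a two-outcome gamble: the bad utility for sure,
    plus the increment to the good utility weighted by r(chance of the good outcome)."""
    return u_bad + r(p_good) * (u_good - u_bad)

# Suppose the probability of the good outcome is only known to lie in [0.25, 0.75].
worst = reu_two_outcomes(0.25, 1, 3, r)               # 1.125
best = reu_two_outcomes(0.75, 1, 3, r)                # 2.125
print(round(alpha * worst + (1 - alpha) * best, 3))   # 1.425 -- the combined evaluation
```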
Now what we're going to do is apply these concepts to two important questions in effective altruism, and the first is the question of, “Should I give all my money to one charity or should I diversify over a number of charities?” And this is sometimes mentioned as a little bit of a puzzle when economists, for example, talk about charitable giving, because it looks like if you maximize expected utility, that is, if you're both risk-neutral and think that there's no uncertainty, then you shouldn't normally diversify over charities. Why? Because there are two possible reasons that expected utility maximization is going to acknowledge for diversifying over charities. One is this idea of diminishing marginal utility, which is to say that, you know, if I give a hundred dollars to one charity then giving a hundred more will make less of an impact than that first hundred dollars. So that's one reason someone might want to diversify in general. However, that just doesn't seem true of the charity types that effective altruists are interested in. Why? Because these are charities that save lives, and these are charities where a given amount of money can reliably save a certain number of lives. So in order to like this explanation you'd have to, for example, think, “Once I've saved a hundred lives, it doesn't matter as much that I've saved a hundred more.” But most effective altruists, most ethicists, don't think that. They don't think that utility diminishes marginally in the number of lives saved. So this would be an explanation for (30:00) charitable diversification that isn't going to work.
The other reason an expected utility maximizer might diversify over charities is they might think that once they give a certain amount of money, the probability of making a difference goes down. So this might be true in the case of, say, a project where what the project is trying to do is, I don't know, find a cure for some particular disease or create a rocket that can go to Mars and sustain life there. If you think the first little bit of money really increases the probability of this project being successful, but once you give more, it increases the probability less and less, then you're going to want to diversify. You're going to want to give a little bit of money to each of these projects to get each of them probabilistically off the ground. However, this explanation is only going to work for very specific things. It doesn't look like it can be the right explanation for things like health charities, where we basically, again, know that we can save a certain number of lives if we just give a certain amount of money. So again, lives saved is linear in money. And it looks like these are the kind of charities that most people do in fact diversify over. So what's happening? It looks like people are not being expected utility maximizers here.
So can risk-avoidance in the sense (32:00) I'm talking about, or uncertainty-aversion in the sense I'm talking about, explain why individuals might diversify? Can it give us a reason to diversify? So let's start with risk-avoidance, and just a simple setup here. Let's say you have two dollars to distribute and two charities, A and B. You're not sure whether giving a dollar to A will increase the utility in the world by one or by three, and similarly for B. So you have a little bit of uncertainty over what the outcome is going to be. But again you know that utility is linear in money. And maybe this uncertainty comes from some empirical uncertainty: you don't know how effective the particular charity is. Maybe it comes from normative uncertainty: you don't actually know how good some particular consequence is. You think it might be good by utility one, it might be good by utility three. So let's say, again, you can give a dollar to each of these charities. Giving a dollar to A will increase the utility of the status quo by one, or it will increase the utility of the status quo by three, and similarly with B. And there is some event that'll determine whether it increases the status quo by one or by three. So again the event might be empirical: in fact this charity really is, as hoped, saving lives, and it saves three lives instead of one for each dollar. Or it might be some normative uncertainty: in fact the ameliorating of people's health by this type of intervention increases the moral goodness in the world by three instead of one. Okay. So now you face the choice between giving two dollars (34:00) to one of the charities and giving a dollar to each. So you face the following choice: betting on A being successful by putting two dollars there, which means your [inaudible 34:15] has increased the good in the world by two or six; betting on B being successful, again either increasing the utility in the world by two or six; or giving a dollar to each and hedging your bet. So, you know, if both A and B are successful you’ll increase the utility of the world by six, if neither is successful by two, but if one is successful and the other isn't, you'll increase the utility in the world by four. Notice, by the way, that an expected utility maximizer should just give two dollars to whatever charity has the highest expected utility, and if the probability of success of each charity is the same, they should be totally indifferent about which of these three options they take. Okay.
Well, what about the person who avoids risk? Well, let's just start with a simple case in which the probabilities of success are the same. It turns out that for the risk-avoidant person this option is better: they should diversify over their charitable giving. There are two ways to see this. One, again, is to look at the other way of thinking about the utility graph, where we shrink the weights of these events that are uncertain. Things with this probability are going to get shrunk by a lot more than things with this (36:00) probability, which is very high. This gets shrunk a lot. So the overall area under the curve for the risk-avoidant person is going to be much smaller in the above examples. And again that's because the risk-avoidant person says, “Oh, something that only happens if E obtains or only happens if F obtains, that weighs a lot less in my decision than something that will happen if either one of them obtains.” Okay. So that's one way to see why the risk-avoidant person is going to prefer to diversify.
Another way to see why the risk-avoidant person is going to diversify… Again, I do like this idea of the mean-preserving spread. So this is a probability-utility graph of giving two dollars to the one charity. If instead I diversify, what does that mean? Well, I take some of the utility from here and some of the utility from here and I instead put it in the middle. So I hedge my bets, in both the good and the bad way. I think, “Well, if one of the events obtains and the other doesn't, I want to get four rather than two, even if that means not getting six if they both obtain.” So giving all your money to one charity is a mean-preserving spread of diversifying, and a risk-avoidant person doesn't like that. Okay. So this is also going to hold, by the way, if the charities aren't exactly (38:00) equal in terms of their mean utility or expected utility. It's also going to hold if B is just a little bit worse, either because B has a lower probability of being successful or because B, if successful, will not do quite as much good as A does if it's successful. In other words, this reasoning doesn't depend on the two charities being exactly as good from an expected utility perspective. We can see this by noticing that giving a dollar to A and a dollar to B is strictly better than giving two dollars to one or the other, so you'd be willing to lose a little bit of money in order to do that. So, you know, giving say a dollar to A and 70 cents to B might still be a little better than giving two dollars to one of those charities. If that’s true, then there's going to be some wiggle room. So you can make B a little bit worse, or the amount of money you give a little bit less, and the result still holds [inaudible 39:13]. Okay.
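Here is a rough numerical sketch of that comparison, on the increment-weighting reading of risk-weighting used earlier, with independent 50/50 successes and the illustrative convex risk function r(p) = p squared. The expected utilities tie at four, but the diversified option comes out ahead.

```python
# $2 to charity A (utility 2 or 6) versus $1 to each of A and B, whose
# successes are treated as independent fair coin flips (so 2, 4 or 6).

def reu(outcomes, r):
    """Risk-weighted value: sort outcomes worst to best, then add each increment
    of utility weighted by r(probability of doing at least that well)."""
    outcomes = sorted(outcomes, key=lambda x: x[1])
    total = outcomes[0][1]
    for i in range(1, len(outcomes)):
        p_at_least = sum(p for p, _ in outcomes[i:])
        total += r(p_at_least) * (outcomes[i][1] - outcomes[i - 1][1])
    return total

r = lambda p: p ** 2                               # illustrative risk-avoidant risk function
two_to_A = [(0.5, 2), (0.5, 6)]                    # $2 on A: utility 2 or 6
one_each = [(0.25, 2), (0.5, 4), (0.25, 6)]        # $1 each, independent successes
print(round(reu(two_to_A, r), 3), round(reu(one_each, r), 3))  # 3.0 vs 3.25: diversifying wins
```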
Now, by the way, this assumes that E and F are not perfectly correlated. In other words, it assumes that A being successful doesn't automatically mean that B is successful too. If they're both a bet on the same event, then taking two copies of A is going to be just as good as taking one copy of each. Why? Because in that case it's just literally the same gamble described in a different way. So you're going to want to hedge across charities whose success depends on different facts. For example, let's say you're not sure which (40:00) normative theory is correct. You're not sure, for example, whether it's better to improve people's health during their lives or whether it's better to improve the length of their lives, even with slightly less health. You're like, “I don't know. Some normative theories say the one, some say the other.” Then you're best off giving some money to a charity that does the one thing and some money to a charity that does the other, rather than putting all of your eggs in one basket, that is, if you're risk-avoidant. Of course, if you're risk-inclined, it's going to be better to put all your eggs in one basket, and it's going to be better to put all your eggs in one basket even if it costs you some money to do so. Okay.
So now an important caveat: the reasoning that I've just given is only valid if the status quo is not a gamble itself, or if the relevant choices, giving twice to A or once to A and once to B, are equally risky relative to the status quo. And a way to think about this is… Say the status quo isn't just, “Hey, we know how things in the world are.” Say the status quo is that we don't know whether there's a specific disease that will make people's lives even worse than they are now in some countries. That disease may or may not, say, really affect people's lives; we don't know whether that's true. If we're in a situation like that, then (42:00) one of the gambles might count as insurance relative to the status quo. So in that situation, for example, the gamble where we ameliorate health is sort of like hedging your bets against the status quo. Or, on the other hand, if we know that in the status quo everybody else is giving their money to A, then giving all my money to B is actually buying insurance against the status quo plus what everybody else is giving. So this is to say that what we want is diversification from the effective altruists as a whole. It's not important to have diversification for individual effective altruists, although under some plausible assumptions it also will be, for example if you just have no idea what everybody else is giving, and so no idea what the status quo plus everyone else's giving looks like. And (Christian [inaudible 43:11] has a paper about this) it will be important how spread out the status quo is, because if the status quo looks like it contains lots of really good and lots of really bad possibilities, then diversification is no longer going to look like insurance relative to that sort of strange status quo, and you're no longer going to want to diversify; you're going to be indifferent. So again the idea is: insofar as diversification looks like buying insurance against being wrong about the effectiveness of one of the gambles, it's going to be a good thing. Insofar as diversification doesn't have that (44:00) property, insofar as it doesn't amount to hedging your bets, it's not going to work out. Okay.
So what if you're uncertainty-averse? What should you do? Okay. So remember, the uncertainty-averse person, when they're thinking about these two gambles, they're going to notice, they're going to be like, “Uh, I don't know. E probably has some probability between 0.25 and 0.75, but I don't know what it is.” So there's a range of probabilities this could be, and similarly for F here: I don't know where things are in this range. And let's assume that the probability of E is somewhere between 0.25 and 0.75 and so is the probability of F. Okay. Now, expected utility maximization has this nice property where, as long as the utilities are additive, we can just figure out the utility of each gamble by itself and add them up to get the utility of taking both, which is an awfully easy way to see why, for the expected utility maximizer, taking two copies of A is just as good as taking one copy of each. So we can exploit that fact to think about how the person who doesn't like uncertainty will think about the examples. Well, they'll think the minimum value of each of these options is the value calculated using a probability of 0.25, so the minima are the same, and the maxima are going to be the same, (46:00) so actually it doesn't matter whether I diversify or not. However, if we have probabilities in these ranges but we know that the probabilities are somewhat anti-correlated, which means we know that the probability of E is going to be a little higher just in case the probability of F is a little lower, then you are going to want to diversify. So if, for example, you thought that whether A is successful is going to depend on whether B is successful in an anti-correlated way, A is more likely to be successful if B is more likely to be unsuccessful, maybe because they're based on opposite moral assessments, maybe because they're based on incompatible empirical assessments, then the worst value of 2A is still going to be calculated using the worst probability, but the worst value of A + B is calculated by taking the worst probability for A plus something that's not the worst probability for B, because they can't both have their worst probability at the same time, and similarly for the maximum. So therefore the uncertainty-averse person, if these probabilities are even a little anti-correlated, is going to want to diversify as well. So we have a similar result as we had for the risk-avoidant person. Again, this is really only going to be true if the status quo is not itself probabilistically uncertain. So if taking one of these gambles, for example, is actually going to reduce the uncertainty in the world, maybe by hedging against something that's already there, that's already uncertain, then you're going to want to (48:00) take it. But if the status quo is not probabilistically uncertain, or if it is but A and B don't resolve that uncertainty differently, then if you know that most people are taking A, you should take multiple copies of B, and vice versa. And again, effective altruists as a whole, or at least uncertainty-averse effective altruists as a whole, should try to diversify over the gambles they take. Okay. So that's diversification: both the risk-avoidant and the uncertainty-averse effective altruist should diversify, at least under certain assumptions. Okay.
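Here is a small numerical sketch of that anti-correlation point, under simplifying assumptions that go beyond the lecture: A's success probability lies in [0.25, 0.75], B's success probability is taken to be perfectly anti-correlated with it, and the pessimistic case gets an illustrative weight of 0.7.

```python
# Uncertainty-averse evaluation of $2 on A versus $1 each on A and B, when
# the success probabilities are (for simplicity) perfectly anti-correlated.
# A dollar to a successful charity adds 3 utils; to an unsuccessful one, 1 util.

alpha = 0.7                                  # illustrative weight on the pessimistic case

def eu_two_dollars_to_A(p_A):
    return p_A * 6 + (1 - p_A) * 2           # both dollars ride on A's success

def eu_one_to_each(p_A):
    p_B = 1 - p_A                            # assumed anti-correlated success chances
    return 2 + 2 * p_A + 2 * p_B             # a guaranteed 2, plus 2 more per success

for label, value in [("2 x A", eu_two_dollars_to_A), ("A + B", eu_one_to_each)]:
    worst = min(value(0.25), value(0.75))
    best = max(value(0.25), value(0.75))
    print(label, round(alpha * worst + (1 - alpha) * best, 2))
# 2 x A -> 3.6, A + B -> 4.0: the uncertainty-averse giver prefers to diversify
```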
So, next I'm going to talk about the choice between programs, like health programs, whose probabilities are known and whose effects are fairly well-known, and on the other hand existential risk reduction programs. Existential risk reduction programs, again, are programs that bring about a small change in the probability of something that has massively good value or massively bad value, but where we're not exactly sure of the size of the change we bring about. So an example of a health program is something like buying mosquito nets for people in areas where there's a malaria problem. We know, with very high probability, that that helps. (50:00) There are actually two types of existential risk reduction programs, or at least two ways to think about existential risk, so some of these programs could be thought of in one way or the other.
One type of program we might call existential insurance. So we're buying existential insurance when we know some massively bad event might happen, but its probability is small. Existential insurance is a program that lowers the chance of that massively bad event or makes things not so bad if that event comes to pass. For example, preventing extinction is like this. Preventing catastrophic global warming is like this: maybe there's a small chance of catastrophic global warming, and we could embark on a program that will lower the chance of catastrophic global warming or make things not so bad for humanity if we do have catastrophic global warming. Similarly, programs that seek to reduce the chance of nuclear war are like this. So there's a small probability of nuclear war; it would be really bad if that happened; programs that try to make that chance smaller count as existential insurance programs.
We also have programs that we might call existential lottery tickets. So with an existential lottery ticket, the setup is that the probability of an outcome that's massively bad, or at least not good relative to the thing you're buying the lottery ticket for, is close to certainty. An existential lottery ticket lowers the chance of that event, or in other words raises the small (52:00) probability of the good event. So something like colonizing other galaxies is like this. In the absence of programs to do that, the chance of colonizing other galaxies is very very small, but buying the existential lottery ticket is trying to increase the probability that we will successfully colonize other galaxies. The transhumanist project is like this. There's a small chance that we can outlive our natural deaths or become the kind of species that is immortal; the transhumanist programs try to raise the small probability of this happening. Maybe raising the probability of various religious possibilities here and in the afterlife is like this. You might think there's a very small chance that some religious possibility is true, but if we raise the chance of it being true, or raise the chance of people doing the thing they need to do (so Pascal's wager is sort of an example of this), then we're buying an existential lottery ticket.
So now I want to know: how do risk-avoidance or risk-inclination, and uncertainty-aversion or uncertainty-seeking, affect the value of existential insurance and the existential lottery ticket relative to the value of health programs? So what we're going to do is assume that all three of these programs to begin with have the same expected utility, and then we're going to say: if we add in one of these attitudes, risk-avoidance, risk-inclination and so forth, what changes? Which programs get better and which programs get worse? I want to flag (54:00) that built into this kind of setup is the assumption that the consequences aren't infinitely good or infinitely bad, because if you think, let's say, that the consequences of the scenario you're buying existential insurance against are infinitely bad, or you think that the consequences of the thing you're buying the existential lottery ticket for are infinitely good, then it's not going to matter what probability is assigned to those. The existential lottery ticket is always going to be better than the health programs, and the existential insurance is always going to be better than the health programs too. So, you know, just a word about whether this assumption might be plausible. I know some longtermists do think that it's infinitely valuable for humanity to go on forever, and/or that a future in which humanity doesn't go on forever is merely finitely valuable. If you think that, then we could just assume we're talking about the value of each of these things within a particular time frame. So you could ask, at any particular time in the time leading up to it: is it better to have bought the existential lottery ticket? Is it better to have bought the existential insurance? Or is it better to have put the money into the health program? Okay.
So let's just make some assumptions that make all these things start with the same expected utility. So the status quo is: there is some small chance of the bad thing happening, say nuclear war, say environmental catastrophe, with utility -200. (56:00) Probably the neutral thing will happen; that's what's happening now: humanity is still alive, lots of people pretty badly off. There's also some possibility of the good thing happening, human immortality, colonizing other galaxies, whatever it is. That's the status quo. Here is what each of these things does. Health programs: I'm just going to assume the health program has a known effect, with probability one, and the exact utility isn't going to matter to the eventual analysis. Health programs just increase all these possibilities by utility two. So everyone gets a little health boost, certain people that need it get a little health boost, so that no matter what, the world goes a little bit better before we all die or before we successfully colonize other galaxies or become immortal. Okay.
Existential insurance, on the other hand, lowers the probability of the bad thing by some tiny amount, so it shifts some probability from here to the neutral state, and the existential lottery ticket shifts some probability from the neutral state to the good state. From the point of view of expected utility maximization these are all equal, because what I've done is add two utils to this entire graph, spread out in one way or another. The health program just added it everywhere; the existential insurance added it all here; and the existential lottery ticket added it all here. Okay. So the EU maximizer is going to find each of (58:00) these programs equally good.
What about the person who avoids risk? Well, for the person who avoids risk, the existential insurance is going to get better. Why? Because I'm adding the two utils to a worst state, and worst states get more weight. And the existential lottery is going to get worse. Why? Because I'm adding those two utils to the best state, and the best state matters less. So the risk-avoidant person is going to prefer the existential insurance to the health program to the existential lottery, and of course the risk-seeking person is going to have just the opposite preferences. They care more about what happens in best states than worst states, so they're going to want to do the programs that increase the probability of the really good thing happening, rather than the programs that decrease the probability of the bad thing happening or increase the probability of the neutral thing rather than the bad thing. Okay. So that's fairly straightforward.
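As a numerical sketch of that ranking: take illustrative values of -200 for the bad outcome (the only number given above), 0 for the neutral outcome and +200 for the good outcome, status-quo probabilities of 0.05 / 0.9 / 0.05, programs that each add exactly two utils of expected utility, and the illustrative convex risk function r(p) = p squared. All of the specific numbers beyond the -200 are assumptions made only for the example.

```python
# Health program vs existential insurance vs existential lottery ticket,
# each adding two utils of expected utility to an assumed status quo.

def reu(outcomes, r):
    """Risk-weighted value: worst utility for sure, plus each further increment
    weighted by r(probability of doing at least that well)."""
    outcomes = sorted(outcomes, key=lambda x: x[1])
    total = outcomes[0][1]
    for i in range(1, len(outcomes)):
        p_at_least = sum(p for p, _ in outcomes[i:])
        total += r(p_at_least) * (outcomes[i][1] - outcomes[i - 1][1])
    return total

r = lambda p: p ** 2  # illustrative risk-avoidant risk function

health = [(0.05, -198), (0.90, 2), (0.05, 202)]     # +2 utils in every state
insurance = [(0.04, -200), (0.91, 0), (0.05, 200)]  # 0.01 of probability moved from bad to neutral
lottery = [(0.05, -200), (0.89, 0), (0.06, 200)]    # 0.01 of probability moved from neutral to good

for name, gamble in [("health", health), ("insurance", insurance), ("lottery", lottery)]:
    ev = sum(p * u for p, u in gamble)
    print(name, round(ev, 2), round(reu(gamble, r), 2))
# All three have expected utility 2.0, but the risk-weighted values rank
# insurance (-15.18) above health (-17.0) above the lottery ticket (-18.78).
```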
What about uncertainty-aversion? Well, as I mentioned earlier the way to sort of think about this is health programs, not a lot of uncertainty involved about the probabilities. But these existential insurance and lottery ticket gambles, lots of uncertainty involved. So we can sort of think that in this case you're changing the probability of the bad thing in the case of existential insurance or the good thing in the case of the existential lottery by some unknown amount. Instead of changing it by 1%, you might be changing it by zero. You might be having no effect at all. On the other hand more optimistically maybe (60:00) you're changing it by 2%. Okay. So again the way to sort of think about this practically is these are actually intervals of unknown size and we want to calculate the expected utility relative to these intervals. So what adding uncertainty-aversion is going to do is to make both existential insurance and the existential lottery worse. I suppose once you study it, it becomes pretty obvious because the existential insurance and the existential lottery involve uncertainty whereas the health programs don't. So they're equal in terms of expected utility. And then you say well, existential insurance and the existential lottery might have worse expected utility, they might have better. We weigh the worst expected utility more heavily. Those things are going to get worse. And for the uncertainty-seeker things go in the opposite direction. Both the existential lottery and the existential insurance get better than the health program. Okay.
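Here is a correspondingly small sketch of the uncertainty point: suppose, purely for illustration, that the health program adds two utils for sure, while the insurance program shifts the bad outcome's probability by an unknown amount between 0 and 0.02, so its expected-utility gain lies somewhere between 0 and 4. An uncertainty-averse weighting then ranks it below the health program, and an uncertainty-seeking one ranks it above.

```python
# Alpha-style comparison of a certain gain of 2 with an uncertain gain in [0, 4].
# The interval and the alpha values are assumptions, not figures from the lecture.

health_gain = 2.0
insurance_gain_range = (0.0, 4.0)   # an unknown probability shift of 0 to 0.02, times 200 utils

for alpha in (0.7, 0.5, 0.3):       # uncertainty-averse, neutral, seeking
    worst, best = insurance_gain_range
    insurance_value = alpha * worst + (1 - alpha) * best
    print(alpha, round(insurance_value, 2), ">" if insurance_value > health_gain else "<=", health_gain)
# alpha=0.7 -> 1.2 (worse than health); alpha=0.5 -> 2.0 (tie); alpha=0.3 -> 2.8 (better)
```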
Now you can also consider what happens when you have both uncertainty-aversion and risk-avoidance, or any combination of these things, and I won't go through all of the boxes here. The way to think about this is… Let me just mention, for example, if you're risk-avoidant and uncertainty-averse, which is what most people are, then one of two things happens. Well, we know the existential lottery is the worst, because it's the worst for the risk-avoidant person and it's one of the worst things for the uncertainty-averse person. (62:00) Whether the existential insurance is better or worse than the health program is going to depend on whether the risk-avoidance is more operative or the uncertainty-aversion is more operative. So if you're really really really risk-avoidant but not that uncertainty-averse, then the existential insurance is going to be better. But if you're only mildly risk-avoidant but very uncertainty-averse, then the health program is going to be better. And so you can just go through and think conceptually about why what's in the boxes is going to describe the preferences that people will [inaudible 62:48]. And of course, combining this last section with the previous one, we can consider the more complicated choice: we might ask whether we ought to diversify over these programs, so whether I should give some money to the existential lottery and some money to global health programs, for example. And the answer is, as long as you have some degree of risk-avoidance or uncertainty-aversion, diversification is going to look a little bit better. Maybe not that much better, so it might be that you shouldn't go 50/50, but you should give some money to existential insurance rather than giving it all to global health programs, or something like that. Okay.
Alright. So we're in the last stretch. Okay. So that's the end of the discussion of what these different attitudes (64:00) are going to tell us to do in the case of charitable giving, but there's a really really important question for ethicists, and that’s, “Which of these attitudes should we have? Sure, you're telling me, yeah, here's the range of attitudes I could have and each tells me to do different things. Great! But what should I do?” Okay. I think this is a really important question. This is a question that hasn't been talked about a lot in philosophy, primarily because philosophers, when they use these formal models at all, use expected utility maximization rather than one of these other variants. So what I want to do is start the discussion. I have some views, and I'm going to argue for those views, but this is also the beginning of the discussion. So this is the part where I'm particularly interested in your questions and feedback and pushback: “No, I think those are the wrong moral principles.” This is something that anyone who’s interested in what's the most effective way to give really has to think about. Okay. So should we be risk-avoidant about moral decisions? By we, I mean people who, as a whole, are concerned about doing the most good with their charitable giving. Okay. So, I think in the case of risk, the answer is: we should be maximally risk-avoidant within reason. So if there is a range of reasonable risk attitudes that an individual might adopt for self-interested decisions, then as a group we've got to adopt the most risk-avoidant of those attitudes. So I argue (66:00) for a principle I call the risk principle, and I'll briefly go through the central arguments for it. So here's the principle. When making a decision for an individual, choose under the assumption that he has the most risk-avoidant attitude within reason, unless we know that he has a different risk attitude, in which case choose using his risk attitude. So I'll explain why I think this is the right moral principle, and then how it applies to decisions for a group of people. The key thing to draw from this [inaudible 66:39] principle is that there is a real default when we're thinking about risk attitudes, and risk-avoidance is the default. Risk-neutrality is not the default. The risk attitude that the typical person has is not the default. Okay.
So a couple of arguments for this principle. First, an argument from examples, about not just how we do take risks but how we think we ought to take risks when they involve another person and not us. So imagine you're playing basketball with an acquaintance. He hurts his shoulder and is in moderate pain. You don't know whether it's a muscle spasm or a pulled muscle; imagine these possibilities are [inaudible 67:27] likely, just to make it simpler. You could either apply heat or ice. Heat will really help if it's a muscle spasm, but putting heat on a pulled muscle leads to intense pain. On the other hand, ice will not be that bad either way: it will do nothing for a muscle spasm, but it'll provide mild relief for a pulled muscle. So we have this gamble. What I submit is that you might do either thing in your own case. You might think, (68:00) “Yeah, I just really care that I might be in intense pain. I want to avoid that,” or you might think, “Look, the possibility of relief is going to weigh really heavily for me.” So it's perfectly fine to make either decision for yourself. On the other hand, if you don't know what the acquaintance wants, it looks like you ought to apply ice. You can't pick the riskier option for him without knowing that that's what he would prefer. Why? Now here's the second argument. The first argument was: I submit that we actually do this, and not only do we do this, we think it's the right thing to do; we don't just happen to do it, but upon reflection we think, yeah, we can't take risks for other people [inaudible 68:53]. Why? Because ending up in a relatively worse state requires stronger justification. So just imagine what your acquaintance might say if he ends up in the bad case of either of these choices. Say you applied ice and it turns out to be a muscle spasm, so he's still in moderate pain, whereas heat would have given him relief. He might say to you, “Why didn't you apply heat instead? Given that I have a muscle spasm, I would have had relief instead of moderate pain.” You might say, felicitously, “Because it might have been a pulled muscle, and then heat would have caused intense pain.” That's a justification for your doing the less risky thing. On the other hand, the symmetrical justification is not available to you if you choose to do the riskier thing, because it looks like he has a complaint if you do the riskier thing and it then turns out to actually be the bad case. (70:00) So say he has a pulled muscle. You put heat on it. Ouch! Intense pain. He says, “Why didn't you apply ice instead, given that I have a pulled muscle? Then I would have mild pain instead of intense pain.” I guess we can imagine he's saying this through the pain [inaudible 70:13]. And you say, “Because it might have been a muscle spasm, and then the heat would have brought relief.” This doesn't actually seem like a good justification for what you did. Even though you did something that might have turned out really well, you took a risk that he didn't consent to. So it looks like, again, risk-avoidance is the default.
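Here is a toy numerical rendering of the heat/ice case, with utilities supplied purely for illustration (they are not from the lecture): relief 0, mild pain -1, moderate pain -2, intense pain -3, and a 50/50 chance of spasm versus pulled muscle. Expected utility ties the two options, but a risk-avoidant evaluation favors ice, while a risk-inclined one favors heat.

```python
# Ice vs heat for the shoulder case, under assumed utilities and a 50/50 chance.

def reu_two_outcomes(p_best, u_worst, u_best, r):
    """Risk-weighted value of a two-outcome gamble: worst utility for sure,
    plus the gain up to the best utility weighted by r(chance of the best case)."""
    return u_worst + r(p_best) * (u_best - u_worst)

ice = dict(u_worst=-2, u_best=-1)    # spasm: still moderate pain; pulled muscle: mild relief
heat = dict(u_worst=-3, u_best=0)    # pulled muscle: intense pain; spasm: full relief

for name, outcomes in [("ice", ice), ("heat", heat)]:
    neutral = reu_two_outcomes(0.5, r=lambda p: p, **outcomes)          # risk-neutral
    avoidant = reu_two_outcomes(0.5, r=lambda p: p ** 2, **outcomes)    # risk-avoidant
    inclined = reu_two_outcomes(0.5, r=lambda p: p ** 0.5, **outcomes)  # risk-inclined
    print(name, neutral, avoidant, round(inclined, 3))
# Expected utility ties ice and heat at -1.5, the risk-avoidant evaluation
# favors ice (-1.75 vs -2.25), and the risk-inclined one favors heat (-1.293 vs -0.879).
```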
And finally, it looks like there's some sort of reasonableness standard. So we might say: if no reasonable person would reject an option on the grounds that it is too risky, then we're justified in choosing that option, but if a reasonable person could reject it on those grounds, then we are not justified in choosing it. So it looks like we're only justified in taking a risk for someone else if no reasonable person could reject that risk. Interestingly enough, it looks like there's an asymmetry between rejecting gambles because they're too risky and rejecting gambles because they're not risky enough. So for example, “That gamble was too risky for me” is a valid complaint against someone who took a risk for you not knowing what you would want, but “That gamble was not risky enough for me” is not a valid complaint against someone taking a gamble that was less risky than you would have chosen yourself.
Now notice, by the way, that while it appears that we have a default (72:00) risk attitude, the same isn't true for the utility function. So for example, if you're buying some ice cream for a friend and you don't know if they like vanilla or chocolate, it seems like there are no grounds on which to make this decision. We don't have a rule like “default to chocolate unless you know the person wants vanilla,” or vice versa. So this is a way in which assessing someone else's utility function is different from using a risk function to make a decision for them. Okay. So what does this mean if you have a lot of people? Well, I'm not going to come down on the side of a particular view of how to aggregate different risk functions, though I'm happy to talk about that in the Q&A if you want. But I think I can at least say the following weak principle: risk-averse people's risk attitudes shouldn't count for less than other people's risk attitudes. And also, there are a lot of people in the future. So if you combine these two facts and notice that by the risk principle we have to assign the future people the default risk attitude, then it looks like when we're making decisions that primarily affect future people, we have to be very very risk-avoidant, as risk-avoidant or nearly as risk-avoidant as the most risk-avoidant reasonable person would be. So that means, again, we're going to be in the left-hand column of the chart on the previous page about what we should do, how we should rank the existential lottery versus the existential insurance versus (74:00) the health program. Okay. That's risk-avoidance. That's at least my initial stab at a moral view about what risk attitude we should adopt when we're making decisions [inaudible 74:14] that affect the future.
So, second question: should we be uncertainty-averse about moral decisions? And in particular I'm going to get into this question by asking whether a principle parallel to the one for risk is true in the case of uncertainty. I'm actually going to argue that it's not. So here's the candidate uncertainty principle: when making a decision for an individual, choose under the assumption that he has the most uncertainty-averse, that is, the most pessimistic, attitude within reason, unless we know that he has a different uncertainty attitude, in which case choose using his uncertainty attitude. Okay. So a couple of things to notice. First, it looks like arguments that are parallel to the above arguments about risk don't seem to work. In a similar example in which you don't know what the probabilities are, it doesn't look like the person would have cause to complain against you if, say, you used the probability in the middle of the range rather than the most pessimistic probability. And unlike the asymmetry between too risky and not risky enough, where I can complain that you're being too risky but I can't complain that you're being not risky enough, there's no asymmetry between (76:00) being too pessimistic and too optimistic. So it seems like I can't complain about your being too pessimistic any more than I can complain about your being too optimistic, or vice versa. So a tentative conclusion here, and again I'd be interested to hear if anyone has arguments or considerations that they think point either towards our being uncertainty-averse or towards our being uncertainty-seeking, but the tentative conclusion is that we should be uncertainty-neutral when making moral decisions. And if we try to think about the underlying reasons why we should be risk-avoidant but uncertainty-neutral, we can recall that uncertainty is epistemic. It's about which of the set of possible beliefs, or set of probability distributions, I should use. On the other hand, your attitude towards risk is a practical thing. When I think about which thing I want to see in the world, or which strategy I want to take towards realizing my aim, how should I take each of these possible future states into account? That means that uncertainty-aversion, or ambiguity-aversion, is essentially a fact about our belief state, or about how our beliefs relate to evidence. It's not a fact about something that’s out there in the world and how we should take it into account, whereas risk-avoidance is a fact about something out there in the world and how we should take it into account. So that's why these two have a different status. Okay. [inaudible 77:57]. Then when it comes to (78:00) the existential insurance, the existential lottery and the health program, we should be in the middle box of the left column. We should think: if all these things have the same expected utility, the existential insurance is better than the health program, which is better than the existential lottery. Now of course it's not going to be the case that these things always have the exact same expected utility. So the conclusion of my talk is not, “Okay, buy the existential insurance, don’t buy the other things.” The conclusion is: insofar as, when we're making moral decisions, we shouldn't subject people, future people, to risks they didn't sign up for or might not sign up for, the existential insurance should look a lot better to us.
That's a consideration that tells in favor of the existential insurance. So what that means is that the health program has got to be a lot better than the existential insurance in order to justify giving to it instead of the existential insurance. And similarly for the existential lottery: insofar as we ought not to be very risky when it comes to choices that affect future people, the existential lottery is going to, in virtue of that fact, look a lot worse. So we're going to need more justification, in terms of its usefulness or the probability of its being successful, in order to justify picking it over the health program or the existential insurance. Okay?
So what I've done is talk about a way to think about risk-aversion and a way to think about uncertainty-aversion. (80:00) I've applied these characteristics of individuals to the choice about whether to diversify when I'm giving to charity and to the choice about which types of charity make the most sense to give to. I've then given at least a tentative argument for which of these attitudes we should adopt when making moral decisions, and concluded that we should be risk-avoidant but uncertainty-neutral. Alright. Thank you very much for listening.
[applause]