June 2019
HILARY GREAVES: (00:06) Okay. Thanks so much to everyone for coming. We're very excited to host, jointly with the Department of Economics here in Oxford, the Second Annual Atkinson Memorial Lecture. I'll just start by saying a few words about GPI, where we have this lecture series, before we introduce our speaker. The Global Priorities Institute is a relatively young research institute. We were established here in Oxford in January last year, 2018. We exist to conduct and promote foundational academic research in economics and in philosophy into questions that are relevant to completely impartial resource prioritization. If you have a fixed pot of resources and you want to spend it in the way that would best improve the world, in a way that's completely impartial, completely neutral and principled between causes, what's the best way of doing that? This question is of course extremely hard. It's in principle an extremely broad question, but it's somewhat amenable to progress using the tools of economics and philosophy. So our aim is to encourage those within those two disciplines who have the relevant tools to take up more of those research questions.
This leads among other things to the purpose for us of this lecture series. We are honored to name the lecture series after the late Professor Sir Tony Atkinson. Atkinson's name will be familiar to most people in this room, I expect. He was of course a towering figure in economics, both here in Oxford and more generally across the world. Throughout his research he focused in particular on the notion of inequality and, more generally, on large-scale issues bearing on the welfare of society. Professor Sir Tony Atkinson sadly passed away in 2017; had he lived a little longer he would have been an extremely natural collaborator for GPI. There are several threads of common interest there. But instead we are honored (02:00) to host this lecture series in his name.
On a quick logistical note, the lecture and the discussion session here today will run until four thirty. You are encouraged to ask questions as the lecture goes along, if you'd like to, and then we'll also have some time for Q&A at the end and then following that, we encourage you to join us for cake and coffee in the café balcony which is just around the corner outside this lecture theatre.
So without further ado then, let me introduce today's speaker. Professor Marc Fleurbaey is Robert E. Kuenne Professor in Economics and Humanistic Studies and a Professor of Public Affairs at Princeton University. In the course of Marc's distinguished career he has also held appointments at the Universities of Cergy-Pontoise and Pau and at the National Center for Scientific Research in France, and he has held visiting appointments at INSEE, the Center for Operations Research and Econometrics in Louvain-la-Neuve, the Institute for Public Economics in Marseilles and, in addition, here in Oxford. Professor Fleurbaey's research interests range broadly over normative and public economics, with applications in particular to income taxation, indicators of wellbeing, health policy and climate change mitigation. He is the author of several monographs, both single-authored and co-authored, including Fairness, Responsibility and Welfare, Beyond GDP and A Theory of Fairness and Social Welfare. He is a former editor of the journal Economics and Philosophy and a coordinating editor of Social Choice and Welfare. He currently co-directs the Climate Futures Initiative at Princeton University and the International Panel on Social Progress. He has done many more things besides, but I think you've heard enough from me now. His title today is “Valuing the (far) future generations.” Please join me in warmly welcoming our speaker.
[applause]
MARC FLEURBAEY: (03:54) Thank you very much, Hilary, for the kind introduction and thank you for the invitation here. It's very exciting to get to know (04:00) more about the GPI and your project and all of that. It's a big honor of course to give a talk in the name of Tony Atkinson, who was a good friend and someone very important in my field, and I won't say much more about that. I think the greatest tribute we can give to a scholar is to use his ideas, and when I prepared this lecture it was fascinating how much his concepts and tools come up naturally and are very relevant to a field where he did not actually contribute very much. He was more interested in redistributive issues within generations than across generations. But nevertheless, the tools that he developed have been very useful there as well.
So the abstract I gave for this lecture is about this challenge of evaluating long-term impacts of policies, with the discounting problem that the long term looms very, very small and after one century essentially everything vanishes in the typical evaluations. And so some people are skeptical about the economic approach, and even some voices in economics say that perhaps we should throw this standard method out of the window, because there are things that are made to look small that should be taken into consideration. So I will review this a little bit, and I would like to defend the classical tools, perhaps with some amendments, some refinements, but still defend the idea that our classical tools are quite useful and perhaps more robust, with a broader scope, than is sometimes thought. And I will introduce the issue of uncertainty.
So I would like to start right away with (06:00) an example which is the sort of standard example of long-term evaluation, borrowing from Bill Nordhaus, who just received the Nobel Prize for this kind of analysis of climate policy. I'm here borrowing from his DICE model, which is an integrated assessment model with a climate module and an economic module, and it runs a simulation over five centuries. So I'm not sure if that's long enough for the GPI [laughter] but that is long enough for most economists. So here I have taken an example of the growth path of GDP per capita, or consumption per capita, in the baseline simulation of his model, which, if you are interested, is an Excel spreadsheet, so it's very easy to play with. And I've played with a scenario which is a very aggressive scenario leading to zero emissions in 2030. What's sobering is that when you play with the model it's very hard to get below a two degrees Celsius increase in temperature, so even with that scenario you don't quite get there, but anyway… The point I want to show is that if you look at it like that, it looks like it's really a no-brainer, because if we don't discount future dollars… The units here are in thousands of dollars. We have a big benefit in the future and something that looks like a tiny cost. The cost does last quite a long time; it runs almost until the end of the century. But it looks very small compared to the big benefit that we have in our future. I'm sure you are wondering: what is this glitch here? There is something weird about the savings in this thing, but it's not a big deal. (08:00) Now if you start discounting, introducing a discount rate of 1.5% on future dollars, you already… You have to rescale the vertical axis because otherwise you don't see much of what's happening. So this is the same undiscounted curve that we had before, and once you discount, you start of course exactly in the same way, but then whatever happens in the future is much closer to the horizontal axis than before. And so even though we had quite an increasing curve before, the discounting effect makes it look like everything becomes negligible in the future. And that's with a very low discount rate, 1.5%. That was the rate used by Nick Stern in his Stern Review, and he was criticized for using a rate that was too low. So if you take rates which are a bit bigger than that… Again I've changed the scales because otherwise we don't see anything. So this is with 1.5%, and this is with 3%. 3% was the central value used by the previous American administration; they also checked what happens with 1.5% and with 4.5%. And 4.5% is the value that was favored by Bill Nordhaus. So you see with 4.5% you are in this area where the benefits essentially vanish. And in case you are curious about the current American administration, they are favoring a 7% discount rate.
[audience reacts]
Now, if you do that, that's what you get here. So this is the 7% discount rate, and now even with a very, very expanded vertical scale you don't see the benefits. The benefits just disappear and it's only your cost, and so it's certainly not worth having an aggressive climate policy. It's probably not worth having a climate policy at all. So that makes sense from their point of view, certainly. Okay. And so what I'd like (10:00) to do on the basis of this is first to briefly review why, or whether, this approach of using these discount rates is justified, and elaborate a little bit on that.
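[Editor's note: a minimal sketch of the discounting arithmetic behind these pictures; the $1,000 benefit and the one-century horizon are illustrative choices, not figures from the talk.]

```python
# Present value of a $1,000 benefit received a century from now,
# at the four discount rates mentioned in the talk.
for rate in (0.015, 0.03, 0.045, 0.07):
    pv = 1000 / (1 + rate) ** 100
    print(f"{rate:.1%} -> ${pv:,.2f}")
# 1.5% -> $225.63   3.0% -> $52.03   4.5% -> $12.26   7.0% -> $1.15
```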
So the first reason why you might want to discount, and I apologize to those of you who are specialists of this kind of topic, but perhaps not everyone is, is that the future may not exist. There is a risk of disappearance of the future, at least disappearance of the human species. There might be some big catastrophe. But even if you take account of this possibility of disappearance of the future, that gives you a very small discount rate. So it doesn't give you a big enough rate, certainly not in the ballpark that we have seen here.
Now another reason for discounting is that there is a market, a financial market, and so if you don't discount you are essentially ignoring the opportunity cost of the use of your funds. That is an argument that's often used. But it's an argument that is deceptive, because we are interested here in looking at the consequences of people's consumption, and that is not the same thing as the choice of the income stream that people might want to have. If you have access to a perfect credit market, then indeed, if you want to choose the best income stream, you should use the market rate to choose the time position of your income stream, because that maximizes your budget set. So pure dominance in terms of budget sets requires using the market rate. But that is not exactly the situation we are looking at here. We are looking at consumption paths over five centuries, so it's not quite the same as an individual having access to a perfect credit market, and so that may not be the right kind of question to look at. Now a third argument, and that's the one that is put forth by Bill Nordhaus, for (12:00) instance, is to say okay, the market rate does reflect the population preferences on [inaudible 12:06]. That's right, but it's right for the people who are trading on the market, and most trades that people make on the market, even if we think of an [inaudible 12:17], people make for their own purposes. They save for their old days and this sort of thing. It's a bit different from preparing the future of their great-great-great-grandchildren, and so that's not necessarily the kind of consideration they have in mind when they make their saving and borrowing decisions. So it's not clear that the current market rate is really capturing the relevant considerations for intergenerational transfers. And that's the main rejoinder of Nick Stern to Bill Nordhaus. And so what people like Nick Stern have said is that we should rely on social welfare assessments, and that is useful not only to guide discounting; it also gives you a framework to analyze things that are larger than the discounting problem. So I will now dive into this fourth approach, the social welfare approach, and see what it gives us in this context. That's the plan. Is there any question about the orientation of the…? No? Okay.
Okay. So that's where Tony Atkinson is giving us some interesting tools to think about these issues. There is something called the Atkinson social welfare function, which is a very popular function because it is easy to handle. It has a very simple coefficient of inequality aversion that reflects the concavity of the function that transforms individual consumption into something that economists call utility, but never mind. (14:00) It's really something that has to do with giving priority to people at different levels of consumption. And so this coefficient is very easy to interpret. It represents the elasticity of marginal utility: 1% more consumption will decrease an individual's marginal utility (I call it the individual's priority, in marginal terms) by A%, where A is this coefficient of inequality aversion. Now if you take this function but use it in an intergenerational setting and put a pure time preference term in the function, then you get a function that is not often called the Atkinson social welfare function, but it's really the workhorse, the function that has been used by climate economists a lot, including Bill Nordhaus. Everyone has been using this thing. It is called discounted utilitarianism, but the labels don't matter very much; the structure of the function is really what matters. This function gives you a very simple way of understanding why you should discount future consumption, and this is the so-called Ramsey formula, which tells you that your discount rate on future dollars (and I'm talking about dollars, not about future utility) should be the addition of a pure time discount (the term you've added to your social welfare function, as I was just saying) and this coefficient A, the coefficient of inequality aversion, times the growth rate of consumption. And that's very natural: the average growth rate of consumption between now and the future period we are looking at represents how much richer the future is, and A represents how that decreases the priority of the future. So that's exactly the term that represents this effect of decreasing priority for the rich, and (16:00) if the future is rich, that's the key justification for discounting the future. That's the key idea. But now the future might not be rich. The same formula applies, but if there are scenarios in which the future is actually poor, so the growth rate is negative, then you potentially get a negative discount rate. So this is something that could tell us that we should worry about catastrophes, because the discount rate could be quite different. But in fact I would like to give a warning: this formula is not quite fit for the analysis of catastrophes, because it is meant for small marginal changes in the consumption path. When you are looking at big changes in the path due to catastrophes, we are a bit outside this marginal analysis and we need to go back to the function itself rather than looking at marginal changes.
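[Editor's note: for readers who want the formulas, a standard rendering of the criterion and the Ramsey formula described here; the notation is ours.]

$$W = \sum_{t} \frac{u(c_t)}{(1+\delta)^t}, \qquad u(c) = \frac{c^{1-A}}{1-A},$$

where $\delta$ is the pure time preference rate and $A$ the coefficient of inequality aversion. The Ramsey formula for the consumption discount rate is

$$r = \delta + A\,g,$$

with $g$ the average growth rate of consumption. For instance, with $\delta = 0.5\%$, $A = 1$ and $g = 1\%$, we get $r = 1.5\%$; with the same parameters but a declining path, $g = -1\%$, we get $r = -0.5\%$, a negative discount rate.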
And so I'd like to say a few words about the Dismal theorem. Martin Weitzman has been the key author who pushed this idea that perhaps we should throw out the standard analysis when we are facing catastrophes. Let me define a catastrophe as something that has an arbitrarily large negative value, not necessarily an infinite negative value, but something that is as close as you want to an infinite negative value. There is a small literature about the Dismal theorem and its interpretation, so let me right away adopt a reformulation, for instance one that has been suggested by [inaudible 17:48]: a finite probability of a catastrophe may justify making an arbitrarily large mitigation effort now. And the idea (18:00) is that if you change this very large negative value of a catastrophe, that may very substantially affect the optimal amount of effort you want to make now; there is a strong dependence. So it's very hard to calibrate, to make sure that you are doing enough for the future. If you make a small mistake then you may actually be way off. That's where the evaluation becomes difficult to make, and that's why it's a dismal theorem: you are no longer sure that your analysis is robust. You are so close to infinity in the negative consequences that you are never sure you are doing enough now in terms of effort.
So the idea of catastrophes is sometimes motivated in relation to tipping points in the climate system. When you talk to climate scientists about that, they say okay, but in fact the most standard tipping points trigger phenomena which take centuries to really unfold. If you think of sea-level rise, for instance, that takes a long, long time, so long that presumably we'll have a lot of time to adjust to this sort of thing. It's not like a deluge or something like that. So the geoscience catastrophes perhaps are not so relevant. There are a few of them which might be, like a big methane release from the permafrost, but otherwise mostly not. But my impression (and it's not just my own) is that in fact you have tipping points in the socioeconomic system which may be triggered by stress coming from the environment, and these are mechanisms which can act much more quickly. We may have major conflicts, major wars, that would be triggered by stress on the environment creating conflicts between populations over the management of migrations, water and all of that. (20:00) So we should perhaps indeed be worried about catastrophes, but coming more from social and political issues than from the environment itself.
Now if we want to make a more concrete list of possible catastrophes, I'd like to make a distinction. There are the catastrophes which are sometimes the focus of people when they look at the future: they imagine things that are close to extinction, or close to bringing us back to a stone-age kind of very basic subsistence level. That is kind of popular in the press. Now I'd like to be a bit skeptical about the importance of these catastrophes. Of course they are important, but in a way, what has been done in the past cannot be erased. Even if we kill humanity now, the past history will remain. So it's not as if we were eliminating the whole history of humanity. Humanity's past will remain, and so a catastrophe can at most affect a subset of all the generations of history. And when you think in these terms you realize that if it's only a subset of the people, why focus on things which are generational? When we talk about subsets of people we should talk not just about subsets of generations but also about subsets of people within generations, and that includes intragenerational issues. So we should look not just at intergenerational catastrophes but also at intragenerational catastrophes, and such catastrophes are actually with us. We are living in a catastrophic world. We have deep poverty which is still with us. We have severe conflicts, a lot of avoidable mortality and suffering. And so in terms of having a subset of people suffering (22:00) badly, we are already there, and so worrying about the future may not be the only thing to think about. We should also think about the world that is already catastrophic.
There is a hand over there? Yes.
PARTICIPANT: (22:15) Yeah. I was just wondering if you're still using the word catastrophe in the way that you defined it earlier?
MARC FLEURBAEY: (22:20) Yes.
PARTICIPANT: (22:20) Some of these things would need to involve arbitrarily large negative values?
MARC FLEURBAEY: (22:24) Exactly. That's the idea, and I'll say more about that now. Essentially it's enough to have one person with great suffering; perhaps that's enough to give you a lot of negative value in the system. That's the point. But we have to look at the positive side as well, if you take a long view of human history. I'd like to elaborate on that. Yeah.
HILARY GREAVES: (22:50) You said you wanted pushback as well as clarificatory questions as you go along. So it seems like an obvious thing to say here is: yeah, in both cases you're talking about subsets of all the people…
MARC FLEURBAEY: (23:00) Yeah.
HILARY GREAVES: (23:01) …who ever exist, or whoever could exist. But what if one of the subsets is an awful lot larger than the other one? That gives you the germ of a case for thinking that, just in size terms, the top ones are obviously bigger and therefore, other things equal, isn't that more important than the…
MARC FLEURBAEY: (23:14) Right. Yeah. So we have to count exactly the relative masses and all that. That's true. Yeah. I agree with that. Absolutely.
Yeah. Any other…? No?
PARTICIPANT: (23:25) Historically, extinction has caused almost all species that ever existed to go, right? And we are now creating that kind of situation. So I would call that a potential catastrophe, but I don't think it necessarily requires a tipping point. We now know that species die out very rapidly from quite small changes in their environment.
MARC FLEURBAEY: (23:46) Yeah. Yeah.
PARTICIPANT: (23:47) And we're creating not just climate change; we're creating environmental destruction on a similar scale. So I would have thought that you're underplaying just how important these generational catastrophes could be.
MARC FLEURBAEY: (23:59) Yeah, no, I don't (24:00) want to underplay that, but I want to make us aware that if we are focusing on human beings, that would be the extinction of human beings. And if we are focusing on that (I'll perhaps talk about other species later), then we shouldn't worry only about death and poverty in the future; we should already worry about this happening today. That's the idea, because it's all about subsets. But Hilary is right: we have to look at the size of the subsets, and some may be larger than others. Yeah.
Okay?
And so, let me have a look at the Atkinson utility function and see how it handles extreme values. It has this form, and I've added a term that is not always there. So this is the A parameter I was mentioning, C is the consumption level of an individual, and Z is a term that I'm adding to make things easier to analyze. It also makes sense because it might be that there is some level that is needed before you reach a positive utility, or something like that. So it's a parameter that can be useful. And this is the shape of this function, when Z is equal to one, for different values of the parameter. I'm sure many of you are familiar with that. What is interesting about this function is this: when you have A equal to one, this is the logarithm, which is a sort of frontier between two cases. With the logarithm and above, I mean with lower parameters, the curves go to infinity: whatever the scale, their limit value is infinity. Whereas when you have parameters stronger than one, the curves are bounded above: no matter how much you gain in consumption, that gives you a limited value. And on the other side, when we go to very (26:00) small consumption, when you have a low A parameter you get values which are bounded below, but as soon as you are at the log or at stronger A coefficients, you go to negative infinity. So there is an interesting pattern of values here. If A is small, so you have a small inequality aversion, growth may have an infinite value, but extreme poverty has a finite disvalue. If you are at the logarithm function then both have an infinite value or disvalue. And if you have a strong inequality aversion, growth has a finite value but poverty can have an infinite disvalue. So it's a particular pattern.
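[Editor's note: a standard parameterization of the function being described, with the shift term z; the exact form on the slide may differ slightly.]

$$u(c) = \frac{(c/z)^{1-A} - 1}{1-A} \;\;(A \neq 1), \qquad u(c) = \ln(c/z) \;\;(A = 1).$$

The pattern of bounds described above follows directly: for $A < 1$, $u \to \infty$ as $c \to \infty$, but $u \to -1/(1-A)$, a finite disvalue, as $c \to 0$; for $A = 1$ the function is unbounded on both sides; for $A > 1$, $u \to 1/(A-1)$, a finite value, as $c \to \infty$, while $u \to -\infty$ as $c \to 0$.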
This pattern is not so obvious when we look at another concept that Tony Atkinson proposed in a very compelling way, which is the equally distributed equivalent. So this is now a different set of axes. I'm looking at two individuals or two generations, individual 1 and individual 2, and I'm looking at a situation which is unequal; on this 45 degree line you have equality. What you do is construct the indifference curves of the social welfare function, and the greater the coefficient of inequality aversion, the more curved your indifference curves become, and you read the value of the equally distributed equivalent at the intersection with the equality line. And when you go along increasing values of A, what happens is that your EDE decreases smoothly until, at very high values of inequality aversion, you essentially focus on the worst-off people. So you go toward maximin. So you don't really see this pattern of infinite and finite values so clearly when you look at the EDE.
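[Editor's note: the equally distributed equivalent in formulas, for reference.] The EDE $c_e$ of a distribution $(c_1,\dots,c_n)$ is the consumption level which, if enjoyed equally by everyone, would give the same social welfare:

$$u(c_e) = \frac{1}{n}\sum_{i=1}^{n} u(c_i), \qquad c_e = \Big(\frac{1}{n}\sum_{i=1}^{n} c_i^{1-A}\Big)^{\frac{1}{1-A}} \;\;(A \neq 1),$$

with $c_e$ tending to the geometric mean as $A \to 1$ and to $\min_i c_i$ (maximin) as $A \to \infty$, which is the smooth transition described above.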
So again, at the (28:00) risk of being speculative, I'd like to argue that perhaps we should really think that everything has a finite value. I know that will be provocative, but let me try to say something about that. This is a bit more philosophical, but perhaps we should think of growth as having a finite value, and not just because we are mostly looking at material consumption in our models, because consumption can be immaterial in principle: it could be growth in scientific knowledge and things like that. But my impression is that growth has a finite value because human beings are somehow the bottleneck in the construction of value. I don't want to sound overly modest, but I don't think that anything that can happen to me, as good as it could be, would ever have an infinite value, and if you don't mind, I think the same about each of you here. [laughter] So that's the idea. And even if we think of other beings… I know that this place is strong on artificial intelligence, so even if you think of other beings, I'm not sure we can really imagine any being having an infinite value. So that's one thing.
And if we look at death… Death is just the shortening of something that has a finite value, and so by definition it has to have a finite disvalue as well. Right?
Now where I'm less sure is about suffering. Is it true that suffering has a finite disvalue? This is harder to claim, and I don't want to pretend that I have sufficient experience in this field to know, but if you look at casual evidence, experiences told by people who have suffered a lot, it looks like they can find compensation. And so my impression is that past suffering doesn't totally doom history, and there has been a lot of suffering in the past. So I would like to suggest that even suffering has a finite disvalue, but I know (30:00) this can be very controversial.
Okay. So if we want a function that gives us finite values on both sides… The Atkinson function doesn't give us that: we have a choice between finite on one side or the other, but we cannot have finite on both sides. But there is a very simple way of obtaining a finite disvalue. We can combine a finite value of growth, with a strong inequality aversion, with a finite disvalue of poverty if we shift the axis a little bit, and then we are sure that we have a finite disvalue for every level of poverty. It comes by adding a term to this function here. This gives you a finite disvalue, but it can still be very large, depending on the parameters: if you take a small parameter B here, then you can still have a very large disvalue, and if you take a high value for A you can still go to a large disvalue. So that doesn't eliminate the problem of the computation being driven by large disvalues; it does not totally eliminate the prospect of having dismal perspectives. And just to give an idea of how that could reflect on our evaluation: even if we adopt this idea of finite disvalue, I think we should still worry for the future. I was talking about poverty today. We have about 600 million poor people today, and that has decreased a lot in a few decades, and it's presumably possible to eradicate poverty in a generation.
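[Editor's note: one minimal way of implementing the shift, under the assumption that the slide uses something of this form.] With strong inequality aversion $A > 1$, add a constant $B > 0$ inside the function:

$$u(c) = \frac{(c+B)^{1-A}}{1-A}.$$

Then $u$ is bounded above by $0$ (finite value of growth), and $u(0) = B^{1-A}/(1-A)$ is finite (finite disvalue of extreme poverty), but of very large magnitude when $B$ is small, which is the point that large disvalues can still dominate the computation.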
Yes, Hilary. Yes.
HILARY GREAVES: (31:51) Just a question on the previous slide. It seems like a more obvious way to get finite numbers on both ends is just to say, well, relative risk aversion doesn't have to be a constant. It was only ever suggested as a mathematical convenience anyway.
MARC FLEURBAEY: (32:04) Okay. Yeah. Perhaps. Yeah.
HILARY GREAVES: (32:07) We don't know how the thing is going to pan out.
MARC FLEURBAEY: (32:09) Right. Yeah. Of course this is just an example of a function. I think it gives us some flexibility, but yeah, you could have a change in the A parameter; you could do that. I don't have a strong view on how to put a limit on the negative disvalues; I don't know how to do that. So I'm a bit skeptical about infinite disvalues, but I'm also a bit wary about putting a limit on this axis.
Okay. So I think we can still have a lot of worry about the future, because we may have shocks in the future that will trigger a rise in poverty equivalent to what we have now. And if we have catastrophic risk, it's really something that can be even more important. When I say catastrophic here I mean in the usual sense: big, big events may really have a big impact in the future. If I may do a little bit of self-advertising, in my group in Princeton we have done an exercise looking at what happens when the poor in the future suffer more than the rich from climate impacts. That can really shift the analysis. What we've done is to take the parameters of Nordhaus’ analysis but introduce inequality within the regions of his model. This is a model with 12 regions in the world. So if we introduce inequality then we can replicate… These are emission paths, with the sort of path that Nordhaus himself was recommending and a path that Stern was recommending with a lower discount rate. What we've done is to keep the parameters, including the discount parameter, of Nordhaus, but introduce this possibility that the poor (34:00) would suffer more from climate impacts. And that's the curve here: in fact we fall back on the same urgency as the Stern path. So just by introducing the possibility of a catastrophic impact on the poor in the future, even with a strong discount rate, we get the same kind of conclusions. That's just an example of where this framework can help. Now I should go quickly on the next point, because it is really speculative, and just to have some fun with you. Hilary was talking about the large number of possible future generations, and indeed I think this is something that should loom large in our computation, because if we don't have infinite disvalue in suffering and in catastrophes, then we have to take account of the size of the positive side of the computation. And if that is sufficiently important, then it is something that may compensate a lot of suffering.
And so here there is the question of whether we can have a sustainable situation, for a very long time, that is really abundant. We can be skeptical of the curves that I've shown, because the idea that consumption per capita at the world level would grow to around $800,000 per year is something that is difficult to imagine now, but that's what the curve was saying. So there is this idea that we could perhaps reach an age of abundance, and it has repeatedly been something that has worried people or given people hope. You may remember Keynes's discussion of economic possibilities for our grandchildren, where he was imagining that GDP per capita would be multiplied by eight, and he was actually pretty right about what happened for Western Europe. (36:00) It's not quite that at the world level, but almost: it's five. But he was very worried that people would be so comfortable that they wouldn't know what to do with their time. He was thinking of a world in which people would work 15 hours a week, and when you look at the curves of working time we are way beyond that. Even Germany, with its reputation for hard work, in fact they work the least in the [inaudible 36:23]. [laughter] They are still working twice as much as Keynes was considering for a possibility [inaudible 36:31]. By the way, the Mexicans, who have this bad reputation with my president in the US, they are clearly the most hard-working people.
Okay. So perhaps we should reconsider the notion of abundance, and there may be ways in which we are actually close to abundance. If we could reduce inequalities and quickly find efficient technologies to use renewable energy, which is actually very abundant, we can probably imagine something like Keynes's vision becoming true. But we might have to think about preferences. It might be that with the current type of consumer preferences that people have, everybody at some point wants to have his own private jet, and that's probably not the kind of abundance we can imagine. So we should probably think of something a bit different, more invested in knowledge and social relations and this sort of thing. Reorienting consumer preferences may be necessary, but apart from that it looks like we are close to abundance. And so that's the reason to think that perhaps the prospect of abundance in the future, of a long-term good situation for the future, should loom large in our computation. But all of that is very risky: we have the risk of future poverty and the prospect of (38:00) abundance. So the question is how to assess risk, and that's where I'd like to spend the rest of my talk. So far there was no risk in my analysis, and we need to put that in the picture.
So the function that I mentioned in the beginning, discounted utilitarianism, is one where people are usually comfortable taking the expectation of it, and indeed Harsanyi famously argued that this is the right way to deal with uncertainty. He basically had a theorem based on the idea that the expected sum of utilities is also the sum of expected utilities, which is very nice, and I'll say more about that in a minute. So it's a potentially very convenient criterion. There is one issue, though, which is that it's a sort of straitjacket for the evaluation of risks, because it forces your risk aversion over consumption to be equal to your inequality aversion. The A parameter that I introduced as the coefficient of inequality aversion becomes, if you take the expectation of the social welfare function, your risk aversion over consumption. So that might be okay. If you look at the typical range of values that people think about for risk aversion, between two and five, and at the range of values that people think about for inequality aversion, perhaps two is around the upper limit of what people are considering, but we could probably convince them to go a little bit beyond that. So it might be okay. But we have many philosophers here, and in terms of philosophical rigor it's a bit bizarre that we would have these two parameters coincide. It would make much more sense to at least disentangle them and have some flexibility, because the considerations (40:00) guiding the choice of risk aversion, presumably quite empirical considerations about people's risk attitudes, are probably very different from the considerations that will guide our discussion of inequality aversion, trading off between people. I know there are some veil of ignorance arguments, which are very popular, and which suggest that inequality aversion should reflect risk aversion. But the veil of ignorance arguments are questionable, because somehow they want to borrow the trade-offs people are willing to make between possible future selves and use them for trade-offs between actually existing people. And again, the same criticism: it's not clear that one trade-off is the same as the other, so it's not clear that we can really do that.
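[Editor's note: the "straitjacket" in formulas, for reference.] Taking the expectation of discounted utilitarianism gives

$$E\Big[\sum_t \frac{u(c_t)}{(1+\delta)^t}\Big] = \sum_t \frac{E[u(c_t)]}{(1+\delta)^t},$$

so the same function $u(c) = c^{1-A}/(1-A)$ operates both across people and across states of the world: its curvature $A = -c\,u''(c)/u'(c)$ is at once the coefficient of inequality aversion and the coefficient of relative risk aversion.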
And so how can we disentangle the two if we wanted to? There is one rationality principle that I'd like to stick to as much as possible, which is to satisfy statewise dominance, or perhaps a bit more, eventwise dominance, or at least stochastic dominance, or even to assume that we are using expected utility in the social evaluation. So let's take that; Harsanyi was very strong on the fact that we should stick to that for social evaluation. That excludes some popular functions which take a form that is not amenable to an expected utility, and so those functions will have to be abandoned. But then we have this theorem by Harsanyi: if we take an expected utility at the social level and combine that with an ex ante Pareto principle, then utilitarianism is the only criterion that works. And this is because of the following: if you take the expected value of something, and it has to be something that depends on the expected utilities of the people, then you get this linear structure, so you get utilitarianism. And so the question is: is ex ante Pareto so compelling? It is not totally clear, and I will be quick on that, but perhaps we could accept relaxing Pareto a little bit, because we are talking about ex ante Pareto. It's not a situation of full information about the final consequences; it's only full information about the probabilities, potentially with [inaudible 42:10]. So it's a setting where the Pareto principle may be slightly less compelling. And here there is this idea from Stéphane Zuber… It's almost a joint presentation with Stéphane, who is in the room, because I'll be relying a lot on our joint work. We have considered the idea of taking still the utilitarian criterion, but applied not to people's personal consumption but to the equally distributed equivalent consumption. So you take some inequality aversion and you compute the EDE over consumption, and that's where you apply people's… sorry, risk attitudes, the VNM utility function. So that amounts to restricting the Pareto principle to the cases in which there is no ex post inequality, because when there is no ex post inequality the EDE is the same as people's actual consumption, and so you are respecting the risk attitudes in this case. When you are not respecting the risk attitudes is when people's risk-taking activities have some impact on inequalities, and that's where you may want to be a bit more careful. Now this criterion is potentially problematic for practical reasons. It's not separable across subgroups of populations, which means that if you want to do an evaluation of the kind of thing I have shown, the scenarios of Bill Nordhaus, in fact you would want to take account of the whole history of humanity, from the beginning, perhaps 70,000 or 200,000 years ago depending on the estimate, up to way beyond the five-century horizon. That is a bit of a daunting exercise to think about, and so that looks very bad for this [inaudible 43:54]. I must say I've been very worried about this being practically inapplicable because of that. (44:00) But in fact what happens with this kind of criterion is that, even if you want to take account of the whole history of humanity, the relative priorities between people in a given state of the world are just as in the standard case.
So if you have a separable social welfare function for computing the EDE (if you take the Atkinson social welfare function, for instance), then the relative priority of two individuals in the same state of the world will not depend on what happens to other people. So you are in the usual case. The only case in which you have something that is really not separable is when you look across states of the world, and then what happens is that people's relative priority will be affected by how the whole path of humanity looks in that state of the world. If that state of the world is very good, the EDE will be high, and so people's priority at a given consumption level will be greater, because they are more likely to be among the disadvantaged people in a high-EDE state. And so that will shift your priorities between states of the world toward the good states. It's as if you were doing an evaluation where you shift your probability mass towards the good states, as if you were optimistic in your evaluation. But apart from this shift, it's not a big deal in terms of evaluation. So it's actually possible to do it. Of course, "possible" with quotation marks, because you still need to evaluate the EDE over the whole scenario, and that is still something that's hard.
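[Editor's note: a schematic rendering of the criterion from the joint work with Stéphane Zuber, as described here; the notation is ours.] Writing $c_e(s)$ for the EDE of the consumption distribution in state of the world $s$ (computed with inequality aversion $A$), and $v_i$ for individual $i$'s VNM utility function, the criterion is

$$V = \sum_i E_s\big[v_i(c_e(s))\big].$$

Pareto is respected whenever there is no ex post inequality, since then $c_e(s)$ coincides with everyone's actual consumption; and the non-separability across states appears because each person's marginal weight in state $s$ depends on the whole path of humanity through $c_e(s)$.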
I wanted to mention the topic of catastrophe avoidance, which is something that is interesting here. In this criterion you have your risk attitudes in the utility function and your inequality aversion in the EDE. Depending on whether risk aversion is greater than inequality aversion (46:00) or not, you may have aversion to catastrophes or not, and a catastrophe in this case is when you have the bad outcome for everybody. When you have the greater risk aversion, you will want to handle risk by concentrating the risk on so-called suicide patrols: you will focus the risk on some people in order to make sure that what happens at the macro level is more or less stable, because you have this risk aversion at the global level. Otherwise, if you care more about inequalities than about risk, it will be the opposite: you want to spread the risk across people so that everybody is in the same boat, and you don't create inequalities, and you don't have suicide patrols who are suffering. So that's the thing.
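[Editor's note: a small numerical sketch of this point with made-up numbers; it evaluates the criterion just described with an Atkinson EDE and power VNM utility.]

```python
def ede(c, A):
    """Equally distributed equivalent, Atkinson inequality aversion A (A != 1)."""
    return (sum(x ** (1 - A) for x in c) / len(c)) ** (1 / (1 - A))

def v(x, g):
    """VNM utility over the EDE, relative risk aversion g (g != 1)."""
    return x ** (1 - g) / (1 - g)

# A loss must fall on someone. Concentrate it (one victim per state,
# stable macro outcome) or correlate it (everyone shares the same fate)?
concentrated = [(1, 9), (9, 1)]   # state-by-state allocations to two people
correlated   = [(1, 1), (9, 9)]
for A, g in [(0.5, 3.0), (3.0, 0.5)]:
    V_conc = sum(0.5 * v(ede(c, A), g) for c in concentrated)
    V_corr = sum(0.5 * v(ede(c, A), g) for c in correlated)
    print(f"A={A}, gamma={g}: concentrated={V_conc:.3f}, correlated={V_corr:.3f}")
# gamma > A favors concentrating the risk ("suicide patrols");
# A > gamma favors spreading it so everybody is in the same boat.
```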
So I'll perhaps skip this; if you want, we can come back to this illustration of the point.
Now there are two puzzles that I'd like to mention, and here I will share some doubts and open questions. One is: if different individuals live in different states of the world, should their risk attitudes matter, given that they themselves don't live in a risky situation? It's a sort of stylized thing, but imagine that there are different states of the world, and in these different states of the world you have different individuals, different populations, which may be the case in the far future. Once we have different mating patterns across human beings and they have different children because of different kinds of events, at some point we have different populations in different states of the world. So in these states of the world, an individual looking at what happens in his or her own state of the world doesn't see any risk. But viewed from now, we see a lot of risk. So we may have very different perspectives: from the [inaudible 48:00] point of view of the individuals themselves there is no risk, but at the macro level we feel that there is a lot of risk. So that's the question: where should we take our risk attitudes from?
And the other puzzle is that, if populations differ across options, the individual VNM utility functions, with the criterion I am proposing here, will affect the interpersonal comparisons across the options even in the absence of risk. If you use this criterion, you will look at the utilities of the people applied to the EDE. So if we are considering changing the composition of the population across states of the world, then we'll compare things in terms of the utilities of the people, and that again will give a big role to their risk attitudes, their VNM functions, even if they themselves don't bear any risk.
And so I'd like to share, and I hope Stéphane will not be bothered by my sharing, some ongoing work which is still unfinished and by which we are still puzzled. We have this strange impossibility, which goes like this. The idea is to focus on the question of ranking pairs which consist of a bundle of consumption and a VNM utility function. I will assume the bundles of consumption have only one dimension, only one good, to simplify. Even in this very simple world, it's impossible to satisfy these three axioms at the same time. This one is just statewise dominance, the standard dominance when we face risk. This one is ex ante Pareto for one-person societies: forget about inequalities; when you have only one person in the world there is no inequality question, so why not respect ex ante Pareto. They have risk attitudes; we accept that. And finally, the last one is the idea that when we make comparisons in the absence of risk, (50:00) perhaps we should not be interested in the risk attitudes of the people, because there is no risk. That's an axiom that has been repeatedly proposed in the literature by various authors, and indeed, if you look at the big literature on evaluating allocations without risk, usually this literature does not take into account the risk attitudes of people. And so here is the proof of the impossibility. I'll go quickly, but I hope I can give you the intuition. You have two individuals here, I mean two possible incarnations of an individual: the red one has a low risk aversion and the blue one has a high risk aversion, and we are looking at the evaluation of this situation. Sorry, I did not explain what the axes are: here we have consumption in State one and consumption in State two, and I have only one good, but since I have two dimensions these bundles are two-dimensional. So this bundle here which is red means that we are looking at this bundle combined with these preferences: Z with the red color means this bundle with these preferences. And here we are on the ray of no risk, the certainty bundles, so we have the situation where individual risk attitudes shouldn't matter, because there is no risk for them. And by Pareto, an individual here would prefer this point, whatever his preferences are. So obviously Z should be ranked above both W's. Now if you look at these bundles here… These bundles have risk, but if we think of statewise dominance we have to look at the evaluation in every possible state. (52:00) If you are in State one, then consumption is higher in X than in Y, and there is no risk once we are in State one, so we should prefer X to Y in State one; and for the same reason we should prefer X to Y in State two. So in every state we should prefer X to Y, and by statewise dominance we should therefore prefer the whole bundle X to the whole bundle Y, even though they are associated with these different preferences. And then we have a problem, because the other comparisons close a cycle: Z is preferred to both W's, W is as good as X by Pareto, X is better than Y, and we are back to square one. So we have a cycle.
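[Editor's note: a concrete instance of the cycle with hypothetical numbers, reconstructed from the argument above; the figures are ours, not from the slide.] Take two equiprobable states and write a prospect as $(c_1, c_2)$. Let red be risk neutral, $u_r(c) = c$, and blue strongly risk averse, say $u_b(c) = -1/c$. Consider $X = (2, 100)$ held by blue and $Y = (1.9, 99)$ held by red. In each state taken alone there is no risk, and $X$ gives more ($2 > 1.9$ and $100 > 99$), so with independence of risk attitudes, statewise dominance ranks $(X, \text{blue})$ above $(Y, \text{red})$. But by ex ante Pareto in a one-person society, $(X, \text{blue})$ is indifferent to blue's certainty equivalent, about $3.9$, while $(Y, \text{red})$ is indifferent to red's, $50.45$; comparing these riskless levels, where risk attitudes are again supposed not to matter, ranks $(Y, \text{red})$ above $(X, \text{blue})$. A cycle.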
And so the question is which axiom to drop, since we do have to drop one. If you drop statewise dominance, you can, for instance, rely on the certainty equivalent as a way of comparing people. That is a bit like the social welfare functions that rely on ex ante evaluations, so that's potentially popular, but you have to drop this rationality condition, and that is a bit of a serious problem, I would say. You could drop Pareto, and then you completely abandon any idea of relying on people's risk attitudes, and you can do that very easily: you take a reference VNM function that doesn't reflect people's risk attitudes, you apply it to a consumption index (or at least here, where we have only one good, that works), and you can have a social evaluation that is the expected sum of these things. I'm inclined to suggest, but I don't know what Stéphane will say, that perhaps the independence of risk attitudes is the weakest, the least compelling condition, and if we drop that, then perhaps we can (54:00) accept taking the VNM functions of the people. But the problem is that we have to compare them across people having different VNM functions. So we need to scale these VNM functions. And that would be associated with the social welfare function that I've just presented, which takes the utilitarian sum applied to the consumption EDE.
So now, how should we scale these functions? Stéphane and I have a proposal about that, which is to scale them in a way that looks like the figures I've already shown, for when people have different risk aversions. So I'm no longer looking at inequality aversion; these are potential VNM functions that people can have. The suggestion we have is to take a poverty level and to scale the functions so that the level and the slope of all these functions are the same at this poverty level. That has two nice properties. If you add people at the poverty level, it's always a matter of indifference: adding such people is not bad or good, so that's the level where you have indifference. If you add people who are below, it's worse; if you add people above, it's a good thing. So that's a critical level of consumption, if you are familiar with the literature on population ethics. What's more controversial is that if we have the choice between adding one more person to society, and this person could be more or less risk-averse, then we should prefer adding a person with less risk aversion, because with this scaling the more risk-averse functions are below the others. So risk aversion comes out as a sort of handicap in producing utility, and that may be controversial. Okay.
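[Editor's note: in formulas, one way to implement the proposed calibration; the notation is ours.] Pick a poverty threshold $\bar c$ and rescale each individual's VNM function affinely so that all the functions share the same level and slope there:

$$\tilde u_i(c) = \frac{u_i(c) - u_i(\bar c)}{u_i'(\bar c)}, \qquad \tilde u_i(\bar c) = 0,\; \tilde u_i'(\bar c) = 1.$$

Adding a person at $\bar c$ then contributes zero to the sum (indifference), adding someone below is bad, above is good; and since all rescaled functions are tangent at $\bar c$, the more concave (more risk-averse) ones lie below the others everywhere else, which is the controversial "handicap" feature just mentioned.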
So to recap, the proposal that I'm making here, and I'm not sure it flies, but let me make it for the sake of the discussion, is to take the sum of expected VNM utilities (56:00) applied to a consumption EDE, and to scale the VNM functions in the way I just showed. Okay.
Okay. These notes are just… I don't have to say more than that. I think I'm close to the end of the time, so let me quickly mention the question of deep uncertainty. So far I was assuming, I didn't say it explicitly, that we had well-defined probabilities. If we don't have that, if we have ambiguity, so the probabilities are not well-defined, then what should we do? There is a debate going on, especially about climate policy, concerning ambiguity aversion. Ambiguity aversion is popular among not just decision theorists but also climate economists, and the problem is that if you use ambiguity aversion in the standard way, the standard criteria that you find in the literature violate the basic rationality axioms. They don't take the form of an expected utility, obviously, but they violate even more basic things like eventwise dominance. So that may be a worry, and I have a working paper where I show that we can actually incorporate ambiguity aversion in a form of expanded or enriched expected utility approach, but then it appears as something that is put into the utility value of the consequences, and it's a sort of phobia. I'm just making up the term here: asapheia is ambiguity in Greek, so it would be a sort of phobia about ambiguous situations that would dampen, that would decrease, people's utility whenever their path crosses an ambiguous situation. And that is something that may happen to people; that is totally fine. Whether it should happen to policymakers is less obvious to me, and I'm worried in particular that it triggers fear of information, because you can learn things that increase (58:00) ambiguity. This fear of information makes sense for people who are phobic, but it may make less sense for a policymaker. Another thing that's bizarre about ambiguity aversion is that you can randomize your decision in order to reduce ambiguity. I'm not sure it would be good advice to policymakers to say, “Oh, if you are ambiguity averse, you just have to randomize before you make your decision.” That looks bizarre. So I'm a bit skeptical about ambiguity aversion in this setting, so let me skip these illustrations.
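[Editor's note: a classic textbook illustration of the randomization point, not taken from the talk's slides.] Suppose the probability $p$ of an event $E$ is ambiguous, anywhere in $[0,1]$. Act $f$ pays 1 on $E$ and 0 otherwise; act $g$ the reverse. Under maxmin expected utility,

$$\min_{p} E_p[f] = 0, \qquad \min_{p} E_p[g] = 0, \qquad \min_{p} E_p\big[\tfrac12 f + \tfrac12 g\big] = \tfrac12,$$

since the fair coin mixture pays $\tfrac12$ in expectation whatever $p$ is: the ambiguity-averse evaluator strictly prefers flipping a coin before deciding, which is the oddity for a policymaker described above.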
And then there is the question: if we insist on being orthodox and going all the way to expected utility, you have to make up your probabilities, your poetic [?] beliefs, and there is an interesting literature on that which I will skip. It's an interesting problem: how do you cook up probabilities when you don't have them?
So let me briefly conclude, to open the discussion, and just summarize what I have said. Can we discount the future heavily? Yes, if the future is rich, but bearing in mind that there is a probability of catastrophe in the sense of having a lot of poor people in the future. That's how I think we should really think about catastrophe, not just as a macro event or a kind of extinction-like event. So there is no Dismal theorem, in the sense that the world is already dismal and yet it's not the end of it, if I may say. But we have potentially large negative values to take account of, so we are still in a situation which may not be comfortable for applications and empirical work. On the question of risk, my conclusion would be: either we rely on utilitarianism with risk aversion and inequality aversion at the same, reasonably high, level, or we want to (60:00) disentangle them, and then we can do it with this expected sum of utilities applied to the EDE. As I said, it's not separable, but it's potentially manageable. About the interpersonal comparisons I'm less sure; that requires scaling the VNM utilities, with the puzzles I have shown, and I refer you there. Let me conclude on a more personal note. My impression is that it seems possible to imagine a lot of good things happening in the future, prosperity and happiness and all that, for most people in many, many generations, even if we know that it's unlikely to go on forever. But we are likely to miss this opportunity. And if we disappear soon through our failure, something that, given the context, we could perhaps call the “Hexit,” the exit of the human species [laughter], the positive note would be that the earth will remain in similar conditions for 500 million years. That's a lot of time for other species to take over, and perhaps that's our ultimate hope. If we fail badly, we might not be the only hope for a good future. Thank you very much.
[Applause]