Will MacAskill | What We Owe the Future
WILL MACASKILL: (00:00) Hi. Thank you all for attending this lecture, and thank you so much, Professor Pinker, for inviting me to speak. I'm sorry that these are trying times and that I'm not able to be there in person. For the last ten years, I've spent pretty much all my time focused on this question of effective altruism: how can we use our time and money to do as much good as possible? In this lecture I'm going to talk about what I regard as the most important realization I've had over that time period. It's what I call Longtermism, which is the claim that if we want to do the most good, we should focus on bringing about those changes to society that do the most to improve the very long-run future, where the time scales involved are much longer than people normally think about. I'm not talking about decades; I'm talking about centuries, millennia, or even more. This lecture is going to make the case for longtermism on the basis of four premises.
(01:09) So, the first premise, I hope, is going to be relatively uncontroversial among this audience. It's the idea that FUTURE PEOPLE MATTER, MORALLY SPEAKING. On this view, it doesn't matter when you were born. People in the future – people in a century's time, a thousand years' time, a million years' time – their interests count just as much, morally speaking, as ours do today. And like I say, I hope this is a relatively intuitive claim, but we can also use a little thought experiment to help show how intuitive it is. Imagine you're off on a hike on a trail that's not very often used, and at one point you open a glass bottle and you drop it and it smashes on the trail. You're not really sure whether you're going to tidy up after yourself or not. Not many people use the trail, and you decide, "It's not worth the effort. I'm just going to leave the broken glass here," and you start to walk on. But then you suddenly get a call. You pick it up – it's a Skype call – and it turns out it's from someone 100 years in the future. They tell you that they walked on this very path, accidentally put their hand on this glass and cut themselves, and they're now politely asking you to tidy up after yourself so that they won't have a bloody hand. Now, you might be taken aback in various ways by getting this video call, but I think one thing you wouldn't think is, "Oh, I'm not going to tidy up the glass. I don't care about this person on the phone. They're a future person, a hundred years from now. Their interests are of no concern to me." That's just not what we would naturally think at all. If you think that you've harmed someone, it really doesn't matter when that harm occurs.
(03:06) So that's the first premise, and where this premise gets real bite is in conjunction with a second, empirical premise: THAT THERE ARE IN EXPECTATION (so not certainly, but with a significant degree of probability) VAST NUMBERS OF FUTURE PEOPLE. To see this, we can consider the entire history of the human race.
(03:33) Homo sapiens came on the scene about 200,000 years ago. We developed what archaeologists regard as behavioural modernity about 50,000 years ago, developed agriculture about 10,000 years ago, built the first cities a few thousand years ago, and just a few hundred years ago – really a blink of an eye by evolutionary timescales – we entered the industrial era. So that's how much time is behind us. But how much time is yet to come?
(04:03) Well, the typical mammalian species lasts about a million years, suggesting we still have 800,000 years to come. But Homo sapiens isn't a typical mammalian species. We have a hundred times the biomass of any large wild animal ever to have walked the earth; we live across an enormous diversity of environments; and, most importantly, we have knowledge, technology, science. So there's really no reason why, if we don't cause our own untimely end before that point, we couldn't last much longer than a typical mammalian species.
(04:48) The earth itself will remain habitable for about a billion years before the expanding sun sterilizes the planet, and if someday we took to the stars, then we would be able to continue civilization for trillions more years.
(05:07) Now, what's important for this argument is not any of the particular numbers. It's the fact that on even the most conservative of these estimates, the size of the future is absolutely enormous. The number of people who are yet to come is astronomical. We can visualize this.
(05:24) Let's use one of these stick figures to represent approximately 10 billion people. There have been about 100 billion people in the past, and there are about eight billion people alive today.
(05:36) Even on the conservative end of these estimates, the people who are yet to come would outnumber us by thousands or even millions to one. So that's my second premise: THAT THERE ARE IN EXPECTATION VAST NUMBERS OF FUTURE PEOPLE. And that's so important because, when combined with the first premise, it means that in the aggregate their interests matter enormously. Anything we can do to make future generations' lives better or worse is of enormous moral importance.
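To make those ratios concrete, here is a minimal back-of-the-envelope sketch in Python. The ten-billion-births-per-century rate is an illustrative assumption of mine, not a figure from the lecture; the past and present population figures and the two horizons are the ones given above.

```python
# Back-of-the-envelope count of future people under the horizons above.
# BIRTHS_PER_CENTURY is an illustrative assumption (roughly today's
# population born each century); the other figures are from the lecture.

PAST_PEOPLE = 100e9           # ~100 billion people have lived so far
BIRTHS_PER_CENTURY = 10e9     # assumed birth rate, people per century

horizons = {
    "typical mammal span (800,000 more years)": 800_000,
    "habitable Earth (1 billion years)": 1_000_000_000,
}

for label, years in horizons.items():
    future = BIRTHS_PER_CENTURY * (years / 100)
    print(f"{label}: ~{future:.0e} future people, "
          f"{future / PAST_PEOPLE:,.0f}x everyone who has ever lived")
```

On these assumptions the ratios come out at roughly 800 to one and a million to one, matching the "thousands or even millions to one" range in the talk.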
(06:15) The third premise, though, is that society does not currently take those interests seriously: FUTURE PEOPLE ARE UTTERLY DISENFRANCHISED. Future people cannot bargain with us. Those who will be harmed by our burning of fossil fuels and our emissions of greenhouse gases can't sue us, can't negotiate with us, can't bill us for the liability. More fundamentally, future people don't have a vote, so they can't affect the political decisions we're making today that might impact them. And so I actually think we can see concern for future generations as the next step in the moral progress we've seen over what is now thousands of years – what Peter Singer described as the expanding "moral circle".
(07:07) Perhaps in the hunter-gatherer era the moral circle was quite confined: just ourselves, our family, our community.
(07:15) With the development of nation-states and big-gods religions, this expanded to concern for everyone in your nation or ethnic group or religious group.
(07:27) In the last couple of centuries this has expanded again. We start to take seriously the idea of cosmopolitanism: that all nations and ethnicities – everyone in the world – have equal moral value, no matter what country you are born into.
(07:43) I think the final stage, the next step in this progression, is to care not just about everyone in the world, but about everyone no matter when they live – all generations. And what we should be thinking is that there is this great (08:00) injustice: that so many people who are yet to come are entirely left out of the decisions we make, whether in businesses or in politics or elsewhere. Okay. So those are my first three premises, and I regard all of these as relatively uncontroversial.
(08:23) The final premise, which I'll spend the rest of this talk on, might be the most controversial. It's that THERE ARE WAYS (IN EXPECTATION) TO POSITIVELY IMPACT THE VERY LONG RUN. You might think this is just a very bold thing to believe: it's hard to predict events even a few decades out, let alone centuries or millennia to come. But as we'll see, I think we actually have good grounds for believing this, and even though any effect we could have might seem very small, over the long run it can add up to an enormous effect. The analogy I want you to have in mind is a cruise liner that's leaving from London and heading to New York. The cruise liner represents civilization, and we are just this little group of people trying to make a difference.
(09:23) So we're like this swimmer here, trying to push the cruise liner and alter its trajectory. You might think that such a small amount of force on such a large ship couldn't possibly make a difference to the trajectory it takes. But with about a day's worth of pushing – a day's worth of swimming for the swimmer (I did ask some physicist friends to do the math on this for me) – you could rotate the ship enough to make the difference between it ending up in New York and it ending up in Venezuela.
(09:59) So I'm going to talk about three categories of ways in which we can influence the very long run. The first will be most familiar to you: Climate Change. You've already had a lecture on this, so I'm only going to deal with it very briefly. Second is the idea of Civilizational Collapse – some very negative event that could result in us falling back to a pre-industrial era and never recovering – and I'll particularly focus on how that could happen as a result of war or pandemics. And then finally I'll talk about Values Change.
(10:38) So first is Climate Change, which is the future-impacting issue that's most well-known today. The striking thing about climate change, if you ever look at the economic and policy evaluation, is that that evaluation focuses on impacts over approximately the next century. It really doesn't look at impacts beyond that horizon. But from this longtermist perspective, that's completely unjustified. The CO2 that we emit has an average atmospheric lifetime of 30,000 years: even though most carbon dioxide is reabsorbed into the oceans after a few centuries, there's a very long tail, and even after 100,000 years, 7% of the CO2 that we emit will still be in the atmosphere. And there are very many very long-lasting impacts of climate change.
(11:34) Irrevocable changes include species loss. Climate change will undoubtedly lead to the loss of many species, and if we don't preserve their DNA, then once you lose a species you can't get it back.
(11:50) Second is the loss of coral reefs. It's likely that coral reefs will die off within just a few decades from existing climate change, and it takes tens of thousands of years for them to replenish themselves.
(12:04) Another category is the loss of ice sheets and sea level rise. Even if we were to entirely stop fossil fuel burning and carbon dioxide emissions today, sea levels would continue to rise for thousands of years. The Greenland ice sheet is possibly already past the tipping point. Certainly if we get to three degrees of warming, there will be a very slow positive feedback loop where the melting of the Greenland ice sheet leads to more melting, which leads to more melting, and over the course of about a thousand years it will entirely melt away, raising sea levels by about seven meters.
(12:45) The final category is the impact on the economic growth rate, which affects us too. Economists normally model climate change as a loss in the level of economic productivity, not as an impact on how fast we're growing. But recent analyses have called that assumption into doubt, and this makes a really big difference. Even the difference between the economy growing at 2% per person per year and, as a result of climate change, at 1.8% per person per year becomes, after a couple of centuries, the equivalent of a catastrophe that wipes out half of the world's wealth. Because you've already had a lecture on climate change I won't dwell on this, but it's pretty clear that from a longtermist perspective climate change becomes among the very most important issues.
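As a rough check on how such a small growth-rate difference compounds, here is a minimal sketch in Python. It is pure compounding of the two stated rates, nothing more; on these numbers the gap reaches a factor of two after roughly 350 years.

```python
# How large does the gap between a 2% and a 1.8% per-person growth path
# become over time? (Illustrative compounding only; real models of
# climate damage are far more involved.)

def income_ratio(g_high, g_low, years):
    """Ratio of per-person income on the faster vs. slower growth path."""
    return ((1 + g_high) / (1 + g_low)) ** years

for years in (100, 200, 300, 350):
    r = income_ratio(0.02, 0.018, years)
    print(f"after {years:>3} years the faster path is {r:.2f}x richer "
          f"(the slower path has in effect lost {100 * (1 - 1 / r):.0f}% "
          f"of its income)")
```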
(13:46) The second category I want to discuss, as a generalization, is Civilizational Collapse. If there were some event that had an extremely negative impact on civilization, that impact could potentially persist indefinitely, in one of two ways. At the most extreme, if there were an event that literally killed everyone on earth, then Homo sapiens would be like those species that we're driving to extinction: once we go extinct, that's not something we can come back from. Or a catastrophe could set us back far enough – to a kind of pre-industrial state – that we would never be able to recover. Treating this as a category, the natural way of identifying the most likely causes of civilizational collapse is to look at history.
(14:49) So if we look at the deadliest events ever – the worst things that ever happened as measured by death count, here as a proportion of the world's population – it's quite striking that two categories really leap out. The first is war and the second is pandemics. And there's one other: Genghis Khan, who was responsible for killing almost 10% of the world's population. So I'm going to use war and pandemics as the two possible causes of civilizational collapse that could occur within our lifetimes, or within our zone of influence. And to emphasize: I'm not claiming that these are likely, just that the consequences would be so enormous that even a low probability means we should be paying real attention.
(15:42) The first category is War. It can seem almost unimaginable that there could be another war between great powers of the magnitude of World War I or II, or even greater, within our lifetimes, the reason being that we've lived through what historians call the Long Peace: 70 years of unusual levels of peace and stability at the international level. But I don't think we can be too optimistic, in the sense that I don't think we can rule out the possibility of another war between great powers – perhaps between the US and China, or between the US and Russia – in our lifetimes. Part of the reason is just that war between the major powers is absolutely a fixture of history, and if you look at how the death rate from war has changed over time, there's not much of a trend. It looks pretty flat, and the reason it goes up and down is that the death toll from war is so driven by the very worst wars – what statisticians call a power-law, or fat-tailed, distribution. So we have these 70 years of peace. I really hope that continues, and I really hope it's due to deep structural factors, but from a statistical perspective we can't rule out that we've just gotten lucky. This becomes particularly important because since 1945 the potential destructiveness of the worst wars has, I think, increased quite significantly, because of the invention of nuclear weapons.
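Before turning to the yield numbers, a brief aside on what a fat-tailed distribution means in practice. Here is a small Python sketch; the Pareto tail exponent is an illustrative assumption of mine, not an estimate from the historical data.

```python
# Simulate "war death tolls" from a heavy-tailed Pareto distribution and
# see how much of the total comes from the very worst events. With a
# tail this heavy the mean isn't even finite, so any finite sample is
# dominated by its largest draws, which is why the observed trend looks
# flat and noisy rather than smoothly declining.

import numpy as np

rng = np.random.default_rng(0)
alpha = 0.7                               # illustrative tail exponent
tolls = 1 + rng.pareto(alpha, size=200)   # 200 simulated wars

tolls_sorted = np.sort(tolls)
print(f"worst single war: {tolls.max() / tolls.sum():.0%} of all deaths")
print(f"worst five wars:  {tolls_sorted[-5:].sum() / tolls.sum():.0%} "
      f"of all deaths")
```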
(17:33) So here in the 1940s… Let this green box represent the yield of a conventional bomb, 0.02 kilotons.
(17:45) When we invented the first atomic weapons, we developed weaponry that was a thousand times more powerful.
(17:55) And what is much less well-known is that the move from the atom bomb to the hydrogen bomb was just as great a leap in destructive power as the move from conventional weaponry to the atomic bomb. The atomic bomb was a thousand times more powerful than conventional weaponry; the hydrogen bomb was a thousand times more powerful again. We subsequently built many thousands of such weapons, still have many thousands, and we really did come close to using them too. During the height of the Cuban Missile Crisis, John F. Kennedy, for example, put the odds of an all-out nuclear exchange between the US and USSR at somewhere between one in three and even. And if there were an all-out nuclear exchange, there's a significant probability, though no guarantee, of nuclear winter, where temperatures would drop five to ten degrees for a period of about a decade, and we really can't be confident about what would happen to civilization and society after that point. And this, of course, is only with the weapons we have around today; insofar as we're thinking about events that might occur in our lifetimes, over the next forty years, we have to think about the next generation of weaponry too, and I'll come to that in a second. So I think the possibility of war in our lifetimes is one of the potentially most important issues from this longtermist perspective.
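For reference, the yield numbers above compound like this. A trivial arithmetic sketch; the Hiroshima figure in the comment is my addition for scale.

```python
# Compounding the lecture's yields: conventional -> atomic -> hydrogen.
conventional_kt = 0.02              # conventional bomb, ~0.02 kilotons
atomic_kt = conventional_kt * 1000  # -> 20 kt (Hiroshima was ~15 kt)
hydrogen_kt = atomic_kt * 1000      # -> 20,000 kt, i.e. 20 megatons

print(f"atomic bomb:   {atomic_kt:,.0f} kt")
print(f"hydrogen bomb: {hydrogen_kt:,.0f} kt = {hydrogen_kt / 1000:,.0f} Mt")
print(f"hydrogen vs conventional: {hydrogen_kt / conventional_kt:,.0f}x")
```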
(19:27) The second category I'll talk about is Pandemics – obviously very timely at the current moment. We in the effective altruism community have been championing the cause of pandemic preparedness for many years. It's very clear that we are not sufficiently prepared for a pandemic, and it's a sad fact that that's now quite evident. And as with wars, which follow this fat-tailed distribution where the worst wars are far worse than a typical war, the same is true of pandemics: the very worst pandemics are much, much worse than merely bad pandemics. Strange though it is to say at this time, COVID is actually not one of the worst-case outcomes. The infectiousness of the disease could have been much higher, and the fatality rate could have been much higher too.
(20:26) To put this in perspective, here is a visualization of the worst pandemics over time, and here is COVID-19. Obviously this is just the death toll at the moment – it will increase a lot over the coming year – but it helps us appreciate just how great the death toll of the worst pandemics has been. In particular, the Black Death killed something like 50% of the population of Europe, and the Spanish flu, which occurred at about the same time as World War I, killed on the order of as many people as World War I and World War II.
(21:08) Now, in one way, risks from pandemics have gone down a lot over the last couple of centuries, for various reasons. We have the germ theory of disease; we understand how pandemics work; we have modeling; we have the ability to design vaccines and antivirals. All of these things make us a lot safer, a lot more robust. But there is another trend that potentially cuts against that, and that's developments in synthetic biology – namely, the ability to create novel pathogens. There are evolutionary reasons why the largest possible death toll from a pathogen is, in a sense, capped. If you're a virus, you don't want to kill your host, because that stops you from spreading, and so viruses in general tend to become less virulent over time. 20% of you in the audience, though you might not know it, have herpes – a very successful virus, and successful partly by not killing its host. So among viruses in general there's an anti-correlation between how deadly a virus is and how well it spreads. But if we are able to create viruses in the lab, that correlation no longer needs to hold, and so it would be perfectly possible, from a theoretical perspective and increasingly from a practical one, to create a virus with the following combination of three properties: the contagiousness of the common cold, the lethality of Ebola, and the incubation period of HIV. If you were to create such a virus – or perhaps hundreds of such viruses – you could infect everyone in the world with a virus that's 99% fatal, and we would only start finding out when people started dying, at which point it would already be too late. And this is technology that is really just around the corner. We're already able to make pathogens more infectious or more deadly – it's called gain-of-function research – and this is actually one of the most rapid areas of technological progress.
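To see why the long incubation period is what makes this scenario so dangerous, here is a minimal sketch under illustrative assumptions: a five-day doubling time and a thirty-day lag from infection to visible deaths. Neither number comes from the lecture; for an HIV-like incubation period the lag, and hence the multiplier, would be vastly larger.

```python
# If infections grow exponentially but deaths only appear after a lag,
# the outbreak runs far ahead of what anyone can observe.

doubling_time_days = 5    # assumed doubling time of infections
lag_days = 30             # assumed lag from infection to visible deaths

doublings_before_detection = lag_days / doubling_time_days
multiplier = 2 ** doublings_before_detection
print(f"by the time the first deaths appear, infections are already "
      f"~{multiplier:.0f}x the cohort now dying "
      f"({doublings_before_detection:.0f} doublings ahead)")
```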
(23:28) This graph shows how cheap DNA sequencing and synthesis have become over time. Note that the vertical axis goes up by a factor of ten at each step – it's a logarithmic scale. If progress were exponential, the curve would be a straight line, as it is for double-stranded DNA synthesis. But for sequencing it's super-exponential – actually faster than that. And if you compare it to Moore's law, progress in synthetic biology, as measured here, is considerably faster than Moore's law.
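To unpack the log-scale point: a quantity with a constant halving time falls exponentially, which plots as a straight line in log space, while a super-exponential trend curves ever more steeply downward. Here is a minimal sketch with illustrative halving times; these are not measured values from the graph.

```python
# log10 of a cost that halves every `halving_years` is linear in time,
# i.e. a straight line on a logarithmic axis.
import math

def log10_cost(start_cost, halving_years, t_years):
    """log10 of a cost that halves every halving_years."""
    return math.log10(start_cost) - (t_years / halving_years) * math.log10(2)

for t in (0, 5, 10):
    moore = log10_cost(1.0, 2.0, t)  # Moore's-law-like: halves every ~2 years
    fast = log10_cost(1.0, 0.5, t)   # faster, sequencing-like halving
    print(f"year {t:>2}: Moore-like log10(cost) = {moore:6.2f}, "
          f"faster halving = {fast:6.2f}")
```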
(24:06) And so this creates the possibility of malicious use of such viruses. We've been very fortunate to have seen very little in the way of biological warfare, but that could change in the future. Perhaps bioweapons will be the next generation of weapons of mass destruction. I want to emphasize again that if you look through history, you do find that civilization is surprisingly resilient. So I actually don't think it's very likely that some event would cause such a large catastrophe that we would never come back. But I also don't think we should be very confident that it wouldn't happen. It's not something we can take off the table, given the magnitude of the stakes.
(24:56) As an analogy, consider this graph. If you were living in this society, you might think things were going well – the population is booming. We could also put incomes over time on the same graph; they were increasing a lot. You might think, "Oh, we're just on this steady exponential trend. Things are going to keep getting better and better."
(25:17) But this graph represents the population of the city of Rome up until 0 A.D., after which it plateaued and collapsed. In fact, Rome only regained its peak population in 1930 – 1,900 years of collapse for them. So that's the possibility of civilizational collapse.
(25:42) The final category I want to talk about is Values Change. I actually think this might be the most important of all, and it's one I've only come to appreciate very recently. The key idea is that the moral norms a society holds are obviously very important to how valuable we think that society is. If a society keeps slaves, we might think it would be better if that society didn't exist at all. But the crucial thing is that changes to values seem to be surprisingly persistent over time. To see this, just ask yourself: what's the best-selling book this year? The answer is not 'Fifty Shades of Grey'. It's not 'Harry Potter'. It's the Bible – as it is every single year, and by a long way. And the second best-selling book every year is the Koran. This is just astonishing: the book that sells the most, and potentially has the most influence, was written between 1300 B.C. and 100 A.D. Obviously the interpretation of the Bible has changed dramatically over time, but there is still evidence that certain moral norms got locked in as a result of Christianity becoming so successful as a religion rather than other religions, and some of these norms have been so successfully promoted that we might just take them for granted now. Consider infanticide. Almost everyone in the modern world regards the killing of infants as utterly morally opprobrious, just beyond the pale. But around 0 A.D. it was quite a common practice in Greek and Roman society, regarded as just one of those things, part of life. Christians were quite distinctive in having a very strong prohibition against infanticide, and as Christianity spread, it carried that norm with it.
(28:07) But we don't need to rely on anecdotes, because there's a new and very exciting field of research – its leaders, such as Nathan Nunn, are based with you at Harvard – called persistence studies, which looks at effects that have had very long-run influence. And we find, over and over again, examples of moral change in the distant past still having an impact today.
(28:33) One example: some countries adopted the plow as a method of agriculture, and it turns out that those countries show lower participation by women in the labor force even today. Why is that? Well, plow agriculture, as compared to the use of the hoe or digging stick, involves large amounts of upper-body strength and bursts of power. So countries that used the plow had a much more unequal division of labor between the genders, and as a result more inegalitarian gender norms. That cultural trait seems to have persisted to the modern day, and it's not just lower participation of women in the workforce; it's also lower ownership of firms by women and lower political representation of women too. This phenomenon even seems to hold among second-generation immigrants to the US. So that's one example, but there are many more, and I'll just mention two.
(29:45) A second example: if you look across countries in Africa, those that had a larger proportion of their population taken as slaves in the colonial era show lower levels of social trust and lower levels of GDP per capita today. They're poorer today in part because of the slave raiding that occurred during the colonial era.
(30:07) A third example: if you look at those Indian towns that were trading ports in the medieval era, where there was a strong economic incentive to have a more cosmopolitan outlook, those towns demonstrate lower levels of conflict between Hindus and Muslims over the 19th and 20th centuries. The religious tolerance that developed in those trading towns has again persisted to the present day. Then, when we look to the future… Well, I don't think the world has got its values right yet.
(30:44) Some particularly important values, in my view, include Cosmopolitanism – the idea that everyone around the world counts equally, has the same moral worth. From a global perspective, this is still a very niche view.
(30:59) Second is Concern for non-human animals – concern for the suffering of animals as well as humans. This is another example where we take the norms we have for granted: we think this is the way it is, and so this is the way it must be. But other cultures differ. In India a third of the population is vegetarian. Perhaps if Hinduism had become a more dominant religion, perhaps if India had had an industrial revolution, we would have much greater concern for non-humans than we do today. Perhaps factory farming, as we currently practice it, would not exist.
(31:38) Third is Consequentialism – most people around the world still think of morality in terms of following rules rather than attending to the outcomes we're trying to achieve.
(31:50) Fourth is Liberalism – obviously much more widespread, but certainly not destined to continue into the future. It's particularly important, if we take this long-term perspective, to be able to course-correct: to engage in experiments in living, figure out what the right way to live is, and hopefully converge on morally better ways of structuring society.
(32:15) And then finally there's Longtermism itself. This is a realization that has only come about in the last few years – certainly only over the last couple of decades – and it's not widely held at all.
(32:27) So not only is the gap between where we might like society's values to be and where they currently are very great, and not only do we have evidence of changes in values persisting for many centuries, even thousands of years – there are also reasons to think values could become more persistent still in the future. The analogy here is between cultural evolution and biological evolution. For 600 million years we had a wide diversity of species of large land animals roaming the planet, with no species having much more power than another, and certainly none being completely dominant. But then one species, Homo sapiens, arose and became the dominant species on the planet, and that resulted in a certain amount of lock-in: it means the future of civilization is driven by Homo sapiens, rather than by, say, bonobos or chimpanzees that might otherwise have developed high intelligence.
(33:33) In terms of cultural evolution, we're at the stage where there's not yet a dominant culture – not yet one culture that's become sufficiently powerful to have taken over the globe. Francis Fukuyama suggested in his essay "The End of History?" that it was liberal Western democracy, but I think he called it a little too soon. If, though, over the coming centuries there is a convergence – for any of many possible reasons – on one particular set of values in the world, one particular culture, well then there may be very little reason thereafter to deviate from it. Okay.
(34:14) The last thing I'll talk about is: what can you do? If you're convinced by these arguments, and you want to use your life to do as much good as you can – in particular with an eye to the very long-run future – the most important decision you can make is what you use your career to do. Because of that, I set up an organization called 80,000 Hours, whose purpose is to help you work out which career path is the one where you can have the biggest positive impact, and to help you really think through this question. I believe Professor Pinker has highlighted the organization in the syllabus too, so thank you for that. It's called 80,000 Hours because that's the number of hours you typically work over the course of your life. For most decisions, it seems reasonable to spend about 1% of the time at stake on the question of how to use it. If you're deciding where to go out for dinner, you might spend five minutes looking up a restaurant; that seems reasonable. Applying that same logic to your career: if you spent 1% of your career deciding how to spend the rest of it, that would be 800 hours. I suspect most people don't spend anything like that length of time, but I think they should – it's that important. And so, via a podcast, via in-depth articles on the website, and via the small amount of one-on-one advising that we're able to do, 80,000 Hours is trying to (36:00) give you the best advice we can on how you can make a positive difference. I'll highlight four particular focus areas.
(36:09) I'll start with Global Priorities research, because that's what I do. This is research into the question of how you can do the most good. Is longtermism true? What are the arguments against it? If it is true, what follows? Which of the areas I've mentioned are most important? And what are the ideas I haven't mentioned – haven't even thought about – that are more important still? The Global Priorities Institute is [inaudible 36:34] to Oxford and is interdisciplinary between philosophy and economics, but this sort of research is conducted across the effective altruism community.
(36:46) The second area is pandemic preparedness, which is obviously particularly important at the present moment; 80,000 Hours actually plans to make it their main focus over the coming year. As I mentioned before, the effective altruism community has regarded pandemic preparedness as a top-tier cause for many years, and it simply is the case that if we had been better prepared from a policy perspective, if we had had better technology, then we wouldn't be in the mess we're currently in. And, as I say, future pandemics could be even worse. So I think the next few years could provide an unusual window of opportunity to make change in this utterly important area, because what we should be saying after this pandemic is over is, "Never again!" And that gives us unusual leverage.
(37:42) The third cause area I'll highlight, which I didn't get to talk about in this lecture because it really deserves a full lecture or many lectures of its own, is safety and policy around artificial intelligence. The thing I'll just note is that it's not only about the Terminator-style scenarios you might have heard of, though there are people who are concerned about AI takeover. There's actually quite a wide array of reasons to think that the very rapid progress we're currently seeing in deep learning could be important from a long-run perspective: potentially leading to much faster economic growth, disrupting the existing balance of power, being used as a vehicle for certain weapons technologies and thereby again altering the chance of war – and also, of course, the enormous benefits we could potentially get from advanced artificial intelligence.
(38:50) The final area I want to highlight, which I think is the most important of all and relates to the values change I was discussing before, is movement-building: increasing the number of people who take the ideas of effective altruism and longtermism seriously. This is something that is accessible to you too – something you can get involved with right away – and I believe it is the highest-impact volunteering opportunity you have. You can get involved through Harvard Effective Altruism, the local effective altruism group, endorsed, I think, by Professor Pinker himself. And why is this so high-impact? Well, at Harvard you have possibly the densest concentration of future leaders anywhere on the planet, and you have access to them. You can influence how they spend their 80,000 hours of working time, and if you can convince just one person to do as much good as you plan to do in the course of your life, well, you've done your life's work. If you convince two other people, and then go on to do good yourself, you've trebled your lifetime impact. So promoting these ideas, trying to convince other people to think seriously about how they can do as much good with their lives as possible – that's of enormous potential value. So, thank you again for listening, and if it all works out I'll see you in just a couple of minutes on Zoom for Q&A. Thank you.