Intergenerational equity under catastrophic climate change
Aurélie Méjean (CNRS, Paris), Antonin Pottier (Centre d’Economie de la Sorbonne), Stéphane Zuber (Paris School of Economics - CNRS) and Marc Fleurbaey (Princeton University)
GPI Working Paper No. 5-2020, published in Climatic Change
Climate change raises the issue of intergenerational equity. As climate change threatens irreversible and dangerous impacts, possibly leading to extinction, the most relevant trade-off may not be between present and future consumption, but between present consumption and the mere existence of future generations. To investigate this trade-off, we build an integrated assessment model that explicitly accounts for the risk of extinction of future generations. We compare different climate policies, which change the probability of catastrophic outcomes yielding an early extinction, within the class of variable population utilitarian social welfare functions. We show that the risk of extinction, rather than the level of climate damages, is the main driver of the preferred policy. We analyse the role of inequality aversion and population ethics. Usually, a preference for large populations and a low inequality aversion favour the most ambitious climate policy, although there are cases where the effect of inequality aversion is reversed.
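As a rough illustration of the kind of criterion in this class (a sketch, not necessarily the exact specification used in the paper), expected critical-level generalized utilitarianism evaluates a policy by

\[
W \;=\; \mathbb{E}\!\left[\sum_{i=1}^{N}\big(v(c_i) - v(\bar{c})\big)\right],
\qquad
v(c) \;=\; \frac{c^{1-\eta}}{1-\eta},
\]

where N is the (uncertain) number of people who ever live, c_i is individual i's consumption, \bar{c} is the critical consumption level at which adding a life is welfare-neutral, and \eta is the inequality-aversion parameter; these symbols are illustrative rather than taken from the paper. Because an early extinction lowers N, a policy that reduces the probability of catastrophe raises W through the population term and not only through consumption, which is why the extinction channel can matter more than ordinary climate damages.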
Other working papers
In Defence of Moderation – Jacob Barrett (Vanderbilt University)
A decision theory is fanatical if it says that, for any sure thing of getting some finite amount of value, it would always be better to almost certainly get nothing while having some tiny probability (no matter how small) of getting sufficiently more finite value. Fanaticism is extremely counterintuitive; common sense requires a more moderate view. However, a recent slew of arguments purport to vindicate it, claiming that moderate alternatives to fanaticism are sometimes similarly counterintuitive, face a powerful continuum argument…
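For concreteness (an illustration of the definition, not drawn from the paper): expected value maximisation is fanatical in this sense, because for any sure payoff v > 0 and any probability \varepsilon > 0, a gamble paying a large enough finite amount V with probability \varepsilon (and nothing otherwise) beats the sure thing:

\[
\varepsilon \cdot V \;>\; v
\quad\text{whenever}\quad
V \;>\; \frac{v}{\varepsilon}.
\]

A moderate view denies that such a long-shot gamble is always better.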
Are we living at the hinge of history? – William MacAskill (Global Priorities Institute, Oxford University)
In the final pages of On What Matters, Volume II, Derek Parfit comments: ‘We live during the hinge of history… If we act wisely in the next few centuries, humanity will survive its most dangerous and decisive period… What now matters most is that we avoid ending human history.’ This passage echoes Parfit’s comment, in Reasons and Persons, that ‘the next few centuries will be the most important in human history’. …
Will AI Avoid Exploitation? – Adam Bales (Global Priorities Institute, University of Oxford)
A simple argument suggests that we can fruitfully model advanced AI systems using expected utility theory. According to this argument, an agent will need to act as if maximising expected utility if they’re to avoid exploitation. Insofar as we should expect advanced AI to avoid exploitation, it follows that we should expect advanced AI to act as if maximising expected utility. I spell out this argument more carefully and demonstrate that it fails, but show that the manner of its failure is instructive…
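A standard illustration of the exploitation worry (again a sketch, not drawn from the paper): an agent with cyclic preferences A \succ B \succ C \succ A who starts with C and will pay a small fee \varepsilon for each preferred swap can be led through the trades

\[
C \;\xrightarrow{\;-\varepsilon\;}\; B \;\xrightarrow{\;-\varepsilon\;}\; A \;\xrightarrow{\;-\varepsilon\;}\; C,
\]

ending where it began but 3\varepsilon poorer, with the cycle repeatable indefinitely. The argument under discussion claims that avoiding this kind of money pump requires acting as if maximising an expected utility function.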