Intergenerational equity under catastrophic climate change

Aurélie Méjean (CNRS, Paris), Antonin Pottier (Centre d’Economie de la Sorbonne), Stéphane Zuber (Paris School of Economics - CNRS) and Marc Fleurbaey (Princeton University)

GPI Working Paper No. 5-2020, published in Climatic Change

Climate change raises the issue of intergenerational equity. As climate change threatens irreversible and dangerous impacts, possibly leading to extinction, the most relevant trade-off may not be between present and future consumption, but between present consumption and the mere existence of future generations. To investigate this trade-off, we build an integrated assessment model that explicitly accounts for the risk of extinction of future generations. We compare different climate policies, which change the probability of catastrophic outcomes yielding an early extinction, within the class of variable population utilitarian social welfare functions. We show that the risk of extinction, rather than the level of climate damages, is the main driver of the preferred policy. We analyze the role of inequality aversion and population ethics. A preference for large populations and low inequality aversion usually favour the most ambitious climate policy, although there are cases where the effect of inequality aversion is reversed.
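
For illustration only (the paper's own specification may differ in its details): a standard member of the variable-population utilitarian class referred to in the abstract is critical-level generalized utilitarianism, under which expected social welfare takes the form

W = E\left[ \sum_{i=1}^{N} \left( \frac{c_i^{1-\eta}}{1-\eta} - \frac{\bar{c}^{\,1-\eta}}{1-\eta} \right) \right],

where N is the (random) number of people who ever live, c_i is individual i's consumption, \eta is the inequality aversion parameter, and \bar{c} is a critical consumption level. A lower critical level \bar{c} places more weight on adding lives, and total utilitarianism is recovered when the critical level coincides with the consumption of a life just worth living; this is the sense in which population ethics and inequality aversion jointly shape which climate policy is preferred.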

Other working papers

Against Willing Servitude: Autonomy in the Ethics of Advanced Artificial Intelligence – Adam Bales (Global Priorities Institute, University of Oxford)

Some people believe that advanced artificial intelligence systems (AIs) might, in the future, come to have moral status. Further, humans might be tempted to design such AIs so that they serve us, carrying out tasks that make our lives better. This raises the question of whether designing AIs with moral status to be willing servants would problematically violate their autonomy. In this paper, I argue that it would in fact do so.

In search of a biological crux for AI consciousness – Bradford Saad (Global Priorities Institute, University of Oxford)

Whether AI systems could be conscious is often thought to turn on whether consciousness is closely linked to biology. The rough thought is that if consciousness is closely linked to biology, then AI consciousness is impossible, and if consciousness is not closely linked to biology, then AI consciousness is possible—or, at any rate, it’s more likely to be possible. A clearer specification of the kind of link between consciousness and biology that is crucial for the possibility of AI consciousness would help organize inquiry into…

Welfare and felt duration – Andreas Mogensen (Global Priorities Institute, University of Oxford)

How should we understand the duration of a pleasant or unpleasant sensation, insofar as its duration modulates how good or bad the experience is overall? Given that we seem able to distinguish between subjective and objective duration and that how well or badly someone’s life goes is naturally thought of as something to be assessed from her own perspective, it seems intuitive that it is subjective duration that modulates how good or bad an experience is from the perspective of an individual’s welfare. …