Choosing the future: Markets, ethics and rapprochement in social discounting

Antony Millner (University of California, Santa Barbara and National Bureau of Economic Research) and Geoffrey Heal (Columbia University and National Bureau of Economic Research)

GPI Working Paper No. 13-2021, published in the Journal of Economic Literature

This paper provides a critical review of the literature on choosing social discount rates (SDRs) for public cost-benefit analysis. We discuss two dominant approaches, the first based on market prices, and the second based on intertemporal ethics. While both methods have attractive features, neither is immune to criticism. The market-based approach is not entirely persuasive even if markets are perfect, and faces further headwinds once the implications of market imperfections are recognised. By contrast, the ‘ethical’ approach – which relates SDRs to marginal rates of substitution implicit in a single planner’s intertemporal welfare function – does not rely exclusively on markets, but raises difficult questions about what that welfare function should be. There is considerable disagreement on this matter, which translates into enormous variation in the evaluation of long-run payoffs. We discuss the origins of these disagreements, and suggest that they are difficult to resolve unequivocally. This leads us to propose a third approach that recognises the immutable nature of some normative disagreements, and proposes methods for aggregating diverse theories of intertemporal social welfare. We illustrate the application of these methods to social discounting, and suggest that they may help us to move beyond long-standing debates that have bedevilled this field.

Other working papers

Against Willing Servitude: Autonomy in the Ethics of Advanced Artificial Intelligence – Adam Bales (Global Priorities Institute, University of Oxford)

Some people believe that advanced artificial intelligence systems (AIs) might, in the future, come to have moral status. Further, humans might be tempted to design such AIs to serve us, carrying out tasks that make our lives better. This raises the question of whether designing AIs with moral status to be willing servants would problematically violate their autonomy. In this paper, I argue that it would in fact do so.

Imperfect Recall and AI Delegation – Eric Olav Chen (Global Priorities Institute, University of Oxford), Alexis Ghersengorin (Global Priorities Institute, University of Oxford) and Sami Petersen (Department of Economics, University of Oxford)

A principal wants to deploy an artificial intelligence (AI) system to perform some task. But the AI may be misaligned and aim to pursue a conflicting objective. The principal cannot restrict its options or deliver punishments. Instead, the principal is endowed with the ability to impose imperfect recall on the agent. The principal can then simulate the task and obscure whether it is real or part of a test. This allows the principal to screen misaligned AIs during testing and discipline their behaviour in deployment. By increasing the…

Maximal cluelessness – Andreas Mogensen (Global Priorities Institute, University of Oxford)

I argue that many of the priority rankings that have been proposed by effective altruists seem to be in tension with apparently reasonable assumptions about the rational pursuit of our aims in the face of uncertainty. The particular issue on which I focus arises from recognition of the overwhelming importance…