Choosing the future: Markets, ethics and rapprochement in social discounting
Antony Millner (University of California, Santa Barbara and National Bureau of Economic Research) and Geoffrey Heal (Columbia University and National Bureau of Economic Research)
GPI Working Paper No. 13-2021, published in the Journal of Economic Literature
This paper provides a critical review of the literature on choosing social discount rates (SDRs) for public cost-benefit analysis. We discuss two dominant approaches, the first based on market prices, and the second based on intertemporal ethics. While both methods have attractive features, neither is immune to criticism. The market-based approach is not entirely persuasive even if markets are perfect, and faces further headwinds once the implications of market imperfections are recognised. By contrast, the ‘ethical’ approach – which relates SDRs to marginal rates of substitution implicit in a single planner’s intertemporal welfare function – does not rely exclusively on markets, but raises difficult questions about what that welfare function should be. There is considerable disagreement on this matter, which translates into enormous variation in the evaluation of long-run payoffs. We discuss the origins of these disagreements, and suggest that they are difficult to resolve unequivocally. This leads us to propose a third approach that recognises the immutable nature of some normative disagreements, and proposes methods for aggregating diverse theories of intertemporal social welfare. We illustrate the application of these methods to social discounting, and suggest that they may help us to move beyond long-standing debates that have bedevilled this field.
Other working papers
Evolutionary debunking and value alignment – Michael T. Dale (Hampden-Sydney College) and Bradford Saad (Global Priorities Institute, University of Oxford)
This paper examines the bearing of evolutionary debunking arguments—which use the evolutionary origins of values to challenge their epistemic credentials—on the alignment problem, i.e. the problem of ensuring that highly capable AI systems are properly aligned with values. Since evolutionary debunking arguments are among the best empirically-motivated arguments that recommend changes in values, it is unsurprising that they are relevant to the alignment problem. However, how evolutionary debunking arguments…
Aggregating Small Risks of Serious Harms – Tomi Francis (Global Priorities Institute, University of Oxford)
According to Partial Aggregation, a serious harm can be outweighed by a large number of somewhat less serious harms, but can outweigh any number of trivial harms. In this paper, I address the question of how we should extend Partial Aggregation to cases of risk, and especially to cases involving small risks of serious harms. I argue that, contrary to the most popular versions of the ex ante and ex post views, we should sometimes prevent a small risk that a large number of people will suffer serious harms rather than prevent…
Exceeding expectations: stochastic dominance as a general decision theory – Christian Tarsney (Global Priorities Institute, University of Oxford)
The principle that rational agents should maximize expected utility or choiceworthiness is intuitively plausible in many ordinary cases of decision-making under uncertainty. But it is less plausible in cases of extreme, low-probability risk (like Pascal’s Mugging), and intolerably paradoxical in cases like the St. Petersburg and Pasadena games. In this paper I show that, under certain conditions, stochastic dominance reasoning can capture most of the plausible implications of expectational reasoning while avoiding most of its pitfalls…