Choosing the future: Markets, ethics and rapprochement in social discounting
Antony Millner (University of California, Santa Barbara and National Bureau of Economic Research) and Geoffrey Heal (Columbia University and National Bureau of Economic Research)
GPI Working Paper No. 13-2021, published in the Journal of Economic Literature
This paper provides a critical review of the literature on choosing social discount rates (SDRs) for public cost-benefit analysis. We discuss two dominant approaches, the first based on market prices, and the second based on intertemporal ethics. While both methods have attractive features, neither is immune to criticism. The market-based approach is not entirely persuasive even if markets are perfect, and faces further headwinds once the implications of market imperfections are recognised. By contrast, the ‘ethical’ approach – which relates SDRs to marginal rates of substitution implicit in a single planner’s intertemporal welfare function – does not rely exclusively on markets, but raises difficult questions about what that welfare function should be. There is considerable disagreement on this matter, which translates into enormous variation in the evaluation of long-run payoffs. We discuss the origins of these disagreements, and suggest that they are difficult to resolve unequivocally. This leads us to propose a third approach that recognises the immutable nature of some normative disagreements, and proposes methods for aggregating diverse theories of intertemporal social welfare. We illustrate the application of these methods to social discounting, and suggest that they may help us to move beyond long-standing debates that have bedevilled this field.
Other working papers
Quadratic Funding with Incomplete Information – Luis M. V. Freitas (Global Priorities Institute, University of Oxford) and Wilfredo L. Maldonado (University of Sao Paulo)
Quadratic funding is a public good provision mechanism that satisfies desirable theoretical properties, such as efficiency under complete information, and has been gaining popularity in practical applications. We evaluate this mechanism in a setting of incomplete information regarding individual preferences, and show that efficiency holds only under knife-edge conditions. We also estimate the inefficiency of the mechanism in a variety of settings and show, in particular, that inefficiency increases…
Crying wolf: Warning about societal risks can be reputationally risky – Lucius Caviola (Global Priorities Institute, University of Oxford) et al.
Society relies on expert warnings about large-scale risks like pandemics and natural disasters. Across ten studies (N = 5,342), we demonstrate people’s reluctance to warn about unlikely but large-scale risks because they are concerned about being blamed for being wrong. In particular, warners anticipate that if the risk doesn’t occur, they will be perceived as overly alarmist and responsible for wasting societal resources. This phenomenon appears in the context of natural, technological, and financial risks…
AI alignment vs AI ethical treatment: Ten challenges – Adam Bradley (Lingnan University) and Bradford Saad (Global Priorities Institute, University of Oxford)
A morally acceptable course of AI development should avoid two dangers: creating unaligned AI systems that pose a threat to humanity, and mistreating AI systems that merit moral consideration in their own right. This paper argues that these two dangers interact and that, if we create AI systems that merit moral consideration, simultaneously avoiding both of these dangers would be extremely challenging. While our argument is straightforward and supported by a wide range of pretheoretical moral judgments, it has far-reaching…