The Conservation Multiplier
Bård Harstad (University of Oslo)
GPI Working Paper No. 13-2022, published in the Journal of Political Economy
Every government that controls an exhaustible resource must decide whether to exploit it or to conserve it, thereby letting the subsequent government decide whether to exploit or conserve. This paper develops a positive theory of this situation and shows when a small change in parameter values has a multiplier effect on exploitation. The multiplier strengthens the influence of a lobby paying for exploitation, and of a donor compensating for conservation. A successful donor pays every period for each unit; a successful lobby pays once. This asymmetry causes inefficient exploitation. A normative analysis uncovers when compensations are optimally offered to the party in power, to the general public, or to the lobby.
Other working papers
Existential Risk and Growth – Leopold Aschenbrenner and Philip Trammell (Global Priorities Institute and Department of Economics, University of Oxford)
Technology increases consumption but can create or mitigate existential risk to human civilization. Though accelerating technological development may increase the hazard rate (the risk of existential catastrophe per period) in the short run, two considerations suggest that acceleration typically decreases the risk that such a catastrophe ever occurs. First, acceleration decreases the time spent at each technology level. Second, given a policy option to sacrifice consumption for safety, acceleration motivates greater sacrifices…
Social Beneficence – Jacob Barrett (Global Priorities Institute, University of Oxford)
A background assumption in much contemporary political philosophy is that justice is the first virtue of social institutions, taking priority over other values such as beneficence. This assumption is typically treated as a methodological starting point, rather than as following from any particular moral or political theory. In this paper, I challenge this assumption.
AI alignment vs AI ethical treatment: Ten challenges – Adam Bradley (Lingnan University) and Bradford Saad (Global Priorities Institute, University of Oxford)
A morally acceptable course of AI development should avoid two dangers: creating unaligned AI systems that pose a threat to humanity, and mistreating AI systems that merit moral consideration in their own right. This paper argues that these two dangers interact and that, if we create AI systems that merit moral consideration, simultaneously avoiding both dangers would be extremely challenging. While our argument is straightforward and supported by a wide range of pretheoretical moral judgments, it has far-reaching…