'The only ethical argument for positive 𝛿'? 

Andreas Mogensen (Global Priorities Institute, University of Oxford)

GPI Working Paper No. 5-2019, published in Philosophical Studies

I consider whether a positive rate of pure intergenerational time preference is justifiable in terms of agent-relative moral reasons relating to partiality between generations, an idea I call discounting for kinship. I respond to Parfit's objections to discounting for kinship, but then highlight a number of apparent limitations of this approach. I show that these limitations largely fall away when we reflect on social discounting in the context of decisions that concern the global community as a whole.
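For readers unfamiliar with the notation, the δ in the title standardly denotes the rate of pure time preference in the Ramsey formula for the social discount rate. The sketch below is standard background from the social discounting literature, not something stated in the abstract itself.

```latex
% Standard Ramsey rule for the social discount rate (background only;
% the abstract does not state this formula).
\documentclass{article}
\begin{document}
\[
  r = \delta + \eta g
\]
% r:      consumption (social) discount rate
% \delta: rate of pure time preference -- the parameter at issue in the title;
%         a positive \delta discounts future well-being merely because it is future
% \eta:   elasticity of the marginal utility of consumption
% g:      growth rate of per-capita consumption
\end{document}
```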

Other working papers

Time discounting, consistency and special obligations: a defence of Robust Temporalism – Harry R. Lloyd (Yale University)

This paper defends the claim that mere temporal proximity always and without exception strengthens certain moral duties, including the duty to save – call this view Robust Temporalism. Although almost all other moral philosophers dismiss Robust Temporalism out of hand, I argue that it is prima facie intuitively plausible, and that it is analogous to a view about special obligations that many philosophers already accept…

What power-seeking theorems do not show – David Thorstad (Vanderbilt University)

Recent years have seen increasing concern that artificial intelligence may soon pose an existential risk to humanity. One leading ground for concern is that artificial agents may be power-seeking, aiming to acquire power and in the process disempowering humanity. A range of power-seeking theorems seek to give formal articulation to the idea that artificial agents are likely to be power-seeking. I argue that leading theorems face five challenges, and draw lessons from this result.

Against Willing Servitude: Autonomy in the Ethics of Advanced Artificial Intelligence – Adam Bales (Global Priorities Institute, University of Oxford)

Some people believe that advanced artificial intelligence systems (AIs) might, in the future, come to have moral status. Further, humans might be tempted to design such AIs so that they serve us, carrying out tasks that make our lives better. This raises the question of whether designing AIs with moral status to be willing servants would problematically violate their autonomy. In this paper, I argue that it would in fact do so.