'The only ethical argument for positive 𝛿'?
Andreas Mogensen (Global Priorities Institute, University of Oxford)
GPI Working Paper No. 5-2019, published in Philosophical Studies
I consider whether a positive rate of pure intergenerational time preference is justifiable in terms of agent-relative moral reasons relating to partiality between generations, an idea I call discounting for kinship. I respond to Parfit's objections to discounting for kinship, but then highlight a number of apparent limitations of this approach. I show that these limitations largely fall away when we reflect on social discounting in the context of decisions that concern the global community as a whole.
Other working papers
Egyptology and Fanaticism – Hayden Wilkinson (Global Priorities Institute, University of Oxford)
Various decision theories share a troubling implication. They imply that, for any finite amount of value, it would be better to wager it all for a vanishingly small probability of some greater value. Counterintuitive as it might be, this fanaticism has seemingly compelling independent arguments in its favour. In this paper, I consider perhaps the most prima facie compelling such argument: an Egyptology argument (an analogue of the Egyptology argument from population ethics). …
Against Anti-Fanaticism – Christian Tarsney (Population Wellbeing Initiative, University of Texas at Austin)
Should you be willing to forego any sure good for a tiny probability of a vastly greater good? Fanatics say you should, anti-fanatics say you should not. Anti-fanaticism has great intuitive appeal. But, I argue, these intuitions are untenable, because satisfying them in their full generality is incompatible with three very plausible principles: acyclicity, a minimal dominance principle, and the principle that any outcome can be made better or worse. This argument against anti-fanaticism can be…
Will AI Avoid Exploitation? – Adam Bales (Global Priorities Institute, University of Oxford)
A simple argument suggests that we can fruitfully model advanced AI systems using expected utility theory. According to this argument, an agent will need to act as if maximising expected utility if they're to avoid exploitation. Insofar as we should expect advanced AI to avoid exploitation, it follows that we should expect advanced AI to act as if maximising expected utility. I spell out this argument more carefully and demonstrate that it fails, but show that the manner of its failure is instructive…