Calibration dilemmas in the ethics of distribution
Jacob M. Nebel (University of Southern California) and H. Orri Stefánsson (Stockholm University and Swedish Collegium for Advanced Study)
GPI Working Paper No. 10-2021, published in Economics & Philosophy
This paper was the basis for the Parfit Memorial Lecture 2021.
Other working papers
Will AI Avoid Exploitation? – Adam Bales (Global Priorities Institute, University of Oxford)
A simple argument suggests that we can fruitfully model advanced AI systems using expected utility theory. According to this argument, an agent will need to act as if maximising expected utility if they’re to avoid exploitation. Insofar as we should expect advanced AI to avoid exploitation, it follows that we should expect advanced AI to act as if maximising expected utility. I spell out this argument more carefully and demonstrate that it fails, but show that the manner of its failure is instructive…
Against Anti-Fanaticism – Christian Tarsney (Population Wellbeing Initiative, University of Texas at Austin)
Should you be willing to forego any sure good for a tiny probability of a vastly greater good? Fanatics say you should; anti-fanatics say you should not. Anti-fanaticism has great intuitive appeal. But, I argue, these intuitions are untenable, because satisfying them in their full generality is incompatible with three very plausible principles: acyclicity, a minimal dominance principle, and the principle that any outcome can be made better or worse. This argument against anti-fanaticism can be…
Shutdownable Agents through POST-Agency – Elliott Thornley (Global Priorities Institute, University of Oxford)
Many fear that future artificial agents will resist shutdown. I present an idea – the POST-Agents Proposal – for ensuring that doesn’t happen. I propose that we train agents to satisfy Preferences Only Between Same-Length Trajectories (POST). I then prove that POST – together with other conditions – implies Neutrality+: the agent maximizes expected utility, ignoring the probability distribution over trajectory-lengths. I argue that Neutrality+ keeps agents shutdownable and allows them to be useful.