Consequentialism, Cluelessness, Clumsiness, and Counterfactuals
Alan Hájek (Australian National University)
GPI Working Paper No. 4-2024
According to a standard statement of objective consequentialism, a morally right action is one that has the best consequences. More generally, given a choice between two actions, one is morally better than the other just in case the consequences of the former action are better than those of the latter. (These are not just the immediate consequences of the actions, but the long-term consequences, perhaps until the end of history.) This account glides easily off the tongue—so easily that one may not notice that on one understanding it makes no sense, and on another understanding, it has a startling metaphysical presupposition concerning counterfactuals. I will bring this presupposition into relief. Objective consequentialism has faced various objections, including the problem of “cluelessness”: we have no idea what most of the consequences of our actions will be. I think that objective consequentialism has a far worse problem: its very foundations are highly dubious. Even granting those foundations, a worse problem than cluelessness remains, which I call “clumsiness”. Moreover, I think that these problems quickly generalise to a number of other moral theories. But the points are most easily made for objective consequentialism, so I will focus largely on it.
Other working papers
On two arguments for Fanaticism – Jeffrey Sanford Russell (University of Southern California)
Should we make significant sacrifices to ever-so-slightly lower the chance of extremely bad outcomes, or to ever-so-slightly raise the chance of extremely good outcomes? Fanaticism says yes: for every bad outcome, there is a tiny chance of extreme disaster that is even worse, and for every good outcome, there is a tiny chance of an enormous good that is even better.
Towards shutdownable agents via stochastic choice – Elliott Thornley (Global Priorities Institute, University of Oxford), Alexander Roman (New College of Florida), Christos Ziakas (Independent), Leyton Ho (Brown University), and Louis Thomson (University of Oxford)
Some worry that advanced artificial agents may resist being shut down. The Incomplete Preferences Proposal (IPP) is an idea for ensuring that doesn’t happen. A key part of the IPP is using a novel ‘Discounted REward for Same-Length Trajectories (DREST)’ reward function to train agents to (1) pursue goals effectively conditional on each trajectory-length (be ‘USEFUL’), and (2) choose stochastically between different trajectory-lengths (be ‘NEUTRAL’ about trajectory-lengths). In this paper, we propose evaluation metrics…
Longtermist political philosophy: An agenda for future research – Jacob Barrett (Global Priorities Institute, University of Oxford) and Andreas T. Schmidt (University of Groningen)
We set out longtermist political philosophy as a research field. First, we argue that the standard case for longtermism is more robust when applied to institutions than to individual action. This motivates “institutional longtermism”: when building or shaping institutions, positively affecting the value of the long-term future is a key moral priority. Second, we briefly distinguish approaches to pursuing longtermist institutional reform along two dimensions: such approaches may be more targeted or more broad, and more urgent or more patient.