Consequentialism, Cluelessness, Clumsiness, and Counterfactuals
Alan Hájek (Australian National University)
GPI Working Paper No. 4-2024
According to a standard statement of objective consequentialism, a morally right action is one that has the best consequences. More generally, given a choice between two actions, one is morally better than the other just in case the consequences of the former action are better than those of the latter. (These are not just the immediate consequences of the actions, but the long-term consequences, perhaps until the end of history.) This account glides easily off the tongue—so easily that one may not notice that on one understanding it makes no sense, and on another understanding, it has a startling metaphysical presupposition concerning counterfactuals. I will bring this presupposition into relief. Objective consequentialism has faced various objections, including the problem of “cluelessness”: we have no idea what most of the consequences of our actions will be. I think that objective consequentialism has a far worse problem: its very foundations are highly dubious. Even granting those foundations, a worse problem than cluelessness remains, which I call “clumsiness”. Moreover, I think that these problems quickly generalise to a number of other moral theories. But the points are most easily made for objective consequentialism, so I will focus largely on it.