When should an effective altruist donate?
William MacAskill (Global Priorities Institute, Oxford University)
GPI Working Paper No. 8-2019, published as a chapter in Giving in Time
Effective altruism is the use of evidence and careful reasoning to work out how to maximize positive impact on others with a given unit of resources, and the taking of action on that basis. It’s a philosophy and a social movement that is gaining considerable steam in the philanthropic world. For example, GiveWell, an organization that recommends charities working in global health and development and generally follows effective altruist principles, moves over $90 million per year to its top recommendations. Giving What We Can, which encourages individuals to pledge at least 10% of their income to the most cost-effective charities, now has over 3,500 members, together pledging over $1.5 billion of lifetime donations. Good Ventures, a foundation founded by Dustin Moskovitz and Cari Tuna, is committed to effective altruist principles; it has potential assets of $11 billion and is distributing over $200 million each year in grants, advised by the Open Philanthropy Project. [...]
Other working papers
Aggregating Small Risks of Serious Harms – Tomi Francis (Global Priorities Institute, University of Oxford)
According to Partial Aggregation, a serious harm can be outweighed by a large number of somewhat less serious harms, but cannot be outweighed by any number of trivial harms. In this paper, I address the question of how we should extend Partial Aggregation to cases of risk, and especially to cases involving small risks of serious harms. I argue that, contrary to the most popular versions of the ex ante and ex post views, we should sometimes prevent a small risk that a large number of people will suffer serious harms rather than prevent…
Against Willing Servitude: Autonomy in the Ethics of Advanced Artificial Intelligence – Adam Bales (Global Priorities Institute, University of Oxford)
Some people believe that advanced artificial intelligence systems (AIs) might, in the future, come to have moral status. Further, humans might be tempted to design such AIs so that they serve us, carrying out tasks that make our lives better. This raises the question of whether designing AIs with moral status to be willing servants would problematically violate their autonomy. In this paper, I argue that it would in fact do so.
Strong longtermism and the challenge from anti-aggregative moral views – Karri Heikkinen (University College London)
Greaves and MacAskill (2019) argue for strong longtermism, according to which, in a wide class of decision situations, the option that is ex ante best, and the one we ex ante ought to choose, is the option that makes the very long-run future go best. One important aspect of their argument is the claim that strong longtermism is compatible with a wide range of ethical assumptions, including plausible non-consequentialist views. In this essay, I challenge this claim…