When should an effective altruist donate?

William MacAskill (Global Priorities Institute, University of Oxford)

GPI Working Paper No. 8-2019, published as a chapter in Giving in Time

Effective altruism is the use of evidence and careful reasoning to work out how to maximize positive impact on others with a given unit of resources, and the taking of action on that basis. It’s a philosophy and a social movement that is gaining considerable steam in the philanthropic world. For example, GiveWell, an organization that recommends charities working in global health and development and generally follows effective altruist principles, moves over $90 million per year to its top recommendations. Giving What We Can, which encourages individuals to pledge at least 10% of their income to the most cost-effective charities, now has over 3,500 members, together pledging over $1.5 billion of lifetime donations. Good Ventures, a foundation founded by Dustin Moskovitz and Cari Tuna and committed to effective altruist principles, has potential assets of $11 billion and distributes over $200 million each year in grants, advised by the Open Philanthropy Project. [...]

Other working papers

Do not go gentle: why the Asymmetry does not support anti-natalism – Andreas Mogensen (Global Priorities Institute, University of Oxford)

According to the Asymmetry, adding lives that are not worth living to the population makes the outcome pro tanto worse, but adding lives that are well worth living to the population does not make the outcome pro tanto better. It has been argued that the Asymmetry entails the desirability of human extinction. However, this argument rests on a misunderstanding of the kind of neutrality attributed to the addition of lives worth living by the Asymmetry. A similar misunderstanding is shown to underlie Benatar’s case for anti-natalism.

The unexpected value of the future – Hayden Wilkinson (Global Priorities Institute, University of Oxford)

Various philosophers accept moral views that are impartial, additive, and risk-neutral with respect to betterness. But, if that risk neutrality is spelt out according to expected value theory alone, such views face a dire reductio ad absurdum. If the expected sum of value in humanity’s future is undefined—if, e.g., the probability distribution over possible values of the future resembles the Pasadena game, or a Cauchy distribution—then those views say that no real-world option is ever better than any other. And, as I argue…
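
An illustrative aside (editorial, not part of the abstract): the standard Cauchy distribution, one of the two examples the abstract names, is the textbook case of an undefined expectation. Its density and would-be mean are

    f(x) = \frac{1}{\pi\,(1 + x^{2})}, \qquad
    E[X] = \int_{-\infty}^{\infty} x\, f(x)\, dx
         = \frac{1}{\pi} \int_{-\infty}^{\infty} \frac{x}{1 + x^{2}}\, dx,

and the positive and negative halves of that integral each diverge (for large |x| the integrand behaves like 1/x), so E[X] is undefined rather than merely infinite. A prospect whose value were distributed this way would, on expected value theory alone, have no expectation at all to compare, which is the predicament the abstract describes.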

Existential risk and growth – Leopold Aschenbrenner (Columbia University)

Human activity can create or mitigate risks of catastrophes, such as nuclear war, climate change, pandemics, or artificial intelligence run amok. These could even imperil the survival of human civilization. What is the relationship between economic growth and such existential risks? In a model of directed technical change, with moderate parameters, existential risk follows a Kuznets-style inverted U-shape. …