When should an effective altruist donate?

William MacAskill (Global Priorities Institute, Oxford University)

GPI Working Paper No. 8-2019, published as a chapter in Giving in Time

Effective altruism is the use of evidence and careful reasoning to work out how to maximize positive impact on others with a given unit of resources, and the taking of action on that basis. It’s a philosophy and a social movement that is gaining considerable steam in the philanthropic world. For example, GiveWell, an organization that recommends charities working in global health and development and generally follows effective altruist principles, moves over $90 million per year to its top recommendations. Giving What We Can, which encourages individuals to pledge at least 10% of their income to the most cost-effective charities, now has over 3,500 members, together pledging over $1.5 billion of lifetime donations. Good Ventures, a foundation founded by Dustin Moskovitz and Cari Tuna and committed to effective altruist principles, has potential assets of $11 billion and distributes over $200 million each year in grants, advised by the Open Philanthropy Project. [...]

Other working papers

Doomsday and objective chance – Teruji Thomas (Global Priorities Institute, Oxford University)

Lewis’s Principal Principle says that one should usually align one’s credences with the known chances. In this paper I develop a version of the Principal Principle that deals well with some exceptional cases related to the distinction between metaphysical and epistemic modality. I explain how this principle gives a unified account of the Sleeping Beauty problem and chance-based principles of anthropic reasoning…
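
For orientation, the principle in its standard textbook form (stated here for the reader; it is not quoted from the paper, whose refinement concerns the exceptional cases) says that for a reasonable initial credence function Cr, a proposition A, and any admissible evidence E,

\[ Cr\bigl(A \mid \mathrm{ch}(A) = x \wedge E\bigr) = x, \]

that is, conditional on the hypothesis that the objective chance of A is x, one’s credence in A should equal x.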

Tiny probabilities and the value of the far future – Petra Kosonen (Population Wellbeing Initiative, University of Texas at Austin)

Morally speaking, what matters most is the far future – at least according to Longtermism. The reason the far future is of utmost importance is that our acts’ expected influence on the value of the world is mainly determined by their consequences in the far future. The case for Longtermism is straightforward: Given the enormous number of people who might exist in the far future, even a tiny probability of affecting how the far future goes outweighs the importance of our acts’ consequences…
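
To make the expected-value arithmetic behind this claim concrete (the figures below are illustrative, not the paper’s): suppose N = 10^{16} people might exist in the far future, and an act has probability p = 10^{-10} of improving each of their lives by v units of value. Its expected far-future value is then

\[ p \cdot N \cdot v \;=\; 10^{-10} \cdot 10^{16} \cdot v \;=\; 10^{6}\,v, \]

which swamps the near-term value of almost any individual act.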

Will AI Avoid Exploitation? – Adam Bales (Global Priorities Institute, University of Oxford)

A simple argument suggests that we can fruitfully model advanced AI systems using expected utility theory. According to this argument, an agent will need to act as if maximising expected utility if they’re to avoid exploitation. Insofar as we should expect advanced AI to avoid exploitation, it follows that we should expect advanced AI to act as if maximising expected utility. I spell out this argument more carefully and demonstrate that it fails, but show that the manner of its failure is instructive…
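
The exploitation at issue is the classic money pump. The sketch below (my illustration of that standard argument, not the paper’s own formalism; the items and fee are made up) shows how an agent with cyclic strict preferences A ≻ B ≻ C ≻ A accepts a sequence of individually attractive trades and ends up back where it started, strictly poorer:

    # Money-pump sketch: an agent with cyclic preferences pays a small
    # fee for each trade it strictly prefers, ending with its original
    # item and less money.

    FEE = 0.01  # fee the exploiter charges per trade

    # better_than[x] is the item the agent strictly prefers to x
    # (the cycle A > B > C > A, so C is preferred to A, and so on).
    better_than = {"A": "C", "C": "B", "B": "A"}

    holding, money = "A", 0.0
    for _ in range(3):  # one full lap around the preference cycle
        offer = better_than[holding]
        # Each trade looks like a strict improvement to the agent,
        # so it accepts despite the fee.
        holding, money = offer, money - FEE

    print(holding, money)  # "A", about -0.03: same item, less money

The axioms of expected utility theory (in particular transitivity) rule out exactly this kind of cycle, which is what gives the exploitation argument its initial appeal.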