Below you can find summaries of some working papers written by GPI researchers. The full text of these papers as well as other working papers can be found on our papers page.
The singularity is a hypothetical future event in which machines rapidly become significantly smarter than humans. The idea is that we might invent an artificial intelligence (AI) system that can improve itself. After a single round of self-improvement, that system would be better equipped to improve itself than before. This process might repeat many times…

Suppose you are choosing where to donate £1,500. One charity will distribute mosquito nets that cheaply and effectively prevent malaria; in all likelihood, your donation will save a life. Another charity aims to create computer simulations of brains, which could allow morally valuable life to continue indefinitely far into the future. They would be the first to admit that their project is very…

Does the happiness in this world balance out its suffering, or does misery have the upper hand? In part this is a measurement question: has there been more happiness or suffering in this world to date, and what should we expect the balance to be in the future? But this is also a philosophical question. Even if we knew precisely how much happiness and suffering a possible future for the world would have…

We might hope that there is a straightforward way of predicting the behaviour of future artificial intelligence (AI) systems. Some have suggested that AI systems will maximise expected utility, because anything else would allow them to accept a series of trades resulting in a guaranteed loss of something valuable (Omohundro, 2008). Indeed, we would be able to predict AI behaviour if…

Recent work argues for longtermism: the position that our morally best options will often be those with the best long-term consequences. Proponents of longtermism sometimes suggest that in most decisions the expected long-term benefits outweigh all short-term effects. In ‘The scope of longtermism’, David Thorstad argues that most of our decisions do not have this character. He identifies three features…

At some point in the future we may invent sophisticated simulations. If we do so, we could run millions of simulations of minor variants of the 21st century, each inhabited by simulated people. To those simulated people, it will appear as if they really lived in the 21st century. But that is exactly how our world appears to us, and perhaps we live in a simulation. …

The potential value of the future may be vast. Human extinction, which would destroy that potential, would be extremely bad. Some argue that making such a catastrophe just a little less likely would be by far the best use of our limited resources: much more important than, for example, tackling poverty, inequality, global health or racial injustice. In “High risk, low reward: A challenge to the astronomical…

Effective altruists seek to do as much good as possible given limited resources, often by donating to important causes like global health and poverty, farmed animal welfare, and reducing existential risks. Can we help more by donating now or later? This is the thorny question William MacAskill tackles in the paper “When should an effective altruist donate?”. He explores several considerations…

Longtermist altruists, who care about how much impact they have but not about when that impact occurs, have a strong reason to invest resources before using them directly. Invested resources could grow much larger and be used to do much more good in the future. For example, a $1 investment that grows 5% per year would become about $17,000 in 200 years. …
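As a quick check of the compound-growth figure in the summary just above, here is a minimal sketch in Python. The $1 principal, 5% annual rate, and 200-year horizon are the numbers quoted in the summary; the function itself is ours, not from the paper.

```python
# Compound growth: future value = principal * (1 + rate) ** years.
# Inputs are the figures quoted above: $1 at 5% per year for 200 years.

def future_value(principal: float, rate: float, years: int) -> float:
    """Value of `principal` after `years` years of growth at annual `rate`."""
    return principal * (1 + rate) ** years

print(f"${future_value(1.0, 0.05, 200):,.0f}")  # -> $17,293, roughly the $17,000 cited
```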
Political decisions can have lasting effects on the lives and wellbeing of future generations. Yet political institutions tend to make short-term decisions with only the current generation (or even just the current election cycle) in mind. In “Longtermist institutional reform”, Tyler M. John and William MacAskill identify the causes of short-termism in government and give four recommendations…

Many people believe that it makes the world worse to create miserable lives, but that it doesn’t make the world better to create happy lives. This is one way of expressing “the Asymmetry” in population ethics. If we go on creating new people, many will be happy, but some will be unhappy. If we accept the Asymmetry, the continued existence of humanity therefore involves…

For consequentialists, the outcomes that follow from our actions fully determine the moral value of those actions. Actions are right to the extent that they bring about good outcomes and wrong to the extent that they bring about bad outcomes. If, as many philosophers believe (Greaves & MacAskill, 2021), the best outcomes we can bring about involve improving the long-run future for sentient life…

According to longtermism, what we should do mainly depends on how our actions might affect the long-term future. This claim faces a challenge: the course of the long-term future is difficult to predict, and the effects of our actions on it might be so unpredictable as to make longtermism false. …

Many decisions in life involve balancing risks against their potential payoffs. Sometimes the risks are small: you might be killed by a car while walking to the shops, but it would be unreasonably timid to sit at home and run out of toilet paper in order to avoid this risk. Other times, the risks are overwhelmingly large: your lottery ticket might win tomorrow, but it would be reckless to borrow £20,000 from a loan shark…

In “The case for strong longtermism”, Greaves and MacAskill (2021) argue that potential far-future effects are the most important determinant of the value of our options. This is “axiological strong longtermism”. On some views, we can achieve astronomical value by making the future population of worthwhile lives much greater than it would otherwise have been…
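Several of these summaries (the £1,500 donation choice, the loan-shark lottery, the astronomical value of the far future) turn on the same expected-value arithmetic: multiply each payoff by its probability, and a tiny chance of an enormous payoff can dominate a near-certain modest one. The sketch below illustrates that arithmetic; all of the numbers are invented for illustration and none come from the papers.

```python
# Expected value = probability * payoff (here measured in lives saved).
# All numbers are invented for illustration; none come from the papers above.

def expected_value(probability: float, payoff: float) -> float:
    return probability * payoff

# A near-certain, modest payoff: a donation that almost surely saves one life.
safe_bet = expected_value(0.95, 1.0)

# A long shot at an astronomical payoff: a speculative project with a
# one-in-ten-billion chance of securing 10**15 happy lives.
long_shot = expected_value(1e-10, 1e15)

print(safe_bet)   # 0.95
print(long_shot)  # 100000.0 -- dominates in expectation despite the tiny probability
```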