The Significance, Persistence, Contingency Framework
William MacAskill, Teruji Thomas (Global Priorities Institute, University of Oxford) and Aron Vallinder (Forethought Foundation for Global Priorities Research)
GPI Technical Report No. T1-2022
The world, considered from beginning to end, combines many different features, or states of affairs, that contribute to its value. The value of each feature can be factored into its significance—its average value per unit time—and its persistence—how long it lasts. Sometimes, though, we want to ask a further question: how much of the feature’s value can be attributed to a particular agent’s decision at a particular point in time (or to some other originating event)? In other words, to what extent is the feature’s value contingent on the agent’s choice? For this, we must also look at the counterfactual: how would things have turned out otherwise?
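As a rough formal sketch of the decomposition the abstract describes (the notation below is illustrative and not drawn from the paper itself): write s(f) for a feature's significance, understood as average value per unit time, and p(f) for its persistence, understood as duration. The feature's value is then the product of the two, and the value contingent on a decision d is the gap between the actual product and its counterfactual counterpart:

\[
  v(f) \;=\; s(f)\,p(f),
  \qquad
  \mathrm{Cont}_d(f) \;=\; s(f)\,p(f) \;-\; s^{c}(f)\,p^{c}(f),
\]

where s^{c}(f) and p^{c}(f) denote the significance and persistence the feature would have had in the counterfactual scenario in which the agent had chosen otherwise.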