How effective is (more) money? Randomizing unconditional cash transfer amounts in the US
Ania Jaroszewicz (University of California San Diego), Oliver P. Hauser (University of Exeter), Jon M. Jachimowicz (Harvard Business School) and Julian Jamison (University of Oxford and University of Exeter)
GPI Working Paper No. 28-2024
We randomized 5,243 Americans in poverty to receive a one-time unconditional cash transfer (UCT) of $2,000 (two months’ worth of total household income for the median participant), $500 (half a month’s income), or nothing. We measured the effects of the UCTs on participants’ financial well-being, psychological well-being, cognitive capacity, and physical health through surveys administered one week, six weeks, and 15 weeks later. While bank data show that both UCTs increased expenditures, we find no evidence that (more) cash had positive impacts on our pre-specified survey outcomes, in contrast to experts’ and laypeople’s incentivized predictions. We test several explanations for these unexpected results. The data are most consistent with the notion that receiving some but not enough money made participants’ (unmet) needs more salient, which caused distress. We develop a model to illustrate how receiving cash can sometimes also highlight its absence. (JEL: C93, D91, I30)
Other working papers
Concepts of existential catastrophe – Hilary Greaves (University of Oxford)
The notion of existential catastrophe is increasingly appealed to in discussion of risk management around emerging technologies, but it is not completely clear what this notion amounts to. Here, I provide an opinionated survey of the space of plausibly useful definitions of existential catastrophe. Inter alia, I discuss: whether to define existential catastrophe in ex post or ex ante terms, whether an ex ante definition should be in terms of loss of expected value or loss of potential…
Existential Risk and Growth – Leopold Aschenbrenner and Philip Trammell (Global Priorities Institute and Department of Economics, University of Oxford)
Technology increases consumption but can create or mitigate existential risk to human civilization. Though accelerating technological development may increase the hazard rate (the risk of existential catastrophe per period) in the short run, two considerations suggest that acceleration typically decreases the risk that such a catastrophe ever occurs. First, acceleration decreases the time spent at each technology level. Second, given a policy option to sacrifice consumption for safety, acceleration motivates greater sacrifices…
Will AI Avoid Exploitation? – Adam Bales (Global Priorities Institute, University of Oxford)
A simple argument suggests that we can fruitfully model advanced AI systems using expected utility theory. According to this argument, an agent will need to act as if maximising expected utility if they’re to avoid exploitation. Insofar as we should expect advanced AI to avoid exploitation, it follows that we should expect advanced AI to act as if maximising expected utility. I spell out this argument more carefully and demonstrate that it fails, but show that the manner of its failure is instructive…