How effective is (more) money? Randomizing unconditional cash transfer amounts in the US
Ania Jaroszewicz (University of California San Diego), Oliver P. Hauser (University of Exeter), Jon M. Jachimowicz (Harvard Business School), and Julian Jamison (University of Oxford and University of Exeter)
GPI Working Paper No. 28-2024
We randomized 5,243 Americans in poverty to receive a one-time unconditional cash transfer (UCT) of $2,000 (two months’ worth of total household income for the median participant), $500 (half a month’s income), or nothing. We measured the effects of the UCTs on participants’ financial well-being, psychological well-being, cognitive capacity, and physical health through surveys administered one week, six weeks, and 15 weeks later. While bank data show that both UCTs increased expenditures, we find no evidence that (more) cash had positive impacts on our pre-specified survey outcomes, in contrast to experts’ and laypeople’s incentivized predictions. We test several explanations for these unexpected results. The data are most consistent with the notion that receiving some but not enough money made participants’ (unmet) needs more salient, which caused distress. We develop a model to illustrate how receiving cash can sometimes also highlight its absence. (JEL: C93, D91, I30)
Other working papers
Maximal cluelessness – Andreas Mogensen (Global Priorities Institute, Oxford University)
I argue that many of the priority rankings that have been proposed by effective altruists seem to be in tension with apparently reasonable assumptions about the rational pursuit of our aims in the face of uncertainty. The particular issue on which I focus arises from recognition of the overwhelming importance…
Will AI Avoid Exploitation? – Adam Bales (Global Priorities Institute, University of Oxford)
A simple argument suggests that we can fruitfully model advanced AI systems using expected utility theory. According to this argument, an agent will need to act as if maximising expected utility if they’re to avoid exploitation. Insofar as we should expect advanced AI to avoid exploitation, it follows that we should expect advanced AI to act as if maximising expected utility. I spell out this argument more carefully and demonstrate that it fails, but show that the manner of its failure is instructive…
‘The only ethical argument for positive 𝛿’? – Andreas Mogensen (Global Priorities Institute, Oxford University)
I consider whether a positive rate of pure intergenerational time preference is justifiable in terms of agent-relative moral reasons relating to partiality between generations, an idea I call discounting for kinship. I respond to Parfit’s objections to discounting for kinship, but then highlight a number of apparent limitations of this…