Funding public projects: A case for the Nash product rule

Florian Brandl (University of Bonn), Felix Brandt (Technische Universität München), Matthias Greger (Technische Universität München), Dominik Peters (University of Toronto), Christian Stricker (Technische Universität München) and Warut Suksompong (National University of Singapore)

GPI Working Paper No. 14-2021, published in the Journal of Mathematical Economics

We study a mechanism design problem where a community of agents wishes to fund public projects via voluntary monetary contributions by the community members. This serves as a model for public expenditure without an exogenously available budget, such as participatory budgeting or voluntary tax programs, as well as donor coordination when interpreting charities as public projects and donations as contributions. Our aim is to identify a mutually beneficial distribution of the individual contributions. In the preference aggregation problem that we study, agents report linear utility functions over projects together with the amount of their contributions, and the mechanism determines a socially optimal distribution of the money. We identify a specific mechanism—the Nash product rule—which picks the distribution that maximizes the product of the agents’ utilities. This rule is Pareto efficient, and we prove that it satisfies attractive incentive properties: it spends each agent’s contribution only on projects the agent finds acceptable, and agents are strongly incentivized to participate.
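The Nash product objective described above can be illustrated with a minimal sketch. The instance below (two agents, two projects, a grid search over the one free variable) is an illustrative assumption, not the paper's algorithm; the function name `nash_product_rule` is likewise hypothetical.

```python
# Toy illustration of the Nash product rule: pick the distribution x of
# the total contributions over projects that maximizes the product of
# the agents' linear utilities. With 2 projects, x = (t, budget - t),
# so a simple grid search over t suffices for this sketch.

def nash_product_rule(utilities, budget, steps=10_000):
    """Grid search over two-project distributions x = (t, budget - t)."""
    best_x, best_val = None, -1.0
    for k in range(steps + 1):
        t = budget * k / steps
        x = (t, budget - t)
        val = 1.0
        for u in utilities:  # product of the agents' linear utilities
            val *= u[0] * x[0] + u[1] * x[1]
        if val > best_val:
            best_val, best_x = val, x
    return best_x

# Two single-minded agents, each contributing 1 unit and caring about
# a different project. The Nash product x0 * x1 is then maximized at
# the equal split (1, 1): each contribution is spent entirely on a
# project its contributor finds acceptable.
utilities = [(1.0, 0.0), (0.0, 1.0)]
x = nash_product_rule(utilities, budget=2.0)
```

Because the product of utilities is zero whenever any agent's utility is zero, the rule never leaves an agent with nothing, which hints at the participation incentives the paper establishes.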

Other working papers

Economic growth under transformative AI – Philip Trammell (Global Priorities Institute, Oxford University) and Anton Korinek (University of Virginia)

Industrialized countries have long seen relatively stable growth in output per capita and a stable labor share. AI may be transformative, in the sense that it may break one or both of these stylized facts. This review outlines the ways this may happen by placing several strands of the literature on AI and growth within a common framework. We first evaluate models in which AI increases output production, for example via increases in capital’s substitutability for labor…

Dispelling the Anthropic Shadow – Teruji Thomas (Global Priorities Institute, University of Oxford)

There are some possible events that we could not possibly discover in our past. We could not discover an omnicidal catastrophe, an event so destructive that it permanently wiped out life on Earth. Had such a catastrophe occurred, we wouldn’t be here to find out. This space of unobservable histories has been called the anthropic shadow. Several authors claim that the anthropic shadow leads to an ‘observation selection bias’, analogous to survivorship bias, when we use the historical record to estimate catastrophic risks. …

Against Willing Servitude: Autonomy in the Ethics of Advanced Artificial Intelligence – Adam Bales (Global Priorities Institute, University of Oxford)

Some people believe that advanced artificial intelligence systems (AIs) might, in the future, come to have moral status. Further, humans might be tempted to design such AIs so that they serve us, carrying out tasks that make our lives better. This raises the question of whether designing AIs with moral status to be willing servants would problematically violate their autonomy. In this paper, I argue that it would in fact do so.