Funding public projects: A case for the Nash product rule

Florian Brandl (University of Bonn), Felix Brandt (Technische Universität München), Matthias Greger (Technische Universität München), Dominik Peters (University of Toronto), Christian Stricker (Technische Universität München) and Warut Suksompong (National University of Singapore)

GPI Working Paper No. 14-2021, published in the Journal of Mathematical Economics

We study a mechanism design problem where a community of agents wishes to fund public projects via voluntary monetary contributions by the community members. This serves as a model for public expenditure without an exogenously available budget, such as participatory budgeting or voluntary tax programs, as well as donor coordination when interpreting charities as public projects and donations as contributions. Our aim is to identify a mutually beneficial distribution of the individual contributions. In the preference aggregation problem that we study, agents report linear utility functions over projects together with the amount of their contributions, and the mechanism determines a socially optimal distribution of the money. We identify a specific mechanism—the Nash product rule—which picks the distribution that maximizes the product of the agents’ utilities. This rule is Pareto efficient, and we prove that it satisfies attractive incentive properties: it spends each agent’s contribution only on projects the agent finds acceptable, and agents are strongly incentivized to participate.
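To make the mechanism concrete, here is a minimal numerical sketch of the Nash product rule as described in the abstract: pool the contributions and choose the spending vector over projects that maximizes the product of the agents' linear utilities. The names (utility matrix U, contribution vector c) and the use of a generic numerical solver are illustrative assumptions, not the authors' implementation; in particular, the paper's formal rule may weight each agent's utility by the size of their contribution, which this sketch omits.

```python
# Sketch of the Nash product rule: distribute the pooled contributions over
# projects so as to maximize the product of the agents' linear utilities.
# Illustrative assumptions only; the paper's rule may weight agents by
# their contributions, which is not done here.

import numpy as np
from scipy.optimize import minimize

def nash_product_rule(U, contributions):
    """U[i, j]: agent i's utility per unit of money spent on project j.
    contributions[i]: amount contributed by agent i.
    Returns a spending vector over projects summing to the total budget."""
    n_agents, n_projects = U.shape
    budget = float(np.sum(contributions))

    # Maximizing the Nash product is equivalent to maximizing the sum of
    # log-utilities; we minimize its negation.
    def neg_log_nash(x):
        utilities = U @ x
        return -np.sum(np.log(np.maximum(utilities, 1e-12)))

    x0 = np.full(n_projects, budget / n_projects)  # start from a uniform split
    constraints = [{"type": "eq", "fun": lambda x: np.sum(x) - budget}]
    bounds = [(0.0, budget)] * n_projects

    result = minimize(neg_log_nash, x0, method="SLSQP",
                      bounds=bounds, constraints=constraints)
    return result.x

# Example: three agents, two projects, equal contributions.
U = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, 0.5]])
c = np.array([10.0, 10.0, 10.0])
print(nash_product_rule(U, c).round(2))  # roughly an even split in this symmetric case
```

Because the log of the Nash product is concave in the spending vector (utilities are linear), this optimization problem is convex, so a local solver of this kind suffices for the sketch.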
