Publication:
Humans program artificial delegates to accurately solve collective-risk dilemmas but lack precision
| cris.virtual.orcid | 0000-0002-9569-9373 | |
| cris.virtual.orcid | 0000-0003-2086-1644 | |
| dc.contributor.author | Terrucha, Inês | |
| dc.contributor.author | Domingos, Elias Fernández | |
| dc.contributor.author | Suchon, Rémi | |
| dc.contributor.author | Santos, Francisco C. | |
| dc.contributor.author | Simoens, Pieter | |
| dc.contributor.author | Lenaerts, Tom | |
| dc.contributor.imecauthor | Terrucha, Inês | |
| dc.contributor.imecauthor | Simoens, Pieter | |
| dc.contributor.orcidimec | Terrucha, Inês::0000-0003-2086-1644 | |
| dc.contributor.orcidimec | Simoens, Pieter::0000-0002-9569-9373 | |
| dc.date.accessioned | 2025-07-09T08:43:18Z | |
| dc.date.available | 2025-07-09T03:58:33Z | |
| dc.date.available | 2025-07-09T08:43:18Z | |
| dc.date.issued | 2025 | |
| dc.description.abstract | In an era increasingly influenced by autonomous machines, it is only a matter of time before strategic individual decisions that impact collective goods are also made virtually, through artificial delegates. Through a series of behavioral experiments that combine delegation to autonomous agents with different choice architectures, we pinpoint what may get lost in translation when humans delegate to algorithms. We focus on the collective-risk dilemma, a game in which participants decide whether or not to contribute to a public good that must reach a target for them to keep their personal endowments. To test the effect of delegation beyond its function as a commitment device, participants play the game a second time, with the same group, and are given the chance to reprogram their agents. Our main result is that, when the action space is constrained, people who delegate contribute more to the public good, even if they have experienced more failure and inequality than people who do not delegate. However, they are not more successful. Failing to reach the target after getting close to it can be attributed to precision errors in the agent’s algorithm that cannot be corrected mid-game. Thus, with the digitization, and consequent limitation, of our interactions, artificial delegates appear to be a solution that helps preserve public goods over many iterations of risky situations. But actual success can only be achieved if humans learn to adjust their agents’ algorithms. | |
| dc.description.wosFundingText | We would like to thank Prof. Ana Paiva for the useful conversations and suggestions on this work. We would also like to thank researcher Ana Pop Stefanija for her contribution in developing the questionnaires that preceded and followed the CRD experiment. I.T., P.S., and T.L. received funding from Fonds Wetenschappelijk Onderzoek (FWO) under grant agreement number G054919N. E.F.D. is supported by an F.R.S.-FNRS (Fonds de la Recherche Scientifique) Chargé de Recherche grant (number 40005955). T.L. is supported by two F.R.S.-FNRS research projects (grant numbers 31257234 and 40007793). E.F.D. and T.L. are supported by Service Public de Wallonie Recherche under grant number 2010235-ariac by digitalwallonia4.ai. T.L. acknowledges support from the Flemish Government through the AI Research Program. F.C.S. acknowledges support from Fundação para a Ciência e a Tecnologia (FCT)-Portugal (grants UIDB/50021/2020, PTDC/CCI-INF/7366/2020, and PTDC/MAT-APL/6804/2020). T.L. and F.C.S. both acknowledge support from the Foundations of Trustworthy AI-Integrating Reasoning, Learning and Optimization project (TAILOR), funded by the European Union Horizon 2020 research and innovation programme under GA number 952215. | |
| dc.identifier.doi | 10.1073/pnas.2319942121 | |
| dc.identifier.issn | 0027-8424 | |
| dc.identifier.pmid | MEDLINE:40523170 | |
| dc.identifier.uri | https://imec-publications.be/handle/20.500.12860/45888 | |
| dc.publisher | NATL ACAD SCIENCES | |
| dc.source.beginpage | 1 | |
| dc.source.endpage | 10 | |
| dc.source.issue | 25 | |
| dc.source.journal | PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA | |
| dc.source.numberofpages | 10 | |
| dc.source.volume | 122 | |
| dc.subject.keywords | COOPERATION | |
| dc.subject.keywords | DYNAMICS | |
| dc.title | Humans program artificial delegates to accurately solve collective-risk dilemmas but lack precision | |
| dc.type | Journal article | |
| dspace.entity.type | Publication | |