Collective Intelligence and Human-AI Complementarity

This research agenda examines how digital environments and AI systems can be designed to enhance collective intelligence and improve group decision-making.

Despite their many shortcomings (HKS Misinfo Review, 2025), carefully designed digital environments can harness the collective intelligence of their users. Our prior work shows that social influence in online settings, such as crowdfunding, often amplifies existing inequalities, systematically favoring elites, men, and highly visible or “hyped” ideas (ASONAM, 2015; HCOMP, 2017; CI, 2023). At the same time, we find that specific network structures (ICWSM, 2024) and forms of social signaling (CSCW, 2021; WWW, 2023; TOCHI, 2025) can mitigate these biases and promote better collective outcomes.

Our ongoing research investigates how AI can function as both an individual assistant and a group-level collaborator in hidden-profile decision-making tasks, where critical information is distributed unevenly across group members. Using a custom-built proactive AI system, we experimentally compare different functionalities, including devil’s advocate reasoning, behavioral nudges, and summarization, across dozens of experimental groups in which the AI operates either privately for individuals or as a shared group facilitator. Our findings show that AI support at both the individual and group levels improves decision quality compared to settings without AI involvement.

This work informs the design of human–AI systems that better support collaboration, reduce bias, and enhance collective outcomes.

LINK is supported by a National Science Foundation CAREER Award (IIS-1943506) and a National Science Foundation CRII Award (IIS-1755873).