SMNS event
Aligning AI with Human Social Systems
How should we align large language models (LLMs) with human values when they participate in social and organizational processes? In this talk, I present two recent studies that approach this question from complementary angles: LLMs as facilitators of human coordination, and LLMs as agents exhibiting social behaviors of their own.
First, I describe a framework in which LLMs mediate collective decision-making by eliciting preferences, proposing balanced alternatives, and refining outcomes through dialogue, together with a novel evaluation method that uses LLM agents as study participants, showing that LLMs can both power useful collective decision-making systems and enable in-silico user studies. Second, I show that across synthetic and real-world settings, LLMs consistently reproduce fundamental micro-level principles of network formation, such as preferential attachment, triadic closure, and homophily, as well as macro-level properties, including community structure and small-world effects. Importantly, the relative emphasis of these principles adapts to context: for example, LLMs favor homophily in friendship networks but heterophily in organizational settings, mirroring patterns of social mobility. A companion human experiment confirms the predictive value of these emergent dynamics.
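To make the second study's claims concrete, below is a minimal sketch, not taken from the published work, of the kind of standard network diagnostics one might compute on an LLM-generated social network: assortative mixing by a group attribute (homophily), the global clustering coefficient (triadic closure), and average path length (a small-world ingredient). The networkx calls are standard; the toy graph and its "group" attribute are purely illustrative stand-ins for ties proposed by an LLM agent.

```python
# Minimal, illustrative sketch: diagnostics for micro-level network
# principles on a toy graph standing in for LLM-generated ties.
import networkx as nx

# Hypothetical network: nodes carry a "group" attribute; in the studies
# described above, edges would come from an LLM agent's tie-formation choices.
G = nx.Graph()
groups = {0: "a", 1: "a", 2: "a", 3: "b", 4: "b", 5: "b"}
G.add_nodes_from((n, {"group": g}) for n, g in groups.items())
G.add_edges_from([(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (2, 3)])

# Homophily: assortative mixing by group (positive => same-group ties favored).
homophily = nx.attribute_assortativity_coefficient(G, "group")

# Triadic closure: global clustering coefficient (fraction of closed triads).
closure = nx.transitivity(G)

# Small-world ingredient: short average path length alongside high clustering.
avg_path = nx.average_shortest_path_length(G)

print(f"homophily={homophily:.2f} closure={closure:.2f} avg_path={avg_path:.2f}")
```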
Together, these results highlight a central alignment challenge: ensuring that LLMs, whether assisting humans or acting as autonomous agents, promote outcomes consistent with human social and organizational goals. I conclude by outlining open questions for designing aligned multi-agent AI systems that integrate seamlessly with human networks.
The talk draws on joint work with Yuan Yuan (UC Davis and OpenAI), Chin-Chia Hsu (Google DeepMind), and Longqi Yang (Microsoft Research), published in PNAS Nexus and at the ACM Conference on Computer-Supported Cooperative Work (CSCW 2025).