[CogSci24] Evaluating and Modeling Social Intelligence: A Comparative Study of Human and AI Capabilities

Abstract

Amid the ongoing debate on whether Large Language Models (LLMs) attain near-human intelligence (Mitchell & Krakauer, 2023; Bubeck et al., 2023; Kosinski, 2023; Shiffrin & Mitchell, 2023; Ullman, 2023), this study introduces a benchmark for evaluating social intelligence, one of the most distinctive aspects of human cognition. We developed a comprehensive theoretical framework for social dynamics and introduced two evaluation tasks: Inverse Reasoning (IR) and Inverse Inverse Planning (IIP). We also developed a computational model based on recursive Bayesian inference that captures diverse patterns of human behavior. Extensive experiments and detailed analyses revealed that humans surpassed the latest GPT models in overall performance, zero-shot learning, one-shot generalization, and adaptability to multi-modalities. Notably, GPT models demonstrated social intelligence only at the most basic order (order = 0), in stark contrast to human social intelligence (order ≥ 2). Further examination indicated a propensity of LLMs to rely on pattern recognition as a shortcut, casting doubt on their possession of authentic human-level social intelligence.
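The paper's actual model is not reproduced on this page, but the flavor of order-k recursive Bayesian social reasoning mentioned in the abstract can be sketched in a few lines. Everything below is an illustrative toy, not the published model: the goal names, action names, and likelihood values are invented for this example, and a uniform prior over goals is assumed.

```python
GOALS = ["help", "hinder"]
ACTIONS = ["approach", "block"]

# Hypothetical likelihoods P(action | goal) for an order-0 actor.
# These numbers are made up purely for illustration.
BASE_LIKELIHOOD = {
    ("approach", "help"): 0.8, ("block", "help"): 0.2,
    ("approach", "hinder"): 0.3, ("block", "hinder"): 0.7,
}

def action_policy(goal, order):
    """P(action | goal) for an actor reasoning at the given recursion order.

    An order-0 actor acts directly from the base likelihoods; an order-k
    actor chooses each action in proportion to how strongly an order-(k-1)
    observer would infer the actor's true goal from that action.
    """
    if order == 0:
        return {a: BASE_LIKELIHOOD[(a, goal)] for a in ACTIONS}
    scores = {a: infer_goal(a, order - 1)[goal] for a in ACTIONS}
    total = sum(scores.values())
    return {a: s / total for a, s in scores.items()}

def infer_goal(action, order):
    """Bayesian posterior P(goal | action), assuming an order-`order`
    actor and a uniform prior over goals."""
    joint = {g: action_policy(g, order)[action] for g in GOALS}
    total = sum(joint.values())
    return {g: p / total for g, p in joint.items()}

# Posteriors at different recursion depths, mirroring the order = 0
# vs. order >= 2 contrast discussed in the abstract.
posterior0 = infer_goal("approach", order=0)
posterior2 = infer_goal("approach", order=2)
```

Each added level of recursion wraps another round of "the actor knows the observer is inferring its goal" around the base likelihoods, which is what distinguishes the higher-order reasoning the abstract attributes to humans from the order-0 reasoning it attributes to GPT models.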

Publication
In Proceedings of the Annual Meeting of the Cognitive Science Society
Yuxi Ma (Yuki)
Ph.D. '24

My research interests include psychology-inspired AI for understanding and modeling human behavior and cognition, as well as machine creativity and its applications in art.

Yujia Peng
Assistant Professor
Yixin Zhu
Assistant Professor

I build humanlike AI.

Lifeng Fan
Research Scientist
