[ICLR21Workshop] HALMA: Humanlike Abstraction Learning Meets Affordance in Rapid Problem Solving

Abstract

Humans learn compositional and causal abstraction, i.e., knowledge, in response to the structure of naturalistic tasks. When presented with a problem-solving task involving some objects, toddlers first interact with these objects to figure out what they are and what can be done with them. Leveraging these concepts, they can understand the internal structure of the task without seeing all of the problem instances. Remarkably, they further build cognitively executable strategies to rapidly solve novel problems. To empower a learning agent with similar capabilities, we argue there should be three levels of generalization in how an agent represents its knowledge: perceptual, conceptual, and algorithmic. In this paper, we devise the first systematic benchmark that offers a joint evaluation covering all three levels. This benchmark is centered around a novel task domain, HALMA, for visual concept development and rapid problem-solving. Uniquely, HALMA has a minimal yet complete concept space, upon which we introduce a novel paradigm to rigorously diagnose and dissect learning agents' capability in understanding and generalizing complex and structural concepts. We conduct extensive experiments on reinforcement learning agents with various inductive biases and carefully report their proficiencies and weaknesses.
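
The benchmark code itself is not part of this page; purely as an illustrative sketch, the snippet below shows one way the three-level evaluation implied by the abstract could be organized: an agent's policy is scored separately on held-out problem instances for each generalization level. The split layout, the `evaluate` helper, the random policy, and the toy episode stub are all hypothetical stand-ins, not HALMA's actual interface.

```python
import random

# Hypothetical split labels for the three levels of generalization
# named in the abstract; the concrete HALMA splits may differ.
GENERALIZATION_LEVELS = ("perceptual", "conceptual", "algorithmic")


def random_policy(observation):
    """Stand-in agent: picks one of four discrete actions at random."""
    return random.randrange(4)


def evaluate(policy, problem_instances, episode_solver):
    """Return the fraction of held-out problem instances the policy solves.

    `episode_solver(policy, instance)` is assumed to roll out one episode
    and return True on success; it is a placeholder for a real
    environment loop.
    """
    successes = sum(episode_solver(policy, inst) for inst in problem_instances)
    return successes / max(len(problem_instances), 1)


if __name__ == "__main__":
    # Toy stand-in: each "instance" is just a random seed, and an episode
    # "succeeds" with a fixed probability, so the script runs end to end.
    def toy_episode(policy, seed):
        rng = random.Random(seed)
        return rng.random() < 0.25

    splits = {level: list(range(20)) for level in GENERALIZATION_LEVELS}
    for level, instances in splits.items():
        score = evaluate(random_policy, instances, toy_episode)
        print(f"{level:<12} success rate: {score:.2f}")
```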

Type: Publication
Publication: In Proceedings of the International Conference on Learning Representations (ICLR 2021) Workshop
Authors
Sirui Xie (Ph.D. Candidate)
Yixin Zhu (Assistant Professor)
Ying Nian Wu (Professor)
Song-Chun Zhu (Chair Professor)
