[SIGGRAPHAsia24] Autonomous Character-Scene Interaction Synthesis from Text Instruction

Abstract

Synthesizing human motions in 3D environments, particularly those with complex activities such as locomotion, hand-reaching, and Human-Object Interaction (HOI), presents substantial demands for user-defined waypoints and stage transitions. These requirements pose challenges for current models, leading to a notable gap in automating the animation of characters from simple human inputs. This paper addresses this challenge by introducing a comprehensive framework for synthesizing multi-stage scene-aware interaction motions directly from a single text instruction and goal location. Our approach employs an auto-regressive diffusion model to synthesize the next motion segment, along with an autonomous scheduler predicting the transition for each action stage. To ensure that the synthesized motions are seamlessly integrated within the environment, we propose a scene representation that considers the local perception both at the start and the goal location. We further enhance the coherence of the generated motion by integrating frame embeddings with language input. Additionally, to support model training, we present a comprehensive motion-captured (MoCap) dataset comprising 16 hours of motion sequences in 120 indoor scenes covering 40 types of motions, each annotated with precise language descriptions. Experimental results demonstrate the efficacy of our method in generating high-quality, multi-stage motions closely aligned with environmental and textual conditions.
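To make the pipeline described above concrete, below is a minimal, illustrative Python sketch of how an auto-regressive segment generator and a stage-transition scheduler might be wired together. All function names, feature dimensions, the frame-embedding construction, and the stopping rule are placeholder assumptions for exposition only, not the paper's actual implementation or API.

```python
import numpy as np

# Hypothetical stand-ins for the paper's components; shapes and names are
# illustrative assumptions, not the authors' code.

SEGMENT_LEN = 16   # frames generated per autoregressive step (assumed)
FEATURE_DIM = 69   # per-frame pose feature size (assumed)


def perceive_scene(scene, position):
    """Toy local scene feature around a position (placeholder for the
    paper's dual perception at the start and the goal location)."""
    return np.zeros(32)


def diffusion_segment(history, text_emb, scene_start, scene_goal, frame_emb):
    """Placeholder for the auto-regressive diffusion model: returns the next
    motion segment conditioned on motion history, language, scene features,
    and frame embeddings."""
    return np.zeros((SEGMENT_LEN, FEATURE_DIM))


def scheduler_predicts_transition(history, text_emb):
    """Placeholder for the autonomous scheduler that decides whether the
    current action stage has finished (toy stopping rule here)."""
    return len(history) >= 4


def synthesize_stage(text_emb, scene, start_pos, goal_pos, init_segment):
    """One action stage: repeatedly generate segments until the scheduler
    signals a stage transition, then return the concatenated motion."""
    scene_start = perceive_scene(scene, start_pos)
    scene_goal = perceive_scene(scene, goal_pos)
    history = [init_segment]
    while not scheduler_predicts_transition(history, text_emb):
        frame_emb = np.arange(SEGMENT_LEN) / SEGMENT_LEN  # toy frame indices
        segment = diffusion_segment(history, text_emb, scene_start,
                                    scene_goal, frame_emb)
        history.append(segment)
    return np.concatenate(history, axis=0)


if __name__ == "__main__":
    text_emb = np.zeros(64)                      # toy text embedding
    init = np.zeros((SEGMENT_LEN, FEATURE_DIM))  # seed segment
    motion = synthesize_stage(text_emb, scene=None, start_pos=(0, 0, 0),
                              goal_pos=(2, 0, 1), init_segment=init)
    print(motion.shape)  # (64, 69): four 16-frame segments in this toy run
```

In this sketch the scheduler replaces hand-specified stage transitions: generation continues segment by segment until the scheduler decides the stage is complete, which mirrors the abstract's multi-stage, waypoint-free formulation at a purely schematic level.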

Publication
In SIGGRAPH Asia 2024 Conference Papers
Nan Jiang
Ph.D. '22
Zimo He
Ph.D. '25, co-advised with Prof. Yizhou Wang
Hongjie Li
Zhi Class '21
Yixin Chen
Research Scientist
Siyuan Huang
Research Scientist
Yixin Zhu
Assistant Professor