[IROS23] Part-level Scene Reconstruction Affords Robot Interaction

Abstract

Existing methods for reconstructing interactive scenes primarily focus on replacing reconstructed objects with CAD models retrieved from a limited database, resulting in significant discrepancies between the reconstructed and observed scenes. To address this issue, our work introduces a part-level reconstruction approach that reassembles objects using primitive shapes. This enables us to precisely replicate the observed physical scenes and simulate robot interactions with both rigid and articulated objects. By segmenting reconstructed objects into semantic parts and aligning primitive shapes to these parts, we assemble the parts into CAD models while estimating kinematic relations, including parent-child contact relations, joint types, and joint parameters. Specifically, we derive the optimal primitive alignment by solving a series of optimization problems, and estimate kinematic relations based on part semantics and geometry. Our experiments demonstrate that part-level scene reconstruction outperforms object-level reconstruction, capturing finer geometric detail with higher precision. The reconstructed part-level interactive scenes provide valuable kinematic information for various robotic applications; we showcase the feasibility of certifying mobile manipulation planning in these interactive scenes before executing tasks in the physical world.
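
As a concrete illustration of the primitive-alignment step described above, the Python sketch below fits an oriented cuboid primitive to a segmented part's point cloud by minimizing a point-to-surface cost. The cuboid primitive, the signed-distance cost, and all names (cuboid_sdf, fit_cuboid, etc.) are illustrative assumptions, not the paper's actual formulation.

# Minimal sketch (not the authors' implementation): align a cuboid primitive
# to one segmented part by minimizing point-to-surface distance.
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def cuboid_sdf(points, half_extents):
    # Signed distance from points (N, 3) to an axis-aligned cuboid at the origin.
    q = np.abs(points) - half_extents
    outside = np.linalg.norm(np.maximum(q, 0.0), axis=1)
    inside = np.minimum(q.max(axis=1), 0.0)
    return outside + inside

def alignment_cost(x, part_points):
    # x packs a candidate primitive: translation, rotation vector, half-extents.
    t, rotvec, half_extents = x[:3], x[3:6], np.abs(x[6:9])
    # Express the observed part points in the primitive's local frame.
    local = Rotation.from_rotvec(rotvec).inv().apply(part_points - t)
    # Penalize the squared distance of each point from the cuboid surface.
    return np.mean(cuboid_sdf(local, half_extents) ** 2)

def fit_cuboid(part_points):
    # Initialize from the point cloud's centroid and spread, then refine.
    x0 = np.concatenate([part_points.mean(axis=0),
                         np.zeros(3),
                         part_points.std(axis=0) + 1e-3])
    result = minimize(alignment_cost, x0, args=(part_points,), method="Powell")
    return result.x  # best translation, rotation vector, and half-extents found

if __name__ == "__main__":
    # Synthetic "part": points sampled inside a thin box, standing in for a
    # segmented part of a reconstructed object.
    rng = np.random.default_rng(0)
    part = rng.uniform(-1.0, 1.0, size=(500, 3)) * np.array([0.3, 0.1, 0.2])
    print(fit_cuboid(part))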

Publication
In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2023

Authors
Muzhi Han, Ph.D. Candidate
Yixin Zhu, Assistant Professor
Song-Chun Zhu, Chair Professor
Hangxin Liu, Research Scientist
