[CVPR23] Diffusion-based Generation, Optimization, and Planning in 3D Scenes

Abstract

We introduce SceneDiffuser, a conditional generative model for 3D scene understanding. SceneDiffuser provides a unified model for solving scene-conditioned generation, optimization, and planning. In contrast to prior works, SceneDiffuser is intrinsically scene-aware, physics-based, and goal-oriented. With an iterative sampling strategy, SceneDiffuser jointly formulates scene-aware generation, physics-based optimization, and goal-oriented planning via a diffusion-based denoising process in a fully differentiable fashion. Such a design alleviates the discrepancies among different modules and the posterior collapse of previous scene-conditioned generative models. We evaluate SceneDiffuser on various 3D scene understanding tasks, including human pose and motion generation, dexterous grasp generation, path planning for 3D navigation, and motion planning for robot arms. The results show significant improvements over previous models, demonstrating the tremendous potential of SceneDiffuser for the broad community of 3D scene understanding.
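The key mechanism in the abstract, folding optimization and planning objectives into the diffusion sampling loop, can be illustrated with a short sketch. Below is a minimal, hypothetical Python/PyTorch example of one guided reverse-diffusion step; `eps_model`, `objective`, and the parameter names are illustrative assumptions, not the actual SceneDiffuser API.

```python
import torch

# Hypothetical components for illustration; the real SceneDiffuser code differs.
# eps_model(x_t, t, scene): predicts the noise added at step t, conditioned on the scene.
# objective(x): differentiable physics/goal cost (e.g., collision penalty, distance to goal).

def guided_reverse_step(eps_model, objective, x_t, t, scene,
                        alpha_t, alpha_bar_t, sigma_t, guidance_scale=1.0):
    """One reverse (denoising) step with gradient guidance.

    Sketch of the idea in the abstract: optimization and planning are folded
    into sampling by steering each denoising step with the gradient of a
    differentiable objective.
    """
    # Standard DDPM posterior mean computed from the predicted noise.
    eps = eps_model(x_t, t, scene)
    mean = (x_t - (1 - alpha_t) / (1 - alpha_bar_t) ** 0.5 * eps) / alpha_t ** 0.5

    # Guidance: shift the mean along the descent direction of the objective.
    with torch.enable_grad():
        x = x_t.detach().requires_grad_(True)
        grad = torch.autograd.grad(objective(x).sum(), x)[0]
    mean = mean - guidance_scale * sigma_t ** 2 * grad

    # Add noise at every step except the final one (t == 0).
    noise = torch.randn_like(x_t) if t > 0 else torch.zeros_like(x_t)
    return mean + sigma_t * noise
```

In this sketch, generation, optimization, and planning share the same differentiable denoising loop: the learned noise predictor drives generation, while physics and goal costs steer each step through their gradients.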

Publication
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023
Siyuan Huang, Baoxiong Jia, Tengyu Liu, Yixin Zhu, Wei Liang, Song-Chun Zhu
