[CVPR18] Human-centric Indoor Scene Synthesis Using Stochastic Grammar

Examples of scenes in ten different categories. In each group of three images, left: top view; middle: side view; right: affordance heatmap.

Abstract

We present a human-centric method to sample and synthesize 3D room layouts and 2D images thereof, for the purpose of obtaining large-scale 2D/3D image data with perfect per-pixel ground truth. An attributed spatial And-Or graph (S-AOG) is proposed to represent indoor scenes. The S-AOG is a probabilistic grammar model, in which the terminal nodes are object entities including room, furniture, and supported objects. Human contexts as contextual relations are encoded by Markov Random Fields (MRF) on the terminal nodes. We learn the distributions from an indoor scene dataset and sample new layouts using Markov chain Monte Carlo. Experiments demonstrate that the proposed method can robustly sample a large variety of realistic room layouts based on three criteria: (i) visual realism compared to a state-of-the-art room arrangement method, (ii) accuracy of the affordance maps with respect to ground truth, and (iii) the functionality and naturalness of synthesized rooms evaluated by human subjects.
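To make the sampling idea concrete, below is a minimal, illustrative sketch of Metropolis-Hastings layout sampling. It is not the paper's implementation: the real method samples full S-AOG parse graphs with learned distributions, whereas here the object names (`bed`, `nightstand`), the room, and the single pairwise distance energy are all invented stand-ins for the MRF contextual relations.

```python
import math
import random

random.seed(0)

# Toy layout: object name -> (x, y) position in a room.
layout = {"bed": (1.0, 1.0), "nightstand": (3.0, 3.0)}

def energy(pos):
    # Toy prior standing in for the learned MRF relations:
    # prefer the nightstand to sit ~0.8 units from the bed.
    (bx, by), (nx, ny) = pos["bed"], pos["nightstand"]
    d = math.hypot(bx - nx, by - ny)
    return (d - 0.8) ** 2

def mh_step(pos, temperature=0.1):
    # Propose a small Gaussian move of one random object,
    # then accept with the Metropolis criterion.
    name = random.choice(list(pos))
    x, y = pos[name]
    proposal = dict(pos)
    proposal[name] = (x + random.gauss(0, 0.2), y + random.gauss(0, 0.2))
    accept_prob = math.exp(min(0.0, energy(pos) - energy(proposal)) / temperature)
    return proposal if random.random() < accept_prob else pos

for _ in range(2000):
    layout = mh_step(layout)

# After sampling, the energy fluctuates around the temperature scale,
# i.e. the nightstand ends up roughly 0.8 units from the bed.
print(energy(layout))
```

The same accept/reject loop extends to richer moves (rotations, object additions/removals from the grammar) by swapping in the corresponding proposal and energy terms.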

Publication
In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Siyuan Qi
Research Scientist
Yixin Zhu
Assistant Professor
Siyuan Huang
Research Scientist
Chenfanfu Jiang
Associate Professor
Song-Chun Zhu
Chair Professor