Neural Groundplans: Persistent Neural Scene Representations
from a Single Image

ICLR 2023

Prafull Sharma, Ayush Tewari, Yilun Du, Sergey Zakharov, Rares Ambrus, Adrien Gaidon,
William T. Freeman, Fredo Durand, Joshua B. Tenenbaum, Vincent Sitzmann

Code (Coming soon)
Data (Coming soon)

We present a method to map 2D image observations of a scene to a persistent 3D scene representation, enabling novel view synthesis and disentangled representation of the movable and immovable components of the scene. Motivated by the bird's-eye-view (BEV) representation commonly used in vision and robotics, we propose conditional neural groundplans, ground-aligned 2D feature grids, as persistent and memory-efficient scene representations. Our method is trained in a self-supervised manner from unlabeled multi-view observations using differentiable rendering, and learns to complete the geometry and appearance of occluded regions. In addition, we show that we can leverage multi-view videos at training time to learn to separately reconstruct static and movable components of the scene from a single image at test time. The ability to separately reconstruct movable objects enables a variety of downstream tasks using simple heuristics, such as extraction of object-centric 3D representations, novel view synthesis, instance-level segmentation, 3D bounding box prediction, and scene editing. This highlights the value of neural groundplans as a backbone for efficient 3D scene understanding models.
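The key property of a ground-aligned 2D feature grid is that any 3D point can fetch a feature by projecting onto the ground plane and interpolating. The sketch below illustrates that lookup with a bilinear sample; the function name, grid layout, and coordinate convention are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def sample_groundplan(grid, xz, extent=1.0):
    """Bilinearly sample a ground-aligned 2D feature grid at continuous
    (x, z) ground-plane coordinates in [-extent, extent].

    grid: (H, W, C) feature grid; xz: (N, 2) query points.
    Returns: (N, C) interpolated features.
    NOTE: illustrative sketch only; the coordinate convention is assumed.
    """
    H, W, _ = grid.shape
    # Map world coordinates to fractional grid indices.
    u = (xz[:, 0] + extent) / (2 * extent) * (W - 1)
    v = (xz[:, 1] + extent) / (2 * extent) * (H - 1)
    u0 = np.clip(np.floor(u).astype(int), 0, W - 2)
    v0 = np.clip(np.floor(v).astype(int), 0, H - 2)
    du = (u - u0)[:, None]
    dv = (v - v0)[:, None]
    # Blend the four neighboring cells.
    return ((1 - dv) * (1 - du) * grid[v0, u0]
            + (1 - dv) * du * grid[v0, u0 + 1]
            + dv * (1 - du) * grid[v0 + 1, u0]
            + dv * du * grid[v0 + 1, u0 + 1])
```

Because the feature depends only on (x, z), every point along a vertical pillar shares the same groundplan feature; a decoder would additionally take the height coordinate to produce per-point density and appearance.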

Novel View Synthesis and Static-Dynamic Disentanglement

Given a single image as input, our method represents the scene as a static and a dynamic groundplan. This representation is then used to render novel views by compositing the contributions from the two groundplans using a neural renderer. Both the static and dynamic groundplans can also be rendered individually to show the static and dynamic (movable) parts of the scene, respectively. In the video below, we show the composite rendering of the static and dynamic groundplans, as well as each groundplan rendered individually from a circular camera trajectory.
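One standard way to composite two radiance fields along a ray, consistent with the static/dynamic split described above, is to sum their densities and blend their colors by density weight before alpha compositing. The sketch below shows this for a single ray; it is a generic volume-rendering recipe under assumed shapes, not the paper's specific neural renderer.

```python
import numpy as np

def composite_two_fields(sigma_s, rgb_s, sigma_d, rgb_d, deltas):
    """Volume-render one ray by compositing a static and a dynamic field.

    sigma_s, sigma_d: (S,) per-sample densities; rgb_s, rgb_d: (S, 3) colors;
    deltas: (S,) distances between consecutive samples along the ray.
    NOTE: a generic compositing sketch, not the paper's exact renderer.
    """
    sigma = sigma_s + sigma_d                       # densities add
    # Density-weighted mixture of the two colors at each sample.
    w_s = sigma_s / np.maximum(sigma, 1e-8)
    rgb = w_s[:, None] * rgb_s + (1 - w_s)[:, None] * rgb_d
    # Standard alpha compositing along the ray.
    alpha = 1 - np.exp(-sigma * deltas)
    trans = np.cumprod(np.concatenate([[1.0], 1 - alpha[:-1]]))
    weights = alpha * trans
    return (weights[:, None] * rgb).sum(axis=0)
```

Setting one field's density to zero everywhere recovers the individual rendering of the other field, which is how the per-groundplan renderings in the video can be obtained.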


Since our method computes independent groundplans for the static and dynamic (movable) components from the input image, the densities expressed by the dynamic groundplan enable segmentation in the bird's-eye view, 2D instance-level segmentation, and 3D bounding box prediction in an unsupervised setup.
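A simple heuristic of the kind described above is to threshold the dynamic groundplan's density map and group occupied cells into connected components, one per object. The sketch below does this with a pure-Python flood fill; the threshold value and 4-connectivity are illustrative assumptions.

```python
import numpy as np

def bev_instances(density, thresh=0.5):
    """Segment a bird's-eye-view density map into object instances.

    Thresholds the map and groups occupied cells into 4-connected
    components via flood fill. Returns a label map (0 = background) and
    a list of (rmin, cmin, rmax, cmax) boxes, one per instance.
    NOTE: illustrative heuristic; threshold and connectivity are assumed.
    """
    occ = density > thresh
    H, W = density.shape
    labels = np.zeros((H, W), dtype=int)
    boxes = []
    next_label = 0
    for r in range(H):
        for c in range(W):
            if occ[r, c] and labels[r, c] == 0:
                next_label += 1
                labels[r, c] = next_label
                stack, cells = [(r, c)], []
                while stack:
                    y, x = stack.pop()
                    cells.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < H and 0 <= nx < W and occ[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = next_label
                            stack.append((ny, nx))
                ys, xs = zip(*cells)
                boxes.append((min(ys), min(xs), max(ys), max(xs)))
    return labels, boxes
```

The BEV boxes can then be lifted to 3D boxes by attaching a height range, and reprojected into the image for 2D instance masks.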

Object-Centric Representation and Scene Editing

The reliable localization of objects using the dynamic groundplan provides an individual object-centric representation for each movable object in the input image. These object-centric representations can be manipulated independently to edit the scene, enabling object deletion, insertion, and rearrangement.
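Because each object occupies a localized region of the dynamic groundplan, editing reduces to manipulating feature-grid cells. The sketch below moves one object's features to a new location, assuming the object's cells are given by a mask; the zero-feature-means-empty convention is an assumption for illustration.

```python
import numpy as np

def translate_object(groundplan, mask, shift):
    """Move one object's features within a dynamic groundplan.

    groundplan: (H, W, C) feature grid; mask: (H, W) bool mask of the
    object's cells; shift: (dy, dx) translation in grid cells.
    Deletes the object at its old location (zeroing the vacated cells,
    assumed to mean empty space) and re-inserts it shifted.
    NOTE: illustrative sketch; the empty-cell convention is assumed.
    """
    edited = groundplan.copy()
    edited[mask] = 0.0  # delete the object at its original location
    moved_mask = np.roll(np.roll(mask, shift[0], axis=0), shift[1], axis=1)
    moved_feats = np.roll(np.roll(groundplan, shift[0], axis=0), shift[1], axis=1)
    edited[moved_mask] = moved_feats[moved_mask]  # insert at the new location
    return edited
```

Deletion is the first half of this operation alone, and insertion copies a mask of cells from another groundplan; rendering the edited groundplan then produces the edited scene.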

Comparison for Novel View Synthesis


BibTeX

@inproceedings{sharma2023neural,
    title={Neural Groundplans: Persistent Neural Scene Representations from a Single Image},
    author={Prafull Sharma and Ayush Tewari and Yilun Du and Sergey Zakharov and Rares Andrei Ambrus and Adrien Gaidon and William T. Freeman and Fredo Durand and Joshua B. Tenenbaum and Vincent Sitzmann},
    booktitle={The Eleventh International Conference on Learning Representations},
    year={2023}
}