NeRF-Insert: Local 3D editing with multimodal control signals

1University of California, Los Angeles
2Amazon Web Services
arXiv preprint, 2024

NeRF-Insert allows the user to add an object to a 3D scene using multimodal input signals. The inserted object can be described with a text string or a reference image. The user can define the inpainting region by drawing as few as 2-3 manual masks; alternatively, a mesh can be supplied for finer control over the pose and geometry of the inserted object.
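As a rough illustration of how a handful of hand-drawn masks can define a 3D inpainting region, the sketch below carves a visual hull on a voxel grid: a voxel is kept only if it projects inside the user's mask in every annotated view. This is a hedged sketch, not the paper's implementation; the camera conventions, grid resolution, and function names are illustrative assumptions.

```python
import numpy as np

def carve_visual_hull(masks, intrinsics, world_to_cam, bbox_min, bbox_max, res=64):
    """Carve a coarse 3D inpainting region from a few 2D user masks.

    masks:        list of (H, W) boolean arrays, one per annotated view
    intrinsics:   list of (3, 3) pinhole intrinsic matrices K
    world_to_cam: list of (4, 4) world-to-camera extrinsic matrices
    bbox_min/max: (3,) corners of the world-space box to voxelize
    Returns a (res, res, res) boolean occupancy grid (the visual hull).
    """
    # Build a grid of voxel centers inside the bounding box.
    axes = [np.linspace(bbox_min[i], bbox_max[i], res) for i in range(3)]
    xs, ys, zs = np.meshgrid(*axes, indexing="ij")
    pts = np.stack([xs, ys, zs], axis=-1).reshape(-1, 3)           # (N, 3)
    pts_h = np.concatenate([pts, np.ones((len(pts), 1))], axis=1)  # (N, 4)

    inside_all = np.ones(len(pts), dtype=bool)
    for mask, K, T in zip(masks, intrinsics, world_to_cam):
        cam = (T @ pts_h.T).T[:, :3]                  # voxel centers in camera frame
        in_front = cam[:, 2] > 1e-6
        uvw = (K @ cam.T).T
        uv = uvw[:, :2] / np.clip(uvw[:, 2:3], 1e-6, None)
        u = np.round(uv[:, 0]).astype(int)
        v = np.round(uv[:, 1]).astype(int)
        H, W = mask.shape
        valid = in_front & (u >= 0) & (u < W) & (v >= 0) & (v < H)
        hit = np.zeros(len(pts), dtype=bool)
        hit[valid] = mask[v[valid], u[valid]]
        inside_all &= hit                             # must fall inside every user mask
    return inside_all.reshape(res, res, res)
```

The resulting hull can then be projected into any rendered viewpoint to obtain per-view inpainting masks; when the user supplies a mesh instead, the same projection step can be applied to the mesh surface directly, giving explicit control over pose and geometry.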

Abstract

We propose NeRF-Insert, a NeRF editing framework that allows users to make high-quality local edits with a flexible level of control. Unlike previous work that relied on image-to-image models, we cast scene editing as an in-painting problem, which encourages the global structure of the scene to be preserved. Moreover, while most existing methods use only textual prompts to condition edits, our framework accepts a combination of inputs of different modalities as reference. More precisely, a user may provide a combination of textual and visual inputs including images, CAD models, and binary image masks for specifying a 3D region. We use generic image generation models to in-paint the scene from multiple viewpoints, and lift the local edits to a 3D-consistent NeRF edit. Compared to previous methods, our results show better visual quality and also maintain stronger consistency with the original NeRF.
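As a hedged sketch of the per-view in-painting step described above (not the authors' exact pipeline), the snippet below uses an off-the-shelf Stable Diffusion inpainting model from Hugging Face diffusers to fill the masked region of a single rendered view from a text prompt. The model id, file names, and prompt are placeholders.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# Generic off-the-shelf inpainting model; any image inpainting diffusion model could be used.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

# A NeRF rendering of the scene from one camera, plus the projected inpainting mask
# (white = region to replace). File names are placeholders.
view = Image.open("rendered_view.png").convert("RGB").resize((512, 512))
mask = Image.open("view_mask.png").convert("L").resize((512, 512))

# Condition the edit on a text prompt; only the masked region is regenerated, so the
# rest of the view (and hence the global scene structure) is preserved.
edited = pipe(
    prompt="a ceramic vase with sunflowers",
    image=view,
    mask_image=mask,
    num_inference_steps=50,
).images[0]
edited.save("inpainted_view.png")
```

In the full method, in-painted views from multiple cameras are what get lifted into a single 3D-consistent NeRF edit; the snippet only illustrates the 2D conditioning step.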

BibTeX

BibTex Code Here