NeRF-In: Free-Form Inpainting for Pretrained NeRF with RGB-D Priors

I-Chao Shen*       Hao-Kang Liu*       Bing-Yu Chen


Publication and downloads

I-Chao Shen*, Hao-Kang Liu*, Bing-Yu Chen (*: joint first authors), NeRF-In: Free-Form Inpainting for Pretrained NeRF with RGB-D Priors, IEEE Computer Graphics and Applications (CG&A), 2024.

Paper: [PDF, 19.8MB]
arXiv: arxiv:2206.04901
Video: [Video]

Abstract

Though Neural Radiance Fields (NeRF) demonstrate compelling novel view synthesis results, editing a pre-trained NeRF remains unintuitive because the neural network's parameters are often not explicitly associated with the scene's geometry and appearance. In this paper, we introduce the first framework that enables users to remove unwanted objects or retouch undesired regions in a 3D scene represented by a pre-trained NeRF, without any category-specific data or training. The user first draws a free-form mask over a rendered view from the pre-trained NeRF to specify a region containing unwanted objects. Our framework transfers the user-provided mask to other rendered views and estimates guiding color and depth images within these transferred masked regions. Next, we formulate an optimization problem that jointly inpaints the image content in all masked regions across multiple views by updating the NeRF model's parameters. We demonstrate our framework on diverse scenes and show that it obtains visually plausible and structurally consistent results across multiple views, while requiring less time and less manual user effort.
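
To make the joint optimization concrete, below is a minimal PyTorch sketch of the idea: per-view masked color and depth losses against guiding RGB-D images, backpropagated into the NeRF's parameters. Everything here is illustrative rather than the paper's implementation: ToyNeRF, the random placeholder rays/masks/guide images, and the weight lambda_depth are stand-ins we introduce for this example; a real setup would volume-render the pretrained NeRF along each ray and use guiding images estimated by a 2D inpainter for each masked view.

import torch
import torch.nn as nn

# Illustrative stand-in for a pretrained NeRF: maps per-pixel ray
# parameters (origin + direction) to an RGB color and a depth value.
# A real NeRF would volume-render an MLP along each ray; this toy
# module just keeps the example self-contained and runnable.
class ToyNeRF(nn.Module):
    def __init__(self, in_dim=6, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # 3 RGB channels + 1 depth value
        )

    def forward(self, rays):             # rays: (N, 6)
        out = self.net(rays)
        rgb = torch.sigmoid(out[:, :3])  # colors in [0, 1]
        depth = torch.relu(out[:, 3])    # non-negative depth
        return rgb, depth

views = 4           # rendered views with transferred masks
pixels = 1024       # rays sampled per view per step
lambda_depth = 0.1  # illustrative weight on the depth-guiding term

model = ToyNeRF()   # in practice: the pretrained NeRF, loaded from a checkpoint
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Placeholder data: per-view rays, guiding RGB and depth images (in the
# paper, estimated within the transferred masked regions), and masks.
rays = torch.randn(views, pixels, 6)
guide_rgb = torch.rand(views, pixels, 3)
guide_depth = torch.rand(views, pixels)
mask = torch.rand(views, pixels) > 0.5   # True inside the masked region

for step in range(200):
    opt.zero_grad()
    loss = 0.0
    for v in range(views):  # jointly optimize over all masked views
        rgb, depth = model(rays[v])
        m = mask[v].float()
        # Color-guiding loss inside the mask, plus a depth-guiding term.
        loss = loss + ((rgb - guide_rgb[v]) ** 2 * m.unsqueeze(-1)).mean()
        loss = loss + lambda_depth * ((depth - guide_depth[v]) ** 2 * m).mean()
        # A full system would also add a reconstruction loss on unmasked
        # pixels so the rest of the scene is preserved.
    loss.backward()
    opt.step()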

BibTeX

@article{shen2024nerfin,
  author  = {I-Chao Shen and Hao-Kang Liu and Bing-Yu Chen},
  title   = {NeRF-In: Free-Form Inpainting for Pretrained NeRF with RGB-D Priors},
  journal = {IEEE Computer Graphics and Applications},
  year    = {2024},
}