

Poster

NeRF-XL: NeRF at Any Scale with Multi-GPU

Ruilong Li · Sanja Fidler · Angjoo Kanazawa · Francis Williams

Poster #334
Strong Double Blind: this paper was not made available on public preprint services during the review process.
Thu 3 Oct 1:30 a.m. PDT — 3:30 a.m. PDT

Abstract:

In this paper, we first revisit the existing approach of decomposing large-scale scenes into multiple independently trained Neural Radiance Fields (NeRFs), and identify several fundamental issues that prevent performance from improving as additional computational resources (GPUs) are added, contradicting the core objective of using multi-GPU setups to improve large-scale NeRF performance. We then introduce NeRF-XL, a principled algorithm designed to efficiently harness multi-GPU setups for performance improvement, enabling NeRFs at any scale. At its core, our method allocates non-overlapping NeRFs to disjoint spatial regions and optimizes them jointly across GPUs. We reduce GPU communication overhead by rewriting the volume rendering equation and the relevant loss terms, improving both training and rendering efficiency. Without any heuristics, our approach gracefully reveals scaling laws for NeRFs in the multi-GPU setting across various types and scales of data, including, for the first time, a NeRF reconstruction of MatrixCity, the largest open-source dataset to date, with 258K images covering a 25 km² city area.
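The abstract does not spell out the rewritten equation, but the identity it relies on is that volume rendering composites associatively across disjoint, consecutive segments of a ray. Below is a minimal NumPy sketch of that identity, not the paper's implementation; the function names `render_segment` and `composite_segments` are illustrative. Each GPU renders only the samples falling inside its spatial region and emits one partial color and one transmittance value per ray; compositing these small pairs front-to-back reproduces rendering the full ray, so per-sample densities and colors never need to cross GPUs.

```python
import numpy as np

def render_segment(sigmas, deltas, colors):
    """Volume-render one disjoint ray segment (e.g. on one GPU).

    sigmas: (N,) densities, deltas: (N,) sample spacings, colors: (N, 3).
    Returns the segment's partial color, computed as if transmittance
    enters the segment at 1, and the segment's total transmittance.
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)                          # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))   # T_i within the segment
    partial_color = np.sum((trans * alphas)[:, None] * colors, axis=0)
    segment_trans = np.prod(1.0 - alphas)                            # light surviving the segment
    return partial_color, segment_trans

def composite_segments(partials):
    """Merge per-segment (partial_color, transmittance) pairs, ordered
    front-to-back along the ray: C = sum_k (prod_{j<k} tau_j) * C_k.
    Only these small per-ray quantities would need to cross GPUs."""
    color, trans = np.zeros(3), 1.0
    for c_k, tau_k in partials:
        color += trans * c_k
        trans *= tau_k
    return color, trans
```

As a sanity check under this sketch, splitting a ray's samples into any consecutive chunks, rendering each chunk with `render_segment`, and merging with `composite_segments` yields the same color as rendering all samples in one pass; this associativity is what makes the per-region decomposition exact rather than heuristic.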
