Poster

MVSGaussian: Fast Generalizable Gaussian Splatting Reconstruction from Multi-View Stereo

Tianqi Liu · Guangcong Wang · Shoukang Hu · Liao Shen · Xinyi Ye · Yuhang Zang · Zhiguo Cao · Wei Li · Ziwei Liu

Tue 1 Oct 1:30 a.m. PDT — 3:30 a.m. PDT

Abstract:

We present MVSGaussian, a new generalizable 3D Gaussian representation approach derived from Multi-View Stereo (MVS) that can efficiently reconstruct unseen scenes. Specifically, 1) we leverage MVS to encode geometry-aware Gaussian representations and decode them into Gaussian parameters. 2) To further enhance performance, we propose a hybrid Gaussian rendering scheme that integrates an efficient volume rendering design for novel view synthesis. 3) To support fast fine-tuning for specific scenes, we introduce a multi-view geometrically consistent aggregation strategy that effectively aggregates the point clouds generated by the generalizable model, serving as the initialization for per-scene optimization. Compared with previous generalizable NeRF-based methods, which typically require minutes of fine-tuning and seconds of rendering per image, MVSGaussian achieves real-time rendering (300+ FPS) with better synthesis quality per scene on a single RTX 3090 GPU. Compared with the vanilla 3D-GS, MVSGaussian achieves better novel view synthesis at 13.3× lower training cost (45 s). Extensive experiments on the DTU, Real Forward-facing, NeRF Synthetic, and Tanks and Temples datasets validate that MVSGaussian achieves state-of-the-art performance with convincing generalizability, real-time rendering speed, and fast per-scene optimization.
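To make the first step of the abstract concrete, below is a minimal PyTorch sketch (not the authors' code) of how per-pixel MVS features and a regressed depth might be decoded into pixel-aligned 3D Gaussian parameters: positions come from unprojecting each pixel along its camera ray at the predicted depth, while small linear heads predict the remaining attributes. All names (GaussianHead, feats, rays_o, rays_d) are hypothetical illustrations, not the released API.

```python
# Hedged sketch of geometry-aware Gaussian decoding from MVS features.
# Names and head sizes are assumptions; attribute layout follows vanilla 3D-GS
# (quaternion rotation, anisotropic scale, opacity, RGB color).
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianHead(nn.Module):
    """Decodes per-pixel MVS features into 3D Gaussian parameters."""
    def __init__(self, feat_dim: int = 32):
        super().__init__()
        self.rotation = nn.Linear(feat_dim, 4)  # unit quaternion
        self.scale = nn.Linear(feat_dim, 3)     # anisotropic scale
        self.opacity = nn.Linear(feat_dim, 1)
        self.color = nn.Linear(feat_dim, 3)

    def forward(self, feats, depth, rays_o, rays_d):
        # feats:  (N, feat_dim) per-pixel features from the MVS cost volume
        # depth:  (N, 1) per-pixel depth regressed from the cost volume
        # rays_o, rays_d: (N, 3) origin and unit direction of each pixel ray
        # Geometry-aware position: unproject each pixel along its ray.
        xyz = rays_o + depth * rays_d
        rot = F.normalize(self.rotation(feats), dim=-1)  # unit quaternion
        scale = F.softplus(self.scale(feats))            # positive scales
        alpha = torch.sigmoid(self.opacity(feats))       # opacity in (0, 1)
        rgb = torch.sigmoid(self.color(feats))           # color in (0, 1)
        return xyz, rot, scale, alpha, rgb

# Usage with dummy inputs (a toy image of 4 pixels, one ray each):
head = GaussianHead(feat_dim=32)
feats = torch.randn(4, 32)
depth = torch.rand(4, 1) * 2.0 + 0.5
rays_o = torch.zeros(4, 3)
rays_d = F.normalize(torch.randn(4, 3), dim=-1)
xyz, rot, scale, alpha, rgb = head(feats, depth, rays_o, rays_d)
```

The hybrid rendering step (point 2 of the abstract) would then combine the image splatted from these Gaussians with the output of the efficient volume rendering branch; a simple per-pixel blend of the two colors is one plausible reading of the abstract, though the exact combination rule is defined in the paper itself.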
