Progressively Optimized Local Radiance Fields for Robust View Synthesis
The paper proposes reconstructing a large-scale radiance field from a single, casually captured video.
Structure-from-Motion (SfM), commonly used to estimate camera poses, is not robust in hand-held video settings: it can fail when even slight dynamic motion is present in the video. To remove this dependency on precomputed camera poses, the authors propose a joint pose and radiance field estimation method. The core of the approach is to process the video sequence progressively using overlapping local radiance fields, which leads to significantly improved robustness.
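The progressive allocation of overlapping local fields along the camera trajectory can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `LocalField` class, the `radius` and `overlap` parameters, and the allocation rule are all simplified placeholders (the actual method optimizes poses and field contents jointly).

```python
import numpy as np

class LocalField:
    """Placeholder for one local radiance field (a voxel/tensor grid in practice)."""
    def __init__(self, center, radius):
        self.center = np.asarray(center, dtype=float)
        self.radius = radius  # half-width of the field's axis-aligned bounds

    def contains(self, point, margin=0.0):
        # True if `point` lies inside the bounds, shrunk by `margin`
        return bool(np.all(np.abs(point - self.center) <= self.radius - margin))

def allocate_fields(camera_positions, radius=1.0, overlap=0.5):
    """Progressively allocate overlapping local fields along a camera trajectory.

    A new field, centered on the current camera, is created whenever the camera
    nears the boundary of the active field; `overlap` controls how early this
    happens, so consecutive fields share a transition region.
    """
    fields = [LocalField(camera_positions[0], radius)]
    assignments = []  # index of the active field for each frame
    for pos in camera_positions:
        if not fields[-1].contains(pos, margin=overlap * radius):
            fields.append(LocalField(pos, radius))
        assignments.append(len(fields) - 1)
    return fields, assignments
```

Because each field only has to model the scene near a segment of the trajectory, the approach scales to long, unbounded captures: frames in the transition region are covered by two fields at once, which smooths the hand-off between them.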
The multiple overlapping local radiance fields improve visual quality and support modeling large-scale unbounded scenes. A newly collected video dataset is also presented as part of the authors' contribution to research on NeRFs.
A limitation of the work is that it is not a complete SLAM system.