NeuSample: Neural Sample Field for Efficient View Synthesis
- Jiemin Fang (HUST)
- Lingxi Xie (Huawei)
- Xinggang Wang ✉ (HUST)
- Xiaopeng Zhang (Huawei)
- Wenyu Liu (HUST)
- Qi Tian (Huawei)
Abstract
Neural radiance fields (NeRF) have shown great potential in representing 3D scenes and synthesizing novel views, but the computational overhead of NeRF at the inference stage remains heavy. To alleviate this burden, we delve into the coarse-to-fine, hierarchical sampling procedure of NeRF and point out that the coarse stage can be replaced by a lightweight module, which we name a neural sample field. The proposed sample field maps rays into sample distributions, which can be transformed into point coordinates and fed into radiance fields for volume rendering. The overall framework is named NeuSample. We perform experiments on Realistic Synthetic 360° and Real Forward-Facing, two popular 3D scene sets, and show that NeuSample achieves better rendering quality than NeRF while enjoying faster inference. NeuSample is further compressed with a proposed sample field extraction method towards a better trade-off between quality and speed.
Sample Field
We propose a neural sample field which maps a ray directly into a series of samples for volume rendering.
Specifically, we first obtain N scalars, each lying between 0 and 1, by feeding the ray origin coordinates and direction into the sample field. Each scalar represents a relative sample position between the near and far bounds along the ray. These scalars are then transformed into absolute 3D coordinates.
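The following is a minimal PyTorch sketch of such a sample field. The layer widths, the sigmoid parameterization, the sorting step, and the names `SampleField`, `rays_o`, and `rays_d` are our own assumptions for illustration, not taken from the released code.

```python
import torch
import torch.nn as nn

class SampleField(nn.Module):
    """Maps a ray (origin + direction) to N sample points along the ray."""

    def __init__(self, n_samples=192, hidden=256):
        super().__init__()
        # Input: ray origin (3) + ray direction (3); output: N relative positions.
        self.mlp = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_samples),
        )

    def forward(self, rays_o, rays_d, near, far):
        # N scalars in (0, 1): relative sample positions between near and far.
        t_rel = torch.sigmoid(self.mlp(torch.cat([rays_o, rays_d], dim=-1)))
        # Keep samples ordered along the ray (an implementation choice here).
        t_rel, _ = torch.sort(t_rel, dim=-1)
        # Transform to absolute depths, then to 3D point coordinates.
        t_abs = near + t_rel * (far - near)                               # [R, N]
        pts = rays_o[:, None, :] + t_abs[..., None] * rays_d[:, None, :]  # [R, N, 3]
        return pts, t_abs
```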
Integrating Sample Fields with Radiance Fields
To render a pixel, the ray passing through it is first fed into the sample field network, which maps the ray to a sample distribution along it. The distribution is then transformed into 3D point coordinates, which are fed into the radiance field to obtain colors and densities. Finally, volume rendering is performed over these points.
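A minimal sketch of this pipeline is shown below, reusing the hypothetical `SampleField` above and assuming a standard NeRF-style radiance field that returns per-point color and density; the quadrature follows the usual NeRF volume-rendering formulation, and `render_rays` is an illustrative name.

```python
import torch

def render_rays(sample_field, radiance_field, rays_o, rays_d, near, far):
    # 1. Map each ray to sample points with the sample field.
    pts, t_abs = sample_field(rays_o, rays_d, near, far)          # [R, N, 3], [R, N]

    # 2. Query the radiance field for color and density at each point
    #    (assumed signature: (points, view dirs) -> (rgb, sigma)).
    dirs = rays_d[:, None, :].expand_as(pts)
    rgb, sigma = radiance_field(pts, dirs)                        # [R, N, 3], [R, N]

    # 3. Standard volume rendering: alpha compositing along the ray.
    deltas = t_abs[:, 1:] - t_abs[:, :-1]
    deltas = torch.cat([deltas, 1e10 * torch.ones_like(deltas[:, :1])], dim=-1)
    alpha = 1.0 - torch.exp(-sigma * deltas)
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], dim=-1),
        dim=-1,
    )[:, :-1]
    weights = alpha * trans                                       # [R, N]
    rgb_map = (weights[..., None] * rgb).sum(dim=1)               # [R, 3]
    return rgb_map, weights, t_abs
```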
Sample Field Extraction
Besides removing the computation cost of the coarse field, we extract the learned sample field into one that produces fewer samples for further acceleration. A depth boost method is proposed to initialize the extracted field: its mean output is forced to fit the depth predicted by a learned regular sample field.
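The sketch below illustrates one way such a depth-boost objective could look, reusing the hypothetical `render_rays` above; estimating the teacher depth from the volume-rendering weights and using an MSE loss are our assumptions, not necessarily the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def depth_boost_loss(extracted_field, full_sample_field, radiance_field,
                     rays_o, rays_d, near, far):
    """Fit the extracted field's mean sample position to the teacher's depth."""
    # Depth predicted by the learned regular sample field + radiance field.
    with torch.no_grad():
        _, weights, t_abs = render_rays(full_sample_field, radiance_field,
                                        rays_o, rays_d, near, far)
        depth = (weights * t_abs).sum(dim=-1) / (weights.sum(dim=-1) + 1e-10)  # [R]

    # Mean sample position of the extracted (fewer-sample) field.
    _, t_small = extracted_field(rays_o, rays_d, near, far)       # [R, N_small]
    mean_t = t_small.mean(dim=-1)                                  # [R]

    # Force the extracted field's mean output to match the teacher depth.
    return F.mse_loss(mean_t, depth)
```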
Results
We evaluate NeuSample on both the Realistic Synthetic 360° and Real Forward-Facing scenes; it achieves competitive or better rendering quality than NeRF with less computation cost.
Under different acceleration settings, NeuSample shows a much better speed-quality trade-off than NeRF.
Acknowledgements
We would like to thank Liangchen Song, Yingqing Rao and Yuzhu Sun for their generous assistance and discussion.
The website template was borrowed from Mip-NeRF.