TiNeuVox:
Fast Dynamic Radiance Fields with
Time-Aware Neural Voxels

¹Institute of AI, HUST  ²School of EIC, HUST  ³Huawei Inc.  ⁴TUM
* denotes equal contributions.
Conditionally Accepted to ACM SIGGRAPH Asia 2022

Abstract

[Overview figure]

We propose a radiance field framework that represents scenes with time-aware voxel features, named TiNeuVox. A tiny coordinate deformation network models coarse motion trajectories, and temporal information is further enhanced in the radiance network. A multi-distance interpolation method is applied to the voxel features to model both small and large motions. Our framework significantly accelerates the optimization of dynamic radiance fields while maintaining high rendering quality. Empirical evaluation is performed on both synthetic and real scenes. TiNeuVox completes training in only 8 minutes with an 8 MB storage cost, while achieving rendering quality similar to or even better than previous dynamic NeRF methods.
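To illustrate the multi-distance interpolation idea, here is a minimal NumPy sketch. It is an assumption-laden approximation, not the paper's implementation: `trilinear` and `multi_distance_features` are hypothetical helpers, and "multiple distances" is modeled here by interpolating progressively coarsened copies of the same voxel grid, so that coarser copies aggregate features over larger neighborhoods (larger motions) while the full-resolution copy captures small motions.

```python
import numpy as np

def trilinear(grid, pts):
    """Trilinearly interpolate a voxel feature grid.

    grid: (D, H, W, C) voxel features; pts: (N, 3) coordinates in voxel units.
    Returns (N, C) interpolated features.
    """
    D, H, W, C = grid.shape
    bounds = np.array([D, H, W]) - 1
    base = np.floor(pts).astype(int)
    frac = pts - base                      # fractional offset inside the cell
    lo = np.clip(base, 0, bounds)
    hi = np.clip(base + 1, 0, bounds)
    out = np.zeros((pts.shape[0], C))
    # Accumulate contributions from the 8 corners of the enclosing voxel.
    for dz in (0, 1):
        for dy in (0, 1):
            for dx in (0, 1):
                idx = np.where([dz, dy, dx], hi, lo)          # (N, 3) corner indices
                w = np.prod(np.where([dz, dy, dx], frac, 1 - frac), axis=1)
                out += w[:, None] * grid[idx[:, 0], idx[:, 1], idx[:, 2]]
    return out

def multi_distance_features(grid, pts, scales=(1, 2, 4)):
    """Concatenate features interpolated at several voxel distances.

    Scale 1 queries the full-resolution grid (small motions); larger
    scales query strided, coarser views of the grid (large motions).
    """
    feats = [trilinear(grid[::s, ::s, ::s], pts / s) for s in scales]
    return np.concatenate(feats, axis=1)
```

With three scales and a C-channel grid, each query point yields a 3C-dimensional feature that a downstream radiance network could consume alongside time embeddings.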

Video

Results on 360° Synthetic Scenes

These videos are synthesized at novel times and views, taking sparse time-view images as input.

The synthesis process can be decomposed, varying time and view direction separately.

Results on Real Scenes

These videos are synthesized at novel times and views, taking sparse time-view images as input. The right side shows the depth map.

Citation

@article{tineuvox,
title={Fast Dynamic Radiance Fields with Time-Aware Neural Voxels},
author={Jiemin Fang and Taoran Yi and Xinggang Wang and Lingxi Xie and Xiaopeng Zhang and Wenyu Liu and Matthias Nie{\ss}ner and Qi Tian},
journal={arXiv preprint arXiv:2205.15285},
year={2022}
}

Acknowledgements

The authors would like to thank Prof. Angela Dai for her voice recorded in the presentation video and Liangchen Song for his valuable comments and discussions.