TiNeuVox:
Fast Dynamic Radiance Fields with
Time-Aware Neural Voxels

1Institute of AI, HUST 2School of EIC, HUST 3Huawei Inc. 4TUM
* denotes equal contributions.
ACM SIGGRAPH Asia 2022

Abstract


We propose a radiance field framework that represents scenes with time-aware voxel features, named TiNeuVox. A tiny coordinate deformation network is introduced to model coarse motion trajectories, and temporal information is further enhanced in the radiance network. A multi-distance interpolation method is proposed and applied to voxel features to model both small and large motions. Our framework significantly accelerates the optimization of dynamic radiance fields while maintaining high rendering quality. Empirical evaluation is performed on both synthetic and real scenes. Our TiNeuVox completes training in only 8 minutes with an 8 MB storage cost, while showing similar or even better rendering performance than previous dynamic NeRF methods.
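The multi-distance interpolation idea from the abstract can be illustrated with a minimal NumPy sketch: features for a query point are trilinearly interpolated from the voxel grid at several interpolation distances and concatenated, so nearby voxels capture small motions while farther voxels capture large ones. The stride set, the function names, and the use of strided sub-grids to realize larger distances are illustrative assumptions here, not the paper's exact implementation.

```python
import numpy as np

def trilinear_interp(grid, pts):
    """Trilinearly interpolate features from a dense voxel grid.

    grid: (D, H, W, C) feature volume; pts: (N, 3) points in voxel units.
    """
    D, H, W, C = grid.shape
    # Clamp so the +1 corner neighbor stays inside the grid.
    p = np.clip(pts, 0.0, np.array([D, H, W], dtype=float) - 1 - 1e-6)
    i0 = np.floor(p).astype(int)
    f = p - i0                      # fractional offsets, shape (N, 3)
    out = np.zeros((len(p), C))
    for corner in range(8):         # 8 corners of the enclosing voxel cell
        d = np.array([(corner >> 2) & 1, (corner >> 1) & 1, corner & 1])
        idx = i0 + d                # integer indices of this corner
        w = np.prod(np.where(d == 1, f, 1.0 - f), axis=1)  # trilinear weight
        out += w[:, None] * grid[idx[:, 0], idx[:, 1], idx[:, 2]]
    return out

def multi_distance_features(grid, pts, strides=(1, 2, 4)):
    """Interpolate at several distances and concatenate (hypothetical sketch).

    Larger strides read from a coarser view of the same grid, so each voxel
    step spans a larger spatial distance, helping model large motions.
    """
    feats = [trilinear_interp(grid[::s, ::s, ::s], pts / s) for s in strides]
    return np.concatenate(feats, axis=1)     # (N, C * len(strides))
```

For a (16, 16, 16, 4) grid and three strides, each query point yields a 12-dimensional feature that a radiance network could consume alongside time embeddings.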

Video

Results on 360° Synthetic Scenes

These videos are synthesized at novel times and views, taking sparse time-view images as input.

The synthesis process can be decomposed into time and view direction separately.

Results on Real Scenes

These videos are synthesized at novel times and views, taking sparse time-view images as input. The depth map is shown on the right.

Citation

@inproceedings{TiNeuVox,
  author = {Fang, Jiemin and Yi, Taoran and Wang, Xinggang and Xie, Lingxi and Zhang, Xiaopeng and Liu, Wenyu and Nie\ss{}ner, Matthias and Tian, Qi},
  title = {Fast Dynamic Radiance Fields with Time-Aware Neural Voxels},
  year = {2022},
  booktitle = {SIGGRAPH Asia 2022 Conference Papers}
}

Acknowledgements

The authors would like to thank Prof. Angela Dai for her voice recorded in the presentation video, and Liangchen Song and Yingqing Rao for their valuable comments and discussions.