Light Field Video Capture Using a Learning-Based Hybrid Imaging System (Academic Article)


  • Light field cameras have many advantages over traditional cameras, as they allow the user to change various camera settings after capture. However, capturing light fields requires enormous bandwidth to record the data: a modern light field camera can only take three images per second. This prevents current consumer light field cameras from capturing light field videos. Temporal interpolation at such an extreme scale (10x, from 3 fps to 30 fps) is infeasible, because too much information is entirely missing between adjacent frames. Instead, we develop a hybrid imaging system, adding a standard video camera to capture the temporal information. Given a 3 fps light field sequence and a standard 30 fps 2D video, our system can then generate a full light field video at 30 fps. We adopt a learning-based approach, which can be decomposed into two steps: spatio-temporal flow estimation and appearance estimation. The flow estimation propagates the angular information from the light field sequence to the 2D video, so we can warp input images to the target view. The appearance estimation then combines these warped images to output the final pixels. The whole process is trained end-to-end using convolutional neural networks. Experimental results demonstrate that our algorithm outperforms current video interpolation methods, enabling consumer light field videography and making applications such as refocusing and parallax view generation achievable on videos for the first time.
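  • The two-step pipeline described above (warp via estimated flow, then blend warped images into the final pixels) can be sketched in a highly simplified form. This is a toy NumPy illustration, not the paper's method: the actual system learns both stages with convolutional neural networks, whereas here the "flow estimation" is replaced by a placeholder zero-flow field and the "appearance estimation" by a fixed-weight blend. All function names and weights are hypothetical.

```python
import numpy as np

def warp(image, flow):
    """Backward-warp a grayscale image by a dense flow field
    (nearest-neighbor sampling). image: (H, W); flow: (H, W, 2)
    holding per-pixel (dy, dx) source offsets."""
    H, W = image.shape
    ys, xs = np.mgrid[0:H, 0:W]
    src_y = np.clip(ys + flow[..., 0], 0, H - 1).astype(int)
    src_x = np.clip(xs + flow[..., 1], 0, W - 1).astype(int)
    return image[src_y, src_x]

def interpolate_view(lf_prev, lf_next, video_frame, t):
    """Toy stand-in for the paper's two learned stages.
    Stage 1 (flow estimation): here a placeholder zero flow, i.e.
    an identity warp of the two 3 fps light field keyframes.
    Stage 2 (appearance estimation): here a fixed 50/50 blend of the
    temporally interpolated keyframes and the 30 fps 2D video frame;
    the real system predicts these pixels with a CNN. t in [0, 1]
    is the normalized time between the two keyframes."""
    zero_flow = np.zeros(lf_prev.shape + (2,))
    warped_prev = warp(lf_prev, zero_flow)
    warped_next = warp(lf_next, zero_flow)
    temporal = (1.0 - t) * warped_prev + t * warped_next
    return 0.5 * temporal + 0.5 * video_frame

# Usage: synthesize one intermediate view halfway between two keyframes.
lf0 = np.zeros((4, 4))            # light field keyframe at t=0
lf1 = np.ones((4, 4))             # light field keyframe at t=1
vid = np.full((4, 4), 0.5)        # 2D video frame at t=0.5
frame = interpolate_view(lf0, lf1, vid, 0.5)
```

    With 3 fps keyframes and a 30 fps video, `interpolate_view` would be called for each of the ten video frames between a pair of keyframes, with `t` stepping from 0 to 1 in increments of 0.1.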

altmetric score

  • 5.25

author list (cited authors)

  • Wang, T., Zhu, J., Kalantari, N. K., Efros, A. A., & Ramamoorthi, R.

citation count

  • 84

complete list of authors

  • Wang, Ting-Chun||Zhu, Jun-Yan||Kalantari, Nima Khademi||Efros, Alexei A||Ramamoorthi, Ravi

publication date

  • August 2017