Local Light Field Fusion: Practical View Synthesis with Prescriptive Sampling Guidelines

abstract

  • We present a practical and robust deep learning solution for capturing and rendering novel views of complex real-world scenes for virtual exploration. Previous approaches either require intractably dense view sampling or provide little to no guidance for how users should sample views of a scene to reliably render high-quality novel views. Instead, we propose an algorithm for view synthesis from an irregular grid of sampled views that first expands each sampled view into a local light field via a multiplane image (MPI) scene representation, then renders novel views by blending adjacent local light fields. We extend traditional plenoptic sampling theory to derive a bound that specifies precisely how densely users should sample views of a given scene when using our algorithm. In practice, we apply this bound to capture and render views of real-world scenes that achieve the perceptual quality of Nyquist rate view sampling while using up to 4000x fewer views. We demonstrate our approach's practicality with an augmented reality smartphone app that guides users to capture input images of a scene and viewers that enable real-time virtual exploration on desktop and mobile platforms.
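  • The sketch below (not the authors' released code) illustrates the two technical ideas the abstract summarizes: the prescriptive sampling guideline, which in the paper amounts to keeping the maximum pixel disparity between adjacent input views within the number of MPI depth planes D (versus 1 pixel for Nyquist-rate sampling), and rendering a multiplane image by back-to-front "over" alpha compositing. The function names, the plane count of 32, and the exact constant in the baseline bound are illustrative assumptions, not the paper's exact formulation.

    ```python
    # Minimal sketch of (1) the disparity-based sampling guideline and
    # (2) MPI rendering via back-to-front "over" compositing.
    # Assumptions: focal length in pixels, depth range [z_min, z_max],
    # plane count D = 32; constants differ slightly from the paper.
    import numpy as np

    def max_camera_baseline(focal_px: float, z_min: float, z_max: float,
                            num_planes: int = 32) -> float:
        """Largest allowed spacing between adjacent input cameras.

        Disparity of a point at depth z between two cameras with baseline b
        is d = focal_px * b / z (in pixels). Requiring the disparity spread
        across [z_min, z_max] to stay within `num_planes` pixels gives the
        bound below.
        """
        disparity_range = 1.0 / z_min - 1.0 / z_max
        return num_planes / (focal_px * disparity_range)

    def composite_mpi(colors: np.ndarray, alphas: np.ndarray) -> np.ndarray:
        """Render an MPI by compositing its planes back to front.

        colors: (D, H, W, 3) RGB planes, index 0 = farthest plane.
        alphas: (D, H, W)    per-plane opacity in [0, 1].
        Returns an (H, W, 3) image via the standard "over" operator:
            out = c_i * a_i + out * (1 - a_i), applied far-to-near.
        """
        out = np.zeros(colors.shape[1:], dtype=np.float64)
        for c, a in zip(colors, alphas):
            out = c * a[..., None] + out * (1.0 - a[..., None])
        return out

    # Example: composite a random 32-plane MPI and query the baseline bound.
    D, H, W = 32, 4, 4
    rng = np.random.default_rng(0)
    img = composite_mpi(rng.random((D, H, W, 3)), rng.random((D, H, W)))
    print(img.shape)                                   # (4, 4, 3)
    print(max_camera_baseline(500.0, 1.0, 100.0, D))   # allowed baseline
    ```

    With D planes per MPI, the allowed camera spacing grows by a factor of D in each grid dimension relative to Nyquist-rate sampling, which for a 2D capture grid compounds to roughly a D-squared reduction in view count, consistent with the large savings the abstract reports.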

published proceedings

  • ACM Transactions on Graphics

altmetric score

  • 6

author list (cited authors)

  • Mildenhall, B., Srinivasan, P. P., Ortiz-Cayon, R., Kalantari, N. K., Ramamoorthi, R., Ng, R., & Kar, A.

citation count

  • 310

complete list of authors

  • Mildenhall, Ben; Srinivasan, Pratul P.; Ortiz-Cayon, Rodrigo; Kalantari, Nima Khademi; Ramamoorthi, Ravi; Ng, Ren; Kar, Abhishek

publication date

  • August 2019