Deep HDR Video from Sequences with Alternating Exposures
© 2019 The Author(s). Computer Graphics Forum © 2019 The Eurographics Association and John Wiley & Sons Ltd. Published by John Wiley & Sons Ltd.

A practical way to generate a high dynamic range (HDR) video with off-the-shelf cameras is to capture a sequence with alternating exposures and reconstruct the missing content at each frame. Unfortunately, existing approaches are typically slow and cannot handle challenging cases. In this paper, we propose a learning-based approach to address this difficult problem. Specifically, we use two sequential convolutional neural networks (CNNs) to model the entire HDR video reconstruction process. In the first step, we align the neighboring frames to the current frame by estimating the flows between them with a network specifically designed for this application. We then combine the aligned and current images using another CNN to produce the final HDR frame. We train the system end-to-end by minimizing the error between the reconstructed and ground-truth HDR images on a set of training scenes. We produce our training data synthetically from existing HDR video datasets and simulate the imperfections of standard digital cameras using a simple approach. Experimental results demonstrate that our approach produces high-quality HDR videos and is an order of magnitude faster than state-of-the-art techniques on sequences with two and three alternating exposures.
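The abstract describes a two-stage pipeline: align the neighboring, differently exposed frames to the current frame, then merge them into one HDR frame. The sketch below is a heavily simplified, hypothetical illustration of that structure only, not the paper's method: the flow-estimation CNN is replaced by a placeholder integer-shift warp, and the merge CNN is replaced by classic triangle-weighted HDR merging in the linear domain. All function names and parameters are assumptions for illustration.

```python
import numpy as np

def to_linear(img, exposure, gamma=2.2):
    # Map an LDR frame to the linear radiance domain by inverting the
    # gamma curve and dividing by exposure time (standard radiometric
    # alignment; the paper's camera model is more involved).
    return (img ** gamma) / exposure

def warp(frame, flow):
    # Placeholder alignment: a constant integer shift standing in for
    # the paper's learned flow estimation (hypothetical simplification).
    dy, dx = flow
    return np.roll(frame, shift=(dy, dx), axis=(0, 1))

def merge_weights(ldr):
    # Triangle weighting: trust well-exposed pixels, downweight
    # under- and over-exposed ones (stand-in for the merge CNN).
    return 1.0 - np.abs(2.0 * ldr - 1.0)

def reconstruct_hdr(prev_ldr, cur_ldr, next_ldr, exposures, flows):
    # Stage 1: align the neighboring frames to the current frame.
    aligned_prev = warp(prev_ldr, flows[0])
    aligned_next = warp(next_ldr, flows[1])
    # Stage 2: weighted merge of all three frames in the linear domain.
    frames = [aligned_prev, cur_ldr, aligned_next]
    num = np.zeros_like(cur_ldr)
    den = np.zeros_like(cur_ldr)
    for ldr, t in zip(frames, exposures):
        w = merge_weights(ldr)
        num += w * to_linear(ldr, t)
        den += w
    return num / np.maximum(den, 1e-6)

# Usage on toy data: three 8x8 grayscale frames with alternating
# exposure times and zero motion between frames.
rng = np.random.default_rng(0)
frame = rng.uniform(0.1, 0.9, (8, 8))
hdr = reconstruct_hdr(frame, frame, frame,
                      exposures=[1.0, 1.0, 1.0],
                      flows=[(0, 0), (0, 0)])
```

In the actual paper both stages are CNNs trained jointly, so alignment and merging can compensate for each other's errors, which the hand-designed components above cannot do.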
author list (cited authors)
Kalantari, N. K., & Ramamoorthi, R.
complete list of authors
Kalantari, Nima Khademi; Ramamoorthi, Ravi