
Revisiting Light Field Rendering with Deep Anti-Aliasing Neural Network

Gaochang Wu¹, Yebin Liu², Lu Fang³, Tianyou Chai¹

¹State Key Laboratory of Synthetical Automation for Process Industries, Northeastern University
²Department of Automation, Tsinghua University
³Tsinghua-Berkeley Shenzhen Institute

Abstract

Teaser Image

Light field (LF) reconstruction is mainly confronted with two challenges: large disparity and non-Lambertian effects. Typical approaches either address the large-disparity challenge with depth estimation followed by view synthesis, or eschew explicit depth information to enable non-Lambertian rendering, but rarely solve both challenges in a unified framework. In this paper, we revisit the classic LF rendering framework and address both challenges by combining it with advanced deep learning techniques. First, we show analytically that the essential issue behind both the large-disparity and the non-Lambertian challenges is aliasing. Classic LF rendering approaches typically mitigate aliasing with a reconstruction filter in the Fourier domain, which is, however, intractable to implement within a deep learning pipeline. Instead, we introduce an alternative framework that performs anti-aliasing reconstruction in the image domain, and prove in theory that its efficacy against aliasing is comparable and even superior. To explore its full potential, we then embed the anti-aliasing framework into a deep neural network through an integrated architecture design and trainable parameters. The network is trained end to end on a purpose-built training set that includes both regular and unstructured LFs. The proposed deep learning pipeline shows substantial superiority in solving both the large-disparity and the non-Lambertian challenges compared with other state-of-the-art approaches. In addition to view interpolation within an LF, we show that the proposed pipeline also benefits LF view extrapolation.
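For intuition, image-domain anti-aliasing can be pictured as the classic shear-then-filter operation on an epipolar-plane image (EPI): shear the EPI to a candidate disparity, then low-pass filter along the spatial axis, which removes the same aliased frequencies a Fourier-domain reconstruction filter would. The NumPy/SciPy sketch below is only a minimal illustration of that textbook operation, not the network in this repository; the function name, the Gaussian low-pass, and the disparity parameter are our own assumptions.

```python
# Minimal sketch of image-domain anti-aliasing on an EPI (illustrative only).
# `epi` is a 2-D array of shape (angular, spatial); the disparity `d` and the
# Gaussian filter are assumptions, not taken from this repository's code.
import numpy as np
from scipy.ndimage import shift, gaussian_filter1d

def shear_and_filter(epi, d, sigma=1.0):
    """Shear an EPI by disparity d (pixels per view), then blur along space.

    Shearing aligns the EPI lines of the depth layer with disparity d to the
    angular axis, so a 1-D spatial low-pass suppresses the aliased high
    frequencies that a Fourier-domain reconstruction filter would cut off.
    """
    n_views = epi.shape[0]
    center = (n_views - 1) / 2.0
    sheared = np.stack(
        [shift(epi[v], (v - center) * d, order=1, mode='nearest')
         for v in range(n_views)]
    )
    return gaussian_filter1d(sheared, sigma=sigma, axis=1)
```

In the paper this hand-crafted step is replaced by trainable counterparts inside the network; the sketch only conveys why shearing plus spatial filtering can stand in for the Fourier-domain filter.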

Results

Results Image
Comparison of the results (x16 upsampling) on the LFs from the ICME DSLF dataset [34]
Results Image
Comparison of the results (x16 upsampling) on the LFs from the MPI Light Field Archive [38]

Notes on the Code

  1. Environment: Python 3.7.4, tensorflow-gpu==1.13.1

  2. First, place your light fields in "./Datasets/".

  3. The code for 3D light field (1D angular and 2D spatial) reconstruction is "main3d.py". We recommend the model with upsampling scale \alpha_s=3 for x8 or x9 reconstruction, and the model with \alpha_s=4 for x16 reconstruction.

  4. The code for 4D light field reconstruction is "main4d.py".

  5. You can train your own network using "train.py", but you should first prepare your own training and testing datasets in "./TrainData/".

  6. Please prepare your dataset in ".h5" format with shape batch (examples) x channels x width x angular; see the example after this list.

  7. "Model_ae" is the encoder part (of the autoencoder) for computing the style loss.

  8. Please cite our paper if this code helps your work. Thank you!
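As a concrete illustration of item 6 above, the snippet below packs toy examples into an ".h5" file with the stated batch x channels x width x angular layout using h5py. The dataset key "data", the file name, and the sizes are illustrative assumptions, not the loader contract of this repository; check "train.py" for the key it actually reads.

```python
# Hypothetical example of writing training data in the layout described in
# item 6: batch (examples) x channels x width x angular. The dataset name
# "data" and all sizes are assumptions; verify them against train.py.
import h5py
import numpy as np

examples = np.random.rand(64, 1, 96, 9).astype(np.float32)  # B x C x W x A

with h5py.File('./TrainData/train_example.h5', 'w') as f:
    f.create_dataset('data', data=examples, compression='gzip')

# Quick sanity check of the stored shape:
with h5py.File('./TrainData/train_example.h5', 'r') as f:
    assert f['data'].shape == (64, 1, 96, 9)
```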
