We present a learning-based approach for removing unwanted obstructions, such as window reflections, fence occlusions, or raindrops, from a short sequence of images captured by a moving camera. Our method leverages motion differences between the background and the obstructing elements to recover both layers. Specifically, we alternate between estimating dense optical flow fields for the two layers and reconstructing each layer from the flow-warped images via a deep convolutional neural network. This learning-based layer reconstruction module helps accommodate potential errors in the flow estimation and violations of brittle assumptions such as brightness consistency. We show that the proposed approach, learned from synthetically generated data, generalizes well to real images. Experimental results on numerous challenging scenarios of reflection and fence removal demonstrate the effectiveness of the proposed method.
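The alternation described above can be sketched as a small optimization loop. This is an illustrative outline only, not the repository's API: the names `estimate_flow`, `warp`, and `reconstruct_layer` are hypothetical placeholders passed in as callables, and the averaging initialization is an assumption for the sketch.

```python
def decompose(frames, estimate_flow, warp, reconstruct_layer, num_iters=3):
    """Alternately estimate per-layer flows and reconstruct the two layers.

    `estimate_flow`, `warp`, and `reconstruct_layer` are placeholder callables
    standing in for the flow network, the warping step, and the CNN-based
    layer reconstruction module described in the paper.
    """
    # Assumption for this sketch: initialize both layers from the frame average.
    background = sum(frames) / len(frames)
    obstruction = sum(frames) / len(frames)
    for _ in range(num_iters):
        # Step 1: dense optical flow of each input frame w.r.t. each layer.
        bg_flows = [estimate_flow(background, f) for f in frames]
        ob_flows = [estimate_flow(obstruction, f) for f in frames]
        # Step 2: the learned module reconstructs each layer from the
        # flow-warped frames, absorbing flow errors and brightness-consistency
        # violations instead of relying on them as hard constraints.
        background = reconstruct_layer([warp(f, fl) for f, fl in zip(frames, bg_flows)])
        obstruction = reconstruct_layer([warp(f, fl) for f, fl in zip(frames, ob_flows)])
    return background, obstruction
```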
Paper
This is the authors' reference implementation, in TensorFlow, of the multi-image reflection/fence removal method described in: "Learning to See Through Obstructions with Layered Decomposition" by Yu-Lun Liu, Wei-Sheng Lai, Ming-Hsuan Yang, Yung-Yu Chuang, Jia-Bin Huang (National Taiwan University & Google & Virginia Tech & University of California at Merced & MediaTek Inc.). If you find this code useful for your research, please consider citing the following paper.
For further information, please contact Yu-Lun Liu.
- Tested with TensorFlow 1.10.0
- Please overwrite tfoptflow/model_pwcnet.py and tfoptflow/model_base.py using the ones in this repository.
To download the pre-trained models:
Please prepare 5 frames following the naming rule XXXXX_I[0-4].png, as shown in the reflection_imgs or fence_imgs folder, and change the folder path in run_reflection.py or run_fence.py.
- Run your own sequence (reflection removal):
CUDA_VISIBLE_DEVICES=0 python3 run_reflection.py
- Run your own sequence (fence removal):
CUDA_VISIBLE_DEVICES=0 python3 run_fence.py
Google Colab notebook: https://colab.research.google.com/drive/1kCG5SJd3usgzi6Bx979KiaO_YTanNVVz?usp=sharing
We collected six sequences with ground truth:
- website/Obstruction_HTML_CameraReady/results/reflection/Huang and Liu/00071
- website/Obstruction_HTML_CameraReady/results/reflection/Huang and Liu/00072
- website/Obstruction_HTML_CameraReady/results/fence/Huang and Liu/00006
- website/Obstruction_HTML_CameraReady/results/fence/Huang and Liu/00007
- website/Obstruction_HTML_CameraReady/results/fence/Huang and Liu/00008
- website/Obstruction_HTML_CameraReady/results/raindrop/Huang and Liu/00005
[1] Yu-Lun Liu, Wei-Sheng Lai, Ming-Hsuan Yang, Yung-Yu Chuang, and Jia-Bin Huang. Learning to See Through Obstructions with Layered Decomposition. arXiv, 2020.