Our scheme adaptively restores underwater vision in real time. During training, FRS serves as the ground truth, but the output is superior to it owing to our improvements.
For underwater vision, our contributions include:
- Underwater branch in D: distinguishes whether an image is aquatic or not
- Underwater index loss: what D should learn and G should reduce
- DCP loss: encourages the output to be similar to the ground truth in terms of the dark channel (see the sketch after this list)
- Multi-stage loss strategy: when to apply the underwater index loss
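The following is a minimal, hypothetical PyTorch sketch of how a dark-channel-prior (DCP) loss could be computed; the function names (`dark_channel`, `dcp_loss`) and the patch size are illustrative assumptions, not the exact implementation in this repository.

```python
# Hypothetical sketch of a DCP loss: compare the dark channels of the
# generator output and the ground truth with an L1 distance.
import torch
import torch.nn.functional as F

def dark_channel(img, patch_size=15):
    """Dark channel of an image batch (N, 3, H, W) in [0, 1]:
    per-pixel channel minimum followed by a local minimum filter."""
    min_c = img.min(dim=1, keepdim=True)[0]  # (N, 1, H, W)
    pad = patch_size // 2
    # A minimum filter is a negated max-pool of the negated input.
    return -F.max_pool2d(-min_c, kernel_size=patch_size, stride=1, padding=pad)

def dcp_loss(fake, real, patch_size=15):
    """L1 distance between dark channels of the output and the ground truth."""
    return F.l1_loss(dark_channel(fake, patch_size),
                     dark_channel(real, patch_size))
```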
This project is based on pix2pix, and the video results are located at GAN-RS. If you use this code for your research, please cite:
Towards Quality Advancement of Underwater Machine Vision with Generative Adversarial Networks. Xingyu Chen, Junzhi Yu, Shihan Kong, Zhengxing Wu, Xi Fang, Li Wen. arXiv preprint arXiv:1712.00736 (2017).
Image-to-Image Translation with Conditional Adversarial Networks
Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, Alexei A. Efros
In CVPR 2017.