
Attentive Normalization for Conditional Image Generation

by Yi Wang, Ying-Cong Chen, Xiangyu Zhang, Jian Sun, Jiaya Jia. The code will be updated.

Introduction

This repository provides the implementation of our CVPR 2020 paper, 'Attentive Normalization for Conditional Image Generation'. The paper studies long-range visual dependency modeling in a normalization manner, verified on both class-conditional image generation and image inpainting tasks.

Framework

We normalize the input feature maps spatially according to the semantic layouts predicted from them. This strengthens long-distance relationships within the input while preserving semantics spatially.


Our method is built upon instance normalization (IN). It consists of a semantic layout learning module (semantic layout prediction with self-sampling regularization) and regional normalization.
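The regional-normalization step above can be sketched as follows. This is a minimal NumPy illustration, not the repository's TensorFlow code: it assumes a hard (argmax) semantic layout is already available, and it omits the learned layout predictor and the self-sampling regularization. The function name and shapes are illustrative.

```python
import numpy as np

def regional_normalization(x, layout, eps=1e-5):
    """Normalize a feature map per semantic region.

    x: (H, W, C) feature map.
    layout: (H, W) integer semantic labels predicted from x.
    Each region is normalized with its own per-channel mean and
    variance, i.e., instance normalization restricted to that region.
    """
    out = np.zeros_like(x, dtype=np.float64)
    for k in np.unique(layout):
        mask = layout == k
        region = x[mask]                      # (n_pixels_in_region, C)
        mu = region.mean(axis=0)              # per-channel mean
        var = region.var(axis=0)              # per-channel variance
        out[mask] = (region - mu) / np.sqrt(var + eps)
    return out
```

After this step, each semantic region has (approximately) zero mean and unit variance per channel, so statistics are shared only among spatially distant pixels that belong to the same predicted semantic class.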

Applications

This module can be applied to current GAN-based conditional image generation tasks, e.g., class-conditional image generation and image inpainting.

In common practice, Attentive Normalization is placed between the convolutional layer and the activation layer. At test time, we remove the randomness in AttenNorm by switching off its self-sampling branch, so the generation procedure is deterministic and affected only by the input.
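The train/test behavior described above can be sketched with a toy layout predictor. This is a hedged stand-in, not the paper's module: the linear projection `proj` replaces the learned semantic layout predictor, and Gumbel noise stands in for the stochastic self-sampling branch. With `training=False` the noise branch is off, so the predicted layout depends only on the input.

```python
import numpy as np

def predict_layout(x, proj, training=False, rng=None):
    """Predict a hard semantic layout from features.

    x: (H, W, C) features; proj: (C, K) projection to K semantic
    classes (an illustrative stand-in for the learned predictor).
    training=True injects noise, mimicking the stochastic
    self-sampling branch; training=False switches it off, making
    the layout deterministic.
    """
    logits = x @ proj                          # (H, W, K) class scores
    if training:
        rng = rng or np.random.default_rng()
        logits = logits + rng.gumbel(size=logits.shape)
    return logits.argmax(axis=-1)              # (H, W) hard labels
```

Calling the function twice with `training=False` on the same input yields identical layouts, matching the deterministic test-time behavior described above.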

Implementation

The TensorFlow implementation of our attentive normalization is given in inpaint_attnorm.

Citation

If our method is useful for your research, please consider citing:

@article{wang2020attentive,
  title={Attentive Normalization for Conditional Image Generation},
  author={Wang, Yi and Chen, Ying-Cong and Zhang, Xiangyu and Sun, Jian and Jia, Jiaya},
  journal={arXiv preprint arXiv:2004.03828},
  year={2020}
}

Acknowledgments

Our TensorFlow code is built upon DeepFill (v1).

Contact

Please send email to yiwang@cse.cuhk.edu.hk.