GabrielDornelles/Pix2Pix

Pix2Pix


This repository is an implementation of the paper Image-to-Image Translation with Conditional Adversarial Networks.

I implemented it by reading the paper and comparing against the original authors' and Aladdin Persson's implementations.

This GAN was trained to colorize anime sketches:

Inputs (sample sketch, `input_81`)

Outputs (generated colorization, `y_gen_81`)

The same model can be trained to generate maps from aerial photographs (or vice versa), among many other applications.



Training

I trained it on the anime-sketch-colorization-pair dataset with the following hyperparameters:

  • batch_size: 8
  • lr: 2e-4
  • resolution: 256x256
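A minimal sketch of how these hyperparameters might be wired up (the `generator`/`discriminator` stand-ins below are placeholders, not this repository's actual U-Net and PatchGAN classes); the pix2pix paper uses Adam with β1 = 0.5:

```python
import torch
import torch.nn as nn

# Hyperparameters from the training run above
BATCH_SIZE = 8
LR = 2e-4
RESOLUTION = 256

# Placeholder networks: the real repo defines a U-Net generator and a
# PatchGAN discriminator; these stand-ins only mimic the interface.
generator = nn.Sequential(nn.Conv2d(3, 3, kernel_size=3, padding=1))
discriminator = nn.Sequential(nn.Conv2d(6, 1, kernel_size=3, padding=1))

# Adam with lr=2e-4 and beta1=0.5, as in the pix2pix paper
opt_gen = torch.optim.Adam(generator.parameters(), lr=LR, betas=(0.5, 0.999))
opt_disc = torch.optim.Adam(discriminator.parameters(), lr=LR, betas=(0.5, 0.999))

# One dummy batch at the training resolution
x = torch.randn(BATCH_SIZE, 3, RESOLUTION, RESOLUTION)
y_fake = generator(x)
```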

BatchNorm2d was replaced with InstanceNorm2d (a change the authors themselves made in the CycleGAN paper):
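As an illustration (the block structure here is assumed, not copied from this repository), a typical downsampling block with the swap applied:

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """Conv -> InstanceNorm -> LeakyReLU, a common pix2pix/CycleGAN-style block."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
            nn.InstanceNorm2d(out_ch),  # instead of nn.BatchNorm2d(out_ch)
            nn.LeakyReLU(0.2),
        )

    def forward(self, x):
        return self.block(x)

block = ConvBlock(3, 64)
out = block(torch.randn(8, 3, 256, 256))  # stride 2 halves the spatial size
```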

BatchNorm computes one mean and standard deviation per batch and normalizes the whole batch to unit Gaussian statistics at once. InstanceNorm computes one mean and standard deviation per sample (and per channel) and normalizes each sample separately. In practice InstanceNorm gives better visual results, especially on the background: the background here is pure white, and BatchNorm may introduce noise on it because every sample is normalized with the batch-wide statistics.
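The difference can be checked numerically: after InstanceNorm, each sample/channel has near-zero mean on its own, while BatchNorm only guarantees this for the statistics pooled over the whole batch:

```python
import torch
import torch.nn as nn

# A batch with a deliberately non-trivial mean and scale
x = torch.randn(8, 3, 16, 16) * 5 + 2

x_in = nn.InstanceNorm2d(3)(x)
x_bn = nn.BatchNorm2d(3)(x)

# InstanceNorm: every (sample, channel) slice is normalized independently
per_sample_means = x_in.mean(dim=(2, 3))   # shape (8, 3), all ~0

# BatchNorm: only the mean pooled over the whole batch is ~0;
# individual samples generally keep a nonzero offset
per_batch_mean = x_bn.mean(dim=(0, 2, 3))  # shape (3,), ~0
```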
