
Conditional-and-nonConditional-Capsule-GANs

This is the repository for the code used in writing the paper, 'Generative Adversarial Network Architectures For Image Synthesis Using Capsule Network'. The paper proposes GAN architectures that incorporate Capsule Networks for conditional and non-conditional image synthesis. It demonstrates that such architectures require significantly less training data to generate good-quality images than current architectures for image synthesis (DCGANs with the Improved Wasserstein loss), and that the robustness of Capsule GANs to small affine transformations increases the diversity of the generated images.

Architectures

Discriminative Capsule GAN

The following diagram shows the architecture for the Discriminative Capsule GAN that is used for non-conditional image synthesis.

(Figure: Discriminative Capsule GAN architecture)

The discriminator's CNN is replaced with a Capsule Network. The loss uses the margin loss described for Capsule Networks in "Dynamic Routing Between Capsules" by Sabour et al. [1] to build a function analogous to the Wasserstein loss, allowing the architecture to benefit from stable training and faster convergence of the critic to optimality.
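As a rough sketch of the idea, the margin loss from Sabour et al. [1] can be written as a function of the output capsule's length; the repository's TensorFlow code will differ, and the function and parameter names below (including the default m+, m-, and lambda values from the capsule paper) are assumptions for illustration:

```python
import numpy as np

def margin_loss(v_norm, is_real, m_plus=0.9, m_minus=0.1, lam=0.5):
    """Capsule margin loss (Sabour et al. [1]) applied to a real/fake capsule.

    v_norm  : length of the output capsule vector (probability the class is present)
    is_real : 1 if the "real" class should be present, 0 otherwise
    """
    # Penalise a short capsule when the class should be present...
    present = is_real * np.maximum(0.0, m_plus - v_norm) ** 2
    # ...and a long capsule when it should be absent (down-weighted by lam).
    absent = lam * (1 - is_real) * np.maximum(0.0, v_norm - m_minus) ** 2
    return present + absent
```

The discriminator would then sum this loss over real samples (with `is_real=1`) and generated samples (with `is_real=0`), playing the role the critic score plays in the Wasserstein objective.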

Split Auxiliary Conditional Capsule GAN

The following diagram shows the architecture for the Split-Auxiliary Conditional Capsule GAN that is used for conditional image synthesis.

The discriminator uses a split-auxiliary classifier architecture for conditional image discrimination. The Primary Capsule layer is shared between the Primary and Secondary Classifiers: the Primary Classifier classifies the image as real or fake, and the Secondary Classifier predicts the most probable class of the image. The losses from these two classifiers are combined into a single discriminator loss.
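A minimal sketch of how the two classifier losses might be combined, using a binary cross-entropy for the real/fake head and a categorical cross-entropy for the class head. The function name, the use of cross-entropy, and the `aux_weight` weighting are assumptions for illustration; the paper's actual combination may differ:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def split_aux_discriminator_loss(rf_logit, cls_logits, is_real, label, aux_weight=1.0):
    """Combine the Primary (real/fake) and Secondary (class) classifier losses.

    rf_logit   : scalar logit from the Primary Classifier
    cls_logits : per-class logits from the Secondary Classifier
    is_real    : 1 for a real image, 0 for a generated one
    label      : ground-truth (or conditioned) class index
    """
    # Primary classifier: binary cross-entropy on the real/fake logit.
    p_real = 1.0 / (1.0 + np.exp(-rf_logit))
    primary = -(is_real * np.log(p_real) + (1 - is_real) * np.log(1.0 - p_real))
    # Secondary classifier: cross-entropy over the class prediction.
    secondary = -np.log(softmax(cls_logits)[label])
    return primary + aux_weight * secondary
```

A discriminator that gets both heads right incurs a small loss; either a wrong real/fake call or a wrong class prediction drives the combined loss up.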

Requirements

  • Python 3.5 or higher
  • TensorFlow 1.4
  • Matplotlib
  • NumPy
  • imageio
  • pickle
  • scikit-learn
  • tqdm

Running Instructions

Improved Wasserstein GAN: Go to /Improved Wasserstein GAN/ and run,

python NonCondImprovedWassersteinGAN.py

Non-conditional Discriminative Capsule GAN: Go to /Discriminative Capsule GAN/ and run,

python DiscriminativeCapsGAN.py

Conditional Improved Wasserstein DCGAN: Go to /Conditional Improved WDCGAN/ and run,

python ConditionalImprovedWDCGAN.py

Split-Auxiliary Conditional Capsule GAN: Go to /Condtional Capsule GAN/ and run,

python ConditionalCapsGAN.py

Results

Non-Conditional Models

Improved Wasserstein GAN

The following images were generated by training the GAN on the MNIST and Fashion-MNIST datasets for 5 epochs.

Discriminative Capsule GAN

The following images were generated by training the GAN on the MNIST and Fashion-MNIST datasets for 5 epochs.

Nearest Neighbour Distance Comparison

The following is a comparison of the mean nearest neighbour distances of the images generated from the Fashion-MNIST dataset by the Improved Wasserstein GAN and Discriminative Capsule GAN models. Despite the images generated by the Discriminative Capsule GAN being more realistic, their mean nearest neighbour distances are consistently higher than those of the images generated by the Improved Wasserstein GAN. This points towards greater diversity in the images generated by the Discriminative Capsule GAN.
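The metric above can be computed by flattening each image to a vector and measuring the Euclidean distance from each generated image to its closest match in a reference set. The repository's exact implementation is not shown here; this is a self-contained sketch, and the function name is an assumption:

```python
import numpy as np

def mean_nn_distance(generated, reference):
    """Mean Euclidean distance from each generated image to its nearest
    neighbour in the reference set (images flattened to vectors)."""
    g = generated.reshape(len(generated), -1).astype(float)
    r = reference.reshape(len(reference), -1).astype(float)
    # Pairwise squared distances via ||a - b||^2 = ||a||^2 - 2 a.b + ||b||^2
    d2 = (g ** 2).sum(1)[:, None] - 2.0 * g @ r.T + (r ** 2).sum(1)[None, :]
    # Clamp tiny negative values from floating-point error, then take each
    # generated sample's nearest neighbour and average.
    return np.sqrt(np.maximum(d2, 0.0)).min(axis=1).mean()
```

A higher mean nearest-neighbour distance indicates that generated samples sit farther from their closest reference images, which is the sense in which the comparison above reads as greater diversity.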

Conditional Models

Improved Wasserstein GAN

The following images were generated by training the GAN on rotated-MNIST for 100 epochs and flipped-Fashion-MNIST for 50 epochs.

Split-Auxiliary Conditional Capsule GAN

The following images were generated by training the GAN on rotated-MNIST for 5 epochs and flipped-Fashion-MNIST for 10 epochs.

Nearest Neighbour Distance Comparison

The following is a comparison of the mean nearest neighbour distances of the images generated from the rotated-MNIST dataset by the Conditional Improved Wasserstein GAN (left, 100 epochs) and the Split-Auxiliary Conditional Capsule GAN (right, 10 epochs). Despite the images generated by the Capsule GAN being more realistic, their mean nearest neighbour distances are consistently higher than those of the images generated by the Improved Wasserstein GAN. This points towards greater diversity in the images generated by the Capsule GAN.
