This is the repository for the code used in writing the paper 'Generative Adversarial Network Architectures For Image Synthesis Using Capsule Network'. The paper proposes GAN architectures that incorporate Capsule Networks for conditional and non-conditional image synthesis. It also demonstrates that such architectures require significantly less training data to generate good-quality images than current architectures for image synthesis (DCGANs with the Improved Wasserstein Loss), and that the robustness of Capsule GANs to small affine transformations increases the diversity of the generated images.
Discriminative Capsule GAN
The following diagram shows the architecture for the Discriminative Capsule GAN that is used for non-conditional image synthesis.
The discriminator substitutes a Capsule Network for the usual CNN. The loss uses the margin losses for Capsule Networks described in the paper "Dynamic Routing Between Capsules" by Sabour et al. to build a function analogous to the Wasserstein Loss, allowing the architecture to benefit from stable training and faster convergence of the critic to optimality.
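The margin loss from Sabour et al. can be sketched as follows. This is a minimal NumPy illustration of the loss formula from the cited paper (with its default constants m+ = 0.9, m− = 0.1, λ = 0.5), not the repository's TensorFlow implementation:

```python
import numpy as np

def margin_loss(v_lengths, targets, m_plus=0.9, m_minus=0.1, lam=0.5):
    """Margin loss from Sabour et al. (2017).

    v_lengths: (batch, n_capsules) lengths of the output capsule vectors, in [0, 1]
    targets:   (batch, n_capsules) one-hot indicator of which capsule should be active
    Returns the per-example loss, summed over capsules.
    """
    # Penalise an active capsule whose length falls below m_plus ...
    present = targets * np.maximum(0.0, m_plus - v_lengths) ** 2
    # ... and an inactive capsule whose length rises above m_minus,
    # down-weighted by lam to stabilise early training.
    absent = lam * (1.0 - targets) * np.maximum(0.0, v_lengths - m_minus) ** 2
    return np.sum(present + absent, axis=-1)
```

When the correct capsule's length reaches m+ and the others stay below m−, the loss is exactly zero.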
Split Auxiliary Conditional Capsule GAN
The discriminator uses a split-auxiliary classifier architecture for conditional image discrimination. The Primary Capsule layer is shared between the Primary and Secondary Classifiers. The Primary Classifier classifies the image as real/fake, and the Secondary Classifier predicts the most probable class of the image. The losses from these two networks are combined into a single discriminator loss.
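One way the two branch losses might be combined can be sketched as below. This is an illustrative NumPy sketch, assuming a critic-style real/fake term for the primary branch and a softmax cross-entropy term for the secondary branch; the weight `alpha` and the function names are hypothetical, not taken from the paper:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def split_aux_loss(rf_real, rf_fake, class_logits, labels, alpha=1.0):
    """Combine primary (real/fake) and secondary (class) discriminator losses.

    rf_real, rf_fake: (batch,) real/fake scores for real and generated batches
    class_logits:     (batch, n_classes) secondary-classifier logits on real images
    labels:           (batch,) integer class labels
    alpha:            hypothetical weight on the auxiliary term
    """
    # Critic-style primary term: push real scores up, fake scores down.
    primary = np.mean(rf_fake) - np.mean(rf_real)
    # Cross-entropy secondary term for the class prediction.
    probs = softmax(class_logits)
    secondary = -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))
    return primary + alpha * secondary
```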
- Python 3.5 or higher
- TensorFlow 1.4
Improved Wasserstein GAN:
/Improved Wasserstein GAN/ and run,
Non-conditional Discriminative Capsule GAN:
/Discriminative Capsule GAN/ and run,
Conditional Improved Wasserstein DCGAN
/Conditional Improved WDCGAN/ and run,
Split-Auxiliary Capsule GAN:
/Condtional Capsule GAN/ and run,
Improved Wasserstein GAN
Discriminative Capsule GAN
Nearest Neighbour Distance Comparison
Following is the comparison of the mean nearest neighbour distances of the images generated from the Fashion-MNIST dataset by the Improved Wasserstein GAN and the Discriminative Capsule GAN. Despite the images generated by the Discriminative Capsule GAN being more realistic, their mean nearest neighbour distances are consistently higher than those of the images generated by the Improved Wasserstein GAN. This points towards greater diversity in the images generated by the Discriminative Capsule GAN.
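The metric used above can be computed with a straightforward sketch: for each generated image, find its Euclidean distance to the closest training image, then average. This is a NumPy illustration with a hypothetical helper name, not the repository's evaluation code:

```python
import numpy as np

def mean_nn_distance(generated, training):
    """Mean Euclidean distance from each generated image to its
    nearest neighbour in the training set.

    generated: (n_gen, d) flattened generated images
    training:  (n_train, d) flattened training images
    """
    # Pairwise distances via broadcasting: (n_gen, n_train).
    dists = np.linalg.norm(generated[:, None, :] - training[None, :, :], axis=-1)
    # Nearest training image per generated image, averaged over the batch.
    return dists.min(axis=1).mean()
```

A higher value indicates that generated samples lie farther from their closest training examples, i.e. the generator is not simply memorising the training set.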
Improved Wasserstein GAN
Following are the images generated by training the GAN over rotated-MNIST for 100 epochs and flipped-Fashion-MNIST for 50 epochs.
Split-Auxiliary Conditional Capsule GAN
Following are the images generated by training the GAN over rotated-MNIST for 5 epochs and flipped-Fashion-MNIST for 10 epochs.
Nearest Neighbour Distance Comparison
Following is the comparison of the mean nearest neighbour distances of the images generated from the rotated-MNIST dataset by the Conditional Improved Wasserstein GAN (left, 100 epochs) and the Split-Auxiliary Conditional Capsule GAN (right, 10 epochs). Despite the images generated by the Capsule GAN being more realistic, their mean nearest neighbour distances are consistently higher than those of the images generated by the Improved Wasserstein GAN. This points towards greater diversity in the images generated by the Capsule GAN.