# Training-Triplet-Networks-With-GAN

This repository contains a PyTorch implementation of the paper *Training Triplet Networks With GAN* (Triplet-GANs) on the MNIST dataset.
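
For context, the triplet part of a Triplet-GAN trains the discriminator's feature extractor with a triplet loss over anchor/positive/negative embeddings. A minimal sketch using PyTorch's built-in `nn.TripletMarginLoss`; the embedding size and margin here are illustrative assumptions, not the repo's settings:

```python
import torch
import torch.nn as nn

# Illustrative embedding size and margin -- assumptions, not this repo's values.
embed_dim, margin = 16, 1.0

triplet_loss = nn.TripletMarginLoss(margin=margin)

# Anchor and positive share a class; the negative comes from a different class.
anchor = torch.randn(8, embed_dim)
positive = anchor + 0.05 * torch.randn(8, embed_dim)  # close to the anchor
negative = torch.randn(8, embed_dim) + 3.0            # far from the anchor

# Penalizes triplets where the negative is not at least `margin`
# farther from the anchor than the positive is.
loss = triplet_loss(anchor, positive, negative)
```

With positives near the anchor and negatives far away, the penalty is small; training pushes same-class embeddings together and different-class embeddings apart.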

## Hyperparameters

- Batch size: 100
- Pre-train learning rate: 0.0003
- Train learning rate: 0.0003
- Pre-train epochs: 100
- Training epochs: 30
- Generator input (latent) size: 100
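
The values above can be collected into a single config object; a minimal sketch (the field names are my own, not identifiers from this repo):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Config:
    # Values taken from the hyperparameter list above; names are illustrative.
    batch_size: int = 100
    pretrain_lr: float = 3e-4
    train_lr: float = 3e-4
    pretrain_epochs: int = 100
    train_epochs: int = 30
    latent_dim: int = 100  # input size of the generator

cfg = Config()
```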

## Important techniques used for training

1. Initialize weights from a normal distribution with mean 0 and variance 0.05 for convolutional layers, and variance 0.02 for fully connected layers.
2. Apply weight normalization.
3. Use batch-norm layers in the initial layers of the generator.
4. Use a sigmoid non-linearity at the generator's output layer.
5. Use feature matching to compute the generator's loss.
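
Techniques 1 and 2 can be sketched in PyTorch as below. Note the list above gives the spread as a *variance*; whether the original code passes that value to the initializer as a variance or as a standard deviation is not stated here, so treating it as the `std` argument is an assumption:

```python
import torch.nn as nn
from torch.nn.utils import weight_norm

def init_weights(m: nn.Module) -> None:
    # Zero-mean normal init: 0.05 for conv layers, 0.02 for linear layers.
    # Passing these as standard deviations is an assumption (the list says variance).
    if isinstance(m, nn.Conv2d):
        nn.init.normal_(m.weight, mean=0.0, std=0.05)
    elif isinstance(m, nn.Linear):
        nn.init.normal_(m.weight, mean=0.0, std=0.02)

# Weight normalization (technique 2) reparameterizes a layer's weight
# into a direction (weight_v) and a magnitude (weight_g).
layer = weight_norm(nn.Linear(128, 10))

# Apply the initializer recursively to every submodule.
net = nn.Sequential(nn.Conv2d(1, 64, 3), nn.Flatten(), nn.Linear(64, 10))
net.apply(init_weights)
```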

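Technique 5, feature matching (from *Improved Techniques for Training GANs*, reference 1), replaces the usual generator loss with the distance between the discriminator's batch-mean feature activations on real versus generated samples. A minimal sketch; which intermediate layer is matched, and the squared-L2 distance, are assumptions here:

```python
import torch

def feature_matching_loss(f_real: torch.Tensor, f_fake: torch.Tensor) -> torch.Tensor:
    # f_real, f_fake: (batch, features) activations from an intermediate
    # discriminator layer on a real batch and a generated batch.
    # Match the first-moment (batch-mean) feature statistics.
    return ((f_real.mean(dim=0) - f_fake.mean(dim=0)) ** 2).sum()

f_real = torch.randn(100, 64)
loss = feature_matching_loss(f_real, f_real.clone())  # identical stats -> 0
```

The generator is updated to minimize this distance instead of directly maximizing the discriminator's output, which tends to stabilize GAN training.
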
## Results

| N   | M  | Accuracy |
|-----|----|----------|
| 100 | 16 | 0.9806   |
| 100 | 32 | 0.9773   |
| 200 | 16 | 0.9817   |

## Plots

- Pre-train loss curve
- Generated images after pre-training
- Training loss curve
- Generated images after training
- t-SNE plot of embeddings after training

## References

1. Improved Techniques for Training GANs (NeurIPS, 2016).
2. Official code repo (Lasagne): https://github.com/maciejzieba/tripletGAN
