
A Simple AutoEncoder for Images in the MNIST Dataset

This work explores the learning capability of a very simple autoencoder: can a self-supervised model, trained only to reproduce its input, accurately reconstruct the MNIST digits?

[Figure: AutoEncoder1]

Useful parameters to consider tuning in this simple autoencoder:

  • Model architecture: multi-layer perceptrons of size [784, 128, 32] for the encoder (32 is the latent representation size) and a symmetric decoder; see the sketch after this list.
  • Learning rate scheduler: ReduceLROnPlateau with the default factor and patience.
  • Optimizer: Adam (adaptive moment estimation) with the default betas and a learning rate of 0.001.
  • Codeword (latent) representation size.
  • Miscellaneous: number of epochs, regularization, etc.
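
For reference, here is a minimal PyTorch sketch consistent with the settings above (layer sizes [784, 128, 32], Adam with LR 0.001, ReduceLROnPlateau with defaults). The loss function, activations, and training-loop details are assumptions for illustration, not necessarily the exact code in this repository.

    import torch
    import torch.nn as nn

    class MNISTAutoEncoder(nn.Module):
        def __init__(self, latent_dim: int = 32):
            super().__init__()
            # Encoder: 784 -> 128 -> 32 (latent representation)
            self.encoder = nn.Sequential(
                nn.Linear(28 * 28, 128),
                nn.ReLU(),
                nn.Linear(128, latent_dim),
            )
            # Decoder mirrors the encoder: 32 -> 128 -> 784
            self.decoder = nn.Sequential(
                nn.Linear(latent_dim, 128),
                nn.ReLU(),
                nn.Linear(128, 28 * 28),
                nn.Sigmoid(),  # pixel values in [0, 1]
            )

        def forward(self, x):
            # Flatten the 28x28 image, encode to the codeword, then reconstruct
            z = self.encoder(x.view(x.size(0), -1))
            return self.decoder(z)

    model = MNISTAutoEncoder(latent_dim=32)
    criterion = nn.MSELoss()  # reconstruction loss (assumed)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # Adam, LR 0.001, default betas
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer)  # default factor/patience

    # One epoch of self-supervised training: the input image is its own target.
    def train_epoch(loader):
        model.train()
        total = 0.0
        for images, _ in loader:  # labels are unused
            recon = model(images)
            loss = criterion(recon, images.view(images.size(0), -1))
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            total += loss.item()
        avg = total / len(loader)
        scheduler.step(avg)  # ReduceLROnPlateau monitors the average epoch loss
        return avg

Because ReduceLROnPlateau monitors a metric, it is stepped once per epoch with the average reconstruction loss rather than after every batch.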

Reconstruction results

Original

[Figure: AutoEncoder2]

Reconstructed

[Figure: AutoEncoder3]
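
A comparison like the one above can be generated with a short sketch such as the following (assumes the model from the earlier sketch and uses torchvision to tile the images; file names are placeholders).

    import torch
    from torchvision import datasets, transforms
    from torchvision.utils import save_image

    test_set = datasets.MNIST("data", train=False, download=True,
                              transform=transforms.ToTensor())
    loader = torch.utils.data.DataLoader(test_set, batch_size=8, shuffle=True)

    model.eval()
    with torch.no_grad():
        images, _ = next(iter(loader))
        recon = model(images).view(-1, 1, 28, 28)

    save_image(images, "original.png", nrow=8)       # grid of input digits
    save_image(recon, "reconstructed.png", nrow=8)   # grid of model reconstructions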
