Marginalized Denoising Autoencoders for Nonlinear Representations

TensorFlow implementation of the paper Marginalized Denoising Auto-encoders for Nonlinear Representations (ICML 2014). Conventional denoising autoencoders are slow and computationally demanding to train because every input must be explicitly corrupted many times. mDA sidesteps this by marginalizing out the corruption: the raw input is implicitly denoised, so the model is effectively trained on infinitely many corrupted copies of each input without ever generating them. Earlier marginalized approaches strip away either the nonlinearity or the latent representation; mDA retains both, and is therefore a generalization of those works.
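To see what "marginalizing out the corruption" buys, here is a minimal NumPy sketch of the idea in the simplest possible setting: a linear reconstruction with additive Gaussian noise, where the expected denoising loss has an exact closed form. This is an illustrative toy, not the paper's method; the ICML 2014 paper handles nonlinear encoders via a second-order Taylor approximation, and the weight matrix `W`, noise scale `sigma`, and dimensions below are arbitrary choices for the demo.

```python
import numpy as np

# Expected denoising reconstruction loss under Gaussian corruption:
#   E_eps || x - W (x + eps) ||^2,   eps ~ N(0, sigma^2 I)
# expands to the closed form
#   || x - W x ||^2 + sigma^2 * ||W||_F^2,
# so training can use the marginalized loss directly, with no
# explicitly corrupted samples at all.

rng = np.random.default_rng(0)
d, sigma = 5, 0.1
x = rng.normal(size=d)                 # one "clean" input (toy data)
W = 0.3 * rng.normal(size=(d, d))      # arbitrary reconstruction weights

# Monte Carlo estimate: average the loss over many explicitly
# corrupted copies of x (what explicit-corruption training does).
n_samples = 200_000
eps = rng.normal(scale=sigma, size=(n_samples, d))
mc_loss = np.mean(np.sum((x - (x + eps) @ W.T) ** 2, axis=1))

# Marginalized loss: the same expectation, computed in closed form.
marg_loss = np.sum((x - W @ x) ** 2) + sigma**2 * np.sum(W**2)

print(mc_loss, marg_loss)  # agree up to Monte Carlo error
```

The two numbers match up to sampling noise, which is the whole point: the marginalized form replaces a large ensemble of corrupted training copies with a single analytic expression.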

Requirements

  • Python 2.7
  • TensorFlow
  • NumPy

Run

To train the demo model:

python mdA.py 

Demo Results

First-layer filters learned over the course of training:
[animated GIF of the first-layer filters]
The filters progressively improve, developing into specialized feature extractors.

References

  • Chen, Minmin, et al. "Marginalized denoising auto-encoders for nonlinear representations." International Conference on Machine Learning. 2014. [Paper]
  • Vincent, Pascal, et al. "Extracting and composing robust features with denoising autoencoders." Proceedings of the 25th international conference on Machine learning. ACM, 2008. [Paper]
