Deep-Learning-Papers

A list of deep learning papers, videos, Reddit pages, Stack Overflow questions, and tutorials that I have found helpful in my studies. This list will mostly contain fairly new (as of when I saw it) stuff.

Variational Autoencoders

TJ Torres: Deep Style (Dec 4, 2015)

This was the first video I saw on variational autoencoders. I found his explanations extremely intuitive and would recommend it to anyone who just wants to get some nets training.

Tutorial on Variational Autoencoders (August 16, 2016, gets updated a lot though)

Gives a very intuitive explanation of variational autoencoders. I particularly like Figure 4; it shows how to train the networks. Reading this paper very slowly a few times gave me a much firmer understanding of variational autoencoders.
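As a concrete anchor for what that figure shows, here is a minimal sketch of one VAE training step with the reparameterization trick (my own PyTorch sketch, not the tutorial's code; `encoder` and `decoder` are assumed modules):

```python
import torch
import torch.nn.functional as F

def vae_step(encoder, decoder, x, optimizer):
    # Encoder outputs the parameters of q(z|x) for each input.
    mu, log_var = encoder(x)
    # Reparameterization trick: z = mu + sigma * eps keeps the sampling
    # step differentiable with respect to mu and log_var.
    eps = torch.randn_like(mu)
    z = mu + torch.exp(0.5 * log_var) * eps
    x_hat = decoder(z)
    # Reconstruction term plus KL(q(z|x) || N(0, I)), which has a
    # closed form for a diagonal Gaussian posterior.
    recon = F.binary_cross_entropy(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    loss = recon + kl
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```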

Looks at disentangling variables in the learned latent space of variational autoencoders. Often in variational autoencoders some of the variables tend to never be used; this is sometimes called the component collapse problem. Before I read this paper I thought this was an undesirable trait, but they argue it is advantageous in some tasks because it builds better, more independent features. They report a trade-off between reconstruction accuracy and the ability to disentangle variables. Their whole approach relies on a temperature coefficient beta on the VAE loss.
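Relative to the plain VAE loss, the change is (as far as I can tell from the note above) a single coefficient on the KL term; a minimal sketch reusing the recon and kl terms from the training-step sketch above:

```python
# beta-weighted VAE loss. Pushing q(z|x) harder toward the N(0, I)
# prior means latents that aren't worth their KL cost collapse to
# the prior (the "component collapse" above), and the survivors come
# out more independent, at some cost in reconstruction accuracy.
def beta_vae_loss(recon, kl, beta):
    # beta = 1 recovers the plain VAE loss; beta > 1 is the
    # disentangling regime described above.
    return recon + beta * kl
```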

There are so many great sources on this page.

Seeing this 2 months ago would have saved me a lot of time. I found it ridiculously difficult to train variational autoencoders with L2 loss. I eventually gave up and just used log loss and normalized my data between 0 and 1. There is a comment about using Huber loss instead of L2 to speed things up; after trying it I noticed similar speed increases in training.
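For reference, the three reconstruction losses that note is comparing, in PyTorch terms (my naming; assumes the data is already normalized to [0, 1] for the log-loss case):

```python
import torch
import torch.nn.functional as F

x = torch.rand(16, 784)      # toy batch, values already in [0, 1]
x_hat = torch.rand(16, 784)  # stand-in for a decoder output in [0, 1]

# L2 loss: the one I found so hard to train with.
recon_l2 = F.mse_loss(x_hat, x, reduction="sum")

# Log loss (binary cross-entropy): needs both tensors in [0, 1],
# which is why the data gets normalized first.
recon_log = F.binary_cross_entropy(x_hat, x, reduction="sum")

# Huber loss (smooth L1): quadratic near zero, linear for large
# residuals, so big early-training errors give smaller gradients
# than under L2.
recon_huber = F.smooth_l1_loss(x_hat, x, reduction="sum")
```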

Does a Seq2Seq-type thing, but the hidden state is treated as the latent space right before decoding. Figure 1 says it all. Looks like a super good method; I would like to try it out.
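My rough sketch of the shape of that idea (module names and sizes are mine, not the paper's architecture):

```python
import torch
import torch.nn as nn

class VariationalSeq2Seq(nn.Module):
    """Encode a sequence, treat the final hidden state as the latent
    distribution's parameters, sample, then decode from the sample."""
    def __init__(self, vocab, hidden, latent):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.to_mu = nn.Linear(hidden, latent)
        self.to_log_var = nn.Linear(hidden, latent)
        self.from_z = nn.Linear(latent, hidden)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, tokens):
        emb = self.embed(tokens)
        _, h = self.encoder(emb)          # h: final hidden state
        mu, log_var = self.to_mu(h), self.to_log_var(h)
        # Sample the latent with the reparameterization trick.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * log_var)
        h0 = self.from_z(z)               # latent seeds the decoder
        dec, _ = self.decoder(emb, h0)    # teacher-forced decode
        return self.out(dec), mu, log_var
```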

Ya, I guess I will put the original paper in this list.

Uses variational autoencoders to learn compressed representations of control-type problems. Figure 1 says it all, really. Basically, they learn a linear mapping on the latent space encoding and use this for planning. They apply it to some basic reinforcement learning problems with visual input.
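A toy version of the planning part as I understand it: encode frames to latents with a trained VAE, fit a linear model of the latent dynamics, and roll it out to plan (names are mine, not the paper's):

```python
import torch

def fit_linear_dynamics(z_t, u_t, z_next):
    """Least-squares fit of z_{t+1} ~ A z_t + B u_t from encoded
    transitions. z_t: (N, dz), u_t: (N, du), z_next: (N, dz)."""
    zu = torch.cat([z_t, u_t], dim=1)              # (N, dz + du)
    sol = torch.linalg.lstsq(zu, z_next).solution  # (dz + du, dz)
    A = sol[:z_t.shape[1]].T                       # (dz, dz)
    B = sol[z_t.shape[1]:].T                       # (dz, du)
    return A, B

def rollout(A, B, z0, controls):
    """Predict a latent trajectory under a candidate control sequence;
    planning then scores these trajectories in latent space instead of
    in pixel space."""
    z, traj = z0, [z0]
    for u in controls:
        z = A @ z + B @ u
        traj.append(z)
    return torch.stack(traj)
```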

Generative Adversarial Networks

Inception Network

Physics Simulations with Neural Networks

ResNet

Wide Residual Networks (May 23, 2016)

Transfer Learning

Super Resolution

[Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network](http://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Shi_Real-Time_Single_Image_CVPR_2016_paper.pdf) (Sep 23, 2016)

Video Prediction

Neural Networks for Artistic Style

Cool Papers
