# Technical Book on Deep Learning

This note presents, in a technical though hopefully pedagogical way, the three most common neural network architectures: Feedforward, Convolutional and Recurrent.

For each architecture, its fundamental building blocks are detailed. The forward pass and the update rules of the backpropagation algorithm are then derived in full.
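
To give a taste of what "forward pass and backpropagation update rules" means in practice, here is a minimal sketch for a one-hidden-layer feedforward network. It is my own illustration, not code from the note (which contains derivations, not code); the sigmoid activation, mean squared error loss, and all variable names are assumptions made for the example.

```python
# Minimal sketch: forward pass and one backpropagation update for a
# one-hidden-layer feedforward network (illustrative, not from the book).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 1))          # input vector
y = rng.normal(size=(2, 1))          # target vector
W1 = rng.normal(size=(3, 4)) * 0.1   # hidden-layer weights
W2 = rng.normal(size=(2, 3)) * 0.1   # output-layer weights
lr = 0.1                             # learning rate

# Forward pass: a1 = sigmoid(W1 x), linear output layer a2 = W2 a1
a1 = sigmoid(W1 @ x)
a2 = W2 @ a1

# Backward pass: propagate the error terms (deltas) layer by layer
delta2 = a2 - y                            # dL/da2 for MSE loss
delta1 = (W2.T @ delta2) * a1 * (1 - a1)   # chain rule through the sigmoid

# Gradient-descent update rules
W2 -= lr * delta2 @ a1.T
W1 -= lr * delta1 @ x.T
```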

The PDF of the whole document can be downloaded directly: White_book.pdf.

All the figures contained in the note are also included in this repo, as well as the .tex files needed for compilation. Just don't forget to cite the source if you use any of this material! :)

Hope it can help others!

## Acknowledgement

This work adds no original value to the deep learning field on its own. It is just a reformulation of the ideas of brighter researchers to fit a peculiar mindset: preferring formulas with ten indices, where one knows precisely what one is manipulating, over (in my opinion sometimes opaque) matrix formulations where the dimensions of the objects are rarely if ever specified.
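
As an illustration of that index-heavy style (an assumed example, not an equation quoted from the note), the forward pass of a fully connected layer can be written with every dimension explicit:

```latex
% a^{(l)}_i : activation of unit i in layer l,      i = 1, ..., n_l
% W^{(l)}_{ij} : weight from unit j of layer l-1 to unit i of layer l
a^{(l)}_i = g\!\left(\sum_{j=1}^{n_{l-1}} W^{(l)}_{ij}\, a^{(l-1)}_j
            + b^{(l)}_i\right)
% versus the compact matrix form a^{(l)} = g(W^{(l)} a^{(l-1)} + b^{(l)}),
% where the n_l x n_{l-1} shape of W^{(l)} is left implicit.
```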

Among the brighter people from whom I learned online is Andrew Ng. His Coursera class (https://www.coursera.org/learn/machine-learning) was my first contact with neural networks, and this pedagogical introduction allowed me to build on solid ground.

I also wish to particularly thank Hugo Larochelle, who not only built a wonderful deep learning class (http://info.usherbrooke.ca/hlarochelle/neural_networks/content.html), but was also kind enough to answer emails from a complete beginner and stranger!

The Stanford class on convolutional networks (http://cs231n.github.io/convolutional-networks/) proved extremely valuable to me, as did the one on Natural Language Processing (http://web.stanford.edu/class/cs224n/).

I also benefited greatly from Sebastian Ruder's blog (http://ruder.io/#open), both from its pages on gradient descent optimization techniques and from the author himself.

I learned more about LSTMs on colah's blog (http://colah.github.io/posts/2015-08-Understanding-LSTMs/), and some of my drawings are inspired by those found there.

I also thank Jonathan Del Hoyo for the great articles that he regularly shares on LinkedIn.

Many thanks go to my collaborators at Mediamobile, who let me dig as deep as I wanted into neural networks. I am especially indebted to Clément, Nicolas, Jessica, Christine and Céline.

Thanks to Jean-Michel Loubes and Fabrice Gamboa, from whom I learned a great deal on probability theory and statistics.

I end this list with my employer, Mediamobile, which has been kind enough to let me work on this topic with complete freedom. A special thanks to Philippe, who supervised me with the perfect balance of feedback and freedom!

## Contact

If you spot any typo or error (as I am sure there unfortunately still are some), or feel that I forgot to cite an important source, don't hesitate to email me: thomas.epelbaum@mediamobile.com