tomturing/UFLDL-tutorial

These are solutions to the exercises from the Stanford OpenClassroom Deep Learning class and Andrew Ng's UFLDL Tutorial. When I was solving these, I looked around for copies of the solutions so I could compare notes, because debugging learning algorithms is often tedious in a way that isn't educational; almost everything I found was incomplete or obviously wrong. I don't promise that these are bug-free, but they at least produce outputs within the expected ranges for the assignments.

I've attempted to make this Octave-compatible, so that you can run it with free software. It seems to work, but the results differ slightly from MATLAB's. One side effect is that I'm using fminlbfgs instead of minFunc. It ran for me with Octave 3.6.4; my understanding is that Octave 3.8 and newer aren't completely backwards compatible, so you may run into problems with the current version of Octave. Pull requests are welcome, of course.
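For reference, here's a minimal sketch of that swap, using the sparse autoencoder exercise as the example. It assumes the UFLDL starter-code function names (sampleIMAGES, initializeParameters, sparseAutoencoderCost) and the File Exchange version of fminlbfgs; option field names can differ between fminlbfgs versions, so treat this as a sketch rather than a drop-in snippet.

    % Sketch: training the sparse autoencoder with fminlbfgs instead of minFunc.
    % Parameter values are the exercise defaults; function names come from the
    % UFLDL starter code.
    visibleSize   = 8 * 8;    % 8x8 image patches
    hiddenSize    = 25;
    sparsityParam = 0.01;
    lambda        = 0.0001;
    beta          = 3;

    patches = sampleIMAGES();                               % 8x8 patches from IMAGES.mat
    theta   = initializeParameters(hiddenSize, visibleSize);
    costFn  = @(p) sparseAutoencoderCost(p, visibleSize, hiddenSize, ...
                                         lambda, sparsityParam, beta, patches);

    % minFunc (MATLAB):
    %   options = struct('Method', 'lbfgs', 'maxIter', 400, 'display', 'on');
    %   [optTheta, cost] = minFunc(costFn, theta, options);

    % fminlbfgs (runs under Octave):
    options = struct('GradObj', 'on', 'Display', 'iter', ...
                     'MaxIter', 400, 'HessUpdate', 'lbfgs');
    [optTheta, cost] = fminlbfgs(costFn, theta, options);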

Here's the order of the exercises (a sketch of launching one from the Octave prompt follows the list):

OpenClassroom exercises:

  1. linear.m
  2. multiple.m
  3. logistic.m

UFLDL Tutorial exercises:

  1. Sparse Autoencoder: sparseae_exercise/train.m
  2. Vectorized Implementation: sparseae_exercise/train.m (1 is already vectorized)
  3. PCA:
     3.1. PCA in 2D: pca_2d/pca_2d.m
     3.2. PCA: pca_gen/pca_gen.m
  4. Softmax Regression: softmax_exercise/softmaxExercise.m
  5. Self-Taught Learning: stl_exercise/stlExercise.m
  6. Building Deep Networks for Classification: stackedae_exercise/stackedAEExercise.m
  7. Learning Color Features with Sparse Autoencoders: linear_decoder_exercise/linearDecoderExercise.m
  8. Convolution and Pooling: cnn_exercise/cnnExercise.m
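Each exercise is self-contained in its directory and driven by the script listed above. A hypothetical Octave session for the first UFLDL exercise, starting from the repository root, looks like this:

    % Hypothetical session: run the sparse autoencoder exercise.
    cd sparseae_exercise
    train    % runs train.m: trains the autoencoder and visualizes the learned weights

The other exercises follow the same pattern: change into the exercise directory and run its top-level script.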

About

Deep Learning and Unsupervised Feature Learning Tutorial Solutions
