Matlab Code for Restricted/Deep Boltzmann Machines and Autoencoders
deepmat

WARNING: this is not my main code, and there is no warranty attached!

= Generative Stochastic Network =

  • A simple implementation of a GSN, following (Bengio et al., 2013)

= Convolutional Neural Network =

  • A naive implementation written purely in Matlab
  • Pooling: max (Jonathan Masci's code) and average
  • Not for serious use!
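For reference, the max-pooling operation (cf. MaxPooling.m / MaxPooling.cpp) reduces each non-overlapping 2x2 block of a feature map to its maximum. A minimal NumPy sketch of that operation, independent of the Matlab code in this repository:

```python
import numpy as np

def maxpool2x2(x):
    """Non-overlapping 2x2 max pooling over an (H, W) feature map (H, W even)."""
    H, W = x.shape
    # reshape so each 2x2 block occupies axes 1 and 3, then take the max there
    return x.reshape(H // 2, 2, W // 2, 2).max(axis=(1, 3))

a = np.arange(16.0).reshape(4, 4)
p = maxpool2x2(a)   # 4x4 map -> 2x2 map of block maxima
```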

= Restricted Boltzmann Machine & Deep Belief Networks =

  • Binary/Gaussian Visible Units + Binary Hidden Units
  • Enhanced Gradient, Adaptive Learning Rate
  • Adadelta for RBM
  • Contrastive Divergence
  • (Fast) Persistent Contrastive Divergence
  • Parallel Tempering
  • DBN: Up-down Learning Algorithm
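Contrastive Divergence with a single Gibbs step (CD-1) is the basic update behind RBM training. As a language-neutral illustration (NumPy rather than Matlab; all names here are hypothetical and are not this repository's API), one CD-1 update for a binary-binary RBM looks roughly like:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(W, vbias, hbias, v0, lr=0.1):
    """One CD-1 update for a binary-binary RBM on a minibatch v0 (rows = examples)."""
    # positive phase: hidden probabilities given the data
    ph0 = sigmoid(v0 @ W + hbias)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # negative phase: one Gibbs step back to the visibles and hiddens
    pv1 = sigmoid(h0 @ W.T + vbias)
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + hbias)
    # gradient estimate: data statistics minus one-step reconstruction statistics
    n = v0.shape[0]
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / n
    vbias += lr * (v0 - v1).mean(axis=0)
    hbias += lr * (ph0 - ph1).mean(axis=0)
    return W, vbias, hbias

# toy usage: 6 visible units, 4 hidden units, a minibatch of 10 binary vectors
W = 0.01 * rng.standard_normal((6, 4))
vb, hb = np.zeros(6), np.zeros(4)
X = (rng.random((10, 6)) < 0.5).astype(float)
W, vb, hb = cd1_step(W, vb, hb, X)
```

Persistent CD differs only in that the negative-phase chain is carried over between updates instead of being restarted at the data.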

= Deep Boltzmann Machine =

  • Binary/Gaussian Visible Units + Binary Hidden Units
  • (Persistent) Contrastive Divergence
  • Enhanced Gradient, Adaptive Learning Rate
  • Two-stage Pretraining Algorithm (example)
  • Centering Trick (fixed center variables only)
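For a two-layer binary DBM, the energy function (the quantity a routine like dbm_energy.m evaluates) is E(v, h1, h2) = -b'v - c1'h1 - c2'h2 - v'W1 h1 - h1'W2 h2. A NumPy sketch of this formula (names are illustrative, not the repository's API):

```python
import numpy as np

def dbm_energy(v, h1, h2, W1, W2, b, c1, c2):
    """Energy of a two-layer binary DBM:
    E = -b'v - c1'h1 - c2'h2 - v'W1 h1 - h1'W2 h2."""
    return -(b @ v + c1 @ h1 + c2 @ h2 + v @ W1 @ h1 + h1 @ W2 @ h2)

# toy configuration: 5 visible, 4 first-layer hidden, 3 second-layer hidden units
rng = np.random.default_rng(1)
v  = (rng.random(5) < 0.5).astype(float)
h1 = (rng.random(4) < 0.5).astype(float)
h2 = (rng.random(3) < 0.5).astype(float)
W1 = 0.1 * rng.standard_normal((5, 4))
W2 = 0.1 * rng.standard_normal((4, 3))
b, c1, c2 = np.zeros(5), np.zeros(4), np.zeros(3)
E = dbm_energy(v, h1, h2, W1, W2, b, c1, c2)
```

The centering trick replaces v, h1, h2 in the interaction terms by (v - mu_v), (h1 - mu_1), (h2 - mu_2) for fixed center variables mu, which this repository supports via set_dbm_centers.m.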

= Denoising Autoencoder (Tied Weights) =

  • Binary/Gaussian Visible Units + Binary(Sigmoid)/Gaussian Hidden Units
  • tanh/sigm/relu nonlinearities
  • Shallow: sparsity, contractive, soft-sparsity (log-cosh) regularization
  • Deep: stochastic backprop
  • Adagrad, Adadelta
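A denoising autoencoder with tied weights corrupts the input, encodes it with W, decodes it with the transposed encoder weights W', and scores the reconstruction against the clean input. A NumPy sketch of the forward pass under masking corruption with a cross-entropy loss (all names hypothetical, not the API of dae.m):

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dae_forward(W, bh, bv, x, corruption=0.3):
    """Corrupt x, encode with W, decode with the tied transpose W.T."""
    mask = rng.random(x.shape) >= corruption   # zero out a fraction of inputs
    x_tilde = x * mask
    h = sigmoid(x_tilde @ W + bh)              # hidden representation
    x_hat = sigmoid(h @ W.T + bv)              # reconstruction (tied weights)
    # cross-entropy reconstruction loss against the *clean* input
    eps = 1e-8
    loss = -np.mean(np.sum(x * np.log(x_hat + eps)
                           + (1 - x) * np.log(1 - x_hat + eps), axis=1))
    return x_hat, loss

# toy usage: 8 visible units, 3 hidden units, a minibatch of 5 binary vectors
W = 0.01 * rng.standard_normal((8, 3))
bh, bv = np.zeros(3), np.zeros(8)
x = (rng.random((5, 8)) < 0.5).astype(float)
x_hat, loss = dae_forward(W, bh, bv, x)
```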

= Multi-layer Perceptron =

  • Stochastic Backpropagation, Dropout
  • tanh/sigm/relu nonlinearities
  • Adagrad, Adadelta
  • Balanced minibatches using crossvalind()
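Dropout randomly silences hidden units during training so that no unit can rely on specific co-adapted partners. The sketch below uses the "inverted" variant, which rescales the kept units at training time (the original formulation instead rescales the weights at test time); it is illustrative NumPy, not the API of mlp.m:

```python
import numpy as np

rng = np.random.default_rng(3)

def mlp_forward(W1, b1, W2, b2, x, drop=0.5, training=True):
    """One-hidden-layer MLP forward pass with dropout on the hidden layer."""
    h = np.tanh(x @ W1 + b1)
    if training:
        # inverted dropout: drop units and rescale the survivors by 1/(1-drop)
        mask = (rng.random(h.shape) >= drop) / (1.0 - drop)
        h = h * mask
    return h @ W2 + b2

# toy usage: 6 inputs, 10 hidden units, 3 outputs, a minibatch of 4 examples
W1, b1 = 0.1 * rng.standard_normal((6, 10)), np.zeros(10)
W2, b2 = 0.1 * rng.standard_normal((10, 3)), np.zeros(3)
x = rng.standard_normal((4, 6))
y_train = mlp_forward(W1, b1, W2, b2, x, training=True)
y_test = mlp_forward(W1, b1, W2, b2, x, training=False)  # deterministic
```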