Batchwise Dropout
Benjamin Graham, University of Warwick, 2015
GPLv3

If you use this software please tell me what you are using it for (b.graham@warwick.ac.uk).

Run "make dataset" for dataset in the list { mnist, cifar10, artificial }

About

Run fully connected artificial neural networks with dropout applied (mini)batchwise, rather than samplewise. Given two hidden layers each subject to 50% dropout, the corresponding matrix multiplications for forward- and back-propagation require 75% less work, as the dropped-out units are never calculated: dropping 50% of the units on each side of a weight matrix leaves a submatrix with half the rows and half the columns, so the multiplication costs 0.5 × 0.5 = 25% of the full product.
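To illustrate the idea (this is a minimal sketch under assumed names, not the repository's actual code), batchwise dropout can sample one mask per minibatch and then multiply only the surviving rows and columns of the weight matrix:

```cpp
// Sketch of batchwise dropout's cost saving: one mask per minibatch, so the
// forward pass touches only the weights connecting surviving input units to
// surviving output units. All names here are hypothetical.
#include <cstdlib>
#include <iostream>
#include <vector>

// Sample one mask for the whole minibatch: keep each unit with probability p.
std::vector<int> sampleActiveUnits(int n, float p) {
  std::vector<int> active;
  for (int i = 0; i < n; ++i)
    if (static_cast<float>(std::rand()) / RAND_MAX < p) active.push_back(i);
  return active;
}

// Dense forward pass restricted to the active units.
// input:  batchSize x nIn  (row-major); only columns in inActive are live.
// W:      nIn x nOut       (row-major).
// output: batchSize x nOut; only columns in outActive are written.
void forwardDropped(const std::vector<float>& input,
                    const std::vector<float>& W,
                    std::vector<float>& output,
                    int batchSize, int nIn, int nOut,
                    const std::vector<int>& inActive,
                    const std::vector<int>& outActive) {
  for (int b = 0; b < batchSize; ++b)
    for (int j : outActive) {              // ~p * nOut of the output units
      float sum = 0.f;
      for (int i : inActive)               // ~p * nIn of the input units
        sum += input[b * nIn + i] * W[i * nOut + j];
      output[b * nOut + j] = sum;
    }
}

int main() {
  const int batchSize = 4, nIn = 8, nOut = 8;
  const float p = 0.5f;                    // 50% dropout on both sides
  std::vector<float> input(batchSize * nIn, 1.f);
  std::vector<float> W(nIn * nOut, 0.1f);
  std::vector<float> output(batchSize * nOut, 0.f);

  auto inActive = sampleActiveUnits(nIn, p);
  auto outActive = sampleActiveUnits(nOut, p);
  forwardDropped(input, W, output, batchSize, nIn, nOut, inActive, outActive);

  // With p = 0.5 on both sides, the inner loops cover roughly
  // 0.5 * nIn * 0.5 * nOut = 25% of the full weight matrix.
  std::cout << "multiplies per sample: "
            << inActive.size() * outActive.size()
            << " of " << nIn * nOut << "\n";
}
```

Because the mask is shared across the minibatch, the reduced multiplication stays a single dense matrix product; samplewise dropout would need a different mask per sample and so cannot shrink the matrices this way.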
