WideOpenThoughts

Wide Open Thoughts (WOTs) is almost the same software as DeeperThought, but it uses OpenCL C++ instead of CUDA C++, so it runs on AMD GPU cards as well as NVIDIA GPU cards.

Wide Open Thoughts is a general framework for training deep neural networks (including convolutional ones) using OpenCL C++.

To compile on Linux:

./compile.sh

To run training on your own train and test data, execute from the command line:

./wots configFile trainFile testFile batchSize(integer) paramFile/null saveEveryNEpochs(integer) square/log wheremax/netflix/none
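For example, a run on MNIST with the configBB.txt network might look like the following (the data file names here are hypothetical placeholders; the argument meanings are inferred from the usage line above):

```shell
# batch size 4000, no initial parameter file (null), save every 10 epochs,
# log loss error, wheremax evaluation
./wots configBB.txt mnist_train.csv mnist_test.csv 4000 null 10 log wheremax
```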

Input format of data:

expOut_1, ... , expOut_n, inp_1, ... , inp_m

This format applies to both trainFile and testFile (expOut = expected output, inp = input). One data point is one line.
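As a sketch of the format, the helper below (not part of WOTs; a hypothetical illustration) builds one data line by writing the expected outputs first, then the inputs, all comma-separated:

```cpp
#include <sstream>
#include <string>
#include <vector>

// Build one line in the wots input format:
// expOut_1,...,expOut_n,inp_1,...,inp_m
std::string makeLine(const std::vector<double>& expOut,
                     const std::vector<double>& inp) {
    std::ostringstream out;
    for (size_t i = 0; i < expOut.size(); ++i) {
        if (i) out << ",";
        out << expOut[i];
    }
    for (size_t i = 0; i < inp.size(); ++i) {
        out << "," << inp[i];
    }
    return out.str();
}
```

For a 10-class problem like MNIST, expOut would typically be a one-hot vector of length 10 followed by the 784 pixel values.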

Good research publications on the topic:

Results for MNIST dataset:

You can download the MNIST dataset here. The MNIST dataset is also available in other formats on Yann LeCun's web page or on Kaggle's web page.

configBB.txt

batch size 4000, log loss error, auto step size

```
matrix,784,130,0.5,-0.001
sigmoid,130
dropout,130,0.5
matrix,130,10,0.5,-0.001
sigmoid,10
```

(accuracy graph: graphBB)

Top accuracy on test data: 97.47 %

configG.txt (convolutional neural network, CNN)

batch size 4000, log loss error, auto step size

```
convolution,1,28,28,200,8,8,0.5,-0.001
max,200,21,21,7,7
matrix,1800,130,0.5,-0.001
sigmoid,130
dropout,130,0.5
matrix,130,10,0.5,-0.001
softmax,10
```
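The layer sizes in configG appear to be consistent with a "valid" convolution followed by non-overlapping max pooling (this interpretation of the parameters is an assumption): a 28x28 image convolved with 8x8 kernels gives 21x21 maps, 7x7 pooling reduces those to 3x3, and flattening 200 maps of 3x3 yields the 1800 inputs of the first matrix layer. A minimal sketch of that dimension arithmetic:

```cpp
// Output width of a "valid" (no-padding, stride-1) convolution
int convOut(int in, int kernel) { return in - kernel + 1; }

// Output width of non-overlapping max pooling
int poolOut(int in, int window) { return in / window; }

// configG: convOut(28, 8) == 21, poolOut(21, 7) == 3,
// flattened size = 200 maps * 3 * 3 = 1800
```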

(accuracy graph: graphG)

Top accuracy on test data: 99.41 % (after 2500 epochs)

What you need to download and install beforehand:

Other alternatives for training DNNs:
