A deep neural network framework written in C/C++/CUDA.
To run this code, you need:
- the CIFAR-10 dataset: put "cifar-10-batches-bin" in the same directory as this .md file. You can get it from HERE; make sure to download the binary version, which is suitable for C programs;
- an NVIDIA graphics card that supports NVIDIA CUDA;
- pre-trained config files, if you want to run a pre-trained network: put them into the "config" folder. A demo config folder named "pre-trained-conf" is included; rename it to "config" and replace the current "config" folder.
Compile & Run
Add this project to NVIDIA Nsight, and add curand and cufft to the library path.
Updates
- 0.1.0: Aug. 5, the first released version.
- 0.1.1: Aug. 10, removed hostData in Mat and use only device memory, for speed.
- 0.1.1: Aug. 11, added functions that save matrices and configs into .txt files.
- 0.1.1: Aug. 12, added functions that read the network back from .txt files.
Data Structures
- Mat: similar to Mat in OpenCV; has memory in both CPU and GPU. Most of the calculations use it.
- CPU-side matrix: has memory only in the CPU; use it to read the dataset and do pre-processing (unless your GPU memory is huge...).
- vector3f: similar to Scalar in OpenCV; has space for 3 floats, corresponding to 3 channels. For example, the sum of a 3-channel Mat is a vector3f.
- 2-int vector: similar to vector3f, but has space for 2 ints; use it to represent a 2-D position or a size.
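As a sketch of the channel-wise reduction described above (the struct and function names here are illustrative, not the framework's actual API), summing a 3-channel image produces one float per channel:

```cpp
// Illustrative sketch: a 3-float value like the vector3f described above,
// holding one float per channel.
struct vector3f { float v[3]; };

// Sum a 3-channel image channel-wise; channel-major layout is assumed,
// i.e. data holds all of channel 0, then channel 1, then channel 2.
vector3f channelSum(const float* data, int pixelsPerChannel) {
    vector3f s = {{0.0f, 0.0f, 0.0f}};
    for (int c = 0; c < 3; ++c)
        for (int i = 0; i < pixelsPerChannel; ++i)
            s.v[c] += data[c * pixelsPerChannel + i];
    return s;
}
```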
Layer Config Description
- Each layer has a layer_name, a layer_type, and an output_format.
- There are currently 2 output formats: matrix (a single-channel Mat) and image (a vector of 3-channel Mats).
- batch size: the training process uses mini-batch stochastic gradient descent.
Convolutional Layer
- kernel size: size of the kernels used in the convolution.
- kernel amount: number of kernels used in the convolution.
- combine map: number of feature maps to combine; details can be found in Notes on Convolutional Neural Networks.
- weight decay: weight decay for the convolutional kernels.
- padding: padding applied before the convolution.
- stride: stride of the convolution (for the "VALID" type of convolution, result size = (image_size + 2 * padding - kernel_size) / stride + 1).
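The output-size formula above can be checked with a small helper (the function name is illustrative, not part of the framework):

```cpp
// Output size of a "VALID" convolution with padding and stride,
// matching the formula above:
//   result = (image_size + 2 * padding - kernel_size) / stride + 1
int convOutputSize(int imageSize, int padding, int kernelSize, int stride) {
    return (imageSize + 2 * padding - kernelSize) / stride + 1;
}
```

For example, a CIFAR-10 image is 32x32, so a 5x5 kernel with padding 2 and stride 1 preserves the spatial size, while the same kernel without padding shrinks it to 28x28.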
Fully Connected Layer
- num hidden neurons: size of the fully connected layer.
- weight decay: weight decay for the fully connected layer.
Softmax Layer
- num classes: output size of the softmax layer.
- weight decay: weight decay for the softmax layer.
Non-Linearity Layer
- method: sigmoid/tanh/relu/leaky_relu.
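The four non-linearities listed above are standard; a minimal sketch (the leaky slope of 0.01 is an assumption here, and the framework's actual constant may differ):

```cpp
#include <cmath>

// Standard activation functions, applied element-wise.
float sigmoid(float x)    { return 1.0f / (1.0f + std::exp(-x)); }
float tanh_act(float x)   { return std::tanh(x); }
float relu(float x)       { return x > 0.0f ? x : 0.0f; }
// Leaky ReLU with an assumed slope of 0.01 for negative inputs.
float leaky_relu(float x) { return x > 0.0f ? x : 0.01f * x; }
```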
Pooling Layer
- method: max/mean/stochastic.
- overlap: whether to use overlapped pooling.
- window size: window size when using overlapped pooling.
- stride: pooling stride.
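A sketch of how window size and stride interact in max pooling (names are illustrative; overlapped pooling is simply window > stride, non-overlapped is window == stride):

```cpp
#include <algorithm>
#include <vector>

// Max pooling over a square window with a given stride, on a
// size x size single-channel input stored row-major.
std::vector<float> maxPool(const std::vector<float>& in, int size,
                           int window, int stride) {
    int out = (size - window) / stride + 1;
    std::vector<float> res(out * out);
    for (int y = 0; y < out; ++y)
        for (int x = 0; x < out; ++x) {
            // Take the maximum over the window at this output position.
            float m = in[(y * stride) * size + (x * stride)];
            for (int wy = 0; wy < window; ++wy)
                for (int wx = 0; wx < window; ++wx)
                    m = std::max(m, in[(y * stride + wy) * size
                                       + (x * stride + wx)]);
            res[y * out + x] = m;
        }
    return res;
}
```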
Local Response Normalization Layer
- alpha, beta, k, n: see ImageNet Classification with Deep Convolutional Neural Networks.
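In the notation of that paper, the four parameters enter the normalization as follows (a sketch assuming the framework follows the paper exactly; here a^i_{x,y} is the activity of kernel map i at position (x, y) and N is the total number of kernel maps):

```latex
b^{i}_{x,y} = a^{i}_{x,y} \Big/ \left( k + \alpha
  \sum_{j=\max(0,\,i-n/2)}^{\min(N-1,\,i+n/2)}
  \big(a^{j}_{x,y}\big)^{2} \right)^{\beta}
```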
Dropout Layer
- dropout rate: fraction of zeros when generating the Bernoulli mask matrix.
Branch Layer
- for implementing GoogLeNet, TODO...

Combine Layer
- for implementing GoogLeNet, TODO...
Structure and Algorithm
See my posts about CNNs on my tech blog.
TODO
- combine layer
- branch layer
- stochastic pooling
The MIT License (MIT)
Copyright (c) 2015 Xingdi (Eric) Yuan
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.