Deep Learning Module #817
Conversation
Implementation of the General Layer class, which is a virtual base class for all of the layer classes in the Deep Learning Module.
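The base-class idea can be sketched as follows. This is a minimal hypothetical interface (the names `VGeneralLayer`, `Forward`, `Backward` are illustrative), not the actual TMVA code:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sketch of a virtual base class for layers.
class VGeneralLayer {
public:
   virtual ~VGeneralLayer() = default;
   // Propagate the input through the layer.
   virtual void Forward(const std::vector<double> &input) = 0;
   // Propagate gradients back towards the previous layer.
   virtual void Backward(std::vector<double> &gradients) = 0;
   std::size_t GetWidth() const { return fWidth; }

protected:
   explicit VGeneralLayer(std::size_t width) : fWidth(width) {}
   std::size_t fWidth;
};

// A trivial concrete layer that just passes its input through,
// showing how subclasses fill in the virtual interface.
class IdentityLayer : public VGeneralLayer {
public:
   explicit IdentityLayer(std::size_t width) : VGeneralLayer(width) {}
   void Forward(const std::vector<double> &input) override { fOutput = input; }
   void Backward(std::vector<double> &) override {}
   const std::vector<double> &GetOutput() const { return fOutput; }

private:
   std::vector<double> fOutput;
};
```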
Registering the Deep Learning Method in the Header files.
Only a definition of the Deep Net class
Only a definition of the MethodDL class, that should manage everything in terms of Deep Learning Nets
Implementation of the Dense Layer class, which is a sub-class of the General Layer class. It represents a layer where each neuron is connected with each neuron in the next layer.
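The "each neuron connected with each neuron" computation is the affine map y = W x + b. A minimal sketch, assuming row-major weights of shape (out, in); not the actual TMVA implementation:

```cpp
#include <cstddef>
#include <vector>

// Dense (fully connected) forward pass: y = W * x + b.
// W is stored row-major with shape (out, in).
std::vector<double> DenseForward(const std::vector<double> &W,
                                 const std::vector<double> &b,
                                 const std::vector<double> &x) {
   const std::size_t out = b.size();
   const std::size_t in  = x.size();
   std::vector<double> y(out, 0.0);
   for (std::size_t i = 0; i < out; ++i) {
      for (std::size_t j = 0; j < in; ++j)
         y[i] += W[i * in + j] * x[j];
      y[i] += b[i]; // bias term
   }
   return y;
}
```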
Adding the implementation of the forward and backward passes of the Conv and Max Pool layers for the Reference architecture.
Implementation of the Convolutional Layer class, which is derived from the General Layer class and represents a convolution operation on the input.
Implementation of the Max Pooling Layer class, which is derived from the General Layer class and represents a downsampling operation using the max function.
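The downsampling can be sketched for the common 2x2, stride-2 case on a single-channel, row-major image; an illustrative example, not the TMVA kernel:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// 2x2 max pooling with stride 2: each output value is the maximum of
// a non-overlapping 2x2 window of the input image.
std::vector<double> MaxPool2x2(const std::vector<double> &img,
                               std::size_t height, std::size_t width) {
   std::vector<double> out;
   for (std::size_t i = 0; i + 1 < height; i += 2)
      for (std::size_t j = 0; j + 1 < width; j += 2)
         out.push_back(std::max({img[i * width + j],
                                 img[i * width + j + 1],
                                 img[(i + 1) * width + j],
                                 img[(i + 1) * width + j + 1]}));
   return out;
}
```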
Definition of the Reshape Layer, which is derived from the General Layer class. This layer only transforms the input to the provided dimensions.
Implementation of the Tensor Data Loader class, which loads and creates batches of data suitable for the deep learning nets. One tensor batch consists of a 3D input tensor and an output matrix.
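The batching arithmetic behind such a loader can be sketched as follows; the names are illustrative and the drop-last-incomplete-batch convention is an assumption, not necessarily what the TMVA loader does:

```cpp
#include <cstddef>

// Half-open sample index range [begin, end) covered by one batch.
struct TensorBatchRange {
   std::size_t begin;
   std::size_t end;
};

// Number of full batches; a trailing incomplete batch is dropped,
// a common convention in training loops.
std::size_t NumBatches(std::size_t nSamples, std::size_t batchSize) {
   return nSamples / batchSize;
}

// Sample range of the batch with the given index.
TensorBatchRange GetBatch(std::size_t batchIndex, std::size_t batchSize) {
   return {batchIndex * batchSize, (batchIndex + 1) * batchSize};
}
```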
Implementation of these two methods, which copy the input and output tensors from the host to the device, either CPU or GPU.
Implementation of the class DLMinimizers, which provides the gradient descent methods for minimizing the Deep Neural Nets Loss functions.
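The core update such a minimizer performs is the plain gradient-descent step theta <- theta - lr * grad. A minimal sketch of that single step (the real minimizer class manages momentum, convergence checks, and the training loop):

```cpp
#include <cstddef>
#include <vector>

// One gradient-descent update: subtract the scaled gradient from the
// parameters in place.
void GradientDescentStep(std::vector<double> &theta,
                         const std::vector<double> &grad,
                         double learningRate) {
   for (std::size_t i = 0; i < theta.size(); ++i)
      theta[i] -= learningRate * grad[i];
}
```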
Implementation of the Create Deep Net method and all supporting layer-parsing methods. Based on the layout string, these methods build a master deep net and its slave deep nets.
Insert Fetch Methods, needed for parsing the training settings provided as a string in key-value format.
Inserting the Declare Options and Parse Key Value String methods. Declare Options sets the default values and descriptions for the option strings that define the deep net and its training. Parse Key Value String parses the training settings.
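A key-value option string of this kind can be parsed roughly as sketched below; the `Key1=Val1,Key2=Val2` format and delimiter choices are illustrative assumptions, not the exact TMVA syntax:

```cpp
#include <map>
#include <sstream>
#include <string>

// Split "Key1=Val1,Key2=Val2" into a key -> value map.
std::map<std::string, std::string> ParseKeyValueString(const std::string &s) {
   std::map<std::string, std::string> options;
   std::istringstream stream(s);
   std::string token;
   while (std::getline(stream, token, ',')) {
      const auto pos = token.find('=');
      if (pos != std::string::npos)
         options[token.substr(0, pos)] = token.substr(pos + 1);
   }
   return options;
}
```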
Implementation of the Process Options method, which parses every option string, thus preparing the network for training.
Implementation of the Train GPU method, for training a Deep Net on a GPU device.
Definition of the Conv and Max Pool Layers Forward and Backward passes, for the CPU architecture
Define the Conv and Max Pool Layers Forward and Backward passes, for the GPU architecture
Missed 'public' keyword while extending the class.
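The missing `public` matters because `class` inheritance defaults to private, which hides the base interface from outside users. A standalone illustration (the class names are made up):

```cpp
// With 'class', inheritance is private by default.
struct Base {
   int Value() const { return 42; }
};

class PrivateDerived : Base {};        // private inheritance: Base is hidden
class PublicDerived : public Base {};  // public inheritance: Base stays usable

// PrivateDerived d; d.Value();  // would NOT compile: Value() is inaccessible
```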
Including the GPU and CPU headers if the appropriate flags are on.
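This conditional inclusion typically looks like the preprocessor fragment below; the macro names are illustrative placeholders for whatever flags the build system defines, and the header paths follow the one shown in the compile error later in this thread:

```cpp
// Pull in architecture-specific backends only when the corresponding
// build flag is set (macro names are assumptions, set by the build system).
#ifdef DNNCUDA
#include "TMVA/DNN/Architectures/Cuda.h"
#endif
#ifdef DNNCPU
#include "TMVA/DNN/Architectures/Cpu.h"
#endif
```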
Implementation of the Deep Net class, which encapsulates everything for one deep neural network.
Add the support for weighting each example in the batch.
Define the Reshape kernel for GPU and CPU architectures, implement it for the Reference architecture.
Implementation of the Forward and Backward pass in the Reshape Layer, which transforms the input to the desired output dimensions.
Hi, @IlievskiV, @sshekh. Could you rebase and resolve conflicts? I'm assigning this PR to @lmoneta too to see if we can move it forward.
Hi guys, I am getting errors in this branch compiling with CUDA support (CUDA 8):
/home/ozapatam/Projects/GSoC/rootdnn/compile/include/TMVA/DNN/Architectures/Cuda.h(396): error: identifier "AReal" is undefined
…ation in conv. layer. Fix also the backward pass in the maxpool layer. Add Assert in matrix operations. Add a test for the backward pass comparing weight gradients with those computed with finite differences.
Integration of all different layers in one Deep Learning module.