Mkl Dnn Supports (WorkInProgress) #363
Conversation
Force-pushed from 23957c0 to eab1bb5
@@ -0,0 +1,91 @@
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
License header: ASF should be Intel Corporation.
@@ -0,0 +1,279 @@
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
License header: ASF should be Intel Corporation.
@@ -0,0 +1,391 @@
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
License header: same comment as below.
Force-pushed from ad8c8e6 to 1678991
…run on 2-D inputs.
1. Fix a backToUsr segfault: the Tensor's storage was set to 0.
2. Fix wrong validation accuracy: hasConverted was set to true but never reset to false, because validation runs no backward pass.
3. The optimizer now supports MklDnn.
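The hasConverted issue in item 2 can be sketched as follows. This is a minimal illustration, not the PR's actual code: the class and method names (MklLayer, updateOutput, updateGradInput) are assumed for the example, and the real conversion logic is elided.

```scala
// Hypothetical sketch of the hasConverted bug: the flag is only reset in
// backward, so a forward-only validation pass leaves it stuck at true.
class MklLayer {
  var hasConverted = false

  def updateOutput(input: Array[Float]): Array[Float] = {
    if (!hasConverted) {
      // ... convert input to MKL layout here ...
      hasConverted = true
    }
    input
  }

  def updateGradInput(gradOutput: Array[Float]): Array[Float] = {
    // Training resets the flag in backward; validation never calls this,
    // which is why the flag must instead be reset at the end of forward.
    hasConverted = false
    gradOutput
  }
}
```

Because validation only calls updateOutput, the flag stays true across iterations unless the forward pass itself resets it.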
Known issues:
1. The DNN version of SpatialBatchNormalization does not seem to work on VGG; the model will not converge with it.
2. The DNN version of SpatialConvolution appears to have a bug when the input width/height is smaller than the kernel width. Details are not included here.
1. Delete the empty primitive when initializing the Concat layer, because sum and split are not initialized after validation.
2. Fix a random segfault: the weight cannot be converted to MKL memory from a Scala array; convert it instead from the MKL layout produced in updateOutput.
3. Make the model thread pool single-threaded, which gives much better performance on Inception and AlexNet.
4. Re-add AlexNet, which had been temporarily deleted.
5. Add the storage offset to scaleShift in BatchNormalization, because weight and bias do not start at index 0 but at an offset.
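Item 5 above can be illustrated with a short sketch of packing weight and bias into a single scaleShift buffer while honoring each tensor's storage offset. The function name and flat-array representation are assumptions for the example; the actual fix lives in the PR's BatchNormalization layer.

```scala
// Hypothetical sketch: copy weight and bias into the contiguous scaleShift
// buffer expected by MKL, starting from each array's storage offset rather
// than from index 0.
def fillScaleShift(weight: Array[Float], weightOffset: Int,
                   bias: Array[Float], biasOffset: Int,
                   nChannel: Int): Array[Float] = {
  val scaleShift = new Array[Float](2 * nChannel)
  var i = 0
  while (i < nChannel) {
    scaleShift(i) = weight(weightOffset + i)         // scale (gamma) part
    scaleShift(nChannel + i) = bias(biasOffset + i)  // shift (beta) part
    i += 1
  }
  scaleShift
}
```

Reading from index 0 instead of the offset would silently pick up padding or unrelated values, which matches the symptom described above.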
[fix] Serialization failed when saving a model (AlexNet/Inception).
[fix] The batch size could not be changed with Linear and ReLU layers.
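The batch-size fix likely amounts to reallocating cached buffers when the input shape changes instead of assuming a fixed size. A minimal sketch under that assumption (the class and buffer handling are illustrative, not the PR's code):

```scala
// Hypothetical sketch: a layer that caches its output buffer must resize it
// when the incoming batch changes shape, otherwise writes overrun or truncate.
class BufferedLayer {
  private var output: Array[Float] = Array.empty

  def updateOutput(input: Array[Float]): Array[Float] = {
    if (output.length != input.length) {
      // Reallocate on shape change (e.g. a different batch size).
      output = new Array[Float](input.length)
    }
    Array.copy(input, 0, output, 0, input.length)
    output
  }
}
```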