
Mkl Dnn Supports (WorkInProgress) #363

Closed
wants to merge 32 commits from the mkl-dnn branch

Conversation

@i8run (Contributor) commented Jan 10, 2017

No description provided.

@i8run force-pushed the mkl-dnn branch 3 times, most recently from 23957c0 to eab1bb5 on January 16, 2017 01:47
@@ -0,0 +1,91 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
@fanshiqing commented Jan 16, 2017:

License ASF => Intel Corporation

@@ -0,0 +1,279 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more

License ASF => Intel Corporation

@@ -0,0 +1,391 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more

License. The same as below.

Some issues:

1. The dnn version of SpatialBatchNormalization does not seem to work on VGG:
   the model will not converge if we use it.
2. The dnn version of SpatialConvolution seems to have a bug when the input
   width / height is less than the kernel width. Details are not given here.
Changes in this PR:

1. Delete the empty primitive when initializing the Concat layer, because sum
   and split are not initialized after validation.
2. Fix a random segmentation fault: the weight cannot be converted to MKL
   memory directly from a Scala array, so we should instead convert it from
   the MKL layout that is produced at updateOutput.
3. Run the model thread pool with a single thread, which gives much better
   performance on Inception and AlexNet.
4. Add back AlexNet, which was temporarily deleted.
5. Add an offset to scaleShift in BatchNormalization, because the weight and
   bias storages do not start at 0 but at an offset (see the sketch after this
   list).
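
For item 5, a minimal sketch of the intended scaleShift packing, assuming the
batch-normalization primitive expects a single buffer laid out as
[scale(0..n-1), shift(0..n-1)] and that the weight and bias tensors are backed
by arrays with a non-zero storage offset. The object, method, and parameter
names below are hypothetical, not the PR's actual code:

```scala
object ScaleShiftSketch {
  // Pack weight (scale) and bias (shift) into one contiguous buffer of
  // length 2 * nOutput, copying from each source array's storage offset
  // instead of from index 0.
  def packScaleShift(weight: Array[Float], weightOffset: Int,
                     bias: Array[Float], biasOffset: Int,
                     nOutput: Int): Array[Float] = {
    val scaleShift = new Array[Float](2 * nOutput)
    // first half: scale, read starting at the weight's storage offset
    System.arraycopy(weight, weightOffset, scaleShift, 0, nOutput)
    // second half: shift, read starting at the bias's storage offset
    System.arraycopy(bias, biasOffset, scaleShift, nOutput, nOutput)
    scaleShift
  }
}
```

Reading from the storage offset rather than from index 0 is the "offset" the
item above refers to; without it the packed buffer would pick up whatever data
precedes the actual weight and bias values in the shared storage.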
[fix] cannot do serialization when saving the model (AlexNet/Inception)
[fix] cannot change the batch size with Linear and ReLU layers