
Initial implementation of multiplication operator for MKLDNN #9949

Merged: 2 commits merged into PaddlePaddle:develop from mozga-intel/Mul_mkldnn on Apr 24, 2018

Conversation

@mozga-intel (Contributor) commented Apr 16, 2018

No description provided.

@mozga-intel force-pushed the mozga-intel/Mul_mkldnn branch 6 times, most recently from 97843f1 to f24fec2, on April 16, 2018 at 23:17
@mozga-intel changed the title from "[Only Tests] Initial implementation of multiplication operator for MKLDNN" to "Initial implementation of multiplication operator for MKLDNN" on Apr 17, 2018
@luotao1 (Contributor) left a comment

Since the implementation of mul_mkldnn_op is similar to fc_mkldnn_op, and we can set bias_attr=None in fc_mkldnn_op to obtain mul_mkldnn_op, why should mul_mkldnn_op be implemented again?

"bias_attr": bias_attr
"x_num_col_dims": num_flatten_dims,
"y_num_col_dims": 1,
"use_mkldnn": use_mkldnn
@luotao1 (Contributor) commented on Apr 20, 2018

Why use mul_mkldnn_op to replace fc_mkldnn_op here?
As I understand it, the performance of fc_mkldnn_op is better than that of mul_mkldnn_op + elementwise_add_op in fluid.

@mozga-intel (Contributor, Author)

Yes, I agree with you, but the MKLDNN fully connected layer has a problem with its matrix inputs: when the layer receives more than one input, the MKLDNN operator does not work correctly. Therefore we decided to avoid this pitfall by withdrawing the fully connected operator. Moreover, this simple solution gives us the opportunity to run the new neural network models.

Contributor

Do you mean that you will not use fc_mkldnn_op in fluid? Maybe you can add a comment in nn.py.

@mozga-intel (Contributor, Author)

With a heavy heart, I must say yes: this operator is removed, but only for a while. I expect that in the future we will come back to the fully connected layer for MKLDNN.

auto desc = tf_desc<Type>(args...);
return tf_pd<Type>(desc, e, p);
}

Contributor

Are lines 36-61 only used in mul_mkldnn_op? If these lines are not used by other ops, they should be moved inside mul_mkldnn_op.

@mozga-intel (Contributor, Author)

At this moment, yes, but this code may be used by all of the MKLDNN operators.
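
For context, here is a minimal sketch of how a generic helper built on these aliases could be shared by several MKLDNN kernels. The helper name MKLDNNFwdPrimitiveDesc and the inner_product_forward call site are assumptions for illustration only, and the sketch drops the extra argument p that the quoted code forwards to the primitive_desc constructor.

#include <utility>
#include <mkldnn.hpp>

template <class Type>
using tf_desc = typename Type::desc;

template <class Type>
using tf_pd = typename Type::primitive_desc;

// Hypothetical generic helper: build the primitive's descriptor from the
// forwarded arguments, then wrap it in a primitive_desc bound to the engine.
template <class Type, class... Args>
tf_pd<Type> MKLDNNFwdPrimitiveDesc(const mkldnn::engine& e, Args&&... args) {
  auto desc = tf_desc<Type>(std::forward<Args>(args)...);
  return tf_pd<Type>(desc, e);
}

// Illustrative call site for an inner-product (mul-like) forward primitive:
//   auto pd = MKLDNNFwdPrimitiveDesc<mkldnn::inner_product_forward>(
//       engine, mkldnn::prop_kind::forward_inference, src_md, weights_md, dst_md);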

@luotao1 (Contributor) commented Apr 23, 2018

@tensor-tang Do you have any other comments about this PR?

using tf_desc = typename Type::desc;

template <class Type>
using tf_pd = typename Type::primitive_desc;
Contributor

A small question: why is it named tf_pd here?

@mozga-intel (Contributor, Author)

tf stands for "type function".
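
To make the naming concrete, a tiny sketch of what the alias resolves to; the inner_product_forward instantiation is just an illustrative choice, not something taken from this PR.

#include <type_traits>
#include <mkldnn.hpp>

template <class Type>
using tf_pd = typename Type::primitive_desc;

// "tf" (type function) maps a primitive Type to one of its nested types,
// here its primitive_desc:
static_assert(
    std::is_same<tf_pd<mkldnn::inner_product_forward>,
                 mkldnn::inner_product_forward::primitive_desc>::value,
    "tf_pd<T> is shorthand for T::primitive_desc");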

@luotao1 (Contributor) left a comment

LGTM! Thanks very much!

@luotao1 merged commit 44fa823 into PaddlePaddle:develop on Apr 24, 2018