
enable TF remapper optimizer #1418

Merged · 1 commit merged into deepmodeling:devel from njzjz:bias_add on Jan 17, 2022

Conversation

Member

@njzjz njzjz commented Jan 14, 2022

TF supports a remapper optimizer, which remaps subgraphs onto more efficient implementations by replacing commonly occurring subgraphs with optimized fused monolithic kernels. However, its support is limited: (1) the pattern must be MatMul + BiasAdd (not Add) + Activation; (2) only float32 is supported (not float64); (3) the activation is Tanh; (4) MKL is built and used.
This commit replaces Add with BiasAdd in the NN. The speed of a single op can be improved by about 20% when TF uses MKL and the precision is set to float32. One can find the `_MklNativeFusedMatMul` op in the profiler.

[Image: original graph. Ops include MklMatMul, AddV2, and Tanh.]

[Image: new graph. `_MklNativeFusedMatMul` is used here.]
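
As an illustration, here is a minimal sketch of the pattern change (made-up shapes and names, not the actual code in `deepmd/utils/network.py`):

```python
import tensorflow as tf

w = tf.Variable(tf.random.normal([64, 64], dtype=tf.float32))
b = tf.Variable(tf.zeros([64], dtype=tf.float32))

@tf.function
def layer_before(x):
    # MatMul -> AddV2 -> Tanh: the remapper does not match this pattern
    return tf.nn.tanh(tf.matmul(x, w) + b)

@tf.function
def layer_after(x):
    # MatMul -> BiasAdd -> Tanh: can be fused into _MklNativeFusedMatMul
    # when TF is built with MKL/oneDNN and the dtype is float32
    return tf.nn.tanh(tf.nn.bias_add(tf.matmul(x, w), b))
```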

See also:
- https://www.tensorflow.org/guide/graph_optimization
- https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/grappler/optimizers/remapper.cc

(cherry picked from commit 8f2dc44)

@codecov-commenter

codecov-commenter commented Jan 14, 2022

Codecov Report

Merging #1418 (69f9d9f) into devel (0f6d644) will increase coverage by 1.17%.
The diff coverage is 100.00%.


@@            Coverage Diff             @@
##            devel    #1418      +/-   ##
==========================================
+ Coverage   74.55%   75.72%   +1.17%     
==========================================
  Files          92       92              
  Lines        7623     7650      +27     
==========================================
+ Hits         5683     5793     +110     
+ Misses       1940     1857      -83     
| Impacted Files | Coverage Δ |
| --- | --- |
| deepmd/utils/network.py | 82.79% <100.00%> (ø) |
| source/op/_gelu.py | 69.23% <0.00%> (-12.59%) ⬇️ |
| source/op/_tabulate_grad.py | 100.00% <0.00%> (ø) |
| source/op/_prod_force_grad.py | 100.00% <0.00%> (ø) |
| source/op/_prod_virial_grad.py | 100.00% <0.00%> (ø) |
| source/op/_soft_min_force_grad.py | 100.00% <0.00%> (ø) |
| source/op/_prod_force_se_a_grad.py | 100.00% <0.00%> (ø) |
| source/op/_prod_force_se_r_grad.py | 100.00% <0.00%> (ø) |
| source/op/_soft_min_virial_grad.py | 100.00% <0.00%> (ø) |
| source/op/_prod_virial_se_a_grad.py | 100.00% <0.00%> (ø) |
| ... and 7 more | |

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update 0f6d644...69f9d9f.

Collaborator

@wanghan-iapcm wanghan-iapcm left a comment


Is this change compatible with TF 1.x?

@njzjz
Member Author

njzjz commented Jan 15, 2022

This PR only introduces `tf.nn.bias_add`, which is available in TF v1 and is the recommended way to add the biases.

Support for Tanh + MKL was introduced in tensorflow/tensorflow#42173 (v2.4).

I don't know why only FP32 is supported.
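
For reference, a minimal sketch under TF 1.x-style graph mode (hypothetical names and shapes, not code from this repo), showing that `tf.nn.bias_add` is used the same way there:

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

x = tf.placeholder(tf.float32, [None, 64])
w = tf.get_variable("w", shape=[64, 64], dtype=tf.float32)
b = tf.get_variable("b", shape=[64], dtype=tf.float32,
                    initializer=tf.zeros_initializer())
# tf.nn.bias_add has the same signature in TF 1.x and 2.x
y = tf.nn.tanh(tf.nn.bias_add(tf.matmul(x, w), b))
```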

@njzjz
Member Author

njzjz commented Jan 15, 2022

I found that Intel oneDNN does not support fp64 at all...

@njzjz
Member Author

njzjz commented Jan 16, 2022

I have another idea: we could write a custom remapper optimizer; TF provides an interface for that.
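
For context, a generic TF sketch (not deepmd-kit code) showing how the built-in remapping pass can at least be toggled from Python; to my knowledge, fully custom grappler optimizers are registered through the C++ interface instead:

```python
import tensorflow as tf

# Disable the built-in remapper, e.g. when comparing fused vs. unfused graphs
tf.config.optimizer.set_experimental_options({"remapping": False})
print(tf.config.optimizer.get_experimental_options())

# Re-enable it
tf.config.optimizer.set_experimental_options({"remapping": True})
```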

@wanghan-iapcm wanghan-iapcm merged commit 057e6ab into deepmodeling:devel Jan 17, 2022
@njzjz njzjz deleted the bias_add branch January 17, 2022 01:09