Add mlir graph optimization to the build #39231
Conversation
Thanks for the fix! It seems the CI is not happy with it yet, though.
Add "graph_optimization_pass_registration" to tensorflow/python/_pywrap_mlir.so

This commit adds the build target `//tensorflow/compiler/mlir/tensorflow:graph_optimization_pass_registration` to `tensorflow/python/_pywrap_mlir.so`, so that graph optimization is available in the tf-nightly pip wheel. Previously, although `graph_optimization_pass_registration` existed, it was not packaged with tf-nightly, so graph optimization was not accessible when TensorFlow was installed from the pip wheel.

To enable the graph optimization pass for the pip wheel, the `graph_optimization_pass_registration` target has to be included somewhere that is loaded when a pip-installed TensorFlow is imported. A natural place would be `libtensorflow_framework.so`, the core .so that is always loaded with `import tensorflow`. It is possible to include `graph_optimization_pass_registration` in `libtensorflow_framework.so`; see the last attempt on this route: tensorflow#39231. However, that caused many test failures like:

```
: CommandLine Error: Option 'help-list' registered more than once!
LLVM ERROR: inconsistency in registered CommandLine options
```

The reason is that many test binaries also carry their own copy of LLVM, so there end up being multiple copies in one process (one in the test binary, another in `libtensorflow_framework.so`, on which many tests depend). Because there are so many tests, it is very hard to make all the needed changes without breaking something else.

This commit therefore takes a different approach and adds `graph_optimization_pass_registration` to `tensorflow/python/_pywrap_mlir.so` instead. This shared object is dedicated to MLIR-related APIs; the currently exposed one is `tf.mlir.experimental.convert_graph_def`. Because `tensorflow/python/_pywrap_mlir.so` already depends on LLVM, placing the graph optimization pass there avoids multiple copies of LLVM in multiple locations. `tensorflow/python/_pywrap_mlir.so` is also loaded with `import tensorflow` as part of the Python binding.

Ideally it would still be preferable to get the graph optimization pass into `libtensorflow_framework.so`. That will be investigated further later.

Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
Thanks @joker-eph for the review. The test errors were because the tests normally depend on LLVM themselves, so there were two copies again (one in libtensorflow_framework.so and one in the test binary itself). While it is possible to update the tests, there are so many of them and the dependencies are too interleaved. I took another look, and the PR now takes a different approach: it adds the graph_optimization_pass_registration target to tensorflow/python/_pywrap_mlir.so instead.
In the long term it still might make sense to build the graph optimization pass into libtensorflow_framework.so.
@joker-eph The PR has been updated.
This PR is part of the process to resolve #39135.
As mentioned in #39135, 248bc00 enables MLIR graph optimizations. However, it is not part of the build for the pip wheel. The reason is that MLIR graph optimization is enabled through a registration file (`mlir_graph_optimization_pass_registration.cc`), but this file is not included in `libtensorflow_framework.so`, so it is not enabled in the pip wheel package. This PR adds `mlir_graph_optimization_pass_registration` to `libtensorflow_framework.so`.

For `bazel` dependency reasons, a direct inclusion will not work: ops in core and XLA would be pulled in multiple times by bazel, leaving two copies of the XLA ops and two copies of the lite and core ops in `libtensorflow_framework.so`.
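For context, the kind of `BUILD` wiring involved looks roughly like the following sketch (target and file names are illustrative, not the exact ones in the TensorFlow tree). The key detail is `alwayslink`: the registration translation unit is kept by the linker even though no symbol in it is referenced directly, because registration happens via a global constructor, and the shared object must depend on it exactly once:

```starlark
# Hypothetical BUILD sketch; names are illustrative only.
cc_library(
    name = "graph_optimization_pass_registration",
    srcs = ["mlir_graph_optimization_pass_registration.cc"],
    deps = [":graph_optimization_pass"],
    # Keep the object file even though nothing references its symbols;
    # the pass registers itself from a global constructor.
    alwayslink = 1,
)

tf_cc_shared_object(
    name = "libtensorflow_framework.so",
    deps = [
        ":graph_optimization_pass_registration",
        # ... other framework deps. Each transitive dep must be pulled
        # in only once, otherwise duplicate copies of ops (or of LLVM)
        # end up in the .so and static registration fires twice.
    ],
)
```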
So the following change has been made. In general, `import_model.[h|cc]` has been split into 3 parts:

- `import_base.[h|cc]` consists of the `ImporterBase` class and common functions shared by `import_model.[h|cc]` and `import_graphdef.[h|cc]`
- `import_graphdef.[h|cc]` consists of graph-only conversion to MLIR
- `import_model.[h|cc]` has been updated to contain only model conversion to MLIR

There are also some small bazel changes that remove unnecessary dependencies.
Note that with this change, MLIR graph optimization only needs to depend on `import_graphdef.cc` and `export_graphdef.cc`. It will not depend on `import_model.cc`, which pulls in the XLA and core ops multiple times through bazel.

Signed-off-by: Yong Tang yong.tang.github@outlook.com