bazel build failure of tensorflow with mkl and specific eigen3 flags #10157
Comments
See also PointCloudLibrary/pcl#1266. Did you modify the workspace.bzl file to upgrade Eigen?
Hi, I removed the MKL flag and the compilation now finishes. However, I want to work with MKL support, and I don't know how to confirm that Eigen's code generation and the TensorFlow build integrated successfully with my current project (which already uses MKL and 64-bit integers as indices). At the moment I don't see a better way to verify this than setting breakpoints inside the Eigen source code and checking that it calls the correct functions in MKL. Regarding the Bazel workspace, I didn't modify any file. Do I need to? Thanks!
@jart Could you take a look please? Thanks.
@drormeir I fixed this problem by editing the file Eigen/src/Core/util/MKL_support.h. I added the following code at line 116:
and in my CMakeLists I did:
This problem is a conflict between MKL_INT and the BLAS index type. In my case, BLAS uses int as the index type, while MKL_INT is a long long. I applied this workaround and everything is working fine. I also opened a pull request against the Eigen repository; I'm waiting for it to be accepted.
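The exact patch is not shown in the thread, so as an illustration only, here is the kind of narrowing check that any bridge from a 64-bit MKL_INT down to a 32-bit BLAS index has to perform; the function name is hypothetical, not gogo40's actual code:

```cpp
#include <limits>
#include <stdexcept>

// Illustrative sketch (not the actual MKL_support.h patch): converting a
// 64-bit MKL-style index to the 32-bit int that a plain BLAS interface
// expects must guard against values that do not fit.
int to_blas_index(long long mkl_index) {
    if (mkl_index > std::numeric_limits<int>::max() ||
        mkl_index < std::numeric_limits<int>::min()) {
        throw std::overflow_error("index does not fit in a 32-bit BLAS int");
    }
    return static_cast<int>(mkl_index);
}
```

Without such a check, silently truncating the index is exactly the sort of mismatch that breaks the build (or worse, the results) when the two sides disagree on index width.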
Thanks! |
It is not a bug in TensorFlow; it's a problem with Eigen. When used in MKL mode, Eigen's code defines BlasIndex to be whatever MKL_INT is:

typedef MKL_INT BlasIndex;

It is utterly confusing because the definition of MKL_INT varies depending on the size of indices you want to use in your program. However, instead of using MKL's BLAS interface, which takes the size of MKL_INT into account, Eigen defines its own interface (https://bitbucket.org/eigen/eigen/src/e7027de735d6450c8ede3ce2f65166714c6aef50/Eigen/src/misc/blas.h?at=default&fileviewer=file-view-default) using only 32-bit ints. This is the reason for the compilation errors you see.

@gogo40 Your patch is unnecessary if you use short indices (-DMKL_LP64) and makes a significant chunk of Eigen's unit tests fail if you use long indices (-DMKL_ILP64 -DEIGEN_BLAS_INDEX=int), because the implementation of BLAS in libmkl_intel_ilp64.so expects 64-bit indices.

@drormeir if you use -DMKL_LP64 instead of -DMKL_ILP64, TensorFlow will compile fine. I would not expect to see performance improvements, though.
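The typedef chain behind this clash can be sketched in a few lines; the `_demo` names below are local stand-ins for illustration, mirroring the documented behavior rather than reproducing the real mkl.h or Eigen headers:

```cpp
// Illustrative stand-ins for the typedef chain: MKL_INT's width follows
// the build flags, while Eigen's bundled blas.h hard-codes plain int.
#if defined(MKL_ILP64)
typedef long long MKL_INT_demo;   // -DMKL_ILP64: 64-bit indices
#else
typedef int MKL_INT_demo;         // default / -DMKL_LP64: 32-bit indices
#endif

// Eigen in MKL mode: typedef MKL_INT BlasIndex;
typedef MKL_INT_demo BlasIndex_demo;

// Eigen's own BLAS interface declaration uses plain int regardless:
typedef int eigen_blas_int_demo;

// The build breaks exactly when these two widths disagree,
// i.e. when -DMKL_ILP64 is set.
inline bool abi_consistent() {
    return sizeof(BlasIndex_demo) == sizeof(eigen_blas_int_demo);
}
```

Compiled with -DMKL_ILP64, abi_consistent() becomes false, which is the mismatch behind the errors in this issue; with -DMKL_LP64 (or no flag) the widths agree.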
Thank you!
Thank you @ljanyst
Looks like this issue was resolved. It is also obsolete now, as you can just use
System information
Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No, I don't have code yet
OS Platform and Distribution: Linux Ubuntu 16.04
TensorFlow installed from (source or binary): source (git clone of the latest revision, TensorFlow 1.1)
TensorFlow version (use command below): 1.1
Bazel version (if compiling from source):
Build label: 0.4.5
Build target: bazel-out/local-fastbuild/bin/src/main/java/com/google/devtools/build/lib/bazel/BazelServer_deploy.jar
Build time: Thu Mar 16 12:19:38 2017 (1489666778)
Build timestamp: 1489666778
Build timestamp as int: 1489666778
CUDA/cuDNN version: None
GPU model and memory: None
Exact command to reproduce:
I don't understand what the above item refers to.
Describe the problem
Bazel failed to build TensorFlow with MKL support.
I added these compiler flags during the configure phase, and they caused the compilation error:
-DEIGEN_USE_MKL_ALL -DMKL_ILP64
Source code / logs
Configure phase:
Build command...
sudo bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package
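As a follow-up to ljanyst's suggestion, the defines can also be passed per-invocation through bazel's --copt option instead of being baked in during configure. This is a sketch assuming the same target as above, with the ILP64 flag swapped for LP64:

```shell
# Sketch: build with MKL enabled but 32-bit (LP64) indices, per
# ljanyst's advice; --copt forwards a define to every C++ compilation.
bazel build --config=opt \
  --copt=-DEIGEN_USE_MKL_ALL \
  --copt=-DMKL_LP64 \
  //tensorflow/tools/pip_package:build_pip_package
```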
Error output: