
bazel build --copt=-march=native not using available CPU instructions #7449

Closed
ahundt opened this issue Feb 12, 2017 · 37 comments
Labels
stat:community support Status - Community Support

Comments

@ahundt (Contributor) commented Feb 12, 2017

Update 2017-12-06:
The current version of my tensorflow.sh install script has been working well for me, and the update from tf 1.3 to tf 1.4 required only a one character change!

Original Post:
Here are the key lines in my install script with a quote from the tensorflow docs:

# To be compatible with as wide a range of machines as possible, TensorFlow defaults to only using SSE4.1 SIMD instructions on x86 machines. Most modern PCs and Macs support more advanced instructions, so if you're building a binary that you'll only be running on your own machine, you can enable these by using --copt=-march=native in your bazel build command.

bazel build --copt=-march=native -c opt --config=cuda //tensorflow/tools/pip_package:build_pip_package

Even with --copt=-march=native I get the following warnings about the CPU instruction set, contradicting the above statement:


W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE3 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
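
When triaging logs like these, the instruction-set names can be pulled out of the warning lines mechanically. A minimal sketch (the `missing_features` helper is my own, not part of TensorFlow):

```python
import re

# Matches lines like:
#   W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow
#   library wasn't compiled to use AVX2 instructions, but these are
#   available on your machine and could speed up CPU computations.
WARNING_RE = re.compile(r"wasn't compiled to use ([\w.]+) instructions")

def missing_features(log_text):
    """Return the instruction sets named in cpu_feature_guard warnings."""
    return WARNING_RE.findall(log_text)
```

Running it over the warnings above would return `['SSE3', 'SSE4.1', 'SSE4.2', 'AVX', 'AVX2', 'FMA']`.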

Here is the exact script I used to build tensorflow:
https://github.com/ahundt/robotics_setup/blob/b5ee71f262ec36f8dbc8374ed2503c0812fb0f47/tensorflow.sh

What related GitHub issues or StackOverflow threads have you found by searching the web for your problem?

http://stackoverflow.com/a/41520266/99379

Operating System:
Ubuntu 16.04

  1. The output from python -c "import tensorflow; print(tensorflow.__version__)".
python -c "import tensorflow; print(tensorflow.__version__)"
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcublas.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcudnn.so.5 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcufft.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcurand.so.8.0 locally
1.0.0

If installed from source, provide

  1. The commit hash (git rev-parse HEAD)
    07bb8ea

  2. The output of bazel version

± bazel version
Build label: 0.4.4
Build target: bazel-out/local-fastbuild/bin/src/main/java/com/google/devtools/build/lib/bazel/BazelServer_deploy.jar
Build time: Wed Feb 1 18:54:21 2017 (1485975261)
Build timestamp: 1485975261
Build timestamp as int: 1485975261

If possible, provide a minimal reproducible example (We usually don't have time to read hundreds of lines of your code)


python -c 'import tensorflow as tf; print(tf.__version__); sess = tf.InteractiveSession(); sess.close();'
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcublas.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcudnn.so.5 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcufft.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcurand.so.8.0 locally
1.0.0
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE3 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 0 with properties:
name: GeForce GTX 1080
major: 6 minor: 1 memoryClockRate (GHz) 1.7335
pciBusID 0000:02:00.0
Total memory: 7.92GiB
Free memory: 7.81GiB
I tensorflow/core/common_runtime/gpu/gpu_device.cc:906] DMA: 0
I tensorflow/core/common_runtime/gpu/gpu_device.cc:916] 0:   Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1080, pci bus id: 0000:02:00.0)

What other attempted solutions have you tried?

This person tried some other things: http://stackoverflow.com/a/41520266/99379

@ahundt ahundt changed the title bazel build --copt=-march=native not using best available CPU instructions bazel build --copt=-march=native not using available CPU instructions Feb 12, 2017
@yaroslavvb (Contributor)

This command should break down the optimizations that are turned on with march=native

gcc -march=native -Q --help=target

Are AVX/FMA optimizations in there?

Also, march=native is a long way of writing

bazel build --config=opt --config=cuda

@ahundt (Contributor, Author) commented Feb 13, 2017

Here is the full output of that command:
https://gist.github.com/ahundt/ec233276360962b1317a36c2054d933c

Key lines:

  -mavx                                 [enabled]
  -mavx2                                [enabled]

Also, march=native is a long way of writing

bazel build --config=opt --config=cuda

Thanks, I didn't know that about the bazel flags. I usually use CMake, so I'm not sure of the particulars of bazel.

Based on what you are saying this may also have something to do with either the gcc version or how my gcc is configured/compiled and not just which flags are passed? I had assumed based on those quoted docs that everything could be detected and configured appropriately based on compiler flags alone.

@yaroslavvb (Contributor)

cc @martinwicke who troubleshooted (troubleshot?) similar issues in the past

@aselle (Contributor) commented Feb 15, 2017

@gunan, could you take a look at this please?

@aselle aselle added type:build/install Build and install issues stat:awaiting tensorflower Status - Awaiting response from tensorflower labels Feb 15, 2017
@martinwicke (Member)

@gunan is out. I'll look.

@martinwicke (Member)

Maybe you have to use --cxxopt=-march=native as well? It's safer to use --config=opt, which does that for you.

@martinwicke martinwicke added stat:awaiting response Status - Awaiting response from author and removed stat:awaiting tensorflower Status - Awaiting response from tensorflower labels Feb 15, 2017
@ahundt (Contributor, Author) commented Feb 15, 2017

Looks like the following is a workaround according to wangyum/Anaconda#15

bazel build --linkopt='-lrt' -c opt --copt=-mavx --copt=-msse4.2 --copt=-msse4.1 --copt=-msse3 -k //tensorflow/tools/pip_package:build_pip_package

@aselle aselle removed the stat:awaiting response Status - Awaiting response from author label Feb 15, 2017
@martinwicke (Member)

@ahundt does using just --config=opt work for you?

@aselle aselle added the stat:awaiting response Status - Awaiting response from author label Feb 15, 2017
@wangyum (Contributor) commented Feb 16, 2017

@martinwicke @ahundt It works for me.
[screenshot: tensorflow-7449]

The following options are enabled target specific:

$ gcc -march=native -Q --help=target | grep enable
  -m64                                  [enabled]
  -m80387                               [enabled]
  -m96bit-long-double                   [enabled]
  -maes                                 [enabled]
  -malign-stringops                     [enabled]
  -mavx                                 [enabled]
  -mcx16                                [enabled]
  -mfancy-math-387                      [enabled]
  -mfp-ret-in-387                       [enabled]
  -mfused-madd                          [enabled]
  -mglibc                               [enabled]
  -mhard-float                          [enabled]
  -mieee-fp                             [enabled]
  -mpclmul                              [enabled]
  -mpopcnt                              [enabled]
  -mpush-args                           [enabled]
  -mred-zone                            [enabled]
  -msahf                                [enabled]
  -msse                                 [enabled]
  -msse2                                [enabled]
  -msse3                                [enabled]
  -msse4                                [enabled]
  -msse4.1                              [enabled]
  -msse4.2                              [enabled]
  -mssse3                               [enabled]
  -mstackrealign                        [enabled]
  -mtls-direct-seg-refs                 [enabled]
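
The same `-Q --help=target` dump can be filtered programmatically to check whether the SIMD flags of interest are reported as enabled. A sketch (the function name and flag list are mine, not a gcc or TensorFlow API):

```python
def enabled_target_flags(gcc_help_output,
                         wanted=("-mavx", "-mavx2", "-mfma",
                                 "-msse4.1", "-msse4.2")):
    """From `gcc -march=native -Q --help=target` output, return which of
    the wanted flags are reported as [enabled]."""
    enabled = set()
    for line in gcc_help_output.splitlines():
        parts = line.split()
        # Lines look like: "  -mavx                                 [enabled]"
        if len(parts) >= 2 and parts[1] == "[enabled]":
            enabled.add(parts[0])
    return [f for f in wanted if f in enabled]
```

Feeding it output like the listing above makes it easy to spot that, e.g., `-mavx2` and `-mfma` are absent on this machine's gcc.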

@devfubar

I had this exact issue. However, when I used the following to build from source, the messages no longer appeared. @ahundt's suggestion removed a few of them, but it was still missing FMA and AVX2.

bazel build --linkopt='-lrt' -c opt --copt=-mavx --copt=-msse4.2 --copt=-msse4.1 --copt=-msse3 --copt=-mavx2 --copt=-mfma -k //tensorflow/tools/pip_package:build_pip_package
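
The per-feature `--copt` flags in commands like the one above follow a regular pattern, so they can be generated from a feature list rather than typed by hand. A hypothetical helper (not a bazel or TensorFlow tool):

```python
def copt_flags(features):
    """Turn CPU feature names into bazel --copt flags,
    e.g. 'avx' -> '--copt=-mavx'."""
    return ["--copt=-m{}".format(f.lower()) for f in features]

# copt_flags(["avx", "avx2", "fma", "sse4.2", "sse4.1", "sse3"]) produces
# exactly the flag set used in the workaround command above.
```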

@vskubriev

Dear @devfubar, is --linkopt='-lrt' a required option, or is it your own customization?

@devfubar

@vskubriev I merely extended @ahundt's suggested options from this comment.

I'm not sure what it does, personally. Perhaps @ahundt could elaborate on its purpose? I have noticed it mentioned in various documents but never seen it explained.

@martinwicke (Member)

I am very confused by this and I'd like to find out whether there's a problem in TF somewhere or whether some compiler (versions) don't properly interpret -march=native.

Can someone who built with --config=opt, and who's running on the same machine they built on, and who nevertheless does get warnings about unused available optimizations please post the exact compiler and version (both the compiler version as well as the version tensorflow reports) they built with?

@gunan (Contributor) commented Feb 21, 2017

I just tried the following, and I was able to build an optimized binary:

bazel build --config=opt tensorflow/tools/pip_package:build_pip_package

So I will close this issue as not reproducible.

@gunan gunan closed this as completed Feb 21, 2017
@ahundt (Contributor, Author) commented Feb 21, 2017

@gunan was that with the versions I specified or is there a later commit than either bazel 0.4.4 or tf1.0 07bb8ea that may have resolved the issue?

@yaroslavvb (Contributor) commented Feb 21, 2017

@ahundt I think this could have more to do with your gcc than with bazel. I.e., it seems as if you run gcc -march=native -Q and it promises to turn on the -mavx flag for -march=native, but then it doesn't. Maybe you could validate this by building something else with gcc directly and seeing if the problem persists?

@devfubar

@gunan I'm not sure closing this is appropriate, as there is obviously an issue for some people; maybe not for you, but for others. I'm pretty sure I could halve my issue inbox if I just tried something and then said I couldn't reproduce. Not exactly diligent behaviour.

@yaroslavvb (Contributor)

My hunch is that this is some kind of gcc/bazel interaction for a configuration that's not available/common inside Google, so "community support" label seems appropriate

@yaroslavvb yaroslavvb reopened this Feb 22, 2017
@yaroslavvb yaroslavvb added stat:community support Status - Community Support and removed stat:awaiting response Status - Awaiting response from author labels Feb 22, 2017
@yaroslavvb yaroslavvb added stat:awaiting response Status - Awaiting response from author and removed type:build/install Build and install issues labels Feb 22, 2017
@devfubar

@yaroslavvb thanks for re-opening.

@martinwicke:

gcc --version
gcc (Ubuntu 5.4.0-6ubuntu1~16.04.4) 5.4.0 20160609
bazel version
Build label: 0.4.4
Build time: Wed Feb 1 18:54:21 2017 (1485975261)
Build timestamp: 1485975261
Build timestamp as int: 1485975261
TensorFlow tag: v1.0.0

@martinwicke (Member)

Thanks @devfubar. You said earlier, it works for you when building with

bazel build --linkopt='-lrt' -c opt --copt=-mavx --copt=-msse4.2 --copt=-msse4.1 --copt=-msse3 --copt=-mavx2 --copt=-mfma -k //tensorflow/tools/pip_package:build_pip_package

correct?

In that case, can you paste the contents of your tools/bazel.rc file?

@devfubar

@martinwicke yes that is the command I used to stop the warning messages.

./configure
Please specify the location of python. [Default is /usr/bin/python]: /usr/bin/python3
Please specify optimization flags to use during compilation [Default is -march=native]: 
Do you wish to use jemalloc as the malloc implementation? (Linux only) [Y/n] 
jemalloc enabled on Linux
Do you wish to build TensorFlow with Google Cloud Platform support? [y/N] 
No Google Cloud Platform support will be enabled for TensorFlow
Do you wish to build TensorFlow with Hadoop File System support? [y/N] 
No Hadoop File System support will be enabled for TensorFlow
Do you wish to build TensorFlow with the XLA just-in-time compiler (experimental)? [y/N] 
No XLA support will be enabled for TensorFlow
Found possible Python library paths:
  /usr/local/lib/python3.5/dist-packages
  /usr/lib/python3/dist-packages
Please input the desired Python library path to use.  Default is [/usr/local/lib/python3.5/dist-packages]

Using python library path: /usr/local/lib/python3.5/dist-packages
Do you wish to build TensorFlow with OpenCL support? [y/N] 
No OpenCL support will be enabled for TensorFlow
Do you wish to build TensorFlow with CUDA support? [y/N] 
No CUDA support will be enabled for TensorFlow
Configuration finished
.........
INFO: Starting clean (this may take a while). Consider using --expunge_async if the clean takes more than several minutes.
.........
INFO: All external dependencies fetched successfully.
cat tools/bazel.rc
# Autogenerated by configure: DO NOT EDIT
build:cuda --crosstool_top=@local_config_cuda//crosstool:toolchain
build:cuda --define=using_cuda=true --define=using_cuda_nvcc=true
build:win-cuda --define=using_cuda=true --define=using_cuda_nvcc=true

build:sycl --crosstool_top=@local_config_sycl//crosstool:toolchain
build:sycl --define=using_sycl=true

build:sycl_asan --crosstool_top=@local_config_sycl//crosstool:toolchain
build:sycl_asan --define=using_sycl=true --copt -fno-omit-frame-pointer --copt -fsanitize-coverage=3 --copt -fsanitize=address --copt -DGPR_NO_DIRECT_SYSCALLS --linkopt -fPIC --linkopt -lasan

build --force_python=py3
build --host_force_python=py3
build --python3_path="/usr/bin/python3"
build --define=use_fast_cpp_protos=true
build --define=allow_oversize_protos=true

build --define PYTHON_BIN_PATH="/usr/bin/python3"
test --define PYTHON_BIN_PATH="/usr/bin/python3"
test --force_python=py3
test --host_force_python=py3
run --define PYTHON_BIN_PATH="/usr/bin/python3"

build --spawn_strategy=standalone
test --spawn_strategy=standalone
run --spawn_strategy=standalone

build --genrule_strategy=standalone
test --genrule_strategy=standalone
run --genrule_strategy=standalone

build -c opt
test -c opt
run -c opt

build:opt --cxxopt=-march=native --copt=-march=native

@aselle aselle added the stat:awaiting tensorflower Status - Awaiting response from tensorflower label Feb 23, 2017
@ahundt (Contributor, Author) commented Feb 23, 2017

Interesting, I have the same gcc version. Perhaps I should confirm that tf and bazel are exactly on the release versions I specified.

gcc --version
gcc (Ubuntu 5.4.0-6ubuntu1~16.04.4) 5.4.0 20160609
Copyright (C) 2015 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

@aselle aselle removed stat:awaiting response Status - Awaiting response from author stat:awaiting tensorflower Status - Awaiting response from tensorflower labels Feb 24, 2017
@gunan (Contributor) commented Feb 25, 2017

@devfubar Sorry for preemptively closing the issue with little information. I tested our build instructions on all our supported platforms.

@ahundt After looking at what you are running, I think you are not installing the TF you just built.
https://github.com/ahundt/robotics_setup/blob/b5ee71f262ec36f8dbc8374ed2503c0812fb0f47/tensorflow.sh

In the above script, could you remove line 75
and modify line 87 to say:

pip install --upgrade /tmp/tensorflow_pkg/tensorflow-*

and try again?
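
A quick way to confirm that the interpreter is importing the package you just installed, rather than a stale copy elsewhere on `sys.path`, is to check where the module would be loaded from. A small standard-library sketch (`module_origin` is an ad-hoc helper of mine):

```python
import importlib.util

def module_origin(name):
    """Return the file a module would be loaded from,
    or None if it isn't installed."""
    spec = importlib.util.find_spec(name)
    return spec.origin if spec else None

# module_origin("tensorflow") should point into the site-packages
# directory pip just installed into; if it points somewhere else,
# an older copy is shadowing the fresh build.
```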

@gunan (Contributor) commented Feb 27, 2017

Ping!
@ahundt, were you able to try the modified pip install command above?
Did it resolve your problem?

@devfubar Could you also share the exact commands you ran all the way from building TF from source, to installing the pip package and seeing the warning messages?

@devfubar

Hi @gunan

So I created a brand-new vanilla Ubuntu machine to do some testing over the weekend, with mixed results. The first time I did not get the warning messages, but on a second clean machine I did. I am trying to narrow down what exactly I did to either get the messages or avoid them.

@ahundt (Contributor, Author) commented Feb 27, 2017

This worked for me! In my case this is resolved. Sorry that it ended up being a bug in my installation procedure and thus I was encountering something that was already fixed. Thanks!

@gunan (Contributor) commented Feb 27, 2017

@ahundt Thanks for the feedback. I am glad the issue is resolved for you.

@devfubar I highly suspect you are also having a problem with installing the correct pip package after you build it. Please try the following commands to build and run TF.
Please do not modify the script except for the commented lines.
You can try using docker to simulate having clean machines.

git clone https://github.com/tensorflow/tensorflow
cd tensorflow
git pull
git checkout r1.0

# Writing instructions for CPU, feel free to run configure manually here, 
# without any modifications to the optimization flags.
yes "" | ./configure

# Add config=cuda (config flag can be set multiple times) If you need GPU.
bazel build --config=opt tensorflow/tools/pip_package:build_pip_package
mkdir pip_pkg
bazel-bin/tensorflow/tools/pip_package/build_pip_package `pwd`/pip_pkg

cd pip_pkg
pip install --upgrade tensorflow-*
python -c 'import tensorflow as tf; print(tf.__version__); sess = tf.InteractiveSession(); sess.close();'

After running the above script without any modifications if you still see the warning messages I can continue investigating. But at the moment, I am convinced that there are no issues in TF.
The problem seems to be on the user side.

@martinwicke (Member)

Closing this. Thanks for the sleuthing, @gunan!

@martinwicke (Member)

(of course and as always, comment to reopen)

@martinwicke (Member) commented Jul 13, 2017 via email

@Emixam23 commented Oct 2, 2017

At the end, I get that:

p3.6_smartchemixam23@pt-mguittet:~/tensorflow$ sudo pip install /tmp/tensorflow_pkg/tensorflow-1.3.0-cp36-cp36m-macosx_10_12_x86_64.whl
Password:
The directory '/Users/emixam23/Library/Caches/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
The directory '/Users/emixam23/Library/Caches/pip' or its parent directory is not owned by the current user and caching wheels has been disabled. check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
Requirement already satisfied: tensorflow==1.3.0 from file:///tmp/tensorflow_pkg/tensorflow-1.3.0-cp36-cp36m-macosx_10_12_x86_64.whl in /Users/emixam23/.local/share/virtualenvs/p3.6_smartch/lib/python3.6/site-packages
Requirement already satisfied: numpy>=1.11.0 in /Users/emixam23/.local/share/virtualenvs/p3.6_smartch/lib/python3.6/site-packages (from tensorflow==1.3.0)
Requirement already satisfied: wheel>=0.26 in /Users/emixam23/.local/share/virtualenvs/p3.6_smartch/lib/python3.6/site-packages (from tensorflow==1.3.0)
Requirement already satisfied: six>=1.10.0 in /Users/emixam23/.local/share/virtualenvs/p3.6_smartch/lib/python3.6/site-packages (from tensorflow==1.3.0)
Requirement already satisfied: tensorflow-tensorboard<0.2.0,>=0.1.0 in /Users/emixam23/.local/share/virtualenvs/p3.6_smartch/lib/python3.6/site-packages (from tensorflow==1.3.0)
Requirement already satisfied: protobuf>=3.3.0 in /Users/emixam23/.local/share/virtualenvs/p3.6_smartch/lib/python3.6/site-packages (from tensorflow==1.3.0)
Requirement already satisfied: html5lib==0.9999999 in /Users/emixam23/.local/share/virtualenvs/p3.6_smartch/lib/python3.6/site-packages (from tensorflow-tensorboard<0.2.0,>=0.1.0->tensorflow==1.3.0)
Requirement already satisfied: werkzeug>=0.11.10 in /Users/emixam23/.local/share/virtualenvs/p3.6_smartch/lib/python3.6/site-packages (from tensorflow-tensorboard<0.2.0,>=0.1.0->tensorflow==1.3.0)
Requirement already satisfied: bleach==1.5.0 in /Users/emixam23/.local/share/virtualenvs/p3.6_smartch/lib/python3.6/site-packages (from tensorflow-tensorboard<0.2.0,>=0.1.0->tensorflow==1.3.0)
Requirement already satisfied: markdown>=2.6.8 in /Users/emixam23/.local/share/virtualenvs/p3.6_smartch/lib/python3.6/site-packages (from tensorflow-tensorboard<0.2.0,>=0.1.0->tensorflow==1.3.0)
Requirement already satisfied: setuptools in /Users/emixam23/.local/share/virtualenvs/p3.6_smartch/lib/python3.6/site-packages (from protobuf>=3.3.0->tensorflow==1.3.0)

@gunan (Contributor) commented Oct 2, 2017

You have to add the --upgrade flag to the pip install .... command if tensorflow is already installed on your system.
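
For context on why pip said "Requirement already satisfied": pip compares the version encoded in the wheel filename, which follows the `name-version-pythontag-abitag-platformtag.whl` convention, against what is installed. A sketch of pulling those fields apart (hypothetical helper, simplified to single-component names):

```python
def parse_wheel_name(filename):
    """Split a wheel filename into its standard fields."""
    stem = filename[:-len(".whl")]
    name, version, python_tag, abi_tag, platform_tag = stem.split("-", 4)
    return {"name": name, "version": version, "python": python_tag,
            "abi": abi_tag, "platform": platform_tag}
```

If the version in the wheel name matches the version already installed, pip skips it unless `--upgrade` (or a forced reinstall) is given.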

@Emixam23 commented Oct 2, 2017

Wow, it works! But I still get: 2017-10-02 22:40:06.689056: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA. Thank you so much, I've been searching for so long!
Best!

@yaroslavvb (Contributor)

@Emixam23 you have to compile yourself, or use one of the pre-built optimized versions (ie, from https://github.com/yaroslavvb/tensorflow-community-wheels)

@Emixam23 commented Oct 2, 2017

I already compiled it, that's why the warnings went away. But I got a new one, that's why I'm confused :/

@David-Levinthal

I see the same kind of issue on SKX if I use -march=native during configure. Ubuntu 16.04; the history output below shows the installation:

2004 git clone https://github.com/tensorflow/tensorflow
2005 cd tensorflow
2006 git checkout r1.4
2007 sudo apt-get install openjdk-8-jdk
2008 sudo echo "deb [arch=amd64] http://storage.googleapis.com/bazel-apt stable jdk1.8" | sudo tee /etc/apt/sources.list.d/bazel.list
2009 curl https://bazel.build/bazel-release.pub.gpg | sudo apt-key add -
2010 sudo apt-get update && sudo apt-get install bazel
2011 sudo apt-get upgrade bazel
2012 sudo apt-get install python-numpy python-dev python-pip python-wheel
2013 ./configure
2014 bazel build --config=mkl -c opt -c opt //tensorflow/tools/pip_package:build_pip_package > tf_build.log 2>&1
2015 vi tf_build.log
2016 mkdir ../tf_r1.4_mkl
2017 sudo bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
2018 ls /tmp/tensorflow_pkg/
2019 sudo pip install /tmp/tensorflow_pkg/tensorflow-1.4.1-cp27-cp27mu-linux_x86_64.whl -t ~/tf_r1.4_mkl/
2020 cp /tmp/tensorflow_pkg/tensorflow-1.4.1-cp27-cp27mu-linux_x86_64.whl /tf_r1.4_mkl/
Thread model: posix
gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.5)
2025 export OMP_NUM_THREADS=52
2026 python tf_cnn_benchmarks.py --device=cpu --mkl=True --kmp_settings=1 --batch_size=64 --model=alexnet --forward_only=True --num_warmup_batches=10 --num_inter_threads 2 --num_intra_threads 56 > alexnet_skx_thr.log 2>&1
top of alexnet_skx_thr.log
2017-12-02 11:17:35.612237: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 AVX512F FMA

@gunan (Contributor) commented Dec 6, 2017

When you add -march=native during configure, it won't automatically get added to your build.
Also, -c is short for --compilation_mode, not --config.
You will need to modify your build command as follows:

bazel build --config=mkl --config opt -c opt //tensorflow/tools/pip_package:build_pip_package

However, this will almost surely fail, because there are known compilation issues with AVX-512 and TF. You may need to debug and fix your build to get it working.
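
As background for why a single warning can name a whole list of features (SSE4.1 SSE4.2 AVX AVX2 AVX512F): the x86 SIMD levels form a rough inclusion chain, so a CPU or build with a higher level generally has the lower ones too. A simplified model of that chain (this table is my own sketch, not from TensorFlow, and it omits orthogonal features like FMA):

```python
# Simplified x86 SIMD inclusion chain:
# each level implies everything to its right.
CHAIN = ["avx512f", "avx2", "avx", "sse4.2", "sse4.1", "ssse3", "sse3"]

def implied_features(feature):
    """Return the lower SIMD levels implied by `feature`
    in this simplified model."""
    if feature not in CHAIN:
        return []
    return CHAIN[CHAIN.index(feature) + 1:]
```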

jlherren added a commit to jlherren/docs that referenced this issue Oct 16, 2019
Otherwise an already installed version will not be installed over, causing
confusion, for example tensorflow/tensorflow#7449