
Can you build TensorFlow 1.8 with CUDA 9.2 and publish the wheel? #43

Open
gaohongfein opened this issue Jun 5, 2018 · 11 comments

@gaohongfein

Hi,
thanks for your work building TensorFlow with the newest CUDA.
Can you build TensorFlow 1.8 with CUDA 9.2 and publish the wheel?

@DzianisH

DzianisH commented Jun 8, 2018

+1

@gaohongfein
Author

I have built it with CUDA 9.2.

@0xDaksh

0xDaksh commented Jun 24, 2018

I'm building; will post the results if I have any.

@alejandrohall

alejandrohall commented Jul 2, 2018

Hi @DakshMiglani

Did you compile it? Thanks in advance!

@0xDaksh

0xDaksh commented Jul 7, 2018

@alejandrohall it didn't work, sorry.

@beew

beew commented Jul 7, 2018

I have successfully built and run TensorFlow 1.8 and 1.9rc1 against CUDA 9.2 + Patch 1 and cuDNN 7.1, with Python 3.5 on Ubuntu 16.04. I installed the CUDA 9.2 components in a separate test folder (using the .run installers without sudo).

I source this script when building and whenever I run these versions of TF. (TF 1.9 was compiled with OpenMPI, but you need to change line 76 of tensorflow/tensorflow/contrib/mpi_collectives/kernels/mpi_ops.cc from se to stream_executor or the build will fail.)
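(A rough sketch of that one-line fix, assuming it is literally replacing the identifier se with stream_executor on that line; the line number may differ between releases:)

# assumption: line 76 is where the stale se alias is used; adjust if it has moved
sed -i '76s/\bse\b/stream_executor/' tensorflow/tensorflow/contrib/mpi_collectives/kernels/mpi_ops.cc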

$PYTHONUSERBASE is set to the test folder, so pip3 install --user installs the test TF wheel (only one of 1.8 or 1.9rc can exist at a time, of course) inside the test folder without messing up the system's version. To invoke it, you need to prepend $PYTHONPATH accordingly.

This way it invokes the test version of TF, which points at the matching version of CUDA (9.2 instead of the system's 9.1):

# root of the self-contained CUDA 9.2 test install
export PREFIX=/home/beew/opt/cuda_test/cuda92
export PATH=$PREFIX/cuda/bin:$PREFIX/bin:$PATH
export CUDA_SDK_ROOT_DIR=$PREFIX/samples/common
export TENSORRT_PATH=$PREFIX/TensorRT-4.0.1.6

# CUDA, CUPTI and TensorRT libraries from the test install come first
export LD_LIBRARY_PATH=$PREFIX/cuda/lib64:$PREFIX/cuda/extras/CUPTI/lib64:$LD_LIBRARY_PATH:$TENSORRT_PATH/lib

# pip3 install --user drops the test TF wheel under $PREFIX
export PYTHONUSERBASE=$PREFIX

export PYTHONPATH=$PREFIX/lib/python3.5/site-packages:$PYTHONPATH

export MPI_HOME=/usr/lib/openmpi

export CPATH=$PREFIX/include:$CPATH
export LIBRARY_PATH=$PREFIX/lib:$LIBRARY_PATH
export LD_LIBRARY_PATH=$PREFIX/lib:$LD_LIBRARY_PATH

# convenience alias to preload nvBLAS from the test CUDA
alias nvblas92="LD_PRELOAD=$PREFIX/cuda/lib64/libnvblas.so"
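(To use it, roughly: source the script above before building and before running, then install the wheel you built into the prefix via pip's --user mode. A sketch, with an assumed script name and an illustrative wheel path:)

source cuda92_env.sh   # the export script above, saved under an assumed name
pip3 install --user /tmp/tensorflow_pkg/tensorflow-1.8.0-cp35-cp35m-linux_x86_64.whl   # illustrative path/filename
python3 -c "import tensorflow as tf; print(tf.test.is_gpu_available())"   # confirms the CUDA 9.2 libraries load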

It seems that 1.8 without MKL is faster than 1.8 with MKL; same phenomenon with 1.9rc1.

But then I only have one GPU; maybe the multi-GPU parts don't come into play?

@missionfission

I have built a TensorFlow 1.9 wheel with CUDA 9.2 here: https://github.com/missionfission/tensorflow-wheel

@zychen423

Well, I got an error when installing @missionfission's build,
so I built it myself: TF 1.9 with CUDA 9.2.
https://github.com/chen0423/TF-1.9-cp36-cuda9.2-wheel

Hope this will help someone.
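(For anyone grabbing that wheel, a rough install sketch; the filename assumes Python 3.6 on 64-bit Linux, matching the cp36/linux_x86_64 tags in the repo's wheel:)

pip3 install tensorflow-1.9.0-cp36-cp36m-linux_x86_64.whl
python3 -c "import tensorflow as tf; print(tf.__version__)"   # expect 1.9.0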

@pranman

pranman commented Aug 21, 2018

@chen0423 This just saved me a few hours of compile time, cheers 👍

@lan2720

lan2720 commented Sep 6, 2018

@chen0423 Can you provide more information about how to build it myself? Thank you. When I used your .whl, an error was raised: "tensorflow-1.9.0-cp36-cp36m-linux_x86_64.whl is not a supported wheel on this platform."

@zychen423

zychen423 commented Sep 6, 2018

@lan2720
The details of building it will take some time to write up; I will try to write them down in a couple of weeks.
As quick advice, check your pip version, Python version, and OS to see which wheel tags your pip supports (a rough check is sketched below).
I might be able to help you if you post that information here.
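(A rough way to check which wheel tags your environment accepts; the pep425tags module location depends on the pip version, and newer pip can print the same thing with pip debug --verbose:)

python3 --version
pip3 --version
python3 -c "from pip._internal.pep425tags import get_supported; print(get_supported())"   # pip 10-19
# on very old pip: python3 -c "from pip.pep425tags import get_supported; print(get_supported())"
# look for a ('cp36', 'cp36m', 'linux_x86_64') entry; without it the wheel above cannot be installed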

Moreover, you can take a look at:

  1. https://stackoverflow.com/questions/28568070/filename-whl-is-not-supported-wheel-on-this-platform
  2. https://stackoverflow.com/questions/38866758/filename-whl-is-not-a-supported-wheel-on-this-platform
  3. (edit2) https://www.python36.com/how-to-install-tensorflow-gpu-with-cuda-9-2-for-python-on-ubuntu/

(edit1) On second thought, it may be inappropriate to discuss this in this thread, since it is really an issue with my repo.
If that's the case, I am sorry about that.

Hope this helps and please forgive my poor language.
