XLA: Could not open input file: Is a directory #8947
Comments
This is a package we do not maintain. Either they can escalate to us with information about how they built the package, or you can share the information you got from the package maintainers with us.
The package is built using Bazel:

```shell
TF_ROOT_DIR=$HOME/git/tensorflow
mkdir -p $HOME/git
if [ -d $TF_ROOT_DIR ]; then
    cd $TF_ROOT_DIR
    git pull
else
    cd $HOME/git
    git clone https://github.com/tensorflow/tensorflow
    cd $TF_ROOT_DIR
fi
git checkout r1.1

echo $PREFIX
bazel clean
echo $PYTHON_BIN_PATH

PYTHON_BIN_PATH=$(which python) \
PYTHON_LIB_PATH=$PREFIX/lib/python3.6/site-packages \
TF_NEED_MKL=1 \
MKL_INSTALL_PATH=$PREFIX \
CC_OPT_FLAGS="-march=native" \
TF_NEED_JEMALLOC=1 \
TF_NEED_GCP=0 \
TF_NEED_HDFS=0 \
TF_ENABLE_XLA=1 \
TF_NEED_OPENCL=0 \
TF_NEED_CUDA=1 \
GCC_HOST_COMPILER_PATH=$(which gcc) \
TF_CUDA_VERSION="8.0" \
CUDA_TOOLKIT_PATH=$PREFIX \
TF_CUDNN_VERSION=6 \
CUDNN_INSTALL_PATH=$PREFIX \
TF_CUDA_COMPUTE_CAPABILITIES=6.1 \
./configure

bazel build -c opt --copt=-march=native --copt=-mavx --copt=-mavx2 --copt=-mfma --copt=-mfpmath=both --copt=-msse4.2 --config=cuda -k //tensorflow/tools/pip_package:build_pip_package

rm -rf /tmp/tensorflow_pkg
mkdir -p /tmp/tensorflow_pkg
bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
pip install $(ls /tmp/tensorflow_pkg/tensorflow*)
```
No, (at least) I have never seen this issue before. Note that the code example is setting CUDA_VISIBLE_DEVICES=0 and then enabling session-level JIT. Session-level JIT only supports GPU, as explained here (in the starred blue box): It would be nice not to return a cryptic error, but at a high level, setting CUDA_VISIBLE_DEVICES=0 and then enabling session-level JIT is, at best, not going to turn XLA on anyway. I'd advise against doing this.
Oops, sorry, brain freeze. I just realized CUDA_VISIBLE_DEVICES=0 selects the 0th device, and the logs show it is being detected. So my response is back to: "No, I've never seen this; we should probably debug."
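The distinction the two comments above hinge on is easy to trip over: `CUDA_VISIBLE_DEVICES=0` exposes GPU index 0, while an *empty* value hides all GPUs. A small stdlib-only sketch of how a process interprets the variable (the helper name is ours, for illustration):

```python
def visible_gpu_indices(env_value):
    """Interpret CUDA_VISIBLE_DEVICES the way the CUDA runtime does:
    unset means all devices are visible, an empty string hides every
    GPU, and "0" exposes only device index 0."""
    if env_value is None:
        return None  # unset: all devices visible
    if env_value.strip() == "":
        return []    # empty: no devices visible
    return [int(i) for i in env_value.split(",")]

print(visible_gpu_indices("0"))    # [0]  -- selects the 0th device
print(visible_gpu_indices(""))     # []   -- hides all GPUs
print(visible_gpu_indices("0,1"))  # [0, 1]
```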
I suspect it's because TensorFlow cannot find the CUDA libraries, though I'm not sure, since I'm not sure what … In particular, I'm interested in the log messages from …
Will help with issues like tensorflow#8947. Change: 152733558
Closing due to inactivity. If you're still running into this, please feel free to file an updated issue (including any output from the suggestions above). Thanks!
XLA failed with Could not open input file: Is a directory
Environment info
Operating System: Ubuntu 16.04
Installed version of CUDA and cuDNN (please attach the output of `ls -l /path/to/cuda/lib/libcud*`):

[code setup]
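One way to gather the library listing requested above. The `/usr/local/cuda` default here is an assumption, not part of the original report; the build in this thread used `$PREFIX` as both `CUDA_TOOLKIT_PATH` and `CUDNN_INSTALL_PATH`, so adjust the prefix accordingly:

```shell
# Assumed CUDA prefix -- override CUDA_PREFIX to match your install
# (e.g. the $PREFIX used in the configure step above).
CUDA_PREFIX=${CUDA_PREFIX:-/usr/local/cuda}
ls -l "$CUDA_PREFIX"/lib64/libcud* "$CUDA_PREFIX"/lib/libcud* 2>/dev/null \
  || echo "no libcud* libraries found under $CUDA_PREFIX"
```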
If installed from binary pip package, provide:

[code init]
Log