
How to use clang as a cuda compiler instead of nvcc? #46902

Open

HangJie720 opened this issue Oct 27, 2020 · 1 comment
Labels
enhancement Not as big of a feature, but technically not a bug. Should be easy to fix
module: build Build system issues
triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module

Comments


HangJie720 commented Oct 27, 2020

Can we use clang as the CUDA compiler instead of nvcc, via options such as 'TF_CUDA_CLANG' and 'CLANG_CUDA_COMPILER_PATH', similar to what tensorflow/third_party/gpus/cuda_configure.bzl provides? A sketch of that TensorFlow flow is below.
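For reference, this is roughly how those variables are consumed by TensorFlow's configure flow (a minimal sketch; the variable names come from cuda_configure.bzl, the clang path below is an assumed install location):

    # Sketch: building TensorFlow with clang as the CUDA compiler.
    # TF_CUDA_CLANG and CLANG_CUDA_COMPILER_PATH are read by cuda_configure.bzl;
    # the path is an assumption for illustration.
    export TF_CUDA_CLANG=1
    export CLANG_CUDA_COMPILER_PATH=/usr/bin/clang
    ./configure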

cc @malfet @seemethere @walterddr

@mrshenli mrshenli added module: build Build system issues triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module labels Oct 27, 2020
@malfet malfet added the enhancement Not as big of a feature, but technically not a bug. Should be easy to fix label Oct 27, 2020
@HangJie720 (Author) commented:

I tried to use clang to compile PyTorch v1.6.0, for example compiling 'pytorch/caffe2/operators/piecewise_linear_transform_op.cu':
/usr/bin/clang++ -D__CUDACC__ \
  -MD -MF /home/pytorch/build/caffe2/CMakeFiles/torch_cuda.dir/operators/torch_cuda_generated_piecewise_linear_transform_op.cu.o.NVCC-depend \
  --cuda-path=/usr/local/cuda --cuda-gpu-arch=sm_61 -x cuda -std=c++14 \
  -Xcompiler -fPIC \
  -DCUDA_HAS_FP16=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ \
  -I/usr/local/cuda/include -I/home/pytorch/build/aten/src -I/home/pytorch/aten/src -I/home/pytorch/build -I/home/pytorch \
  -I/home/pytorch/third_party/protobuf/src -I/usr/include -I/home/pytorch/cmake/../third_party/eigen \
  -I/usr/include/python3.6m -I/usr/local/lib/python3.6/dist-packages/numpy/core/include \
  -I/home/pytorch/cmake/../third_party/pybind11/include -I/home/pytorch/cmake/../third_party/cub \
  -I/home/pytorch/build/caffe2/contrib/aten -I/home/pytorch/third_party/onnx -I/home/pytorch/build/third_party/onnx \
  -I/home/pytorch/third_party/foxi -I/home/pytorch/build/third_party/foxi \
  -I/home/pytorch/build/caffe2/aten/src/TH -I/home/pytorch/aten/src/TH \
  -I/home/pytorch/build/caffe2/aten/src/THC -I/home/pytorch/aten/src/THC -I/home/pytorch/aten/src/THCUNN \
  -I/home/pytorch/aten/src/ATen/cuda -I/home/pytorch/build/caffe2/aten/src \
  -I/home/pytorch/aten/../third_party/catch/single_include -I/home/pytorch/aten/src/ATen/.. \
  -I/home/pytorch/build/caffe2/aten/src/ATen -I/home/pytorch/c10/cuda/../.. -I/home/pytorch/c10/../ \
  -I/home/pytorch/caffe2/../torch/csrc/api -I/home/pytorch/caffe2/../torch/csrc/api/include \
  /home/pytorch/caffe2/operators/piecewise_linear_transform_op.cu \
  -c -o /home/pytorch/build/caffe2/CMakeFiles/torch_cuda.dir/operators/./torch_cuda_generated_piecewise_linear_transform_op.cu.o

The error occurred as follows:

error: reference to __host__ function 'parallel_for<thrust::cuda_cub::for_each_f<thrust::zip_iterator<thrust::tuple<thrust::detail::normal_iterator<thrust::pointer<float, thrust::cuda_cub::par_t, thrust::use_default, thrust::use_default> >, thrust::detail::normal_iterator<thrust::pointer<long, thrust::cuda_cub::par_t, thrust::use_default, thrust::use_default> >, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type> >, thrust::detail::wrapped_function<thrust::system::detail::generic::detail::binary_search_functor<const float *, thrust::system::detail::generic::detail::binary_search_less, thrust::system::detail::generic::detail::lbf>, void> >, long>' in __host__ __device__ function

    cudaError_t status = __parallel_for::parallel_for(count, f, stream);
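This class of error can be reproduced outside Thrust with a minimal standalone file (an illustrative sketch, not the Thrust source; all names below are made up). clang defers this check and only errors once the __host__ __device__ function is actually emitted for the device, which is what happens when the Thrust trace above reaches the __host__ parallel_for:

    // repro.cu -- minimal sketch of the same class of error; hypothetical names.
    #include <cstdio>

    __host__ void host_only() { std::printf("host only\n"); }

    // A __host__ __device__ function may not reach a __host__-only function
    // on its device compilation path.
    __host__ __device__ void both() {
        host_only(); // error: reference to __host__ function in __host__ __device__ function
    }

    // Calling both() from a kernel forces its device-side emission, triggering the error.
    __global__ void kernel() { both(); }

    int main() {
        kernel<<<1, 1>>>();
        cudaDeviceSynchronize();
        return 0;
    }

Compiling this with clang++ -x cuda --cuda-path=/usr/local/cuda --cuda-gpu-arch=sm_61 repro.cu produces the same kind of "reference to __host__ function ... in __host__ __device__ function" diagnostic; nvcc has historically been more permissive about this pattern in Thrust, which is why the same sources build with nvcc but not with clang.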
