
If the CUDA version cannot be downgraded to 10.2, how can I reproduce your network? #66

Closed
LeopoldACC opened this issue May 5, 2021 · 21 comments

@LeopoldACC

My machine's GPU is an RTX 3090, which is based on the Ampere architecture and supports CUDA 11.0 as the oldest version. So is there any way to use the CPU to reproduce your work?
The error log from trying to install on my machine is shown below:

python setup.py install
running install
running bdist_egg
running egg_info
writing torchsparse.egg-info/PKG-INFO
writing dependency_links to torchsparse.egg-info/dependency_links.txt
writing top-level names to torchsparse.egg-info/top_level.txt
/home/gz/anaconda3/envs/pc/lib/python3.8/site-packages/torch/utils/cpp_extension.py:339: UserWarning: Attempted to use ninja as the BuildExtension backend but we could not find ninja.. Falling back to using the slow distutils backend.
  warnings.warn(msg.format('we could not find ninja.'))
reading manifest file 'torchsparse.egg-info/SOURCES.txt'
writing manifest file 'torchsparse.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_py
running build_ext
building 'torchsparse_backend' extension
gcc -pthread -B /home/gz/anaconda3/envs/pc/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/gz/anaconda3/envs/pc/lib/python3.8/site-packages/torch/include -I/home/gz/anaconda3/envs/pc/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/gz/anaconda3/envs/pc/lib/python3.8/site-packages/torch/include/TH -I/home/gz/anaconda3/envs/pc/lib/python3.8/site-packages/torch/include/THC -I:/usr/local/cuda/include -I/home/gz/anaconda3/envs/pc/include/python3.8 -c torchsparse/src/torchsparse_bindings_gpu.cpp -o build/temp.linux-x86_64-3.8/torchsparse/src/torchsparse_bindings_gpu.o -g -O3 -fopenmp -lgomp -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=torchsparse_backend -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
In file included from torchsparse/src/convolution/convolution_gpu.h:4,
                 from torchsparse/src/torchsparse_bindings_gpu.cpp:9:
/home/gz/anaconda3/envs/pc/lib/python3.8/site-packages/torch/include/ATen/cuda/CUDAContext.h:5:10: fatal error: cuda_runtime_api.h: No such file or directory
    5 | #include <cuda_runtime_api.h>
      |          ^~~~~~~~~~~~~~~~~~~~
compilation terminated.
error: command 'gcc' failed with exit status 1

Thanks!

@digital-idiot
Contributor

digital-idiot commented May 7, 2021

@LeopoldACC Torchsparse compiles fine with CUDA 11.3 and PyTorch (GPU) 1.8.1. It seems your LD_LIBRARY_PATH is not set properly. I can share a compiled wheel if you want.


+-----------------------------------------------------------------------------+
| NVIDIA-SMI 465.27       Driver Version: 465.27       CUDA Version: 11.3     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  Off  | 00000000:01:00.0 Off |                  N/A |
| N/A   51C    P0    20W /  N/A |      5MiB /  7982MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      1029      G   /usr/lib/Xorg                       4MiB |
+-----------------------------------------------------------------------------+
$ python setup.py bdist_wheel
running bdist_wheel
running build
running build_py
creating build
creating build/lib.linux-x86_64-3.8
creating build/lib.linux-x86_64-3.8/torchsparse
copying torchsparse/sparse_tensor.py -> build/lib.linux-x86_64-3.8/torchsparse
copying torchsparse/point_tensor.py -> build/lib.linux-x86_64-3.8/torchsparse
copying torchsparse/__init__.py -> build/lib.linux-x86_64-3.8/torchsparse
creating build/lib.linux-x86_64-3.8/torchsparse/utils
copying torchsparse/utils/kernel_region.py -> build/lib.linux-x86_64-3.8/torchsparse/utils
copying torchsparse/utils/helpers.py -> build/lib.linux-x86_64-3.8/torchsparse/utils
copying torchsparse/utils/__init__.py -> build/lib.linux-x86_64-3.8/torchsparse/utils
creating build/lib.linux-x86_64-3.8/torchsparse/nn
copying torchsparse/nn/__init__.py -> build/lib.linux-x86_64-3.8/torchsparse/nn
creating build/lib.linux-x86_64-3.8/torchsparse/nn/modules
copying torchsparse/nn/modules/pooling.py -> build/lib.linux-x86_64-3.8/torchsparse/nn/modules
copying torchsparse/nn/modules/norm.py -> build/lib.linux-x86_64-3.8/torchsparse/nn/modules
copying torchsparse/nn/modules/crop.py -> build/lib.linux-x86_64-3.8/torchsparse/nn/modules
copying torchsparse/nn/modules/conv.py -> build/lib.linux-x86_64-3.8/torchsparse/nn/modules
copying torchsparse/nn/modules/activation.py -> build/lib.linux-x86_64-3.8/torchsparse/nn/modules
copying torchsparse/nn/modules/__init__.py -> build/lib.linux-x86_64-3.8/torchsparse/nn/modules
creating build/lib.linux-x86_64-3.8/torchsparse/nn/functional
copying torchsparse/nn/functional/voxelize.py -> build/lib.linux-x86_64-3.8/torchsparse/nn/functional
copying torchsparse/nn/functional/query.py -> build/lib.linux-x86_64-3.8/torchsparse/nn/functional
copying torchsparse/nn/functional/pooling.py -> build/lib.linux-x86_64-3.8/torchsparse/nn/functional
copying torchsparse/nn/functional/hash.py -> build/lib.linux-x86_64-3.8/torchsparse/nn/functional
copying torchsparse/nn/functional/downsample.py -> build/lib.linux-x86_64-3.8/torchsparse/nn/functional
copying torchsparse/nn/functional/devox.py -> build/lib.linux-x86_64-3.8/torchsparse/nn/functional
copying torchsparse/nn/functional/crop.py -> build/lib.linux-x86_64-3.8/torchsparse/nn/functional
copying torchsparse/nn/functional/count.py -> build/lib.linux-x86_64-3.8/torchsparse/nn/functional
copying torchsparse/nn/functional/convert_neighbor_map.py -> build/lib.linux-x86_64-3.8/torchsparse/nn/functional
copying torchsparse/nn/functional/conv.py -> build/lib.linux-x86_64-3.8/torchsparse/nn/functional
copying torchsparse/nn/functional/activation.py -> build/lib.linux-x86_64-3.8/torchsparse/nn/functional
copying torchsparse/nn/functional/__init__.py -> build/lib.linux-x86_64-3.8/torchsparse/nn/functional
running build_ext
building 'torchsparse_backend' extension
creating /tmp/torchsparse/build/temp.linux-x86_64-3.8
creating /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse
creating /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src
creating /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/convolution
creating /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/hash
creating /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/hashmap
creating /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/interpolation
creating /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/others
/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/utils/cpp_extension.py:283: UserWarning: 

                               !! WARNING !!

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Your compiler (c++) is not compatible with the compiler Pytorch was
built with for this platform, which is g++ on linux. Please
use g++ to to compile your extension. Alternatively, you may
compile PyTorch from source using c++, and then you can also use
c++ to compile your extension.

See https://github.com/pytorch/pytorch/blob/master/CONTRIBUTING.md for help
with compiling PyTorch from source.
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

                              !! WARNING !!

  warnings.warn(WRONG_COMPILER_WARNING.format(
Emitting ninja build file /tmp/torchsparse/build/temp.linux-x86_64-3.8/build.ninja...
Compiling objects...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/25] c++ -MMD -MF /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/hashmap/hashmap_cpu.o.d -pthread -B /home/abhisek/.conda/envs/Py3Dev/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/TH -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/THC -I/opt/cuda/include -I/home/abhisek/.conda/envs/Py3Dev/include/python3.8 -c -c /tmp/torchsparse/torchsparse/src/hashmap/hashmap_cpu.cpp -o /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/hashmap/hashmap_cpu.o -g -O3 -fopenmp -lgomp -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=torchsparse_backend -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
cc1plus: warning: command-line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
/tmp/torchsparse/torchsparse/src/hashmap/hashmap_cpu.cpp: In member function ‘int HashTableCPU::insert_vals(const int64_t*, const int64_t*, int)’:
/tmp/torchsparse/torchsparse/src/hashmap/hashmap_cpu.cpp:37:1: warning: no return statement in function returning non-void [-Wreturn-type]
   37 | }
      | ^
[2/25] /opt/cuda/bin/nvcc --generate-dependencies-with-compile --dependency-output /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/hash/hash_gpu.o.d -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/TH -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/THC -I/opt/cuda/include -I/home/abhisek/.conda/envs/Py3Dev/include/python3.8 -c -c /tmp/torchsparse/torchsparse/src/hash/hash_gpu.cu -o /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/hash/hash_gpu.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -O3 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=torchsparse_backend -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_75,code=compute_75 -gencode=arch=compute_75,code=sm_75 -std=c++14
[3/25] /opt/cuda/bin/nvcc --generate-dependencies-with-compile --dependency-output /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/hashmap/hashmap.o.d -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/TH -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/THC -I/opt/cuda/include -I/home/abhisek/.conda/envs/Py3Dev/include/python3.8 -c -c /tmp/torchsparse/torchsparse/src/hashmap/hashmap.cu -o /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/hashmap/hashmap.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -O3 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=torchsparse_backend -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_75,code=compute_75 -gencode=arch=compute_75,code=sm_75 -std=c++14
/tmp/torchsparse/torchsparse/src/hashmap/hashmap.cu(30): warning: argument is incompatible with corresponding format string conversion

[4/25] /opt/cuda/bin/nvcc --generate-dependencies-with-compile --dependency-output /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/others/convert_neighbor_map_gpu.o.d -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/TH -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/THC -I/opt/cuda/include -I/home/abhisek/.conda/envs/Py3Dev/include/python3.8 -c -c /tmp/torchsparse/torchsparse/src/others/convert_neighbor_map_gpu.cu -o /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/others/convert_neighbor_map_gpu.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -O3 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=torchsparse_backend -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_75,code=compute_75 -gencode=arch=compute_75,code=sm_75 -std=c++14
[5/25] /opt/cuda/bin/nvcc --generate-dependencies-with-compile --dependency-output /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/interpolation/devox_gpu.o.d -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/TH -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/THC -I/opt/cuda/include -I/home/abhisek/.conda/envs/Py3Dev/include/python3.8 -c -c /tmp/torchsparse/torchsparse/src/interpolation/devox_gpu.cu -o /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/interpolation/devox_gpu.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -O3 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=torchsparse_backend -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_75,code=compute_75 -gencode=arch=compute_75,code=sm_75 -std=c++14
[6/25] /opt/cuda/bin/nvcc --generate-dependencies-with-compile --dependency-output /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/interpolation/devox_deterministic_gpu.o.d -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/TH -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/THC -I/opt/cuda/include -I/home/abhisek/.conda/envs/Py3Dev/include/python3.8 -c -c /tmp/torchsparse/torchsparse/src/interpolation/devox_deterministic_gpu.cu -o /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/interpolation/devox_deterministic_gpu.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -O3 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=torchsparse_backend -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_75,code=compute_75 -gencode=arch=compute_75,code=sm_75 -std=c++14
[7/25] /opt/cuda/bin/nvcc --generate-dependencies-with-compile --dependency-output /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/convolution/convolution_gpu.o.d -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/TH -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/THC -I/opt/cuda/include -I/home/abhisek/.conda/envs/Py3Dev/include/python3.8 -c -c /tmp/torchsparse/torchsparse/src/convolution/convolution_gpu.cu -o /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/convolution/convolution_gpu.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -O3 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=torchsparse_backend -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_75,code=compute_75 -gencode=arch=compute_75,code=sm_75 -std=c++14
[8/25] /opt/cuda/bin/nvcc --generate-dependencies-with-compile --dependency-output /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/others/count_gpu.o.d -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/TH -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/THC -I/opt/cuda/include -I/home/abhisek/.conda/envs/Py3Dev/include/python3.8 -c -c /tmp/torchsparse/torchsparse/src/others/count_gpu.cu -o /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/others/count_gpu.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -O3 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=torchsparse_backend -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_75,code=compute_75 -gencode=arch=compute_75,code=sm_75 -std=c++14
[9/25] /opt/cuda/bin/nvcc --generate-dependencies-with-compile --dependency-output /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/others/insertion_gpu.o.d -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/TH -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/THC -I/opt/cuda/include -I/home/abhisek/.conda/envs/Py3Dev/include/python3.8 -c -c /tmp/torchsparse/torchsparse/src/others/insertion_gpu.cu -o /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/others/insertion_gpu.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -O3 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=torchsparse_backend -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_75,code=compute_75 -gencode=arch=compute_75,code=sm_75 -std=c++14
[10/25] c++ -MMD -MF /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/hash/hash_cpu.o.d -pthread -B /home/abhisek/.conda/envs/Py3Dev/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/TH -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/THC -I/opt/cuda/include -I/home/abhisek/.conda/envs/Py3Dev/include/python3.8 -c -c /tmp/torchsparse/torchsparse/src/hash/hash_cpu.cpp -o /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/hash/hash_cpu.o -g -O3 -fopenmp -lgomp -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=torchsparse_backend -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
cc1plus: warning: command-line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
[11/25] c++ -MMD -MF /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/hash/hash.o.d -pthread -B /home/abhisek/.conda/envs/Py3Dev/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/TH -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/THC -I/opt/cuda/include -I/home/abhisek/.conda/envs/Py3Dev/include/python3.8 -c -c /tmp/torchsparse/torchsparse/src/hash/hash.cpp -o /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/hash/hash.o -g -O3 -fopenmp -lgomp -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=torchsparse_backend -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
cc1plus: warning: command-line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
[12/25] c++ -MMD -MF /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/convolution/convolution_cpu.o.d -pthread -B /home/abhisek/.conda/envs/Py3Dev/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/TH -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/THC -I/opt/cuda/include -I/home/abhisek/.conda/envs/Py3Dev/include/python3.8 -c -c /tmp/torchsparse/torchsparse/src/convolution/convolution_cpu.cpp -o /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/convolution/convolution_cpu.o -g -O3 -fopenmp -lgomp -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=torchsparse_backend -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
cc1plus: warning: command-line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
[13/25] c++ -MMD -MF /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/interpolation/devox.o.d -pthread -B /home/abhisek/.conda/envs/Py3Dev/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/TH -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/THC -I/opt/cuda/include -I/home/abhisek/.conda/envs/Py3Dev/include/python3.8 -c -c /tmp/torchsparse/torchsparse/src/interpolation/devox.cpp -o /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/interpolation/devox.o -g -O3 -fopenmp -lgomp -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=torchsparse_backend -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
cc1plus: warning: command-line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
/tmp/torchsparse/torchsparse/src/interpolation/devox.cpp: In function ‘at::Tensor devoxelize_forward(at::Tensor, at::Tensor, at::Tensor)’:
/tmp/torchsparse/torchsparse/src/interpolation/devox.cpp:15:7: warning: unused variable ‘b’ [-Wunused-variable]
   15 |   int b = feat.size(0);
      |       ^
[14/25] c++ -MMD -MF /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/interpolation/devox_deterministic.o.d -pthread -B /home/abhisek/.conda/envs/Py3Dev/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/TH -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/THC -I/opt/cuda/include -I/home/abhisek/.conda/envs/Py3Dev/include/python3.8 -c -c /tmp/torchsparse/torchsparse/src/interpolation/devox_deterministic.cpp -o /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/interpolation/devox_deterministic.o -g -O3 -fopenmp -lgomp -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=torchsparse_backend -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
cc1plus: warning: command-line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
/tmp/torchsparse/torchsparse/src/interpolation/devox_deterministic.cpp: In function ‘at::Tensor deterministic_devoxelize_forward(at::Tensor, at::Tensor, at::Tensor)’:
/tmp/torchsparse/torchsparse/src/interpolation/devox_deterministic.cpp:12:7: warning: unused variable ‘b’ [-Wunused-variable]
   12 |   int b = feat.size(0);
      |       ^
[15/25] c++ -MMD -MF /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/convolution/convolution.o.d -pthread -B /home/abhisek/.conda/envs/Py3Dev/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/TH -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/THC -I/opt/cuda/include -I/home/abhisek/.conda/envs/Py3Dev/include/python3.8 -c -c /tmp/torchsparse/torchsparse/src/convolution/convolution.cpp -o /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/convolution/convolution.o -g -O3 -fopenmp -lgomp -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=torchsparse_backend -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
cc1plus: warning: command-line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
[16/25] c++ -MMD -MF /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/others/count.o.d -pthread -B /home/abhisek/.conda/envs/Py3Dev/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/TH -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/THC -I/opt/cuda/include -I/home/abhisek/.conda/envs/Py3Dev/include/python3.8 -c -c /tmp/torchsparse/torchsparse/src/others/count.cpp -o /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/others/count.o -g -O3 -fopenmp -lgomp -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=torchsparse_backend -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
cc1plus: warning: command-line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
[17/25] c++ -MMD -MF /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/interpolation/devox_cpu.o.d -pthread -B /home/abhisek/.conda/envs/Py3Dev/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/TH -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/THC -I/opt/cuda/include -I/home/abhisek/.conda/envs/Py3Dev/include/python3.8 -c -c /tmp/torchsparse/torchsparse/src/interpolation/devox_cpu.cpp -o /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/interpolation/devox_cpu.o -g -O3 -fopenmp -lgomp -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=torchsparse_backend -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
cc1plus: warning: command-line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
/tmp/torchsparse/torchsparse/src/interpolation/devox_cpu.cpp: In function ‘at::Tensor cpu_devoxelize_forward(at::Tensor, at::Tensor, at::Tensor)’:
/tmp/torchsparse/torchsparse/src/interpolation/devox_cpu.cpp:12:7: warning: unused variable ‘b’ [-Wunused-variable]
   12 |   int b = feat.size(0);
      |       ^
[18/25] c++ -MMD -MF /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/others/convert_neighbor_map.o.d -pthread -B /home/abhisek/.conda/envs/Py3Dev/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/TH -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/THC -I/opt/cuda/include -I/home/abhisek/.conda/envs/Py3Dev/include/python3.8 -c -c /tmp/torchsparse/torchsparse/src/others/convert_neighbor_map.cpp -o /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/others/convert_neighbor_map.o -g -O3 -fopenmp -lgomp -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=torchsparse_backend -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
cc1plus: warning: command-line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
[19/25] c++ -MMD -MF /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/others/convert_neighbor_map_cpu.o.d -pthread -B /home/abhisek/.conda/envs/Py3Dev/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/TH -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/THC -I/opt/cuda/include -I/home/abhisek/.conda/envs/Py3Dev/include/python3.8 -c -c /tmp/torchsparse/torchsparse/src/others/convert_neighbor_map_cpu.cpp -o /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/others/convert_neighbor_map_cpu.o -g -O3 -fopenmp -lgomp -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=torchsparse_backend -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
cc1plus: warning: command-line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
[20/25] c++ -MMD -MF /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/others/insertion.o.d -pthread -B /home/abhisek/.conda/envs/Py3Dev/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/TH -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/THC -I/opt/cuda/include -I/home/abhisek/.conda/envs/Py3Dev/include/python3.8 -c -c /tmp/torchsparse/torchsparse/src/others/insertion.cpp -o /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/others/insertion.o -g -O3 -fopenmp -lgomp -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=torchsparse_backend -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
cc1plus: warning: command-line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
[21/25] c++ -MMD -MF /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/others/count_cpu.o.d -pthread -B /home/abhisek/.conda/envs/Py3Dev/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/TH -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/THC -I/opt/cuda/include -I/home/abhisek/.conda/envs/Py3Dev/include/python3.8 -c -c /tmp/torchsparse/torchsparse/src/others/count_cpu.cpp -o /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/others/count_cpu.o -g -O3 -fopenmp -lgomp -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=torchsparse_backend -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
cc1plus: warning: command-line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
[22/25] c++ -MMD -MF /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/others/insertion_cpu.o.d -pthread -B /home/abhisek/.conda/envs/Py3Dev/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/TH -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/THC -I/opt/cuda/include -I/home/abhisek/.conda/envs/Py3Dev/include/python3.8 -c -c /tmp/torchsparse/torchsparse/src/others/insertion_cpu.cpp -o /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/others/insertion_cpu.o -g -O3 -fopenmp -lgomp -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=torchsparse_backend -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
cc1plus: warning: command-line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
/tmp/torchsparse/torchsparse/src/others/insertion_cpu.cpp: In function ‘at::Tensor cpu_insertion_backward(at::Tensor, at::Tensor, at::Tensor, int)’:
/tmp/torchsparse/torchsparse/src/others/insertion_cpu.cpp:39:7: warning: unused variable ‘N1’ [-Wunused-variable]
   39 |   int N1 = counts.size(0);
      |       ^~
[23/25] c++ -MMD -MF /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/torchsparse_bindings_gpu.o.d -pthread -B /home/abhisek/.conda/envs/Py3Dev/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/TH -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/THC -I/opt/cuda/include -I/home/abhisek/.conda/envs/Py3Dev/include/python3.8 -c -c /tmp/torchsparse/torchsparse/src/torchsparse_bindings_gpu.cpp -o /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/torchsparse_bindings_gpu.o -g -O3 -fopenmp -lgomp -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=torchsparse_backend -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
cc1plus: warning: command-line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
[24/25] c++ -MMD -MF /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/others/query.o.d -pthread -B /home/abhisek/.conda/envs/Py3Dev/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/TH -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/THC -I/opt/cuda/include -I/home/abhisek/.conda/envs/Py3Dev/include/python3.8 -c -c /tmp/torchsparse/torchsparse/src/others/query.cpp -o /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/others/query.o -g -O3 -fopenmp -lgomp -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=torchsparse_backend -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
cc1plus: warning: command-line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
[25/25] c++ -MMD -MF /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/others/query_cpu.o.d -pthread -B /home/abhisek/.conda/envs/Py3Dev/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/TH -I/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/include/THC -I/opt/cuda/include -I/home/abhisek/.conda/envs/Py3Dev/include/python3.8 -c -c /tmp/torchsparse/torchsparse/src/others/query_cpu.cpp -o /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/others/query_cpu.o -g -O3 -fopenmp -lgomp -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=torchsparse_backend -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
cc1plus: warning: command-line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
g++ -pthread -shared -B /home/abhisek/.conda/envs/Py3Dev/compiler_compat -L/home/abhisek/.conda/envs/Py3Dev/lib -Wl,-rpath=/home/abhisek/.conda/envs/Py3Dev/lib -Wl,--no-as-needed -Wl,--sysroot=/ /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/torchsparse_bindings_gpu.o /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/convolution/convolution_cpu.o /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/convolution/convolution.o /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/convolution/convolution_gpu.o /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/hash/hash_cpu.o /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/hash/hash.o /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/hash/hash_gpu.o /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/hashmap/hashmap.o /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/hashmap/hashmap_cpu.o /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/interpolation/devox.o /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/interpolation/devox_gpu.o /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/interpolation/devox_deterministic.o /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/interpolation/devox_deterministic_gpu.o /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/interpolation/devox_cpu.o /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/others/convert_neighbor_map.o /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/others/convert_neighbor_map_gpu.o /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/others/convert_neighbor_map_cpu.o /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/others/count.o /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/others/count_gpu.o /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/others/count_cpu.o /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/others/insertion.o /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/others/insertion_gpu.o /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/others/insertion_cpu.o /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/others/query.o /tmp/torchsparse/build/temp.linux-x86_64-3.8/torchsparse/src/others/query_cpu.o -L/home/abhisek/.conda/envs/Py3Dev/lib/python3.8/site-packages/torch/lib -L/opt/cuda/lib64 -lc10 -ltorch -ltorch_cpu -ltorch_python -lcudart -lc10_cuda -ltorch_cuda_cu -ltorch_cuda_cpp -o build/lib.linux-x86_64-3.8/torchsparse_backend.cpython-38-x86_64-linux-gnu.so
installing to build/bdist.linux-x86_64/wheel
running install
running install_lib
creating build/bdist.linux-x86_64
creating build/bdist.linux-x86_64/wheel
copying build/lib.linux-x86_64-3.8/torchsparse_backend.cpython-38-x86_64-linux-gnu.so -> build/bdist.linux-x86_64/wheel
creating build/bdist.linux-x86_64/wheel/torchsparse
creating build/bdist.linux-x86_64/wheel/torchsparse/nn
creating build/bdist.linux-x86_64/wheel/torchsparse/nn/functional
copying build/lib.linux-x86_64-3.8/torchsparse/nn/functional/__init__.py -> build/bdist.linux-x86_64/wheel/torchsparse/nn/functional
copying build/lib.linux-x86_64-3.8/torchsparse/nn/functional/activation.py -> build/bdist.linux-x86_64/wheel/torchsparse/nn/functional
copying build/lib.linux-x86_64-3.8/torchsparse/nn/functional/conv.py -> build/bdist.linux-x86_64/wheel/torchsparse/nn/functional
copying build/lib.linux-x86_64-3.8/torchsparse/nn/functional/convert_neighbor_map.py -> build/bdist.linux-x86_64/wheel/torchsparse/nn/functional
copying build/lib.linux-x86_64-3.8/torchsparse/nn/functional/count.py -> build/bdist.linux-x86_64/wheel/torchsparse/nn/functional
copying build/lib.linux-x86_64-3.8/torchsparse/nn/functional/crop.py -> build/bdist.linux-x86_64/wheel/torchsparse/nn/functional
copying build/lib.linux-x86_64-3.8/torchsparse/nn/functional/devox.py -> build/bdist.linux-x86_64/wheel/torchsparse/nn/functional
copying build/lib.linux-x86_64-3.8/torchsparse/nn/functional/downsample.py -> build/bdist.linux-x86_64/wheel/torchsparse/nn/functional
copying build/lib.linux-x86_64-3.8/torchsparse/nn/functional/hash.py -> build/bdist.linux-x86_64/wheel/torchsparse/nn/functional
copying build/lib.linux-x86_64-3.8/torchsparse/nn/functional/pooling.py -> build/bdist.linux-x86_64/wheel/torchsparse/nn/functional
copying build/lib.linux-x86_64-3.8/torchsparse/nn/functional/query.py -> build/bdist.linux-x86_64/wheel/torchsparse/nn/functional
copying build/lib.linux-x86_64-3.8/torchsparse/nn/functional/voxelize.py -> build/bdist.linux-x86_64/wheel/torchsparse/nn/functional
creating build/bdist.linux-x86_64/wheel/torchsparse/nn/modules
copying build/lib.linux-x86_64-3.8/torchsparse/nn/modules/__init__.py -> build/bdist.linux-x86_64/wheel/torchsparse/nn/modules
copying build/lib.linux-x86_64-3.8/torchsparse/nn/modules/activation.py -> build/bdist.linux-x86_64/wheel/torchsparse/nn/modules
copying build/lib.linux-x86_64-3.8/torchsparse/nn/modules/conv.py -> build/bdist.linux-x86_64/wheel/torchsparse/nn/modules
copying build/lib.linux-x86_64-3.8/torchsparse/nn/modules/crop.py -> build/bdist.linux-x86_64/wheel/torchsparse/nn/modules
copying build/lib.linux-x86_64-3.8/torchsparse/nn/modules/norm.py -> build/bdist.linux-x86_64/wheel/torchsparse/nn/modules
copying build/lib.linux-x86_64-3.8/torchsparse/nn/modules/pooling.py -> build/bdist.linux-x86_64/wheel/torchsparse/nn/modules
copying build/lib.linux-x86_64-3.8/torchsparse/nn/__init__.py -> build/bdist.linux-x86_64/wheel/torchsparse/nn
creating build/bdist.linux-x86_64/wheel/torchsparse/utils
copying build/lib.linux-x86_64-3.8/torchsparse/utils/__init__.py -> build/bdist.linux-x86_64/wheel/torchsparse/utils
copying build/lib.linux-x86_64-3.8/torchsparse/utils/helpers.py -> build/bdist.linux-x86_64/wheel/torchsparse/utils
copying build/lib.linux-x86_64-3.8/torchsparse/utils/kernel_region.py -> build/bdist.linux-x86_64/wheel/torchsparse/utils
copying build/lib.linux-x86_64-3.8/torchsparse/__init__.py -> build/bdist.linux-x86_64/wheel/torchsparse
copying build/lib.linux-x86_64-3.8/torchsparse/point_tensor.py -> build/bdist.linux-x86_64/wheel/torchsparse
copying build/lib.linux-x86_64-3.8/torchsparse/sparse_tensor.py -> build/bdist.linux-x86_64/wheel/torchsparse
running install_egg_info
running egg_info
creating torchsparse.egg-info
writing torchsparse.egg-info/PKG-INFO
writing dependency_links to torchsparse.egg-info/dependency_links.txt
writing top-level names to torchsparse.egg-info/top_level.txt
writing manifest file 'torchsparse.egg-info/SOURCES.txt'
writing manifest file 'torchsparse.egg-info/SOURCES.txt'
Copying torchsparse.egg-info to build/bdist.linux-x86_64/wheel/torchsparse-1.2.0-py3.8.egg-info
running install_scripts
adding license file "LICENSE" (matched pattern "LICEN[CS]E*")
creating build/bdist.linux-x86_64/wheel/torchsparse-1.2.0.dist-info/WHEEL
creating 'dist/torchsparse-1.2.0-cp38-cp38-linux_x86_64.whl' and adding 'build/bdist.linux-x86_64/wheel' to it
adding 'torchsparse_backend.cpython-38-x86_64-linux-gnu.so'
adding 'torchsparse/__init__.py'
adding 'torchsparse/point_tensor.py'
adding 'torchsparse/sparse_tensor.py'
adding 'torchsparse/nn/__init__.py'
adding 'torchsparse/nn/functional/__init__.py'
adding 'torchsparse/nn/functional/activation.py'
adding 'torchsparse/nn/functional/conv.py'
adding 'torchsparse/nn/functional/convert_neighbor_map.py'
adding 'torchsparse/nn/functional/count.py'
adding 'torchsparse/nn/functional/crop.py'
adding 'torchsparse/nn/functional/devox.py'
adding 'torchsparse/nn/functional/downsample.py'
adding 'torchsparse/nn/functional/hash.py'
adding 'torchsparse/nn/functional/pooling.py'
adding 'torchsparse/nn/functional/query.py'
adding 'torchsparse/nn/functional/voxelize.py'
adding 'torchsparse/nn/modules/__init__.py'
adding 'torchsparse/nn/modules/activation.py'
adding 'torchsparse/nn/modules/conv.py'
adding 'torchsparse/nn/modules/crop.py'
adding 'torchsparse/nn/modules/norm.py'
adding 'torchsparse/nn/modules/pooling.py'
adding 'torchsparse/utils/__init__.py'
adding 'torchsparse/utils/helpers.py'
adding 'torchsparse/utils/kernel_region.py'
adding 'torchsparse-1.2.0.dist-info/LICENSE'
adding 'torchsparse-1.2.0.dist-info/METADATA'
adding 'torchsparse-1.2.0.dist-info/WHEEL'
adding 'torchsparse-1.2.0.dist-info/top_level.txt'
adding 'torchsparse-1.2.0.dist-info/RECORD'
removing build/bdist.linux-x86_64/wheel

@zhijian-liu
Contributor

Thank you @digital-idiot!

@mtli77

mtli77 commented May 8, 2021

Thank you @digital-idiot!
Could you please share the compiled wheel?

@LeopoldACC
Author

Thank you @digital-idiot! Could you share the compiled wheel? Thanks a lot!

@digital-idiot
Contributor

digital-idiot commented May 8, 2021

Thank you @digital-idiot! Could you share the compiled wheel? Thanks a lot!

Here you go @Violetit @LeopoldACC, compiled with Python 3.8, CUDA 11.1, Linux x64: Wheel @ Google Drive

You also need sparsehash to be installed.
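On Debian/Ubuntu systems the prebuilt package is usually sufficient; a minimal sketch (package names and paths may differ on other distros):

# Google Sparsehash is a header-only library; the -dev package installs the headers
sudo apt-get install libsparsehash-dev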

@LeopoldACC
Author

Thank you @digital-idiot! Could you share the compiled wheel? Thanks a lot!

Here you go @Violetit @LeopoldACC, compiled with Python 3.8, CUDA 11.1, Linux x64: Wheel @ Google Drive

You also need sparsehash to be installed.

Thanks a lot for your help!
By the way, what tutorial should I study if I want to learn how to solve such problems by myself in the future?

@digital-idiot
Contributor

By the way, what tutorial should I study if I want to learn how to solve such problems by myself in the future?

@LeopoldACC I did not understand. Are you talking about compilation? The problem is clear from the error itself: the compiler could not find a header that is part of CUDA. That means either CUDA is not installed properly or the location of the header file is not added to the search path (the collection of locations where the compiler looks for headers and libraries).
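As a minimal sketch of fixing the search paths (assuming the toolkit is installed under /usr/local/cuda; adjust the path for your system), exporting the CUDA locations before building usually resolves the missing cuda_runtime_api.h error:

# Tell the build where the CUDA toolkit lives so its headers and libraries can be found
export CUDA_HOME=/usr/local/cuda
export PATH=$CUDA_HOME/bin:$PATH
export CPATH=$CUDA_HOME/include:$CPATH
export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
python setup.py install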

@LeopoldACC
Author

LeopoldACC commented May 17, 2021

Hi @zhijian-liu, I installed it successfully as @digital-idiot suggested (changed the torch version to 1.8.1; my CUDA version is 11.1), but the model still fails to load on CUDA.

CUDA error: the provided PTX was compiled with an unsupported toolchain.
  File "/home/gz/3DCVcode/segmentation/spvnas-master/model_zoo.py", line 43, in spvnas_specialized
    model = SPVNAS(
  File "/home/gz/3DCVcode/segmentation/spvnas-master/model_zoo.py", line 127, in <module>
    spvnas_specialized("SemanticKITTI_val_SPVNAS@65GMACs")

torch version

torch                   1.8.1+cu111
torchaudio              0.8.1
torchpack               0.3.1
torchsparse             1.2.0
torchvision             0.9.1+cu111
tornado                 6.1

cuda version

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2020 NVIDIA Corporation
Built on Tue_Sep_15_19:10:02_PDT_2020
Cuda compilation tools, release 11.1, V11.1.74
Build cuda_11.1.TC455_06.29069683_0

@digital-idiot
Contributor

digital-idiot commented May 17, 2021

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2020 NVIDIA Corporation
Built on Tue_Sep_15_19:10:02_PDT_2020
Cuda compilation tools, release 11.1, V11.1.74
Build cuda_11.1.TC455_06.29069683_0

I think your nvcc version is dated. In my case

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Sun_Mar_21_19:15:46_PDT_2021
Cuda compilation tools, release 11.1, V11.1.78
Build cuda_11.1.r11.1/compiler.29745058_0

Also my driver version is 465.27 currently. Check the details using nvidia-smi.
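A quick way to cross-check that the driver, the toolkit, and the PyTorch build agree (a sketch; it assumes torch is importable in the active environment):

# Driver version and the maximum CUDA version it supports
nvidia-smi
# Toolkit version used for compilation
nvcc --version
# CUDA version PyTorch was built against, and whether the GPU is usable at runtime
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"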

@LeopoldACC
Author

LeopoldACC commented May 17, 2021

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2020 NVIDIA Corporation
Built on Tue_Sep_15_19:10:02_PDT_2020
Cuda compilation tools, release 11.1, V11.1.74
Build cuda_11.1.TC455_06.29069683_0

I think your nvcc version is dated. In my case

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Sun_Mar_21_19:15:46_PDT_2021
Cuda compilation tools, release 11.1, V11.1.58
Build cuda_11.1.r11.1/compiler.29745058_0

Also my driver version is 465.27 currently. Check the details using nvidia-smi.

Your CUDA version is 11.1.58. So what I need to do is downgrade my CUDA to 11.1.58?
Thanks!

@digital-idiot
Contributor

digital-idiot commented May 17, 2021

@LeopoldACC What is your output of nvidia-smi?

@LeopoldACC
Author

LeopoldACC commented May 17, 2021

Hi @digital-idiot, my nvidia-smi output is shown below:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.73.01    Driver Version: 460.73.01    CUDA Version: 11.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  GeForce RTX 3090    Off  | 00000000:02:00.0  On |                  N/A |
|  0%   50C    P8    31W / 370W |    431MiB / 24265MiB |      5%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A       914      G   /usr/lib/xorg/Xorg                265MiB |
|    0   N/A  N/A      1311      G   /usr/bin/gnome-shell               51MiB |
|    0   N/A  N/A      1512      G   ...nlogin/bin/sunloginclient        6MiB |
|    0   N/A  N/A   1872117      G   ...AAAAAAAA== --shared-files       31MiB |
|    0   N/A  N/A   2919241      G   ...AAAAAAAAA= --shared-files       72MiB |
+-----------------------------------------------------------------------------+

@digital-idiot
Contributor

digital-idiot commented May 17, 2021

First, can you make a symlink of your CUDA installation directory (in my case it is /opt/cuda) to /usr/local/CUDA and try to compile torchsparse?

If the above does not work, update the NVIDIA driver to 465.27. Then install CUDA 11.1 in your conda environment to avoid messing with the distro-specific system-wide CUDA installation, and check whether the installed torchsparse works. Otherwise, set up the CUDA paths and LD_LIBRARY_PATH properly following your distro-specific wikis/forums, then compile and install torchsparse yourself.

Following links might be helpful.

[1] https://wiki.archlinux.org/title/GPGPU#Development
[2] https://wiki.archlinux.org/title/Environment_variables
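For the first suggestion, a minimal sketch (assuming the toolkit is installed under /opt/cuda; substitute your actual installation path):

# Expose the toolkit at the conventional location many build scripts expect
sudo ln -s /opt/cuda /usr/local/cuda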

@zhijian-liu
Contributor

Thank you so much @digital-idiot!

@LeopoldACC
Author

LeopoldACC commented May 22, 2021

First, can you make a symlink of your CUDA installation directory (in my case it is /opt/cuda) to /usr/local/CUDA and try to compile torchsparse?

If the above does not work, update the NVIDIA driver to 465.27. Then install CUDA 11.1 in your conda environment to avoid messing with the distro-specific system-wide CUDA installation, and check whether the installed torchsparse works. Otherwise, set up the CUDA paths and LD_LIBRARY_PATH properly following your distro-specific wikis/forums, then compile and install torchsparse yourself.

Following links might be helpful.

Thanks!
Does the first way mean something like what is shown below in ~/.bashrc?

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib/x86_64-linux-gnu
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/lib/x86_64-linux-gnu
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64
export PATH=$PATH:/usr/local/cuda/bin
export CUDA_HOME=$CUDA_HOME:/usr/local/cuda
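One note on the snippet above: CUDA_HOME is conventionally a single directory rather than a PATH-style colon-separated list, since build tooling such as torch.utils.cpp_extension treats it as one path. A sketch of the usual form (assuming a /usr/local/cuda install):

# CUDA_HOME should point at a single toolkit root
export CUDA_HOME=/usr/local/cuda
export PATH=/usr/local/cuda/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH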

@LeopoldACC
Author

First, can you make a symlink of your CUDA installation directory (in my case it is /opt/cuda) to /usr/local/CUDA and try to compile torchsparse?

If the above does not work, update the NVIDIA driver to 465.27. Then install CUDA 11.1 in your conda environment to avoid messing with the distro-specific system-wide CUDA installation, and check whether the installed torchsparse works. Otherwise, set up the CUDA paths and LD_LIBRARY_PATH properly following your distro-specific wikis/forums, then compile and install torchsparse yourself.

Following links might be helpful.

[1] https://wiki.archlinux.org/title/GPGPU#Development
[2] https://wiki.archlinux.org/title/Environment_variables

Hi @digital-idiot
I solved it with the second way, reinstalling NVIDIA driver 465. Thanks a lot for your help! If possible, can I connect with you to learn how to compile by myself and solve such problems (paid is OK if you need)?

@HaFred

HaFred commented Jul 11, 2021

First, can you make a symlink of your CUDA installation directory (in my case it is /opt/cuda) to /usr/local/CUDA and try to compile torchsparse?
If the above does not work, update the NVIDIA driver to 465.27. Then install CUDA 11.1 in your conda environment to avoid messing with the distro-specific system-wide CUDA installation, and check whether the installed torchsparse works. Otherwise, set up the CUDA paths and LD_LIBRARY_PATH properly following your distro-specific wikis/forums, then compile and install torchsparse yourself.
Following links might be helpful.
[1] https://wiki.archlinux.org/title/GPGPU#Development
[2] https://wiki.archlinux.org/title/Environment_variables

Hi @digital-idiot
I solved it with the second way, reinstalling NVIDIA driver 465. Thanks a lot for your help! If possible, can I connect with you to learn how to compile by myself and solve such problems (paid is OK if you need)?

Hi, I am curious: do you mean how to build the torchsparse wheel in particular? How is it going now? Thank you.

@L-Reichardt

@digital-idiot Thanks a lot. I've been trying for days to compile successfully on different CUDA versions (which always led to a freeze and crash, so no error messages); however, with your .whl file I have had success.

@zouyuanpeng

@digital-idiot I tried to build torchsparse from source as you did (FORCE_CUDA=1 python setup.py bdist_wheel) and it succeeded, but when I use it in the SPVNAS project it raises an error: AttributeError: module 'torchsparse.backend' has no attribute 'hash_cuda'. Did you meet similar errors when building from source successfully and importing it in another project?

@digital-idiot
Contributor

@zouyuanpeng
I'm guessing that you probably did not install the Google Sparsehash library, or its installation is broken. Kindly check this. It is clearly mentioned in the README as a dependency, with notes on how to install it.

@zouyuanpeng

@zouyuanpeng I'm guessing that you probably did not install the Google Sparsehash library, or its installation is broken. Kindly check this. It is clearly mentioned in the README as a dependency, with notes on how to install it.

I have installed libsparsehash with sudo apt-get install libsparsehash-dev.
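If the package is installed but the build still cannot find it, a quick sanity check is to confirm the headers are actually on disk (a sketch; paths may vary by distro):

# The headers normally land under /usr/include/google and /usr/include/sparsehash
dpkg -L libsparsehash-dev | grep -i sparse_hash_map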
