LibTorch GPU cmake error #1336
What's the version of your GCC?
My gcc version is 7.3. @robin1001
I also deployed libtorch GPU in the provided docker image (FROM ubuntu:latest), and there was a cmake error there as well. In the container:

root@821526b736a0:/home/wenet/runtime/server/x86/build# cmake -DGPU=ON ..
-- Configuring incomplete, errors occurred!
@yuekaizhang @veelion any idea on the problem?
Could you try one of the NVIDIA docker images, nvcr.io/nvidia/pytorch:xx.xx-py3? For example, 22.01.
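To make the suggestion above concrete, a minimal sketch of pulling and entering the NGC container; the mount path /path/to/wenet is a placeholder for your local checkout, and the tag should match whatever release fits your driver:

```shell
# Pull the NGC PyTorch image suggested above (tag 22.01 as an example).
docker pull nvcr.io/nvidia/pytorch:22.01-py3

# Start it with GPU access and the wenet checkout mounted inside.
docker run --gpus all -it --rm \
    -v /path/to/wenet:/workspace/wenet \
    nvcr.io/nvidia/pytorch:22.01-py3 bash

# Inside the container, configure and build as usual:
#   cd /workspace/wenet/runtime/server/x86
#   mkdir -p build && cd build && cmake -DGPU=ON .. && cmake --build .
```

These images ship a matched CUDA/cuDNN/compiler stack, which sidesteps the host-toolchain mismatches discussed later in this thread.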
Hi, the compilation problem has been solved: the gcc version needs to be greater than 7.3, and once CUDA is installed you copy cudnn_version.h into the /usr/local/cuda/include/ directory; then it compiles perfectly. But at runtime I got the following error:

export GLOG_logtostderr=1
I0801 13:29:29.093011 61770 params.h:135] Reading torch model /home/ZYJ/WeNet/wenet_gpu/wenet/runtime/LibTorch/model/20210815_unified_conformer_libtorch/final.zip
Traceback of TorchScript, original code (most recent call last):
QuantizedCPU: registered at ../aten/src/ATen/native/quantized/cpu/qlinear_prepack.cpp:324 [kernel]
Exception raised from reportError at ../aten/src/ATen/core/dispatch/OperatorEntry.cpp:431 (most recent call first):

I think it's still a libtorch problem, do you have any ideas?
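The two setup steps mentioned above can be sketched as shell commands; /path/to/cudnn is a placeholder for wherever the cuDNN archive was unpacked, and the version check is just one portable way to compare against 7.3:

```shell
# Caffe2's cmake reads cudnn_version.h to detect cuDNN; if cuDNN was
# unpacked outside the CUDA tree, copy its headers next to the CUDA headers.
sudo cp /path/to/cudnn/include/cudnn_version.h /usr/local/cuda/include/

# Check that gcc is new enough (the comment above says > 7.3).
req=7.3
cur=$(gcc -dumpversion)
# sort -V orders version strings; if req sorts first, cur >= req.
if [ "$(printf '%s\n' "$req" "$cur" | sort -V | head -n1)" = "$req" ]; then
    echo "GCC $cur is new enough"
else
    echo "GCC $cur is too old, need > $req" >&2
fi
```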
@robin1001 @yuekaizhang
Please do not use a quantized model for GPU.
I'm not using the quantized model. Testing with the checkpoint model also got an error:

(wenet) [ZYJ@localhost LibTorch]$ model_dir=./model/20220506_u2pp_conformer_exp

How can I solve it? @robin1001
decode_main requires a runtime model, and if GPU support is compiled in, it requires a float runtime model. We only provide checkpoint models and quantized runtime models.
Can the pre-trained models you released use GPU inference in websocket_server_main and grpc_server_main? I want to run a GPU inference test now; besides the Triton server GPU path, is there any good way? Please let me know, thanks!
https://github.com/wenet-e2e/wenet/blob/main/examples/aishell/s0/run.sh#L204

You could export them from checkpoint models. Or if you just want to test, you could modify wenet/bin/recognize.py.
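A sketch of the export step the linked run.sh stage performs, assuming the released checkpoint directory layout (train.yaml plus final.pt); the script name and flags match the WeNet repo around the time of this thread and may have changed since:

```shell
# Export a TorchScript runtime model from a released checkpoint.
# $model_dir points at an unpacked pre-trained model directory.
model_dir=./model/20220506_u2pp_conformer_exp

python wenet/bin/export_jit.py \
    --config "$model_dir/train.yaml" \
    --checkpoint "$model_dir/final.pt" \
    --output_file "$model_dir/final.zip"

# The resulting final.zip is the float runtime model that decode_main,
# websocket_server_main, and grpc_server_main can load.
```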
OK, I'll try it! Thank you very much.
As you suggested: |
Hello, when I execute "mkdir build && cd build && cmake -DGRPC=ON ..", the following error is reported.
Native environment:
CentOS 7.9
nvidia: 11.3
cuda version: 11
(wenet_gpu) [ZYJ@localhost build]$ cmake -DGPU=ON ..
-- The C compiler identification is GNU 4.8.5
-- The CXX compiler identification is GNU 4.8.5
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Populating libtorch
-- Configuring done
-- Generating done
-- Build files have been written to: /home/ZYJ/WeNet/wenet_gpu/wenet/runtime/LibTorch/fc_base/libtorch-subbuild
[ 11%] Performing download step (download, verify and extract) for 'libtorch-populate'
-- verifying file...
file='/home/ZYJ/WeNet/wenet_gpu/wenet/runtime/LibTorch/fc_base/libtorch-subbuild/libtorch-populate-prefix/src/libtorch-shared-with-deps-1.10.0%2Bcu113.zip'
-- File already exists and hash match (skip download):
file='/home/ZYJ/WeNet/wenet_gpu/wenet/runtime/LibTorch/fc_base/libtorch-subbuild/libtorch-populate-prefix/src/libtorch-shared-with-deps-1.10.0%2Bcu113.zip'
SHA256='0996a6a4ea8bbc1137b4fb0476eeca25b5efd8ed38955218dec1b73929090053'
-- extracting...
src='/home/ZYJ/WeNet/wenet_gpu/wenet/runtime/LibTorch/fc_base/libtorch-subbuild/libtorch-populate-prefix/src/libtorch-shared-with-deps-1.10.0%2Bcu113.zip'
dst='/home/ZYJ/WeNet/wenet_gpu/wenet/runtime/LibTorch/fc_base/libtorch-src'
-- extracting... [tar xfz]
-- extracting... [analysis]
-- extracting... [rename]
-- extracting... [clean up]
-- extracting... done
[ 22%] No patch step for 'libtorch-populate'
[ 33%] No update step for 'libtorch-populate'
[ 44%] No configure step for 'libtorch-populate'
[ 55%] No build step for 'libtorch-populate'
[ 66%] No install step for 'libtorch-populate'
[ 77%] No test step for 'libtorch-populate'
[ 88%] Completed 'libtorch-populate'
[100%] Built target libtorch-populate
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Looking for pthread_create
-- Looking for pthread_create - not found
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE
-- Found CUDA: /usr/local/cuda-11.3 (found version "11.3")
-- Caffe2: CUDA detected: 11.3
-- Caffe2: CUDA nvcc is: /usr/local/cuda-11.3/bin/nvcc
-- Caffe2: CUDA toolkit directory: /usr/local/cuda-11.3
CMake Error at fc_base/libtorch-src/share/cmake/Caffe2/public/cuda.cmake:75 (message):
Caffe2: Couldn't determine version from header: Change Dir:
/home/ZYJ/WeNet/wenet_gpu/wenet/runtime/LibTorch/build/CMakeFiles/CMakeTmp
Run Build Command(s):/usr/bin/gmake cmTC_3d968/fast
/usr/bin/gmake -f CMakeFiles/cmTC_3d968.dir/build.make
CMakeFiles/cmTC_3d968.dir/build
gmake[1]:
Entering directory '/home/ZYJ/WeNet/wenet_gpu/wenet/runtime/LibTorch/build/CMakeFiles/CMakeTmp'
Building CXX object CMakeFiles/cmTC_3d968.dir/detect_cuda_version.cc.o
/usr/bin/c++ -I/usr/local/cuda-11.3/include -std=c++14 -pthread -fPIC -o
CMakeFiles/cmTC_3d968.dir/detect_cuda_version.cc.o -c
/home/ZYJ/WeNet/wenet_gpu/wenet/runtime/LibTorch/build/detect_cuda_version.cc
c++: error: unrecognized command line option '-std=c++14'
gmake[1]: *** [CMakeFiles/cmTC_3d968.dir/detect_cuda_version.cc.o] Error 1
gmake[1]:
Leaving directory '/home/ZYJ/WeNet/wenet_gpu/wenet/runtime/LibTorch/build/CMakeFiles/CMakeTmp'
gmake: *** [cmTC_3d968/fast] Error 2
Call Stack (most recent call first):
fc_base/libtorch-src/share/cmake/Caffe2/Caffe2Config.cmake:88 (include)
fc_base/libtorch-src/share/cmake/Torch/TorchConfig.cmake:68 (find_package)
cmake/libtorch.cmake:52 (find_package)
CMakeLists.txt:35 (include)
-- Configuring incomplete, errors occurred!
See also "/home/ZYJ/WeNet/wenet_gpu/wenet/runtime/LibTorch/build/CMakeFiles/CMakeOutput.log".
See also "/home/ZYJ/WeNet/wenet_gpu/wenet/runtime/LibTorch/build/CMakeFiles/CMakeError.log".
Please, what should I do?