Issue with building and running sherpa-onnx gpu on Windows #878
That won't affect the onnxruntime used in sherpa-onnx. Could you try CUDA 11.8, since we are using onnxruntime 1.17.1 in sherpa-onnx?
I have uninstalled CUDA 12.4 and installed CUDA 11.8, then ran the example again. After some Google searching, I think this means that I need to update CUDA to a newer version?
CUDA 11.8 contains …, so I did not rebuild and reinstall sherpa-onnx for GPU; I just tried running the example again. Next, I reinstalled 11.8, keeping 12.2 as well, since it is possible to have multiple installations, and updated the PATH back to 11.8. I retried the example and it got further this time, but it produced an error about ….
I was using the precompiled DLLs for 32-bit. I downloaded the correct 64-bit ones from http://www.winimage.com/zLibDll/ and now there are no errors when running the example. Thank you for the help! @csukuangfj

By the way, is there a performance issue with onnxruntime-gpu? I am finding that the CPU is faster than the GPU when measuring the time in seconds to receive the first message while generating the TTS audio. My GPU is an RTX 3090; my CPU is an i9-14900K.

2024-05-15 00:50:46.3377129 [W:onnxruntime:, transformer_memcpy.cc:74 onnxruntime::MemcpyTransformer::ApplyImpl] 28 Memcpy nodes are added to the graph torch_jit for CUDAExecutionProvider. It might have negative impact on performance (including unable to run CUDA graph). Set session_options.log_severity_level=1 to see the detail logs before this message.
Glad to hear that you finally managed to run sherpa-onnx with GPU on Windows.
The GPU needs warmup. Also, the advantage of a GPU is parallel processing, and moving data between CPU and GPU takes time. In other words, the GPU is not necessarily faster than the CPU if you only want to synthesize a single utterance.
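The warmup effect described above is easy to see with a small timing harness: time the first call separately from the following ones. This is a sketch; `synthesize` below is a hypothetical stand-in for the real TTS call (e.g. `tts.generate(...)` in your script), not the sherpa-onnx API itself.

```python
import time

def time_calls(fn, n=5):
    """Time n successive calls of fn. The first call typically carries
    one-off warmup cost (CUDA context creation, kernel selection)."""
    timings = []
    for _ in range(n):
        start = time.perf_counter()
        fn()
        timings.append(time.perf_counter() - start)
    return timings

# Hypothetical stand-in for the real TTS call; replace with tts.generate(...).
def synthesize():
    sum(i * i for i in range(100_000))

timings = time_calls(synthesize)
rest = timings[1:]
print(f"first call: {timings[0]:.4f}s, average of the rest: {sum(rest) / len(rest):.4f}s")
```

If the first call dominates, the "time to first message" metric mostly measures warmup, not steady-state throughput; comparing CPU vs GPU on the averaged later calls is fairer.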
Since www.winimage.com is unreachable now, I have uploaded the DLL here: zlib123dllx64.zip (downloaded from http://www.winimage.com/zLibDll/zlib123dllx64.zip). You can try placing the DLL ….
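The 32-bit vs 64-bit DLL mix-up mentioned above can be detected programmatically: every Windows DLL starts with an MZ header whose offset 0x3C points at the PE signature, followed by a 2-byte Machine field (0x8664 for x64, 0x014C for x86). A minimal sketch:

```python
import struct

def dll_machine(path: str) -> str:
    """Read the PE Machine field to tell a 32-bit DLL from a 64-bit one."""
    with open(path, "rb") as f:
        data = f.read(0x1000)
    if data[:2] != b"MZ":
        return "not a PE file"
    # Offset 0x3C holds the file offset of the "PE\0\0" signature.
    pe_offset = struct.unpack_from("<I", data, 0x3C)[0]
    if data[pe_offset:pe_offset + 4] != b"PE\x00\x00":
        return "not a PE file"
    # The Machine field immediately follows the signature.
    machine = struct.unpack_from("<H", data, pe_offset + 4)[0]
    return {0x8664: "x64", 0x014C: "x86"}.get(machine, hex(machine))
```

For example, `dll_machine(r"C:\some\path\zlibwapi.dll")` returning `"x86"` would mean a 32-bit build is on your PATH and a 64-bit process cannot load it.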
I am unsure if this is an issue with sherpa-onnx gpu installation or onnxruntime-gpu installation.
I am using Windows 11, python 3.10.11.
I have CUDA 12.4, CUDNN 8.9.2.26 and zlib 1.3.1 installed and added to the PATH.
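One quick sanity check for a setup like the one above is to scan each PATH entry for the DLLs the CUDA provider needs to load. The file names below (`cudart64_110.dll`, `cudnn64_8.dll`, `zlibwapi.dll`) are what a CUDA 11.x / cuDNN 8.x build typically looks for, and are assumptions here; adjust them for your installed versions.

```python
import os

def find_dll(dll_name: str, path_value: str) -> list:
    """Return every directory in a PATH-style string that contains dll_name."""
    hits = []
    for entry in path_value.split(os.pathsep):
        if entry and os.path.isfile(os.path.join(entry, dll_name)):
            hits.append(entry)
    return hits

# DLL names assumed for a CUDA 11.x / cuDNN 8.x onnxruntime build.
for dll in ("cudart64_110.dll", "cudnn64_8.dll", "zlibwapi.dll"):
    hits = find_dll(dll, os.environ.get("PATH", ""))
    print(f"{dll}: {hits if hits else 'NOT FOUND on PATH'}")
```

A DLL reported as found in two different directories is also worth checking: the first match on PATH wins, which is how a stale 32-bit copy can shadow the correct one.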
I followed the requirement guidelines from: https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements
"ONNX Runtime built with CUDA 11.8 should be compatible with any CUDA 11.x version; ONNX Runtime built with CUDA 12.2 should be compatible with any CUDA 12.x version.
ONNX Runtime built with cuDNN 8.x are not compatible with cuDNN 9.x."
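The two quoted rules boil down to a major-version match. A sketch that encodes just those statements (not the full onnxruntime support matrix):

```python
def cuda_compatible(built_with: str, installed: str) -> bool:
    """Built with CUDA 11.8 -> any CUDA 11.x; built with 12.2 -> any 12.x."""
    return built_with.split(".")[0] == installed.split(".")[0]

def cudnn_compatible(built_with: str, installed: str) -> bool:
    """Builds against cuDNN 8.x are not compatible with cuDNN 9.x."""
    return built_with.split(".")[0] == installed.split(".")[0]

# Per the maintainer's comment, sherpa-onnx here uses onnxruntime 1.17.1,
# built against CUDA 11.8, so CUDA 12.4 fails the first check:
print(cuda_compatible("11.8", "12.4"))   # False
print(cudnn_compatible("8.9", "8.9"))    # True
```

This is why installing CUDA 12.4 alongside cuDNN 8.9 fails here even though each component is individually current: the wheel's build-time CUDA major version has to match the installed one.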
I installed onnxruntime-gpu specifically for CUDA 12.x, following the instructions from https://onnxruntime.ai/docs/install

I ran python setup.py install from the sherpa-onnx repo directory, following the Method 2 (NVIDIA GPU / CUDA) install instructions at https://k2-fsa.github.io/sherpa/onnx/python/install.html

The installation succeeds, but running the offline TTS example (https://github.com/k2-fsa/sherpa-onnx/blob/master/python-api-examples/offline-tts-play.py) with --provider cuda results in the following error.

Logs from running the offline-tts-play.py example code: …

Some errors that appear in the build logs are:
-- Failed to find all ICU components (missing: ICU_INCLUDE_DIR ICU_LIBRARY _ICU_REQUIRED_LIBS_FOUND)
-- Could NOT find ZLIB (missing: ZLIB_INCLUDE_DIR)
-- Could NOT find ASIOSDK (missing: ASIOSDK_ROOT_DIR ASIOSDK_INCLUDE_DIR)
Logs from python setup.py install: …