
Performance issue with --cuda #343

Open
thanhnew2001 opened this issue Jan 9, 2024 · 3 comments
@thanhnew2001
Hello,
I tried running the piper command with --cuda, but it prints a lot of warnings and runs 2–3x slower than the same command without --cuda. Can someone tell me what I did wrong?

echo 'Welcome to the world of speech synthesis!' | piper --cuda --model en_US-lessac-medium --output_file welcome.wav

INFO:piper.download:Downloaded /home/ph/piperpython/en_US-lessac-medium.onnx.json (https://huggingface.co/rhasspy/piper-voices/resolve/v1.0.0/en/en_US/lessac/medium/en_US-lessac-medium.onnx.json)
2024-01-09 20:37:15.439686543 [W:onnxruntime:, session_state.cc:1162 VerifyEachNodeIsAssignedToAnEp] Some nodes were not assigned to the preferred execution providers which may or may not have an negative impact on performance. e.g. ORT explicitly assigns shape related ops to CPU to improve perf.

2024-01-09 20:37:15.439707536 [W:onnxruntime:, session_state.cc:1164 VerifyEachNodeIsAssignedToAnEp] Rerunning with verbose output on a non-minimal build will show node assignments.
ph@ph-All-Series:~/piperpython$ echo 'Welcome to the world of speech synthesis!' | piper --model en_US-lessac-medium --output_file welcome.wav
WARNING:piper.download:Wrong size (expected=7010, actual=4885) for /home/ph/piperpython/en_US-lessac-medium.onnx.json
INFO:piper.download:Downloaded /home/ph/piperpython/en_US-lessac-medium.onnx.json (https://huggingface.co/rhasspy/piper-voices/resolve/v1.0.0/en/en_US/lessac/medium/en_US-lessac-medium.onnx.json)
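Piper runs its ONNX model through onnxruntime, and the warnings above come from onnxruntime's provider assignment. One common cause of --cuda being slower is that only the CPU-only `onnxruntime` package is installed (rather than `onnxruntime-gpu`), so the CUDA provider is silently unavailable. A minimal sketch of interpreting the list returned by `onnxruntime.get_available_providers()` (the `cuda_usable` helper is hypothetical, not part of piper):

```python
def cuda_usable(providers):
    """Hypothetical helper: True if onnxruntime reports the CUDA
    execution provider, which piper's --cuda flag requests."""
    return "CUDAExecutionProvider" in providers

# Typical result with the CPU-only `onnxruntime` package installed:
print(cuda_usable(["CPUExecutionProvider"]))  # False

# Typical result with `onnxruntime-gpu` installed correctly:
print(cuda_usable(["CUDAExecutionProvider", "CPUExecutionProvider"]))  # True
```

On a real system you would pass `onnxruntime.get_available_providers()` into the helper; if the CUDA provider is missing, inference falls back to CPU regardless of the --cuda flag.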

@thanhnew2001 (Author)

It turns out that the issue was not actually fixed in #172, even though it was claimed to be.
Even after I fixed it manually, the difference in inference speed between CPU and GPU is small. In a video I saw, the ratio of inference time to audio duration was about 0.7, which is very slow.
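The 0.7 ratio mentioned above is the real-time factor (RTF): inference time divided by the duration of the generated audio, where values below 1.0 are faster than real time and lower is better. A small sketch of computing it from a generated WAV file (the function name is mine, not from piper):

```python
import wave

def real_time_factor(inference_seconds, wav_path):
    """RTF = inference time / audio duration. 0.7 means synthesizing
    10 s of speech took 7 s; lower is better."""
    with wave.open(wav_path, "rb") as w:
        audio_seconds = w.getnframes() / float(w.getframerate())
    return inference_seconds / audio_seconds
```

For example, timing the `piper` invocation with `time.perf_counter()` and passing the elapsed seconds plus `welcome.wav` to this function reproduces the ratio being discussed.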

@tn-17 commented Jan 22, 2024

@thanhnew2001 what did you do to fix the issue?

@thanhnew2001 (Author)

Well, I had to check out the source code and fix one line. I don't remember which file, but if you follow the linked bug (#172) you can find it.
