Problem with TTS in 2.8 #1707
Comments
This should already be fixed in the master images - @Jasonthefirst could you please test it out? |
@mudler not sure if your fix was also deployed for vllm, but I just tried loading a model with vllm backend with the master image and got the same error:
|
@golgeek I've tried only with TTS models (vall-e-x specifically), can you confirm that? please open up another issue for vLLM |
We tested it with the master branch (master-cublas-cuda12-ffmpeg). As input we used the standard curl command, and we got the following error from curl:
|
Sorry, the error seemed too suspiciously similar, and I thought it might be the same origin. Was coming back to report the same as @Jasonthefirst. And I opened #1710 for vLLM. |
@golgeek / @Jasonthefirst any chance you can give #1711 a shot? |
While waiting for feedback, I merged the PR, so master images are going to be built soon; that way we can try it out much more easily just by consuming the master images. |
Sorry that it took me forever to realize that the images weren't pushed, and then an equal amount of time to build a Docker image from your branch. I just ran a quick test for vLLM and the model loaded successfully, so I'd say it's fixed, but it may be better to wait a bit more and confirm with the images from the master branch. |
I ran some tests again with master images, and can confirm that #1710 is fixed (just closed the issue, thanks a lot!). As for this issue specifically, I tested with
I'm getting:
It appears there might be a mixup between the |
ouch, good catch, this is a regression introduced in #1692. |
fixes #1707 Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
lol, I've read that line at least four times before writing it; it looked legit |
fixes #1707 Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
my muscle memory still uses "fixes" - and GH automatically closes the issue :) |
v2.8.2 images have been released with all the fixes. @golgeek / @Jasonthefirst could you test it? |
Just tested, v2.8.2 image worked flawlessly! Thanks @mudler!
|
cool, thanks for checking it out! |
Thank you so much. (Nearly) everything works now. Besides that, it is awesome, and the speed at which this got fixed is great as well. We really appreciate it. |
Please open separate tickets for it with full logs and how to reproduce it, thanks! |
We are using LocalAI in Docker but have problems with all TTS models described in TTS in LocalAI.
When calling the following curl command:
curl http://localhost:8080/tts -H "Content-Type: application/json" -d '{ "backend": "bark", "input":"Hello!" }' | aplay
we get the following error:
stderr OSError: /opt/conda/envs/transformers/lib/python3.11/site-packages/torchaudio/lib/libtorchaudio.so: undefined symbol: _ZN2at4_ops10zeros_like4callERKNS_6TensorEN3c108optionalINS5_10ScalarTypeEEENS6_INS5_6LayoutEEENS6_INS5_6DeviceEEENS6_IbEENS6_INS5_12MemoryFormatEEE
This error is thrown with bark, coqui and Vall-E-X. Piper works.
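For reference, an undefined-symbol error from libtorchaudio.so at import time usually means the installed torchaudio wheel was built against a different torch release than the one in the environment. A minimal sketch of that compatibility heuristic (the version strings below are illustrative examples, not the exact versions in the image):

```python
def versions_compatible(torch_ver: str, audio_ver: str) -> bool:
    """Heuristic check: torchaudio tracks torch's major.minor release line,
    so mismatched release lines commonly produce undefined-symbol errors."""
    # Strip local build tags like "+cu121" before comparing.
    t = torch_ver.split("+")[0].split(".")[:2]
    a = audio_ver.split("+")[0].split(".")[:2]
    return t == a

# Matching release lines: likely ABI-compatible.
print(versions_compatible("2.1.2+cu121", "2.1.2"))  # True
# Mismatched minor versions: a common cause of this kind of error.
print(versions_compatible("2.2.0", "2.1.2"))        # False
```

In a broken environment the two versions can be read with `pip show torch torchaudio` rather than importing the packages, since the import itself is what fails.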
LocalAI version:
v2.8.0-cublas-cuda12-ffmpeg
Environment, CPU architecture, OS, and Version:
Linux aifb-bis-mlpc 5.15.0-92-generic #102-Ubuntu SMP Wed Jan 10 09:33:48 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
To Reproduce
Run the v2.8.0-cublas-cuda12-ffmpeg LocalAI image on a server and execute the curl command above.
Expected behavior
LocalAI shouldn't return an error, but a TTS audio file.
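The curl command above can also be expressed in plain Python; this hypothetical sketch only builds the same POST request to the `/tts` endpoint (the `build_tts_request` helper name is our own, not part of LocalAI), which makes it easy to inspect the payload without a running server:

```python
import json
import urllib.request

def build_tts_request(base_url: str, backend: str, text: str) -> urllib.request.Request:
    """Build the same POST the issue's curl command sends to /tts."""
    payload = json.dumps({"backend": backend, "input": text}).encode()
    return urllib.request.Request(
        base_url.rstrip("/") + "/tts",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

req = build_tts_request("http://localhost:8080", "bark", "Hello!")
print(req.full_url)      # http://localhost:8080/tts
print(req.get_method())  # POST
```

Sending the request with `urllib.request.urlopen(req)` and writing `resp.read()` to a `.wav` file would be the equivalent of piping curl's output to aplay.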
Logs
I added a log file. _Shared_LocalAI_logs.txt