

Cannot run coqui tts - Error: grpc process not found (image and local docker build) #1727

Closed
blob42 opened this issue Feb 20, 2024 · 3 comments · Fixed by #1730
Labels: bug (Something isn't working), unconfirmed

Comments

blob42 (Contributor) commented Feb 20, 2024

LocalAI version:

  • v2.8.2-cublas-cuda12-ffmpeg
  • local docker-compose build with the following build args:
           FFMPEG: true
           BUILD_TYPE: cublas
           CUDA_MAJOR_VERSION: 12
           CUDA_MINOR_VERSION: 3
           GO_TAGS: "tts stablediffusion"
           BUILD_GRPC: true

Environment, CPU architecture, OS, and Version:
docker 24.0.7 on ArchLinux - Linux 6.6.6-zen1-1-zen

Describe the bug
When I try to run coqui TTS using the example from the documentation, I see a gRPC connection error. Since I am using docker, manually running the local-ai tts ... command inside the container shows the following detailed error:

11:56PM INF Loading model 'tts_models/en/ljspeech/glow-tts' with backend coqui
11:56PM DBG Loading model in memory from file: /build/models/tts_models/en/ljspeech/glow-tts
11:56PM DBG Loading Model tts_models/en/ljspeech/glow-tts with gRPC (file: /build/models/tts_models/en/ljspeech/glow-tts) (backend: coqui): {backendString:coqui model:tts_models/en/ljspeech/glow-tts thr
eads:0 assetDir:/tmp/localai/backend_data context:{emptyCtx:{}} gRPCOptions:0xc0002c4000 externalBackends:map[] grpcAttempts:20 grpcAttemptsDelay:2 singleActiveBackend:false parallelRequests:false}
11:56PM ERR error: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/coqui. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS

With DEBUG=true:

localai-1  | 2:05AM DBG Loading external backend: /build/backend/python/coqui/run.sh
localai-1  | 2:05AM DBG Loading GRPC Process: /build/backend/python/coqui/run.sh
localai-1  | 2:05AM DBG GRPC Service for tts_models/en/ljspeech/glow-tts will be running at: '127.0.0.1:44865'
localai-1  | 2:05AM DBG GRPC Service state dir: /tmp/go-processmanager1634371744
localai-1  | 2:05AM DBG GRPC Service Started
localai-1  | 2:05AM DBG GRPC(tts_models/en/ljspeech/glow-tts-127.0.0.1:44865): stderr Traceback (most recent call last):
localai-1  | 2:05AM DBG GRPC(tts_models/en/ljspeech/glow-tts-127.0.0.1:44865): stderr   File "/build/backend/python/coqui/coqui_server.py", line 15, in <module>
localai-1  | 2:05AM DBG GRPC(tts_models/en/ljspeech/glow-tts-127.0.0.1:44865): stderr     from TTS.api import TTS
localai-1  | 2:05AM DBG GRPC(tts_models/en/ljspeech/glow-tts-127.0.0.1:44865): stderr ModuleNotFoundError: No module named 'TTS'
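The ModuleNotFoundError above means the coqui backend's Python environment simply lacks the TTS package. A quick, generic way to check whether a module is importable in a given environment (not LocalAI code, just a diagnostic sketch that could be run inside the container):

```python
import importlib.util

def module_available(name: str) -> bool:
    """Return True if `name` can be imported in this environment."""
    return importlib.util.find_spec(name) is not None

# In the broken image, running this inside the coqui backend's env
# would report False for "TTS", matching the traceback above.
print(module_available("TTS"))
print(module_available("json"))  # stdlib module, present in any env
```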

To Reproduce
Pull or build the v2.8.2 image. For reference, inference with CUDA works.

Expected behavior
A working coqui TTS endpoint.

@blob42 blob42 added bug Something isn't working unconfirmed labels Feb 20, 2024
blob42 (Contributor, Author) commented Feb 20, 2024

I think the reason is that I am using the nvidia-based build: common-env/transformers-nvidia.yml does not list TTS as a requirement.

https://github.com/mudler/LocalAI/blob/9f2235c208b8a490f105774f984aa7225c4642b7/backend/python/common-env/transformers/transformers.yml#L36C1-L36C20

olariuromeo commented Feb 20, 2024

Try building the backend first: run make clean, then BUILD_GRPC_FOR_BACKEND_LLAMA=ON make GO_TAGS=stablediffusion,tts build (or make GO_TAGS=stablediffusion,tts,tinydream build for all the backends) before running docker. It would be more useful if you could give more details about your workflow: .env, Go and Python versions, docker-compose.yaml, Dockerfile, and your ssh command. Once the image is rebuilt with the GO_TAGS, simply run docker-compose up --build or docker-compose up -d. You need to set REBUILD=true and GO_TAGS=tts in your .env file in order to use tts.
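A minimal .env sketch of the settings mentioned above (variable names as given in the comment; whether REBUILD is needed depends on the image you start from):

```
# Rebuild LocalAI inside the container at startup
REBUILD=true
# Include the tts backend in the build (stablediffusion optional)
GO_TAGS=tts
```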

@blob42
Copy link
Contributor Author

blob42 commented Feb 20, 2024

I can confirm the gRPC backend works after adding the TTS dependency to the nvidia requirements file.
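The fix amounts to a change of roughly this shape in the conda environment file linked above. This is a hypothetical excerpt, not the merged diff: the real file contains many more dependencies, and the exact pinned TTS version in PR #1730 may differ.

```yaml
# backend/python/common-env/transformers/transformers-nvidia.yml (sketch)
name: transformers
dependencies:
  - pip:
      - transformers
      - TTS   # missing entry that caused "ModuleNotFoundError: No module named 'TTS'"
```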

blob42 added a commit to blob42/LocalAI that referenced this issue Feb 20, 2024
Signed-off-by: Chakib Benziane <contact@blob42.xyz>
mudler pushed a commit that referenced this issue Feb 20, 2024
Signed-off-by: Chakib Benziane <contact@blob42.xyz>