Too many router/tokenizer threads #404

Closed
askervin opened this issue Sep 11, 2024 · 1 comment · Fixed by #410
System Info

text-embeddings-router 1.5.0
from image: ghcr.io/huggingface/text-embeddings-inference:cpu-1.5

Information

  • Docker
  • The CLI directly

Tasks

  • An officially supported command
  • My own modifications

Reproduction

Review the code in router/src/lib.rs:

let tokenization_workers = tokenization_workers.unwrap_or_else(num_cpus::get_physical);

Alternatively, inspect the number of threads of text-embeddings-router when it runs in a cgroup with a limited cpuset.cpus on a host with many CPUs, for instance the text-embeddings-inference:cpu-1.5 image in a container with a restricted cpuset.cpus.
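
To make the mismatch visible, here is a minimal standalone sketch (not part of the router; it only assumes the num_cpus crate the router already uses) that prints the host's physical core count next to the parallelism actually available to the process. Run it inside a container with a restricted cpuset.cpus and the two numbers diverge:

use std::thread;

fn main() {
    // Physical core count of the host; this ignores cgroup cpuset/quota limits.
    let physical = num_cpus::get_physical();
    // CPUs the process may actually use; on Linux this respects
    // sched_getaffinity and therefore a restricted cpuset.cpus.
    let allowed = thread::available_parallelism()
        .map(|n| n.get())
        .unwrap_or(1);
    println!("physical cores seen: {physical}");
    println!("CPUs allowed for this process: {allowed}");
}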

Expected behavior

The number of worker threads should not exceed the number of allowed CPUs for the process.

Containerized apps and services must not assume they can use all CPUs, memory, or other resources on the system, but only those available to their containers.

Currently, the performance of the text-embeddings-inference service is very poor when it runs on a system with many CPUs (256, for instance) while its container is limited to a small number of CPUs (4, for instance).
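
One possible direction, shown here only as a sketch and not necessarily what #410 implements, is to cap the default worker count at the parallelism actually available to the process:

// Sketch: cap the default at the CPUs this process may use (e.g. under a
// restricted cpuset.cpus) instead of the host's physical core count.
let allowed = std::thread::available_parallelism()
    .map(std::num::NonZeroUsize::get)
    .unwrap_or(1);
let tokenization_workers =
    tokenization_workers.unwrap_or_else(|| num_cpus::get_physical().min(allowed));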
