Closed as not planned
Labels
performance (Performance-related issues), stale (Over 90 days of inactivity)
Description
Proposal to improve performance
No response
Report of performance regression
No response
Misc discussion on performance
We often encounter situations where the CPU is fully loaded during inference. Why not distribute the workload across more CPU processes, so that we can take full advantage of the server's multi-core capabilities?
For example, consider https://github.com/vllm-project/vllm/pull/10640. In that PR, the multimodal preprocessing workload was shifted from the backend to the frontend, transferring part of the computational burden to the frontend process. However, this preprocessing is CPU-intensive and will likely block subsequent requests in the frontend. Should we consider adding multiple worker processes to handle this type of computation?
Just like the diagram below:
[Diagram: proposed multi-process architecture for frontend preprocessing]
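As a rough illustration of the idea (this is not vLLM's actual API; `preprocess_multimodal`, `handle_request`, and the pool size are hypothetical), the frontend could offload CPU-bound preprocessing to a pool of worker processes via Python's `concurrent.futures`, keeping the asyncio event loop free to accept new requests:

```python
import asyncio
from concurrent.futures import ProcessPoolExecutor

# Hypothetical stand-in for the CPU-intensive multimodal preprocessing
# (e.g. image decoding/resizing) that the PR moves into the frontend.
def preprocess_multimodal(raw_image_bytes: bytes) -> list[float]:
    # Placeholder for real work: decode, resize, normalize, etc.
    return [float(b) / 255.0 for b in raw_image_bytes]

# A small pool of worker processes dedicated to preprocessing, so the
# frontend's event loop stays responsive while preprocessing runs.
_preprocess_pool = ProcessPoolExecutor(max_workers=4)

async def handle_request(raw_image_bytes: bytes) -> list[float]:
    loop = asyncio.get_running_loop()
    # run_in_executor ships the CPU-bound call to a worker process and
    # yields control back to the event loop until the result is ready.
    return await loop.run_in_executor(
        _preprocess_pool, preprocess_multimodal, raw_image_bytes
    )

async def main() -> None:
    # Two concurrent requests; neither one's preprocessing blocks the other.
    results = await asyncio.gather(
        handle_request(b"\x00\x7f\xff"),
        handle_request(b"\x10\x20\x30"),
    )
    print(results)

if __name__ == "__main__":
    asyncio.run(main())
```

The key point is that the frontend process itself never executes the heavy preprocessing; it only schedules it and awaits the result, so subsequent requests are not blocked behind it.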
Your current environment (if you think it is necessary)
The output of `python collect_env.py`
Before submitting a new issue...
- Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.
