Your current environment
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Rocky Linux 8.6 (Green Obsidian) (x86_64)
GCC version: (GCC) 8.5.0 20210514 (Red Hat 8.5.0-22)
Clang version: Could not collect
CMake version: version 3.26.5
Libc version: glibc-2.28
Python version: 3.12.8 | packaged by Anaconda, Inc. | (main, Dec 11 2024, 16:31:09) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-4.18.0-477.21.1.el8_8.x86_64-x86_64-with-glibc2.28
Is CUDA available: True
How would you like to use vllm
I'm using the v1 engine in vLLM v0.7.1. I would like to know whether I can get access to `vllm.v1.core.scheduler.Scheduler`. In particular, if I have an `llm` object from `llm = LLM(model=model_name)`, can I get the scheduler from the `llm` object? It seems `scheduler` is a member of `vllm.v1.engine.core.EngineCore`, and `EngineCore` runs in a background process, so I cannot access this object directly.

I'd be happy to do so, as it would help me learn the vLLM source code. Looking forward to your kind answer.
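For context, here is a minimal sketch of the kind of access I'm hoping for. It assumes the in-process code path (`VLLM_ENABLE_V1_MULTIPROCESSING=0`) and an attribute chain I traced through the v0.7.1 source; none of this is public API, so the names may be wrong or may change between releases:

```python
import os

# These env vars must be set before importing vllm.
# VLLM_USE_V1 opts into the v1 engine (still opt-in in v0.7.1);
# VLLM_ENABLE_V1_MULTIPROCESSING=0 keeps EngineCore in this process
# instead of a background process, so its members are reachable.
os.environ["VLLM_USE_V1"] = "1"
os.environ["VLLM_ENABLE_V1_MULTIPROCESSING"] = "0"

from vllm import LLM

llm = LLM(model="facebook/opt-125m")  # any small model works for exploring

# Assumption: this attribute chain is read from the v0.7.1 source
# (LLM -> v1 LLMEngine -> InprocClient -> EngineCore -> Scheduler);
# it is internal and undocumented, so it may break in other versions.
scheduler = llm.llm_engine.engine_core.engine_core.scheduler
print(type(scheduler))  # expected: <class 'vllm.v1.core.scheduler.Scheduler'>
```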
Before submitting a new issue...
- Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.