Closed as not planned
Description
Your current environment
The output of `python collect_env.py`
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.4
Libc version: glibc-2.35
Python version: 3.12.9 (main, Feb 5 2025, 08:49:00) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-118-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA RTX A6000
GPU 1: NVIDIA RTX A6000
GPU 2: NVIDIA RTX A6000
GPU 3: NVIDIA RTX A6000
Nvidia driver version: 555.42.06
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper PRO 5975WX 32-Cores
CPU family: 25
Model: 8
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 1
Stepping: 2
Frequency boost: enabled
CPU max MHz: 7006.6401
CPU min MHz: 1800.0000
BogoMIPS: 7186.50
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm
Virtualization: AMD-V
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 16 MiB (32 instances)
L3 cache: 128 MiB (4 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-63
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET, no microcode
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-ml-py==12.570.86
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pyzmq==26.2.1
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.1
[pip3] torchvision==0.20.1
[pip3] transformers==4.48.2
[pip3] triton==3.1.0
[conda] Could not collect
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.7.2
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0 GPU1 GPU2 GPU3 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X SYS SYS NV4 0-63 0 N/A
GPU1 SYS X NV4 SYS 0-63 0 N/A
GPU2 SYS NV4 X SYS 0-63 0 N/A
GPU3 NV4 SYS SYS X 0-63 0 N/A
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
NVIDIA_VISIBLE_DEVICES=all
NVIDIA_REQUIRE_CUDA=cuda>=12.1 brand=tesla,driver>=470,driver<471 brand=unknown,driver>=470,driver<471 brand=nvidia,driver>=470,driver<471 brand=nvidiartx,driver>=470,driver<471 brand=geforce,driver>=470,driver<471 brand=geforcertx,driver>=470,driver<471 brand=quadro,driver>=470,driver<471 brand=quadrortx,driver>=470,driver<471 brand=titan,driver>=470,driver<471 brand=titanrtx,driver>=470,driver<471 brand=tesla,driver>=525,driver<526 brand=unknown,driver>=525,driver<526 brand=nvidia,driver>=525,driver<526 brand=nvidiartx,driver>=525,driver<526 brand=geforce,driver>=525,driver<526 brand=geforcertx,driver>=525,driver<526 brand=quadro,driver>=525,driver<526 brand=quadrortx,driver>=525,driver<526 brand=titan,driver>=525,driver<526 brand=titanrtx,driver>=525,driver<526
NCCL_VERSION=2.17.1-1
NVIDIA_DRIVER_CAPABILITIES=compute,utility
NVIDIA_PRODUCT_NAME=CUDA
VLLM_USAGE_SOURCE=production-docker-image
NVIDIA_CUDA_END_OF_LIFE=1
CUDA_VERSION=12.1.0
CUDA_VISIBLE_DEVICES=0,1,2,3
VLLM_ALLOW_RUNTIME_LORA_UPDATING=True
LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64
NCCL_CUMEM_ENABLE=0
TORCHINDUCTOR_COMPILE_THREADS=1
CUDA_MODULE_LOADING=LAZY
🐛 Describe the bug
Running an OpenAI-compatible server with both the --enable-lora and --num-scheduler-steps flags results in a "RuntimeError: LoRA is not enabled" exception, even when VLLM_USE_V1=0.
Command to reproduce the bug:
docker run --gpus all -e CUDA_VISIBLE_DEVICES=0,1,2,3 -e VLLM_ALLOW_RUNTIME_LORA_UPDATING=True -e VLLM_USE_V1=0 --ipc=host -p 8000:8000 -v /dev/shm:/dev/shm vllm/vllm-openai:v0.7.2 --model Qwen/Qwen2.5-Coder-32B-Instruct --tensor-parallel-size 4 --disable-log-stats --dtype bfloat16 --enable-lora --lora-modules dummy-adapter=<path to lora adapter> --max-model-len 4096 --num-scheduler-steps 8
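For reference, the same configuration can also be exercised without Docker through the offline Python API. The snippet below is my own minimal sketch, not part of the original report; it assumes the LLM constructor forwards enable_lora and num_scheduler_steps to the engine arguments the same way the CLI flags do, and the adapter path is a placeholder mirroring the redacted path above.

# Minimal offline sketch (assumption: LLM forwards these kwargs to EngineArgs).
from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest

llm = LLM(
    model="Qwen/Qwen2.5-Coder-32B-Instruct",
    tensor_parallel_size=4,
    dtype="bfloat16",
    max_model_len=4096,
    enable_lora=True,
    num_scheduler_steps=8,  # dropping this kwarg is the expected workaround
)

# "/path/to/lora-adapter" is a placeholder for a real LoRA adapter directory.
outputs = llm.generate(
    ["Write a hello world program in Python."],
    SamplingParams(max_tokens=64),
    lora_request=LoRARequest("dummy-adapter", 1, "/path/to/lora-adapter"),
)
print(outputs[0].outputs[0].text)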
Stack trace:
(VllmWorkerProcess pid=349) ERROR 02-17 01:38:54 multiproc_worker_utils.py:242] Exception in worker VllmWorkerProcess while processing method add_lora.
(VllmWorkerProcess pid=349) ERROR 02-17 01:38:54 multiproc_worker_utils.py:242] Traceback (most recent call last):
(VllmWorkerProcess pid=349) ERROR 02-17 01:38:54 multiproc_worker_utils.py:242] File "/usr/local/lib/python3.12/dist-packages/vllm/executor/multiproc_worker_utils.py", line 236, in _run_worker_process
(VllmWorkerProcess pid=349) ERROR 02-17 01:38:54 multiproc_worker_utils.py:242] output = run_method(worker, method, args, kwargs)
(VllmWorkerProcess pid=349) ERROR 02-17 01:38:54 multiproc_worker_utils.py:242] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorkerProcess pid=349) ERROR 02-17 01:38:54 multiproc_worker_utils.py:242] File "/usr/local/lib/python3.12/dist-packages/vllm/utils.py", line 2220, in run_method
(VllmWorkerProcess pid=349) ERROR 02-17 01:38:54 multiproc_worker_utils.py:242] return func(*args, **kwargs)
(VllmWorkerProcess pid=349) ERROR 02-17 01:38:54 multiproc_worker_utils.py:242] ^^^^^^^^^^^^^^^^^^^^^
(VllmWorkerProcess pid=349) ERROR 02-17 01:38:54 multiproc_worker_utils.py:242] File "/usr/local/lib/python3.12/dist-packages/vllm/worker/worker.py", line 454, in add_lora
(VllmWorkerProcess pid=349) ERROR 02-17 01:38:54 multiproc_worker_utils.py:242] return self.model_runner.add_lora(lora_request)
(VllmWorkerProcess pid=349) ERROR 02-17 01:38:54 multiproc_worker_utils.py:242] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorkerProcess pid=349) ERROR 02-17 01:38:54 multiproc_worker_utils.py:242] File "/usr/local/lib/python3.12/dist-packages/vllm/worker/model_runner.py", line 1367, in add_lora
(VllmWorkerProcess pid=349) ERROR 02-17 01:38:54 multiproc_worker_utils.py:242] raise RuntimeError("LoRA is not enabled.")
(VllmWorkerProcess pid=349) ERROR 02-17 01:38:54 multiproc_worker_utils.py:242] RuntimeError: LoRA is not enabled.
(VllmWorkerProcess pid=350) ERROR 02-17 01:38:54 multiproc_worker_utils.py:242] Exception in worker VllmWorkerProcess while processing method add_lora.
(VllmWorkerProcess pid=350) ERROR 02-17 01:38:54 multiproc_worker_utils.py:242] Traceback (most recent call last):
(VllmWorkerProcess pid=350) ERROR 02-17 01:38:54 multiproc_worker_utils.py:242] File "/usr/local/lib/python3.12/dist-packages/vllm/executor/multiproc_worker_utils.py", line 236, in _run_worker_process
(VllmWorkerProcess pid=350) ERROR 02-17 01:38:54 multiproc_worker_utils.py:242] output = run_method(worker, method, args, kwargs)
(VllmWorkerProcess pid=350) ERROR 02-17 01:38:54 multiproc_worker_utils.py:242] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorkerProcess pid=350) ERROR 02-17 01:38:54 multiproc_worker_utils.py:242] File "/usr/local/lib/python3.12/dist-packages/vllm/utils.py", line 2220, in run_method
(VllmWorkerProcess pid=350) ERROR 02-17 01:38:54 multiproc_worker_utils.py:242] return func(*args, **kwargs)
(VllmWorkerProcess pid=350) ERROR 02-17 01:38:54 multiproc_worker_utils.py:242] ^^^^^^^^^^^^^^^^^^^^^
(VllmWorkerProcess pid=350) ERROR 02-17 01:38:54 multiproc_worker_utils.py:242] File "/usr/local/lib/python3.12/dist-packages/vllm/worker/worker.py", line 454, in add_lora
(VllmWorkerProcess pid=350) ERROR 02-17 01:38:54 multiproc_worker_utils.py:242] return self.model_runner.add_lora(lora_request)
(VllmWorkerProcess pid=350) ERROR 02-17 01:38:54 multiproc_worker_utils.py:242] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorkerProcess pid=350) ERROR 02-17 01:38:54 multiproc_worker_utils.py:242] File "/usr/local/lib/python3.12/dist-packages/vllm/worker/model_runner.py", line 1367, in add_lora
(VllmWorkerProcess pid=350) ERROR 02-17 01:38:54 multiproc_worker_utils.py:242] raise RuntimeError("LoRA is not enabled.")
(VllmWorkerProcess pid=350) ERROR 02-17 01:38:54 multiproc_worker_utils.py:242] RuntimeError: LoRA is not enabled.
(VllmWorkerProcess pid=351) ERROR 02-17 01:38:54 multiproc_worker_utils.py:242] Exception in worker VllmWorkerProcess while processing method add_lora.
(VllmWorkerProcess pid=351) ERROR 02-17 01:38:54 multiproc_worker_utils.py:242] Traceback (most recent call last):
(VllmWorkerProcess pid=351) ERROR 02-17 01:38:54 multiproc_worker_utils.py:242] File "/usr/local/lib/python3.12/dist-packages/vllm/executor/multiproc_worker_utils.py", line 236, in _run_worker_process
(VllmWorkerProcess pid=351) ERROR 02-17 01:38:54 multiproc_worker_utils.py:242] output = run_method(worker, method, args, kwargs)
(VllmWorkerProcess pid=351) ERROR 02-17 01:38:54 multiproc_worker_utils.py:242] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorkerProcess pid=351) ERROR 02-17 01:38:54 multiproc_worker_utils.py:242] File "/usr/local/lib/python3.12/dist-packages/vllm/utils.py", line 2220, in run_method
(VllmWorkerProcess pid=351) ERROR 02-17 01:38:54 multiproc_worker_utils.py:242] return func(*args, **kwargs)
(VllmWorkerProcess pid=351) ERROR 02-17 01:38:54 multiproc_worker_utils.py:242] ^^^^^^^^^^^^^^^^^^^^^
(VllmWorkerProcess pid=351) ERROR 02-17 01:38:54 multiproc_worker_utils.py:242] File "/usr/local/lib/python3.12/dist-packages/vllm/worker/worker.py", line 454, in add_lora
(VllmWorkerProcess pid=351) ERROR 02-17 01:38:54 multiproc_worker_utils.py:242] return self.model_runner.add_lora(lora_request)
(VllmWorkerProcess pid=351) ERROR 02-17 01:38:54 multiproc_worker_utils.py:242] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorkerProcess pid=351) ERROR 02-17 01:38:54 multiproc_worker_utils.py:242] File "/usr/local/lib/python3.12/dist-packages/vllm/worker/model_runner.py", line 1367, in add_lora
(VllmWorkerProcess pid=351) ERROR 02-17 01:38:54 multiproc_worker_utils.py:242] raise RuntimeError("LoRA is not enabled.")
(VllmWorkerProcess pid=351) ERROR 02-17 01:38:54 multiproc_worker_utils.py:242] RuntimeError: LoRA is not enabled.
INFO 02-17 01:38:55 multiproc_worker_utils.py:141] Terminating local vLLM worker processes
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 911, in <module>
uvloop.run(run_server(args))
File "/usr/local/lib/python3.12/dist-packages/uvloop/__init__.py", line 109, in run
return __asyncio.run(
^^^^^^^^^^^^^^
File "/usr/lib/python3.12/asyncio/runners.py", line 195, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
File "/usr/local/lib/python3.12/dist-packages/uvloop/__init__.py", line 61, in wrapper
return await main
^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 879, in run_server
await init_app_state(engine_client, model_config, app.state, args)
File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 765, in init_app_state
await state.openai_serving_models.init_static_loras()
File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/serving_models.py", line 96, in init_static_loras
raise ValueError(load_result.message)
ValueError: LoRA is not enabled.
/usr/lib/python3.12/multiprocessing/resource_tracker.py:255: UserWarning: resource_tracker: There appear to be 1 leaked shared_memory objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d '
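Reading the trace, the worker-side failure appears to be a guard in model_runner.add_lora that fires because the model runner never received a LoRA manager when multi-step scheduling is active, even though --enable-lora was passed. The sketch below is only my paraphrase of the logic implied by the traceback, not the actual vLLM source; the add_adapter() call on the manager is an assumed name.

# Paraphrase of the guard implied by the traceback (not the real vLLM code);
# add_adapter() is an assumed method name on the worker's LoRA manager.
class ModelRunnerGuardSketch:
    def __init__(self, lora_manager=None):
        # With --num-scheduler-steps > 1 the multi-step runner apparently
        # leaves this as None, even when --enable-lora is set.
        self.lora_manager = lora_manager

    def add_lora(self, lora_request):
        if not self.lora_manager:
            # This is the exact error surfaced by every worker above.
            raise RuntimeError("LoRA is not enabled.")
        return self.lora_manager.add_adapter(lora_request)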