Status: Closed as not planned
Labels: bug (Something isn't working), stale (Over 90 days of inactivity)
Description
Your current environment
The output of python collect_env.py
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 22.04.5 LTS (x86_64)
GCC version : (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version : Could not collect
CMake version : Could not collect
Libc version : glibc-2.35
==============================
PyTorch Info
==============================
PyTorch version : 2.7.0+cu126
Is debug build : False
CUDA used to build PyTorch : 12.6
ROCM used to build PyTorch : N/A
==============================
Python Environment
==============================
Python version : 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0] (64-bit runtime)
Python platform : Linux-5.15.0-139-generic-x86_64-with-glibc2.35
==============================
CUDA / GPU Info
==============================
Is CUDA available : True
CUDA runtime version : 12.6.85
CUDA_MODULE_LOADING set to : LAZY
GPU models and configuration :
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version : 550.163.01
cuDNN version : Could not collect
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
==============================
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 224
On-line CPU(s) list: 0-223
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8480+
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 56
Socket(s): 2
Stepping: 8
BogoMIPS: 4000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm arat pln pts avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 5.3 MiB (112 instances)
L1i cache: 3.5 MiB (112 instances)
L2 cache: 224 MiB (112 instances)
L3 cache: 210 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-55,112-167
NUMA node1 CPU(s): 56-111,168-223
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
==============================
Versions of relevant libraries
==============================
[pip3] numpy==2.2.6
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-cufile-cu12==1.11.1.6
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.26.2
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] pyzmq==26.4.0
[pip3] torch==2.7.0
[pip3] torchaudio==2.7.0
[pip3] torchvision==0.22.0
[pip3] transformers==4.52.4
[pip3] triton==3.3.0
[conda] Could not collect
==============================
vLLM Info
==============================
ROCM Version : Could not collect
Neuron SDK Version : N/A
vLLM Version : 0.9.1
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7 NIC0 NIC1 NIC2 NIC3 NIC4 NIC5 NIC6 NIC7 NIC8 NIC9 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X NV18 NV18 NV18 NV18 NV18 NV18 NV18 PIX NODE NODE NODE NODE NODE SYS SYS SYS SYS 0-55,112-167 0 N/A
GPU1 NV18 X NV18 NV18 NV18 NV18 NV18 NV18 NODE PIX NODE NODE NODE NODE SYS SYS SYS SYS 0-55,112-167 0 N/A
GPU2 NV18 NV18 X NV18 NV18 NV18 NV18 NV18 NODE NODE PIX NODE NODE NODE SYS SYS SYS SYS 0-55,112-167 0 N/A
GPU3 NV18 NV18 NV18 X NV18 NV18 NV18 NV18 NODE NODE NODE NODE NODE PIX SYS SYS SYS SYS 0-55,112-167 0 N/A
GPU4 NV18 NV18 NV18 NV18 X NV18 NV18 NV18 SYS SYS SYS SYS SYS SYS PIX NODE NODE NODE 56-111,168-223 1 N/A
GPU5 NV18 NV18 NV18 NV18 NV18 X NV18 NV18 SYS SYS SYS SYS SYS SYS NODE PIX NODE NODE 56-111,168-223 1 N/A
GPU6 NV18 NV18 NV18 NV18 NV18 NV18 X NV18 SYS SYS SYS SYS SYS SYS NODE NODE PIX NODE 56-111,168-223 1 N/A
GPU7 NV18 NV18 NV18 NV18 NV18 NV18 NV18 X SYS SYS SYS SYS SYS SYS NODE NODE NODE PIX 56-111,168-223 1 N/A
NIC0 PIX NODE NODE NODE SYS SYS SYS SYS X NODE NODE NODE NODE NODE SYS SYS SYS SYS
NIC1 NODE PIX NODE NODE SYS SYS SYS SYS NODE X NODE NODE NODE NODE SYS SYS SYS SYS
NIC2 NODE NODE PIX NODE SYS SYS SYS SYS NODE NODE X NODE NODE NODE SYS SYS SYS SYS
NIC3 NODE NODE NODE NODE SYS SYS SYS SYS NODE NODE NODE X PXB NODE SYS SYS SYS SYS
NIC4 NODE NODE NODE NODE SYS SYS SYS SYS NODE NODE NODE PXB X NODE SYS SYS SYS SYS
NIC5 NODE NODE NODE PIX SYS SYS SYS SYS NODE NODE NODE NODE NODE X SYS SYS SYS SYS
NIC6 SYS SYS SYS SYS PIX NODE NODE NODE SYS SYS SYS SYS SYS SYS X NODE NODE NODE
NIC7 SYS SYS SYS SYS NODE PIX NODE NODE SYS SYS SYS SYS SYS SYS NODE X NODE NODE
NIC8 SYS SYS SYS SYS NODE NODE PIX NODE SYS SYS SYS SYS SYS SYS NODE NODE X NODE
NIC9 SYS SYS SYS SYS NODE NODE NODE PIX SYS SYS SYS SYS SYS SYS NODE NODE NODE X
🐛 Describe the bug
A streaming chat completion request crashes the engine with IndexError: pop from empty list in the Mamba cache slot allocator (full traceback below). After this crash the whole server goes down, which is also unexpected: a single failed request shouldn't take the engine with it.

The server was launched with:

vllm serve "hf-100/Jamba-1.6-large-Spellbound-StoryWriter-398B94A-instruct-0.1-chkpt-468" --host 0.0.0.0 --port 8000 --gpu-memory-utilization .95 --max-model-len 20000 --pipeline-parallel-size 8 --quantization experts_int8
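The crash is triggered by a streaming chat completion; a minimal client along these lines (illustrative, not the exact code I ran) exercises the `chat_completion_stream_generator` path seen in the traceback:

```python
# Illustrative reproduction client (not part of the original logs):
# a streaming chat completion against the OpenAI-compatible server
# started by the `vllm serve` command above, assumed on localhost:8000.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

stream = client.chat.completions.create(
    model="hf-100/Jamba-1.6-large-Spellbound-StoryWriter-398B94A-instruct-0.1-chkpt-468",
    messages=[{"role": "user", "content": "Write a short story."}],
    stream=True,
)
for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="")
```

The server then logs the following and goes down: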
ERROR 06-14 04:44:54 [serving_chat.py:911] raise result
ERROR 06-14 04:44:54 [serving_chat.py:911] File "/home/ubuntu/.local/lib/python3.10/site-packages/vllm/entrypoints/openai/serving_chat.py", line 481, in chat_completion_stream_generator
ERROR 06-14 04:44:54 [serving_chat.py:911] async for res in result_generator:
ERROR 06-14 04:44:54 [serving_chat.py:911] File "/home/ubuntu/.local/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 976, in generate
ERROR 06-14 04:44:54 [serving_chat.py:911] async for output in await self.add_request(
ERROR 06-14 04:44:54 [serving_chat.py:911] File "/home/ubuntu/.local/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 115, in generator
[... the three frames above (chat_completion_stream_generator -> generate -> generator) repeat 20 more times; truncated for readability ...]
ERROR 06-14 04:44:54 [serving_chat.py:911] raise result
ERROR 06-14 04:44:54 [serving_chat.py:911] File "/home/ubuntu/.local/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 57, in _log_task_completion
ERROR 06-14 04:44:54 [serving_chat.py:911] return_value = task.result()
ERROR 06-14 04:44:54 [serving_chat.py:911] File "/home/ubuntu/.local/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 834, in run_engine_loop
ERROR 06-14 04:44:54 [serving_chat.py:911] result = task.result()
ERROR 06-14 04:44:54 [serving_chat.py:911] File "/home/ubuntu/.local/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 757, in engine_step
ERROR 06-14 04:44:54 [serving_chat.py:911] request_outputs = await self.engine.step_async(virtual_engine)
ERROR 06-14 04:44:54 [serving_chat.py:911] File "/home/ubuntu/.local/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 355, in step_async
ERROR 06-14 04:44:54 [serving_chat.py:911] outputs = await self.model_executor.execute_model_async(
ERROR 06-14 04:44:54 [serving_chat.py:911] File "/home/ubuntu/.local/lib/python3.10/site-packages/vllm/executor/executor_base.py", line 370, in execute_model_async
ERROR 06-14 04:44:54 [serving_chat.py:911] return await self._driver_execute_model_async(execute_model_req)
ERROR 06-14 04:44:54 [serving_chat.py:911] File "/home/ubuntu/.local/lib/python3.10/site-packages/vllm/executor/mp_distributed_executor.py", line 234, in _driver_execute_model_async
ERROR 06-14 04:44:54 [serving_chat.py:911] results = await asyncio.gather(*tasks)
ERROR 06-14 04:44:54 [serving_chat.py:911] File "/home/ubuntu/.local/lib/python3.10/site-packages/vllm/utils.py", line 1672, in _run_task_with_lock
ERROR 06-14 04:44:54 [serving_chat.py:911] return await task(*args, **kwargs)
ERROR 06-14 04:44:54 [serving_chat.py:911] File "/usr/lib/python3.10/concurrent/futures/thread.py", line 58, in run
ERROR 06-14 04:44:54 [serving_chat.py:911] result = self.fn(*self.args, **self.kwargs)
ERROR 06-14 04:44:54 [serving_chat.py:911] File "/home/ubuntu/.local/lib/python3.10/site-packages/vllm/worker/worker_base.py", line 421, in execute_model
ERROR 06-14 04:44:54 [serving_chat.py:911] output = self.model_runner.execute_model(
ERROR 06-14 04:44:54 [serving_chat.py:911] File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
ERROR 06-14 04:44:54 [serving_chat.py:911] return func(*args, **kwargs)
ERROR 06-14 04:44:54 [serving_chat.py:911] File "/home/ubuntu/.local/lib/python3.10/site-packages/vllm/worker/model_runner.py", line 1844, in execute_model
ERROR 06-14 04:44:54 [serving_chat.py:911] hidden_or_intermediate_states = model_executable(
ERROR 06-14 04:44:54 [serving_chat.py:911] File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
ERROR 06-14 04:44:54 [serving_chat.py:911] return self._call_impl(*args, **kwargs)
ERROR 06-14 04:44:54 [serving_chat.py:911] File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
ERROR 06-14 04:44:54 [serving_chat.py:911] return forward_call(*args, **kwargs)
ERROR 06-14 04:44:54 [serving_chat.py:911] File "/home/ubuntu/.local/lib/python3.10/site-packages/vllm/model_executor/models/jamba.py", line 433, in forward
ERROR 06-14 04:44:54 [serving_chat.py:911] mamba_cache_params = self.mamba_cache.current_run_tensors(**kwargs)
ERROR 06-14 04:44:54 [serving_chat.py:911] File "/home/ubuntu/.local/lib/python3.10/site-packages/vllm/model_executor/models/mamba_cache.py", line 63, in current_run_tensors
ERROR 06-14 04:44:54 [serving_chat.py:911] cache_tensors, state_indices_tensor = super().current_run_tensors(
ERROR 06-14 04:44:54 [serving_chat.py:911] File "/home/ubuntu/.local/lib/python3.10/site-packages/vllm/model_executor/models/constant_size_cache.py", line 44, in current_run_tensors
ERROR 06-14 04:44:54 [serving_chat.py:911] state_indices = self._prepare_current_run_cache(
ERROR 06-14 04:44:54 [serving_chat.py:911] File "/home/ubuntu/.local/lib/python3.10/site-packages/vllm/model_executor/models/constant_size_cache.py", line 123, in _prepare_current_run_cache
ERROR 06-14 04:44:54 [serving_chat.py:911] return [
ERROR 06-14 04:44:54 [serving_chat.py:911] File "/home/ubuntu/.local/lib/python3.10/site-packages/vllm/model_executor/models/constant_size_cache.py", line 124, in <listcomp>
ERROR 06-14 04:44:54 [serving_chat.py:911] self._assign_seq_id_to_cache_index(req_id, seq_id,
ERROR 06-14 04:44:54 [serving_chat.py:911] File "/home/ubuntu/.local/lib/python3.10/site-packages/vllm/model_executor/models/constant_size_cache.py", line 102, in _assign_seq_id_to_cache_index
ERROR 06-14 04:44:54 [serving_chat.py:911] destination_index = self.free_cache_indices.pop()
ERROR 06-14 04:44:54 [serving_chat.py:911] IndexError: pop from empty list
(VllmWorkerProcess pid=19604) ERROR 06-14 04:44:54 [multiproc_worker_utils.py:239] Exception in worker VllmWorkerProcess while processing method execute_model.
(VllmWorkerProcess pid=19604) ERROR 06-14 04:44:54 [multiproc_worker_utils.py:239] Traceback (most recent call last):
(VllmWorkerProcess pid=19604) ERROR 06-14 04:44:54 [multiproc_worker_utils.py:239] File "/home/ubuntu/.local/lib/python3.10/site-packages/vllm/executor/multiproc_worker_utils.py", line 233, in _run_worker_process
(VllmWorkerProcess pid=19604) ERROR 06-14 04:44:54 [multiproc_worker_utils.py:239] output = run_method(worker, method, args, kwargs)
(VllmWorkerProcess pid=19604) ERROR 06-14 04:44:54 [multiproc_worker_utils.py:239] File "/home/ubuntu/.local/lib/python3.10/site-packages/vllm/utils.py", line 2671, in run_method
(VllmWorkerProcess pid=19604) ERROR 06-14 04:44:54 [multiproc_worker_utils.py:239] return func(*args, **kwargs)
(VllmWorkerProcess pid=19604) ERROR 06-14 04:44:54 [multiproc_worker_utils.py:239] File "/home/ubuntu/.local/lib/python3.10/site-packages/vllm/worker/worker_base.py", line 421, in execute_model
(VllmWorkerProcess pid=19604) ERROR 06-14 04:44:54 [multiproc_worker_utils.py:239] output = self.model_runner.execute_model(
(VllmWorkerProcess pid=19604) ERROR 06-14 04:44:54 [multiproc_worker_utils.py:239] File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
(VllmWorkerProcess pid=19604) ERROR 06-14 04:44:54 [multiproc_worker_utils.py:239] return func(*args, **kwargs)
(VllmWorkerProcess pid=19604) ERROR 06-14 04:44:54 [multiproc_worker_utils.py:239] File "/home/ubuntu/.local/lib/python3.10/site-packages/vllm/worker/model_runner.py", line 1844, in execute_model
(VllmWorkerProcess pid=19604) ERROR 06-14 04:44:54 [multiproc_worker_utils.py:239] hidden_or_intermediate_states = model_executable(
(VllmWorkerProcess pid=19604) ERROR 06-14 04:44:54 [multiproc_worker_utils.py:239] File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
(VllmWorkerProcess pid=19604) ERROR 06-14 04:44:54 [multiproc_worker_utils.py:239] return self._call_impl(*args, **kwargs)
(VllmWorkerProcess pid=19604) ERROR 06-14 04:44:54 [multiproc_worker_utils.py:239] File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
(VllmWorkerProcess pid=19604) ERROR 06-14 04:44:54 [multiproc_worker_utils.py:239] return forward_call(*args, **kwargs)
(VllmWorkerProcess pid=19604) ERROR 06-14 04:44:54 [multiproc_worker_utils.py:239] File "/home/ubuntu/.local/lib/python3.10/site-packages/vllm/model_executor/models/jamba.py", line 433, in forward
(VllmWorkerProcess pid=19604) ERROR 06-14 04:44:54 [multiproc_worker_utils.py:239] mamba_cache_params = self.mamba_cache.current_run_tensors(**kwargs)
(VllmWorkerProcess pid=19604) ERROR 06-14 04:44:54 [multiproc_worker_utils.py:239] File "/home/ubuntu/.local/lib/python3.10/site-packages/vllm/model_executor/models/mamba_cache.py", line 63, in current_run_tensors
(VllmWorkerProcess pid=19604) ERROR 06-14 04:44:54 [multiproc_worker_utils.py:239] cache_tensors, state_indices_tensor = super().current_run_tensors(
(VllmWorkerProcess pid=19604) ERROR 06-14 04:44:54 [multiproc_worker_utils.py:239] File "/home/ubuntu/.local/lib/python3.10/site-packages/vllm/model_executor/models/constant_size_cache.py", line 44, in current_run_tensors
(VllmWorkerProcess pid=19604) ERROR 06-14 04:44:54 [multiproc_worker_utils.py:239] state_indices = self._prepare_current_run_cache(
(VllmWorkerProcess pid=19604) ERROR 06-14 04:44:54 [multiproc_worker_utils.py:239] File "/home/ubuntu/.local/lib/python3.10/site-packages/vllm/model_executor/models/constant_size_cache.py", line 123, in _prepare_current_run_cache
(VllmWorkerProcess pid=19604) ERROR 06-14 04:44:54 [multiproc_worker_utils.py:239] return [
(VllmWorkerProcess pid=19604) ERROR 06-14 04:44:54 [multiproc_worker_utils.py:239] File "/home/ubuntu/.local/lib/python3.10/site-packages/vllm/model_executor/models/constant_size_cache.py", line 124, in <listcomp>
(VllmWorkerProcess pid=19604) ERROR 06-14 04:44:54 [multiproc_worker_utils.py:239] self._assign_seq_id_to_cache_index(req_id, seq_id,
(VllmWorkerProcess pid=19604) ERROR 06-14 04:44:54 [multiproc_worker_utils.py:239] File "/home/ubuntu/.local/lib/python3.10/site-packages/vllm/model_executor/models/constant_size_cache.py", line 102, in _assign_seq_id_to_cache_index
(VllmWorkerProcess pid=19604) ERROR 06-14 04:44:54 [multiproc_worker_utils.py:239] destination_index = self.free_cache_indices.pop()
(VllmWorkerProcess pid=19604) ERROR 06-14 04:44:54 [multiproc_worker_utils.py:239] IndexError: pop from empty list
INFO: 34.34.253.248:0 - "POST /v1/chat/completions HTTP/1.1" 500 Internal Server Error
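From the traceback, the failing frame is `self.free_cache_indices.pop()` in vllm/model_executor/models/constant_size_cache.py: the Mamba state cache has a fixed pool of slots tracked in a free list, and `_assign_seq_id_to_cache_index` pops one index per new sequence. My reading is that when a new sequence needs a slot and every slot is already occupied (or slots are not released back in time), the pop on an empty list raises IndexError, which propagates out of `run_engine_loop` and kills the whole engine rather than failing just the offending request. A simplified sketch of the pattern (hypothetical names except `free_cache_indices`, which is the attribute in the traceback):

```python
# Simplified sketch of the failing pattern; `max_slots`, `assign`, and
# `release` are illustrative names, not vLLM's actual API.
class ConstantSizeCacheSketch:
    def __init__(self, max_slots: int):
        # Fixed pool of cache slots; free slot indices kept on a list.
        self.free_cache_indices = list(range(max_slots))
        self.assigned: dict[tuple[str, int], int] = {}

    def assign(self, req_id: str, seq_id: int) -> int:
        key = (req_id, seq_id)
        if key not in self.assigned:
            # The line that blows up in the report: pop() on an empty
            # list raises IndexError once every slot is taken.
            destination_index = self.free_cache_indices.pop()
            self.assigned[key] = destination_index
        return self.assigned[key]

    def release(self, req_id: str, seq_id: int) -> None:
        # Slots must be returned here when a sequence finishes; if this
        # is skipped or lags behind admission, the pool drains.
        self.free_cache_indices.append(self.assigned.pop((req_id, seq_id)))


cache = ConstantSizeCacheSketch(max_slots=2)
cache.assign("req-0", 0)
cache.assign("req-1", 0)
cache.assign("req-2", 0)  # IndexError: pop from empty list
```

Guarding that pop (for example, refusing admission when the free list is empty and surfacing a recoverable per-request error) would also address the second half of this report: one exhausted cache shouldn't take the whole server down.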