
[Bug]: RTX 5080 gets CUDA error: no kernel image is available for execution on the device #19906

Open

@thangnguyenduc1-vti

Description

Your current environment

The output of python collect_env.py
INFO 06-20 12:32:13 [__init__.py:244] Automatically detected platform cuda.
Collecting environment information...
==============================
        System Info
==============================
OS                           : Ubuntu 22.04.5 LTS (x86_64)
GCC version                  : (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version                : Could not collect
CMake version                : Could not collect
Libc version                 : glibc-2.35

==============================
       PyTorch Info
==============================
PyTorch version              : 2.7.0+cu128
Is debug build               : False
CUDA used to build PyTorch   : 12.8
ROCM used to build PyTorch   : N/A

==============================
      Python Environment
==============================
Python version               : 3.12.11 (main, Jun 12 2025, 12:40:51) [Clang 20.1.4 ] (64-bit runtime)
Python platform              : Linux-5.15.0-141-generic-x86_64-with-glibc2.35

==============================
       CUDA / GPU Info
==============================
Is CUDA available            : True
CUDA runtime version         : 12.8.93
CUDA_MODULE_LOADING set to   : LAZY
GPU models and configuration :
GPU 0: NVIDIA GeForce RTX 5080
GPU 1: NVIDIA GeForce RTX 5080

Nvidia driver version        : 570.86.10
cuDNN version                : Could not collect
HIP runtime version          : N/A
MIOpen runtime version       : N/A
Is XNNPACK available         : True

==============================
          CPU Info
==============================
Architecture:                         x86_64
CPU op-mode(s):                       32-bit, 64-bit
Address sizes:                        46 bits physical, 48 bits virtual
Byte Order:                           Little Endian
CPU(s):                               32
On-line CPU(s) list:                  0-31
Vendor ID:                            GenuineIntel
Model name:                           Intel(R) Core(TM) i9-14900K
CPU family:                           6
Model:                                183
Thread(s) per core:                   2
Core(s) per socket:                   24
Socket(s):                            1
Stepping:                             1
CPU max MHz:                          6000.0000
CPU min MHz:                          800.0000
BogoMIPS:                             6374.40
Flags:                                fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr flush_l1d arch_capabilities
Virtualization:                       VT-x
L1d cache:                            896 KiB (24 instances)
L1i cache:                            1.3 MiB (24 instances)
L2 cache:                             32 MiB (12 instances)
L3 cache:                             36 MiB (1 instance)
NUMA node(s):                         1
NUMA node0 CPU(s):                    0-31
Vulnerability Gather data sampling:   Not affected
Vulnerability Itlb multihit:          Not affected
Vulnerability L1tf:                   Not affected
Vulnerability Mds:                    Not affected
Vulnerability Meltdown:               Not affected
Vulnerability Mmio stale data:        Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed:               Not affected
Vulnerability Spec rstack overflow:   Not affected
Vulnerability Spec store bypass:      Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1:             Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:             Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds:                  Not affected
Vulnerability Tsx async abort:        Not affected

==============================
Versions of relevant libraries
==============================
[pip3] numpy==2.2.6
[pip3] nvidia-cublas-cu12==12.8.3.14
[pip3] nvidia-cuda-cupti-cu12==12.8.57
[pip3] nvidia-cuda-nvrtc-cu12==12.8.61
[pip3] nvidia-cuda-runtime-cu12==12.8.57
[pip3] nvidia-cudnn-cu12==9.7.1.26
[pip3] nvidia-cufft-cu12==11.3.3.41
[pip3] nvidia-cufile-cu12==1.13.0.11
[pip3] nvidia-curand-cu12==10.3.9.55
[pip3] nvidia-cusolver-cu12==11.7.2.55
[pip3] nvidia-cusparse-cu12==12.5.7.53
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.26.2
[pip3] nvidia-nvjitlink-cu12==12.8.61
[pip3] nvidia-nvtx-cu12==12.8.55
[pip3] pyzmq==27.0.0
[pip3] torch==2.7.0+cu128
[pip3] torchaudio==2.7.0+cu128
[pip3] torchvision==0.22.0+cu128
[pip3] transformers==4.52.4
[pip3] triton==3.3.0
[conda] Could not collect

==============================
         vLLM Info
==============================
ROCM Version                 : Could not collect
Neuron SDK Version           : N/A
vLLM Version                 : 0.9.1
vLLM Build Flags:
  CUDA Archs: 12.0; ROCm: Disabled; Neuron: Disabled
GPU Topology:
        GPU0    GPU1    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      PHB     0-31    0               N/A
GPU1    PHB      X      0-31    0               N/A

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

==============================
     Environment Variables
==============================
TORCH_CUDA_ARCH_LIST=12.0
LD_LIBRARY_PATH=/usr/local/cuda-12.8/lib64
NCCL_CUMEM_ENABLE=0
PYTORCH_NVML_BASED_CUDA_CHECK=1
TORCHINDUCTOR_COMPILE_THREADS=1
CUDA_MODULE_LOADING=LAZY

🐛 Describe the bug

I am trying to run Qwen/Qwen2.5-14B-Instruct-AWQ on my server, using the command below:

python3 -m vllm.entrypoints.openai.api_server --model Qwen/Qwen2.5-14B-Instruct-AWQ --port 8000 --tensor-parallel-size 2 --trust-remote-code --quantization awq

and then I get this error:

Got RuntimeError: Worker failed with error 'CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect
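For reference, here is a quick check (a minimal diagnostic sketch, assuming only the PyTorch 2.7.0+cu128 install listed above) that prints the GPU's compute capability and the CUDA architectures the installed torch binaries were compiled for; the RTX 5080 reports capability (12, 0), so an sm_120 (or compatible PTX) entry would need to appear in the compiled arch list for kernels to launch:

import torch

# Compute capability reported by the GPU; expected to be (12, 0) on an RTX 5080
print("device capability:", torch.cuda.get_device_capability(0))

# CUDA architectures the installed torch build ships kernels for
print("compiled arch list:", torch.cuda.get_arch_list())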

Full log here

INFO 06-20 12:26:06 [__init__.py:244] Automatically detected platform cuda.
INFO 06-20 12:26:12 [api_server.py:1287] vLLM API server version 0.9.1
INFO 06-20 12:26:12 [cli_args.py:309] non-default args: {'model': 'Qwen/Qwen2.5-14B-Instruct-AWQ', 'trust_remote_code': True, 'quantization': 'awq', 'tensor_parallel_size': 2}
INFO 06-20 12:26:27 [config.py:823] This model supports multiple tasks: {'reward', 'generate', 'classify', 'embed', 'score'}. Defaulting to 'generate'.
INFO 06-20 12:26:32 [awq_marlin.py:120] Detected that the model can run with awq_marlin, however you specified quantization=awq explicitly, so forcing awq. Use quantization=awq_marlin for faster inference
WARNING 06-20 12:26:32 [config.py:931] awq quantization is not fully optimized yet. The speed can be slower than non-quantized models.
INFO 06-20 12:26:32 [config.py:1946] Defaulting to use mp for distributed inference
INFO 06-20 12:26:32 [config.py:2195] Chunked prefill is enabled with max_num_batched_tokens=2048.
WARNING 06-20 12:26:34 [env_override.py:17] NCCL_CUMEM_ENABLE is set to 0, skipping override. This may increase memory overhead with cudagraph+allreduce: https://github.com/NVIDIA/nccl/issues/1234
INFO 06-20 12:26:35 [__init__.py:244] Automatically detected platform cuda.
INFO 06-20 12:26:36 [core.py:455] Waiting for init message from front-end.
INFO 06-20 12:26:36 [core.py:70] Initializing a V1 LLM engine (v0.9.1) with config: model='Qwen/Qwen2.5-14B-Instruct-AWQ', speculative_config=None, tokenizer='Qwen/Qwen2.5-14B-Instruct-AWQ', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config={}, tokenizer_revision=None, trust_remote_code=True, dtype=torch.float16, max_seq_len=32768, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=2, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=awq, enforce_eager=False, kv_cache_dtype=auto,  device_config=cuda, decoding_config=DecodingConfig(backend='auto', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_backend=''), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None), seed=0, served_model_name=Qwen/Qwen2.5-14B-Instruct-AWQ, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=True, pooler_config=None, compilation_config={"level":3,"debug_dump_path":"","cache_dir":"","backend":"","custom_ops":["none"],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output"],"use_inductor":true,"compile_sizes":[],"inductor_compile_config":{"enable_auto_functionalized_v2":false},"inductor_passes":{},"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"cudagraph_copy_inputs":false,"full_cuda_graph":false,"max_capture_size":512,"local_cache_dir":null}
WARNING 06-20 12:26:36 [multiproc_worker_utils.py:307] Reducing Torch parallelism from 24 threads to 1 to avoid unnecessary CPU contention. Set OMP_NUM_THREADS in the external environment to tune this value as needed.
INFO 06-20 12:26:36 [shm_broadcast.py:289] vLLM message queue communication handle: Handle(local_reader_ranks=[0, 1], buffer_handle=(2, 16777216, 10, 'psm_7c13a51d'), local_subscribe_addr='ipc:///tmp/046d9817-67a4-449f-af92-67c408b9c435', remote_subscribe_addr=None, remote_addr_ipv6=False)
WARNING 06-20 12:26:36 [env_override.py:17] NCCL_CUMEM_ENABLE is set to 0, skipping override. This may increase memory overhead with cudagraph+allreduce: https://github.com/NVIDIA/nccl/issues/1234
WARNING 06-20 12:26:36 [env_override.py:17] NCCL_CUMEM_ENABLE is set to 0, skipping override. This may increase memory overhead with cudagraph+allreduce: https://github.com/NVIDIA/nccl/issues/1234
INFO 06-20 12:26:37 [__init__.py:244] Automatically detected platform cuda.
INFO 06-20 12:26:37 [__init__.py:244] Automatically detected platform cuda.
WARNING 06-20 12:26:39 [utils.py:2737] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x7fb6628b02f0>
(VllmWorker rank=1 pid=63591) INFO 06-20 12:26:39 [shm_broadcast.py:289] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_965adadc'), local_subscribe_addr='ipc:///tmp/db73406b-3281-4ea8-aa15-194602a78d4a', remote_subscribe_addr=None, remote_addr_ipv6=False)
WARNING 06-20 12:26:39 [utils.py:2737] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x7f14513e7800>
(VllmWorker rank=0 pid=63590) INFO 06-20 12:26:39 [shm_broadcast.py:289] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_8b931380'), local_subscribe_addr='ipc:///tmp/e6223c96-8e93-4e01-80b7-6d0a556be90c', remote_subscribe_addr=None, remote_addr_ipv6=False)
(VllmWorker rank=1 pid=63591) INFO 06-20 12:26:39 [utils.py:1126] Found nccl from library libnccl.so.2
(VllmWorker rank=0 pid=63590) INFO 06-20 12:26:39 [utils.py:1126] Found nccl from library libnccl.so.2
(VllmWorker rank=1 pid=63591) INFO 06-20 12:26:39 [pynccl.py:70] vLLM is using nccl==2.26.2
(VllmWorker rank=0 pid=63590) INFO 06-20 12:26:39 [pynccl.py:70] vLLM is using nccl==2.26.2
(VllmWorker rank=0 pid=63590) INFO 06-20 12:26:57 [custom_all_reduce_utils.py:246] reading GPU P2P access cache from /home/ubuntu/.cache/vllm/gpu_p2p_access_cache_for_0,1.json
(VllmWorker rank=1 pid=63591) INFO 06-20 12:26:57 [custom_all_reduce_utils.py:246] reading GPU P2P access cache from /home/ubuntu/.cache/vllm/gpu_p2p_access_cache_for_0,1.json
(VllmWorker rank=0 pid=63590) WARNING 06-20 12:26:57 [custom_all_reduce.py:147] Custom allreduce is disabled because your platform lacks GPU P2P capability or P2P test failed. To silence this warning, specify disable_custom_all_reduce=True explicitly.
(VllmWorker rank=1 pid=63591) WARNING 06-20 12:26:57 [custom_all_reduce.py:147] Custom allreduce is disabled because your platform lacks GPU P2P capability or P2P test failed. To silence this warning, specify disable_custom_all_reduce=True explicitly.
(VllmWorker rank=0 pid=63590) INFO 06-20 12:26:57 [shm_broadcast.py:289] vLLM message queue communication handle: Handle(local_reader_ranks=[1], buffer_handle=(1, 4194304, 6, 'psm_a6cd576c'), local_subscribe_addr='ipc:///tmp/b74e0718-bb7c-4c0e-9edc-24e31c69571b', remote_subscribe_addr=None, remote_addr_ipv6=False)
(VllmWorker rank=0 pid=63590) INFO 06-20 12:26:57 [parallel_state.py:1065] rank 0 in world size 2 is assigned as DP rank 0, PP rank 0, TP rank 0, EP rank 0
(VllmWorker rank=1 pid=63591) INFO 06-20 12:26:57 [parallel_state.py:1065] rank 1 in world size 2 is assigned as DP rank 0, PP rank 0, TP rank 1, EP rank 1
(VllmWorker rank=0 pid=63590) WARNING 06-20 12:26:57 [topk_topp_sampler.py:59] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
(VllmWorker rank=1 pid=63591) WARNING 06-20 12:26:57 [topk_topp_sampler.py:59] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
(VllmWorker rank=1 pid=63591) INFO 06-20 12:26:57 [gpu_model_runner.py:1595] Starting to load model Qwen/Qwen2.5-14B-Instruct-AWQ...
(VllmWorker rank=0 pid=63590) INFO 06-20 12:26:57 [gpu_model_runner.py:1595] Starting to load model Qwen/Qwen2.5-14B-Instruct-AWQ...
(VllmWorker rank=1 pid=63591) INFO 06-20 12:26:57 [gpu_model_runner.py:1600] Loading model from scratch...
(VllmWorker rank=0 pid=63590) INFO 06-20 12:26:57 [gpu_model_runner.py:1600] Loading model from scratch...
(VllmWorker rank=1 pid=63591) INFO 06-20 12:26:57 [cuda.py:252] Using Flash Attention backend on V1 engine.
(VllmWorker rank=0 pid=63590) INFO 06-20 12:26:57 [cuda.py:252] Using Flash Attention backend on V1 engine.
(VllmWorker rank=1 pid=63591) INFO 06-20 12:26:58 [weight_utils.py:292] Using model weights format ['*.safetensors']
(VllmWorker rank=0 pid=63590) INFO 06-20 12:26:58 [weight_utils.py:292] Using model weights format ['*.safetensors']
Loading safetensors checkpoint shards:   0% Completed | 0/3 [00:00<?, ?it/s]
Loading safetensors checkpoint shards:  33% Completed | 1/3 [00:00<00:00,  8.39it/s]
Loading safetensors checkpoint shards:  67% Completed | 2/3 [00:00<00:00,  4.12it/s]
Loading safetensors checkpoint shards: 100% Completed | 3/3 [00:00<00:00,  3.35it/s]
Loading safetensors checkpoint shards: 100% Completed | 3/3 [00:00<00:00,  3.69it/s]
(VllmWorker rank=0 pid=63590)
(VllmWorker rank=0 pid=63590) INFO 06-20 12:27:00 [default_loader.py:272] Loading weights took 0.84 seconds
(VllmWorker rank=0 pid=63590) INFO 06-20 12:27:01 [gpu_model_runner.py:1624] Model loading took 4.6720 GiB and 3.036429 seconds
(VllmWorker rank=1 pid=63591) INFO 06-20 12:27:05 [default_loader.py:272] Loading weights took 6.27 seconds
(VllmWorker rank=1 pid=63591) INFO 06-20 12:27:05 [gpu_model_runner.py:1624] Model loading took 4.6720 GiB and 7.922536 seconds
(VllmWorker rank=1 pid=63591) INFO 06-20 12:27:10 [backends.py:462] Using cache directory: /home/ubuntu/.cache/vllm/torch_compile_cache/cc159163dc/rank_1_0 for vLLM's torch.compile
(VllmWorker rank=1 pid=63591) INFO 06-20 12:27:10 [backends.py:472] Dynamo bytecode transform time: 4.61 s
(VllmWorker rank=0 pid=63590) INFO 06-20 12:27:10 [backends.py:462] Using cache directory: /home/ubuntu/.cache/vllm/torch_compile_cache/cc159163dc/rank_0_0 for vLLM's torch.compile
(VllmWorker rank=0 pid=63590) INFO 06-20 12:27:10 [backends.py:472] Dynamo bytecode transform time: 4.68 s
(VllmWorker rank=1 pid=63591) INFO 06-20 12:27:12 [backends.py:161] Cache the graph of shape None for later use
(VllmWorker rank=0 pid=63590) INFO 06-20 12:27:12 [backends.py:161] Cache the graph of shape None for later use
(VllmWorker rank=0 pid=63590) INFO 06-20 12:27:29 [backends.py:173] Compiling a graph for general shape takes 18.41 s
(VllmWorker rank=1 pid=63591) INFO 06-20 12:27:29 [backends.py:173] Compiling a graph for general shape takes 18.53 s
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527] WorkerProc hit an exception.
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527] Traceback (most recent call last):
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/vllm/v1/executor/multiproc_executor.py", line 522, in worker_busy_loop
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]     output = func(*args, **kwargs)
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]              ^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]     return func(*args, **kwargs)
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/vllm/v1/worker/gpu_worker.py", line 205, in determine_available_memory
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]     self.model_runner.profile_run()
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/vllm/v1/worker/gpu_model_runner.py", line 2012, in profile_run
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]     hidden_states = self._dummy_run(self.max_num_tokens)
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]     return func(*args, **kwargs)
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/vllm/v1/worker/gpu_model_runner.py", line 1847, in _dummy_run
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]     outputs = model(
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]               ^^^^^^
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]     return self._call_impl(*args, **kwargs)
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]     return forward_call(*args, **kwargs)
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/vllm/model_executor/models/qwen2.py", line 477, in forward
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]     hidden_states = self.model(input_ids, positions, intermediate_tensors,
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/vllm/compilation/decorators.py", line 239, in __call__
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]     output = self.compiled_callable(*args, **kwargs)
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 655, in _fn
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]     return fn(*args, **kwargs)
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/vllm/model_executor/models/qwen2.py", line 336, in forward
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]     def forward(
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]     return self._call_impl(*args, **kwargs)
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]     return forward_call(*args, **kwargs)
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 838, in _fn
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]     return fn(*args, **kwargs)
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/torch/fx/graph_module.py", line 830, in call_wrapped
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]     return self._wrapped_call(self, *args, **kwargs)
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/torch/fx/graph_module.py", line 406, in __call__
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]     raise e
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/torch/fx/graph_module.py", line 393, in __call__
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]     return super(self.cls, obj).__call__(*args, **kwargs)  # type: ignore[misc]
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]     return self._call_impl(*args, **kwargs)
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]     return forward_call(*args, **kwargs)
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]   File "<eval_with_key>.98", line 730, in forward
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]     submod_0 = self.submod_0(l_input_ids_, s0, l_self_modules_embed_tokens_parameters_weight_, l_self_modules_layers_modules_0_modules_input_layernorm_parameters_weight_, l_self_modules_layers_modules_0_modules_self_attn_modules_qkv_proj_parameters_qweight_, l_self_modules_layers_modules_0_modules_self_attn_modules_qkv_proj_parameters_scales_, l_self_modules_layers_modules_0_modules_self_attn_modules_qkv_proj_parameters_qzeros_, l_self_modules_layers_modules_0_modules_self_attn_modules_qkv_proj_parameters_bias_, l_positions_, l_self_modules_layers_modules_0_modules_self_attn_modules_rotary_emb_buffers_cos_sin_cache_);  l_input_ids_ = l_self_modules_embed_tokens_parameters_weight_ = l_self_modules_layers_modules_0_modules_input_layernorm_parameters_weight_ = l_self_modules_layers_modules_0_modules_self_attn_modules_qkv_proj_parameters_qweight_ = l_self_modules_layers_modules_0_modules_self_attn_modules_qkv_proj_parameters_scales_ = l_self_modules_layers_modules_0_modules_self_attn_modules_qkv_proj_parameters_qzeros_ = l_self_modules_layers_modules_0_modules_self_attn_modules_qkv_proj_parameters_bias_ = None
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/vllm/compilation/cuda_piecewise_backend.py", line 111, in __call__
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]     return self.compiled_graph_for_general_shape(*args)
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 838, in _fn
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]     return fn(*args, **kwargs)
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 1201, in forward
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]     return compiled_fn(full_args)
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 328, in runtime_wrapper
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]     all_outs = call_func_at_runtime_with_args(
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/utils.py", line 126, in call_func_at_runtime_with_args
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]     out = normalize_as_list(f(args))
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]                             ^^^^^^^
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 689, in inner_fn
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]     outs = compiled_fn(args)
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 495, in wrapper
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]     return compiled_fn(runtime_args)
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/torch/_inductor/output_code.py", line 460, in __call__
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]     return self.current_callable(inputs)
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/torch/_inductor/utils.py", line 2404, in run
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]     return model(new_inputs)
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]   File "/home/ubuntu/.cache/vllm/torch_compile_cache/cc159163dc/rank_0_0/inductor_cache/dl/cdlzwp6pqanp2hvteblxo3dha7n5vo43k2phaofsreeh3bu2eutd.py", line 401, in call
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]     buf7 = empty_strided_cuda((s0, 3584), (3584, 1), torch.float16)
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527] RuntimeError: CUDA error: no kernel image is available for execution on the device
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527] CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527] For debugging consider passing CUDA_LAUNCH_BLOCKING=1
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527] Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527] Traceback (most recent call last):
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/vllm/v1/executor/multiproc_executor.py", line 522, in worker_busy_loop
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]     output = func(*args, **kwargs)
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]              ^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]     return func(*args, **kwargs)
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/vllm/v1/worker/gpu_worker.py", line 205, in determine_available_memory
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]     self.model_runner.profile_run()
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/vllm/v1/worker/gpu_model_runner.py", line 2012, in profile_run
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]     hidden_states = self._dummy_run(self.max_num_tokens)
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]     return func(*args, **kwargs)
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/vllm/v1/worker/gpu_model_runner.py", line 1847, in _dummy_run
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]     outputs = model(
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]               ^^^^^^
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]     return self._call_impl(*args, **kwargs)
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]     return forward_call(*args, **kwargs)
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/vllm/model_executor/models/qwen2.py", line 477, in forward
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]     hidden_states = self.model(input_ids, positions, intermediate_tensors,
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/vllm/compilation/decorators.py", line 239, in __call__
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]     output = self.compiled_callable(*args, **kwargs)
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 655, in _fn
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]     return fn(*args, **kwargs)
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/vllm/model_executor/models/qwen2.py", line 336, in forward
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]     def forward(
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]     return self._call_impl(*args, **kwargs)
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]     return forward_call(*args, **kwargs)
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 838, in _fn
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]     return fn(*args, **kwargs)
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/torch/fx/graph_module.py", line 830, in call_wrapped
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]     return self._wrapped_call(self, *args, **kwargs)
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/torch/fx/graph_module.py", line 406, in __call__
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]     raise e
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/torch/fx/graph_module.py", line 393, in __call__
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]     return super(self.cls, obj).__call__(*args, **kwargs)  # type: ignore[misc]
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]     return self._call_impl(*args, **kwargs)
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]     return forward_call(*args, **kwargs)
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]   File "<eval_with_key>.98", line 730, in forward
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]     submod_0 = self.submod_0(l_input_ids_, s0, l_self_modules_embed_tokens_parameters_weight_, l_self_modules_layers_modules_0_modules_input_layernorm_parameters_weight_, l_self_modules_layers_modules_0_modules_self_attn_modules_qkv_proj_parameters_qweight_, l_self_modules_layers_modules_0_modules_self_attn_modules_qkv_proj_parameters_scales_, l_self_modules_layers_modules_0_modules_self_attn_modules_qkv_proj_parameters_qzeros_, l_self_modules_layers_modules_0_modules_self_attn_modules_qkv_proj_parameters_bias_, l_positions_, l_self_modules_layers_modules_0_modules_self_attn_modules_rotary_emb_buffers_cos_sin_cache_);  l_input_ids_ = l_self_modules_embed_tokens_parameters_weight_ = l_self_modules_layers_modules_0_modules_input_layernorm_parameters_weight_ = l_self_modules_layers_modules_0_modules_self_attn_modules_qkv_proj_parameters_qweight_ = l_self_modules_layers_modules_0_modules_self_attn_modules_qkv_proj_parameters_scales_ = l_self_modules_layers_modules_0_modules_self_attn_modules_qkv_proj_parameters_qzeros_ = l_self_modules_layers_modules_0_modules_self_attn_modules_qkv_proj_parameters_bias_ = None
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/vllm/compilation/cuda_piecewise_backend.py", line 111, in __call__
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]     return self.compiled_graph_for_general_shape(*args)
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 838, in _fn
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]     return fn(*args, **kwargs)
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 1201, in forward
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]     return compiled_fn(full_args)
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 328, in runtime_wrapper
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]     all_outs = call_func_at_runtime_with_args(
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/utils.py", line 126, in call_func_at_runtime_with_args
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]     out = normalize_as_list(f(args))
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]                             ^^^^^^^
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 689, in inner_fn
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]     outs = compiled_fn(args)
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 495, in wrapper
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]     return compiled_fn(runtime_args)
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/torch/_inductor/output_code.py", line 460, in __call__
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]     return self.current_callable(inputs)
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/torch/_inductor/utils.py", line 2404, in run
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]     return model(new_inputs)
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]   File "/home/ubuntu/.cache/vllm/torch_compile_cache/cc159163dc/rank_0_0/inductor_cache/dl/cdlzwp6pqanp2hvteblxo3dha7n5vo43k2phaofsreeh3bu2eutd.py", line 401, in call
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]     buf7 = empty_strided_cuda((s0, 3584), (3584, 1), torch.float16)
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527] RuntimeError: CUDA error: no kernel image is available for execution on the device
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527] CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527] For debugging consider passing CUDA_LAUNCH_BLOCKING=1
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527] Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]
(VllmWorker rank=0 pid=63590) ERROR 06-20 12:27:37 [multiproc_executor.py:527]
ERROR 06-20 12:27:37 [core.py:515] EngineCore failed to start.
ERROR 06-20 12:27:37 [core.py:515] Traceback (most recent call last):
ERROR 06-20 12:27:37 [core.py:515]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 506, in run_engine_core
ERROR 06-20 12:27:37 [core.py:515]     engine_core = EngineCoreProc(*args, **kwargs)
ERROR 06-20 12:27:37 [core.py:515]                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 06-20 12:27:37 [core.py:515]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 390, in __init__
ERROR 06-20 12:27:37 [core.py:515]     super().__init__(vllm_config, executor_class, log_stats,
ERROR 06-20 12:27:37 [core.py:515]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 83, in __init__
ERROR 06-20 12:27:37 [core.py:515]     self._initialize_kv_caches(vllm_config)
ERROR 06-20 12:27:37 [core.py:515]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 141, in _initialize_kv_caches
ERROR 06-20 12:27:37 [core.py:515]     available_gpu_memory = self.model_executor.determine_available_memory()
ERROR 06-20 12:27:37 [core.py:515]                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 06-20 12:27:37 [core.py:515]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/vllm/v1/executor/abstract.py", line 76, in determine_available_memory
ERROR 06-20 12:27:37 [core.py:515]     output = self.collective_rpc("determine_available_memory")
ERROR 06-20 12:27:37 [core.py:515]              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 06-20 12:27:37 [core.py:515]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/vllm/v1/executor/multiproc_executor.py", line 220, in collective_rpc
ERROR 06-20 12:27:37 [core.py:515]     result = get_response(w, dequeue_timeout)
ERROR 06-20 12:27:37 [core.py:515]              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 06-20 12:27:37 [core.py:515]   File "/home/ubuntu/.venv/lib/python3.12/site-packages/vllm/v1/executor/multiproc_executor.py", line 207, in get_response
ERROR 06-20 12:27:37 [core.py:515]     raise RuntimeError(
ERROR 06-20 12:27:37 [core.py:515] RuntimeError: Worker failed with error 'CUDA error: no kernel image is available for execution on the device
ERROR 06-20 12:27:37 [core.py:515] CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
ERROR 06-20 12:27:37 [core.py:515] For debugging consider passing CUDA_LAUNCH_BLOCKING=1
ERROR 06-20 12:27:37 [core.py:515] Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
ERROR 06-20 12:27:37 [core.py:515] ', please check the stack trace above for the root cause
ERROR 06-20 12:27:38 [multiproc_executor.py:140] Worker proc VllmWorker-1 died unexpectedly, shutting down executor.
Process EngineCore_0:
Traceback (most recent call last):
  File "/home/ubuntu/.local/share/uv/python/cpython-3.12.11-linux-x86_64-gnu/lib/python3.12/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/home/ubuntu/.local/share/uv/python/cpython-3.12.11-linux-x86_64-gnu/lib/python3.12/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/home/ubuntu/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 519, in run_engine_core
    raise e
  File "/home/ubuntu/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 506, in run_engine_core
    engine_core = EngineCoreProc(*args, **kwargs)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 390, in __init__
    super().__init__(vllm_config, executor_class, log_stats,
  File "/home/ubuntu/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 83, in __init__
    self._initialize_kv_caches(vllm_config)
  File "/home/ubuntu/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 141, in _initialize_kv_caches
    available_gpu_memory = self.model_executor.determine_available_memory()
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/.venv/lib/python3.12/site-packages/vllm/v1/executor/abstract.py", line 76, in determine_available_memory
    output = self.collective_rpc("determine_available_memory")
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/.venv/lib/python3.12/site-packages/vllm/v1/executor/multiproc_executor.py", line 220, in collective_rpc
    result = get_response(w, dequeue_timeout)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/.venv/lib/python3.12/site-packages/vllm/v1/executor/multiproc_executor.py", line 207, in get_response
    raise RuntimeError(
RuntimeError: Worker failed with error 'CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
', please check the stack trace above for the root cause
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/home/ubuntu/.venv/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 1387, in <module>
    uvloop.run(run_server(args))
  File "/home/ubuntu/.venv/lib/python3.12/site-packages/uvloop/__init__.py", line 109, in run
    return __asyncio.run(
           ^^^^^^^^^^^^^^
  File "/home/ubuntu/.local/share/uv/python/cpython-3.12.11-linux-x86_64-gnu/lib/python3.12/asyncio/runners.py", line 195, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "/home/ubuntu/.local/share/uv/python/cpython-3.12.11-linux-x86_64-gnu/lib/python3.12/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
  File "/home/ubuntu/.venv/lib/python3.12/site-packages/uvloop/__init__.py", line 61, in wrapper
    return await main
           ^^^^^^^^^^
  File "/home/ubuntu/.venv/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 1323, in run_server
    await run_server_worker(listen_address, sock, args, **uvicorn_kwargs)
  File "/home/ubuntu/.venv/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 1343, in run_server_worker
    async with build_async_engine_client(args, client_config) as engine_client:
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/.local/share/uv/python/cpython-3.12.11-linux-x86_64-gnu/lib/python3.12/contextlib.py", line 210, in __aenter__
    return await anext(self.gen)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/.venv/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 155, in build_async_engine_client
    async with build_async_engine_client_from_engine_args(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/.local/share/uv/python/cpython-3.12.11-linux-x86_64-gnu/lib/python3.12/contextlib.py", line 210, in __aenter__
    return await anext(self.gen)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/.venv/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 191, in build_async_engine_client_from_engine_args
    async_llm = AsyncLLM.from_vllm_config(
                ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/.venv/lib/python3.12/site-packages/vllm/v1/engine/async_llm.py", line 162, in from_vllm_config
    return cls(
           ^^^^
  File "/home/ubuntu/.venv/lib/python3.12/site-packages/vllm/v1/engine/async_llm.py", line 124, in __init__
    self.engine_core = EngineCoreClient.make_async_mp_client(
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/.venv/lib/python3.12/site-packages/vllm/v1/engine/core_client.py", line 93, in make_async_mp_client
    return AsyncMPClient(vllm_config, executor_class, log_stats,
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/.venv/lib/python3.12/site-packages/vllm/v1/engine/core_client.py", line 716, in __init__
    super().__init__(
  File "/home/ubuntu/.venv/lib/python3.12/site-packages/vllm/v1/engine/core_client.py", line 422, in __init__
    self._init_engines_direct(vllm_config, local_only,
  File "/home/ubuntu/.venv/lib/python3.12/site-packages/vllm/v1/engine/core_client.py", line 491, in _init_engines_direct
    self._wait_for_engine_startup(handshake_socket, input_address,
  File "/home/ubuntu/.venv/lib/python3.12/site-packages/vllm/v1/engine/core_client.py", line 511, in _wait_for_engine_startup
    wait_for_engine_startup(
  File "/home/ubuntu/.venv/lib/python3.12/site-packages/vllm/v1/utils.py", line 494, in wait_for_engine_startup
    raise RuntimeError("Engine core initialization failed. "
RuntimeError: Engine core initialization failed. See root cause above. Failed core proc(s): {}

Before submitting a new issue...

  • Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.
