[Bug]: Failed to load OpenGVLab/InternVL3-78B with vLLM #19856

@mdztravelling

Description

Your current environment

The output of `python collect_env.py`
Your output of `python collect_env.py` here

🐛 Describe the bug

Failed to load the OpenGVLab/InternVL3-78B model.

Command used:
python3.10 -m vllm.entrypoints.openai.api_server --host 0.0.0.0 --port 8080 --model OpenGVLab/InternVL3-78B/ --tensor-parallel-size 8 --trust-remote-code
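
For anyone triaging, here is a minimal sketch of the equivalent offline load through vLLM's Python API. The local path and tensor-parallel size mirror the command above; whether the same ValueError reproduces offline is an assumption worth checking.

```python
# Minimal sketch: load the same checkpoint via vLLM's offline API.
# Assumes a local copy of the weights at ./OpenGVLab/InternVL3-78B/
# and 8 visible GPUs, mirroring the api_server invocation above.
from vllm import LLM

llm = LLM(
    model="OpenGVLab/InternVL3-78B/",  # same local path as the CLI run
    tensor_parallel_size=8,
    trust_remote_code=True,
)
```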

(VllmWorker rank=3 pid=16966) INFO 06-19 05:08:42 [topk_topp_sampler.py:44] Currently, FlashInfer top-p & top-k sampling sampler is disabled because FlashInfer>=v0.2.3 is not backward compatible. Falling back to the PyTorch-native implementation of top-p & top-k sampling.
(VllmWorker rank=0 pid=16860) INFO 06-19 05:08:42 [topk_topp_sampler.py:44] Currently, FlashInfer top-p & top-k sampling sampler is disabled because FlashInfer>=v0.2.3 is not backward compatible. Falling back to the PyTorch-native implementation of top-p & top-k sampling.
(VllmWorker rank=1 pid=16892) INFO 06-19 05:08:42 [topk_topp_sampler.py:44] Currently, FlashInfer top-p & top-k sampling sampler is disabled because FlashInfer>=v0.2.3 is not backward compatible. Falling back to the PyTorch-native implementation of top-p & top-k sampling.
Loading safetensors checkpoint shards: 0% Completed | 0/31 [00:00<?, ?it/s]
Loading safetensors checkpoint shards: 3% Completed | 1/31 [00:00<00:06, 4.60it/s]
Loading safetensors checkpoint shards: 6% Completed | 2/31 [00:00<00:08, 3.49it/s]
Loading safetensors checkpoint shards: 10% Completed | 3/31 [00:00<00:09, 3.10it/s]
Loading safetensors checkpoint shards: 13% Completed | 4/31 [00:01<00:08, 3.01it/s]
Loading safetensors checkpoint shards: 16% Completed | 5/31 [00:01<00:08, 3.04it/s]
Loading safetensors checkpoint shards: 19% Completed | 6/31 [00:01<00:07, 3.15it/s]
Loading safetensors checkpoint shards: 23% Completed | 7/31 [00:02<00:07, 3.34it/s]
Loading safetensors checkpoint shards: 26% Completed | 8/31 [00:02<00:07, 3.22it/s]
Loading safetensors checkpoint shards: 29% Completed | 9/31 [00:02<00:06, 3.26it/s]
Loading safetensors checkpoint shards: 32% Completed | 10/31 [00:03<00:06, 3.15it/s]
Loading safetensors checkpoint shards: 35% Completed | 11/31 [00:03<00:06, 2.96it/s]
Loading safetensors checkpoint shards: 39% Completed | 12/31 [00:03<00:06, 2.94it/s]
Loading safetensors checkpoint shards: 42% Completed | 13/31 [00:04<00:06, 2.57it/s]
Loading safetensors checkpoint shards: 45% Completed | 14/31 [00:04<00:06, 2.58it/s]
Loading safetensors checkpoint shards: 48% Completed | 15/31 [00:05<00:05, 2.79it/s]
Loading safetensors checkpoint shards: 52% Completed | 16/31 [00:05<00:05, 2.92it/s]
Loading safetensors checkpoint shards: 55% Completed | 17/31 [00:05<00:04, 2.98it/s]
Loading safetensors checkpoint shards: 58% Completed | 18/31 [00:05<00:04, 2.97it/s]
Loading safetensors checkpoint shards: 61% Completed | 19/31 [00:06<00:03, 3.11it/s]
Loading safetensors checkpoint shards: 65% Completed | 20/31 [00:06<00:03, 3.16it/s]
Loading safetensors checkpoint shards: 68% Completed | 21/31 [00:06<00:03, 3.32it/s]
Loading safetensors checkpoint shards: 71% Completed | 22/31 [00:07<00:02, 3.41it/s]
Loading safetensors checkpoint shards: 74% Completed | 23/31 [00:07<00:01, 4.07it/s]
Loading safetensors checkpoint shards: 77% Completed | 24/31 [00:07<00:01, 4.18it/s]
Loading safetensors checkpoint shards: 81% Completed | 25/31 [00:07<00:01, 3.85it/s]
Loading safetensors checkpoint shards: 84% Completed | 26/31 [00:08<00:01, 3.68it/s]
Loading safetensors checkpoint shards: 87% Completed | 27/31 [00:08<00:01, 3.57it/s]
Loading safetensors checkpoint shards: 90% Completed | 28/31 [00:08<00:00, 3.53it/s]
Loading safetensors checkpoint shards: 94% Completed | 29/31 [00:08<00:00, 3.54it/s]
(VllmWorker rank=5 pid=17041) INFO 06-19 05:08:51 [loader.py:458] Loading weights took 9.09 seconds
Loading safetensors checkpoint shards: 97% Completed | 30/31 [00:09<00:00, 3.65it/s]
(VllmWorker rank=5 pid=17041) Process SpawnProcess-1:6:
CRITICAL 06-19 05:08:51 [multiproc_executor.py:49] MulitprocExecutor got fatal signal from worker processes, shutting down. See stack trace above for root cause issue.
(VllmWorker rank=2 pid=16927) INFO 06-19 05:08:51 [loader.py:458] Loading weights took 9.28 seconds
(VllmWorker rank=5 pid=17041) Traceback (most recent call last):
(VllmWorker rank=5 pid=17041) File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
(VllmWorker rank=5 pid=17041) self.run()
CRITICAL 06-19 05:08:51 [core_client.py:359] Got fatal signal from worker processes, shutting down. See stack trace above for root cause issue.
(VllmWorker rank=5 pid=17041) File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
(VllmWorker rank=5 pid=17041) self._target(*self._args, **self._kwargs)
(VllmWorker rank=5 pid=17041) File "/usr/local/lib/python3.10/dist-packages/vllm/v1/executor/multiproc_executor.py", line 316, in worker_main
(VllmWorker rank=5 pid=17041) worker = WorkerProc(*args, **kwargs)
(VllmWorker rank=5 pid=17041) File "/usr/local/lib/python3.10/dist-packages/vllm/v1/executor/multiproc_executor.py", line 247, in init
(VllmWorker rank=5 pid=17041) self.worker.load_model()
(VllmWorker rank=5 pid=17041) File "/usr/local/lib/python3.10/dist-packages/vllm/v1/worker/gpu_worker.py", line 136, in load_model
(VllmWorker rank=5 pid=17041) self.model_runner.load_model()
(VllmWorker rank=5 pid=17041) File "/usr/local/lib/python3.10/dist-packages/vllm/v1/worker/gpu_model_runner.py", line 1279, in load_model
(VllmWorker rank=5 pid=17041) self.model = get_model(vllm_config=self.vllm_config)
(VllmWorker rank=5 pid=17041) File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/model_loader/init.py", line 14, in get_model
(VllmWorker rank=5 pid=17041) return loader.load_model(vllm_config=vllm_config)
(VllmWorker rank=5 pid=17041) File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/model_loader/loader.py", line 467, in load_model
(VllmWorker rank=5 pid=17041) raise ValueError(
(VllmWorker rank=5 pid=17041) ValueError: Following weights were not initialized from checkpoint: {'vision_model.encoder.layers.27.norm1.weight', 'vision_model.encoder.layers.38.mlp.fc2.weight', 'vision_model.encoder.layers.35.mlp.fc1.weight', 'vision_model.encoder.layers.41.norm2.weight', 'vision_model.encoder.layers.27.ls1', 'vision_model.encoder.layers.35.norm1.weight', 'vision_model.encoder.layers.23.mlp.fc2.bias', 'vision_model.encoder.layers.22.attn.proj.weight', 'vision_model.encoder.layers.24.norm2.weight', 'vision_model.encoder.layers.39.mlp.fc1.weight', 'vision_model.encoder.layers.33.attn.k_norm.weight', 'vision_model.encoder.layers.25.mlp.fc2.weight', 'vision_model.encoder.layers.40.norm2.weight', 'vision_model.encoder.layers.25.attn.q_norm.weight', 'vision_model.encoder.layers.23.mlp.fc2.weight', 'vision_model.encoder.layers.27.mlp.fc2.weight', 'vision_model.encoder.layers.43.mlp.fc2.bias', 'vision_model.encoder.layers.33.mlp.fc2.bias', 'vision_model.encoder.layers.23.attn.proj.bias', 'vision_model.encoder.layers.28.ls2', 'vision_model.encoder.layers.44.mlp.fc1.weight', 'vision_model.encoder.layers.26.attn.proj.weight', 'vision_model.encoder.layers.37.attn.q_norm.weight', 'vision_model.encoder.layers.21.attn.k_norm.weight', 'vision_model.encoder.layers.29.attn.proj.bias', 'vision_model.encoder.layers.28.mlp.fc1.bias', 'vision_model.encoder.layers.34.mlp.fc1.bias', 'vision_model.encoder.layers.34.mlp.fc2.weight', 'vision_model.encoder.layers.25.attn.proj.bias', 'vision_model.encoder.layers.30.attn.proj.weight', 'vision_model.encoder.layers.38.mlp.fc1.weight', 'vision_model.encoder.layers.43.mlp.fc1.bias', 'vision_model.encoder.layers.20.mlp.fc2.weight', 'language_model.model.layers.0.self_attn.qkv_proj.bias', 'vision_model.encoder.layers.28.norm2.weight', 'vision_model.encoder.layers.36.norm1.weight', 'vision_model.encoder.layers.27.attn.q_norm.weight', 'vision_model.encoder.layers.44.attn.proj.bias', 'vision_model.encoder.layers.39.ls2', 'vision_model.encoder.layers.25.ls2', 'vision_model.encoder.layers.36.mlp.fc1.weight', 'vision_model.encoder.layers.36.ls2', 'vision_model.encoder.layers.26.mlp.fc1.bias', 'vision_model.encoder.layers.40.attn.k_norm.weight', 'vision_model.encoder.layers.38.attn.q_norm.weight', 'vision_model.encoder.layers.20.attn.proj.weight', 'vision_model.encoder.layers.37.mlp.fc1.bias', 'vision_model.encoder.layers.33.ls2', 'vision_model.encoder.layers.25.ls1', 'vision_model.encoder.layers.25.norm1.weight', 'vision_model.encoder.layers.22.mlp.fc2.weight', 'vision_model.encoder.layers.30.mlp.fc1.bias', 'vision_model.encoder.layers.29.ls1', 'vision_model.encoder.layers.44.attn.k_norm.weight', 'vision_model.encoder.layers.40.attn.qkv.weight', 'vision_model.encoder.layers.34.attn.q_norm.weight', 'vision_model.encoder.layers.39.mlp.fc2.weight', 'vision_model.encoder.layers.34.norm2.weight', 'vision_model.encoder.layers.26.mlp.fc2.bias', 'vision_model.encoder.layers.27.attn.qkv.weight', 'vision_model.encoder.layers.33.norm1.weight', 'vision_model.encoder.layers.35.attn.q_norm.weight', 'vision_model.encoder.layers.25.attn.k_norm.weight', 'vision_model.encoder.layers.26.attn.proj.bias', 'vision_model.encoder.layers.27.norm2.weight', 'vision_model.encoder.layers.42.norm2.weight', 'vision_model.encoder.layers.26.attn.k_norm.weight', 'vision_model.encoder.layers.22.attn.q_norm.weight', 'vision_model.encoder.layers.42.ls2', 'vision_model.encoder.layers.35.attn.qkv.weight', 'vision_model.encoder.layers.30.mlp.fc2.bias', 'vision_model.encoder.layers.39.attn.qkv.weight', 
'vision_model.encoder.layers.23.norm2.weight', 'vision_model.encoder.layers.31.attn.q_norm.weight', 'vision_model.encoder.layers.32.mlp.fc2.weight', 'vision_model.encoder.layers.26.norm2.weight', 'vision_model.encoder.layers.40.mlp.fc1.bias', 'vision_model.encoder.layers.36.attn.proj.bias', 'vision_model.encoder.layers.23.mlp.fc1.weight', 'vision_model.encoder.layers.44.ls2', 'vision_model.encoder.layers.27.mlp.fc2.bias', 'vision_model.encoder.layers.27.mlp.fc1.bias', 'vision_model.encoder.layers.28.attn.qkv.weight', 'vision_model.encoder.layers.30.attn.qkv.weight', 'vision_model.encoder.layers.35.mlp.fc2.weight', 'vision_model.encoder.layers.39.norm2.weight', 'vision_model.encoder.layers.42.mlp.fc1.bias', 'vision_model.encoder.layers.42.mlp.fc2.bias', 'vision_model.encoder.layers.24.attn.k_norm.weight', 'vision_model.encoder.layers.28.attn.q_norm.weight', 'vision_model.encoder.layers.38.attn.proj.bias', 'vision_model.encoder.layers.32.attn.q_norm.weight', 'vision_model.encoder.layers.43.attn.qkv.weight', 'vision_model.encoder.layers.25.attn.proj.weight', 'vision_model.encoder.layers.39.attn.proj.weight', 'vision_model.encoder.layers.43.mlp.fc1.weight', 'vision_model.encoder.layers.43.attn.proj.weight', 'vision_model.encoder.layers.42.norm1.weight', 'vision_model.encoder.layers.39.mlp.fc1.bias', 'vision_model.encoder.layers.27.attn.k_norm.weight', 'vision_model.encoder.layers.41.mlp.fc1.bias', 'vision_model.encoder.layers.36.mlp.fc2.weight', 'vision_model.encoder.layers.33.attn.proj.bias', 'vision_model.encoder.layers.43.attn.proj.bias', 'vision_model.encoder.layers.30.attn.proj.bias', 'vision_model.encoder.layers.27.ls2', 'vision_model.encoder.layers.36.norm2.weight', 'vision_model.encoder.layers.31.mlp.fc1.bias', 'vision_model.encoder.layers.41.mlp.fc2.weight', 'vision_model.encoder.layers.24.attn.qkv.weight', 'vision_model.encoder.layers.33.attn.qkv.weight', 'vision_model.encoder.layers.35.attn.k_norm.weight', 'vision_model.encoder.layers.22.attn.proj.bias', 'vision_model.encoder.layers.39.mlp.fc2.bias', 'vision_model.encoder.layers.25.attn.qkv.weight', 'vision_model.encoder.layers.44.mlp.fc1.bias', 'vision_model.encoder.layers.20.norm1.weight', 'vision_model.encoder.layers.29.mlp.fc2.bias', 'vision_model.encoder.layers.30.mlp.fc2.weight', 'vision_model.encoder.layers.41.attn.k_norm.weight', 'vision_model.encoder.layers.37.norm2.weight', 'vision_model.encoder.layers.22.attn.k_norm.weight', 'vision_model.encoder.layers.40.attn.proj.bias', 'vision_model.encoder.layers.42.attn.q_norm.weight', 'vision_model.encoder.layers.29.norm2.weight', 'vision_model.encoder.layers.34.ls1', 'vision_model.encoder.layers.40.ls2', 'vision_model.encoder.layers.30.norm1.weight', 'vision_model.encoder.layers.42.attn.proj.bias', 'vision_model.encoder.layers.22.mlp.fc1.bias', 'vision_model.encoder.layers.21.norm2.weight', 'vision_model.encoder.layers.26.attn.q_norm.weight', 'vision_model.encoder.layers.36.attn.q_norm.weight', 'vision_model.encoder.layers.29.mlp.fc1.weight', 'vision_model.encoder.layers.22.norm2.weight', 'vision_model.encoder.layers.24.ls2', 'vision_model.encoder.layers.27.attn.proj.bias', 'vision_model.encoder.layers.26.attn.qkv.weight', 'vision_model.encoder.layers.32.mlp.fc1.bias', 'vision_model.encoder.layers.36.attn.proj.weight', 'vision_model.encoder.layers.21.norm1.weight', 'vision_model.encoder.layers.24.attn.q_norm.weight', 'vision_model.encoder.layers.38.mlp.fc2.bias', 'vision_model.encoder.layers.23.norm1.weight', 'vision_model.encoder.layers.42.mlp.fc1.weight', 
'vision_model.encoder.layers.31.mlp.fc2.weight', 'vision_model.encoder.layers.41.ls2', 'vision_model.encoder.layers.29.attn.q_norm.weight', 'vision_model.encoder.layers.26.norm1.weight', 'vision_model.encoder.layers.25.mlp.fc1.weight', 'vision_model.encoder.layers.44.norm1.weight', 'vision_model.encoder.layers.39.norm1.weight', 'vision_model.encoder.layers.29.norm1.weight', 'vision_model.encoder.layers.20.mlp.fc2.bias', 'vision_model.encoder.layers.28.attn.k_norm.weight', 'vision_model.encoder.layers.22.mlp.fc1.weight', 'vision_model.encoder.layers.32.norm1.weight', 'vision_model.encoder.layers.33.mlp.fc2.weight', 'vision_model.encoder.layers.32.norm2.weight', 'vision_model.encoder.layers.41.mlp.fc2.bias', 'vision_model.encoder.layers.20.norm2.weight', 'vision_model.encoder.layers.42.attn.qkv.weight', 'vision_model.encoder.layers.37.norm1.weight', 'vision_model.encoder.layers.21.attn.proj.weight', 'vision_model.encoder.layers.32.attn.proj.weight', 'vision_model.encoder.layers.30.attn.q_norm.weight', 'vision_model.encoder.layers.31.ls2', 'vision_model.encoder.layers.35.attn.proj.bias', 'vision_model.encoder.layers.31.mlp.fc2.bias', 'vision_model.encoder.layers.24.mlp.fc1.bias', 'vision_model.encoder.layers.28.ls1', 'vision_model.encoder.layers.41.attn.qkv.weight', 'vision_model.encoder.layers.21.mlp.fc2.bias', 'vision_model.encoder.layers.42.mlp.fc2.weight', 'vision_model.encoder.layers.42.attn.k_norm.weight', 'vision_model.encoder.layers.40.norm1.weight', 'vision_model.encoder.layers.29.ls2', 'vision_model.encoder.layers.23.attn.q_norm.weight', 'vision_model.encoder.layers.30.ls2', 'vision_model.encoder.layers.38.ls1', 'vision_model.encoder.layers.37.mlp.fc1.weight', 'vision_model.encoder.layers.27.attn.proj.weight', 'vision_model.encoder.layers.40.ls1', 'vision_model.encoder.layers.33.mlp.fc1.weight', 'vision_model.encoder.layers.30.attn.k_norm.weight', 'vision_model.encoder.layers.38.attn.k_norm.weight', 'vision_model.encoder.layers.22.norm1.weight', 'vision_model.encoder.layers.26.mlp.fc2.weight', 'vision_model.encoder.layers.33.attn.q_norm.weight', 'vision_model.encoder.layers.23.attn.proj.weight', 'vision_model.encoder.layers.31.ls1', 'vision_model.encoder.layers.24.attn.proj.bias', 'vision_model.encoder.layers.41.mlp.fc1.weight', 'vision_model.encoder.layers.37.mlp.fc2.weight', 'vision_model.encoder.layers.43.norm2.weight', 'vision_model.encoder.layers.24.mlp.fc2.weight', 'vision_model.encoder.layers.37.ls1', 'vision_model.encoder.layers.22.attn.qkv.weight', 'vision_model.encoder.layers.34.attn.proj.weight', 'vision_model.encoder.layers.44.attn.q_norm.weight', 'vision_model.encoder.layers.21.attn.q_norm.weight', 'vision_model.encoder.layers.34.mlp.fc1.weight', 'vision_model.encoder.layers.24.mlp.fc2.bias', 'vision_model.encoder.layers.40.mlp.fc2.bias', 'vision_model.encoder.layers.41.attn.proj.weight', 'vision_model.encoder.layers.23.attn.k_norm.weight', 'vision_model.encoder.layers.34.ls2', 'vision_model.encoder.layers.38.norm2.weight', 'vision_model.encoder.layers.40.mlp.fc2.weight', 'vision_model.encoder.layers.43.attn.q_norm.weight', 'vision_model.encoder.layers.21.mlp.fc1.weight', 'vision_model.encoder.layers.37.attn.proj.weight', 'vision_model.encoder.layers.28.attn.proj.weight', 'vision_model.encoder.layers.44.norm2.weight', 'vision_model.encoder.layers.42.attn.proj.weight', 'vision_model.encoder.layers.42.ls1', 'vision_model.encoder.layers.43.attn.k_norm.weight', 'vision_model.encoder.layers.33.attn.proj.weight', 'vision_model.encoder.layers.28.attn.proj.bias', 
'vision_model.encoder.layers.24.mlp.fc1.weight', 'vision_model.encoder.layers.20.mlp.fc1.weight', 'vision_model.encoder.layers.32.attn.k_norm.weight', 'vision_model.encoder.layers.24.norm1.weight', 'vision_model.encoder.layers.39.attn.k_norm.weight', 'vision_model.encoder.layers.23.ls1', 'vision_model.encoder.layers.21.attn.proj.bias', 'vision_model.encoder.layers.35.attn.proj.weight', 'vision_model.encoder.layers.31.attn.proj.weight', 'vision_model.encoder.layers.40.mlp.fc1.weight', 'vision_model.encoder.layers.20.attn.proj.bias', 'vision_model.encoder.layers.35.mlp.fc2.bias', 'vision_model.encoder.layers.37.attn.k_norm.weight', 'language_model.model.embed_tokens.weight', 'vision_model.encoder.layers.32.mlp.fc1.weight', 'vision_model.encoder.layers.32.ls1', 'vision_model.encoder.layers.37.attn.proj.bias', 'vision_model.encoder.layers.33.norm2.weight', 'vision_model.encoder.layers.25.mlp.fc1.bias', 'vision_model.encoder.layers.35.mlp.fc1.bias', 'vision_model.encoder.layers.38.mlp.fc1.bias', 'vision_model.encoder.layers.44.ls1', 'vision_model.encoder.layers.29.mlp.fc2.weight', 'vision_model.encoder.layers.25.mlp.fc2.bias', 'vision_model.encoder.layers.43.mlp.fc2.weight', 'vision_model.encoder.layers.30.mlp.fc1.weight', 'vision_model.encoder.layers.44.attn.proj.weight', 'vision_model.encoder.layers.31.norm1.weight', 'vision_model.encoder.layers.43.ls1', 'vision_model.encoder.layers.31.attn.qkv.weight', 'vision_model.encoder.layers.25.norm2.weight', 'vision_model.encoder.layers.29.attn.k_norm.weight', 'vision_model.encoder.layers.31.norm2.weight', 'vision_model.encoder.layers.31.attn.k_norm.weight', 'vision_model.encoder.layers.24.attn.proj.weight', 'vision_model.encoder.layers.38.attn.qkv.weight', 'language_model.model.layers.0.self_attn.qkv_proj.weight', 'vision_model.encoder.layers.37.mlp.fc2.bias', 'vision_model.encoder.layers.21.ls2', 'vision_model.encoder.layers.31.attn.proj.bias', 'vision_model.encoder.layers.34.norm1.weight', 'vision_model.encoder.layers.34.attn.k_norm.weight', 'vision_model.encoder.layers.36.mlp.fc2.bias', 'vision_model.encoder.layers.30.ls1', 'vision_model.encoder.layers.38.norm1.weight', 'vision_model.encoder.layers.32.attn.proj.bias', 'vision_model.encoder.layers.26.mlp.fc1.weight', 'vision_model.encoder.layers.35.ls1', 'vision_model.encoder.layers.40.attn.proj.weight', 'vision_model.encoder.layers.41.attn.proj.bias', 'vision_model.encoder.layers.23.ls2', 'vision_model.encoder.layers.24.ls1', 'vision_model.encoder.layers.32.attn.qkv.weight', 'vision_model.encoder.layers.36.attn.k_norm.weight', 'vision_model.encoder.layers.28.mlp.fc2.weight', 'vision_model.encoder.layers.35.norm2.weight', 'vision_model.encoder.layers.37.attn.qkv.weight', 'vision_model.encoder.layers.22.mlp.fc2.bias', 'vision_model.encoder.layers.39.attn.q_norm.weight', 'vision_model.encoder.layers.36.ls1', 'vision_model.encoder.layers.41.ls1', 'vision_model.encoder.layers.22.ls1', 'vision_model.encoder.layers.29.attn.qkv.weight', 'vision_model.encoder.layers.44.attn.qkv.weight', 'vision_model.encoder.layers.44.mlp.fc2.bias', 'vision_model.encoder.layers.26.ls2', 'vision_model.encoder.layers.32.mlp.fc2.bias', 'vision_model.encoder.layers.23.attn.qkv.weight', 'vision_model.encoder.layers.39.ls1', 'vision_model.encoder.layers.20.mlp.fc1.bias', 'vision_model.encoder.layers.28.norm1.weight', 'vision_model.encoder.layers.35.ls2', 'vision_model.encoder.layers.43.ls2', 'vision_model.encoder.layers.36.attn.qkv.weight', 'vision_model.encoder.layers.38.ls2', 'vision_model.encoder.layers.23.mlp.fc1.bias', 
'vision_model.encoder.layers.38.attn.proj.weight', 'vision_model.encoder.layers.39.attn.proj.bias', 'vision_model.encoder.layers.26.ls1', 'vision_model.encoder.layers.43.norm1.weight', 'vision_model.encoder.layers.34.attn.proj.bias', 'vision_model.encoder.layers.34.mlp.fc2.bias', 'vision_model.encoder.layers.29.attn.proj.weight', 'vision_model.encoder.layers.22.ls2', 'vision_model.encoder.layers.21.ls1', 'vision_model.encoder.layers.33.mlp.fc1.bias', 'vision_model.encoder.layers.30.norm2.weight', 'vision_model.encoder.layers.41.attn.q_norm.weight', 'vision_model.encoder.layers.31.mlp.fc1.weight', 'vision_model.encoder.layers.41.norm1.weight', 'vision_model.encoder.layers.37.ls2', 'vision_model.encoder.layers.40.attn.q_norm.weight', 'language_model.model.layers.0.mlp.gate_up_proj.weight', 'vision_model.encoder.layers.29.mlp.fc1.bias', 'vision_model.encoder.layers.44.mlp.fc2.weight', 'vision_model.encoder.layers.21.mlp.fc2.weight', 'vision_model.encoder.layers.36.mlp.fc1.bias', 'vision_model.encoder.layers.28.mlp.fc1.weight', 'vision_model.encoder.layers.27.mlp.fc1.weight', 'vision_model.encoder.layers.33.ls1', 'vision_model.encoder.layers.21.mlp.fc1.bias', 'vision_model.encoder.layers.34.attn.qkv.weight', 'language_model.model.layers.0.self_attn.o_proj.weight', 'vision_model.encoder.layers.21.attn.qkv.weight', 'vision_model.encoder.layers.28.mlp.fc2.bias', 'vision_model.encoder.layers.32.ls2'}
(VllmWorker rank=5 pid=17041) Exception ignored in: <function Context.__del__ at 0x7fdfc7989120>
(VllmWorker rank=5 pid=17041) Traceback (most recent call last):
(VllmWorker rank=5 pid=17041) File "/usr/local/lib/python3.10/dist-packages/zmq/sugar/context.py", line 142, in __del__
(VllmWorker rank=5 pid=17041) self.destroy()
(VllmWorker rank=5 pid=17041) File "/usr/local/lib/python3.10/dist-packages/zmq/sugar/context.py", line 324, in destroy
(VllmWorker rank=5 pid=17041) self.term()
(VllmWorker rank=5 pid=17041) File "/usr/local/lib/python3.10/dist-packages/zmq/sugar/context.py", line 266, in term
(VllmWorker rank=5 pid=17041) super().term()
(VllmWorker rank=5 pid=17041) File "_zmq.py", line 564, in zmq.backend.cython._zmq.Context.term
(VllmWorker rank=5 pid=17041) File "_zmq.py", line 160, in zmq.backend.cython._zmq._check_rc
(VllmWorker rank=5 pid=17041) File "/usr/local/lib/python3.10/dist-packages/vllm/v1/executor/multiproc_executor.py", line 308, in signal_handler
(VllmWorker rank=5 pid=17041) raise SystemExit()
(VllmWorker rank=5 pid=17041) SystemExit:
Killed
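
The ValueError above reports that a large block of vision_model.encoder.layers.20-44 parameters (plus a few language_model.model.layers.0 tensors) were never initialized from the checkpoint. A quick sanity check is whether those tensor names actually exist in the safetensors shards on disk. A hedged sketch follows; the local checkpoint directory path is an assumption, and note that vLLM remaps some names while loading (e.g., fused qkv projections), so a name missing on the vLLM side does not by itself prove the shards are incomplete.

```python
# Hedged sketch: collect the tensor names stored across all safetensors
# shards and probe for one of the parameters vLLM reports as uninitialized.
from pathlib import Path
from safetensors import safe_open

ckpt_dir = Path("OpenGVLab/InternVL3-78B")  # assumed local checkpoint directory
shards = sorted(ckpt_dir.glob("*.safetensors"))

names = set()
for shard in shards:
    with safe_open(str(shard), framework="pt") as f:
        names.update(f.keys())

print(f"{len(names)} tensors found across {len(shards)} shards")
probe = "vision_model.encoder.layers.27.norm1.weight"  # taken from the error above
print(probe, "->", "present" if probe in names else "MISSING")
```

If whole layers turn out to be absent from the shards, an interrupted or corrupted download is the likely cause; re-fetching the repo and verifying file sizes against the Hugging Face listing would be a reasonable first step.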

