
[Usage]: Upgrading from vLLM 0.7.3 to vLLM 0.8.2, but the required GPU memory significantly increases. #15617

@du0L

Description

Your current environment

I run vLLM with Docker.
I have 8× NVIDIA GeForce RTX 4090 GPUs (24564 MiB each).

How would you like to use vllm

I upgraded from vLLM 0.7.3 to vLLM 0.8.2, and the required GPU memory increased dramatically: --max-model-len has to be reduced by more than half before the server will start, out-of-memory (OOM) errors occur frequently at runtime, and inference is extremely slow.

vLLM 0.8.2 command:
--gpu-memory-utilization 0.95 --port 18081 --max-model-len 116160 --enable-reasoning --reasoning-parser deepseek_r1

vLLM 0.7.3 command:
--gpu-memory-utilization 0.90 --port 18081 --max-model-len 53008 --enable-reasoning --reasoning-parser deepseek_r1

On 0.8.2 I had to repeatedly lower --gpu-memory-utilization and --max-model-len just to get the model to start at all.
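For reference, a minimal sketch of the same memory-related settings through vLLM's Python API. The model name and tensor-parallel size are assumptions (the issue only shows the CLI flags); the numeric values mirror the 0.8.2 command above:

from vllm import LLM

# Sketch only: "MODEL" is a placeholder, and tensor_parallel_size=8 assumes
# one shard per RTX 4090.
llm = LLM(
    model="MODEL",                # placeholder, the served model is not named in the issue
    tensor_parallel_size=8,       # assumption: sharded across all eight 4090s
    gpu_memory_utilization=0.95,  # fraction of each GPU reserved for vLLM
    max_model_len=116160,         # the context length that no longer fits on 0.8.2
    max_num_seqs=256,             # lowering this is what the sampler warm-up error below suggests
)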

The errors look like:
ERROR 03-27 16:43:19 [core.py:343] ValueError: To serve at least one request with the models's max seq len (116160), (4.43 GB KV cache is needed, which is larger than the available KV cache memory (3.48 GB). Try increasing gpu_memory_utilization or decreasing max_model_len when initializing the engine.

ERROR 03-27 17:34:18 [core.py:343] RuntimeError: CUDA out of memory occurred when warming up sampler with 1024 dummy requests. Please try lowering max_num_seqs or gpu_memory_utilization when initializing the engine.
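
As a sanity check on the numbers in the first error, the required KV-cache size can be estimated from the model configuration. The layer/head/dimension values below are hypothetical placeholders (the issue does not name the model); only the sequence length and GPU count come from the report:

def kv_cache_bytes_per_gpu(num_layers, num_kv_heads, head_dim, seq_len,
                           dtype_bytes=2, tensor_parallel=8):
    # K and V are each cached per layer, per KV head, per token;
    # the KV heads are sharded across tensor-parallel ranks.
    per_token = 2 * num_layers * num_kv_heads * head_dim * dtype_bytes
    return per_token * seq_len / tensor_parallel

# Hypothetical 64-layer model with 8 KV heads of dim 128 and an fp16/bf16 cache:
gib = kv_cache_bytes_per_gpu(64, 8, 128, seq_len=116160) / 2**30
print(f"~{gib:.2f} GiB of KV cache needed per GPU for max_model_len=116160")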

Why is the gap so large?
