
Conversation

@candyzone (Contributor) commented on Sep 10, 2025

Addresses issue #21160.


Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link the existing issues this PR will resolve)".
  • The test plan, such as providing the test command.
  • The test results, such as pasting a before/after results comparison or e2e results.
  • (Optional) The necessary documentation updates, such as updating supported_models.md and examples for a new model.
  • (Optional) Release notes update. If your change is user-facing, please update the release notes draft in the Google Doc.


👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, they only run the fastcheck CI, which executes a small, essential subset of tests to catch errors quickly.

You can ask your reviewers to trigger select CI tests on top of fastcheck CI.

Once the PR is approved and ready to go, your PR reviewer(s) can run the full CI to test the changes comprehensively before merging.

To run the full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

If you have any questions, please reach out to us on Slack at https://slack.vllm.ai.

🚀

@mergify bot added the deepseek (Related to DeepSeek models), llama (Related to Llama models), and speculative-decoding labels on Sep 10, 2025
@gemini-code-assist bot (Contributor) left a comment

Code Review

This pull request optimizes memory usage during model loading by using generators (map) instead of materializing full weight dictionaries in memory. This is a good improvement that reduces peak memory consumption. The changes in vllm/model_executor/models/deepseek_eagle.py and vllm/model_executor/models/llama_eagle.py are correct. However, there is a critical issue in vllm/model_executor/models/llama4_eagle.py where a returned tuple is not unpacked, which will lead to a runtime error. I've added a specific comment with a suggested fix for this.
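
For readers skimming the diff, here is a minimal sketch of the pattern the review describes. The names `_rename` and `load_weights` below are illustrative stand-ins, not the exact vLLM helpers this PR touches:

```python
# Sketch only: the helper names here are hypothetical, not vLLM's real API.
from typing import Iterable, Tuple

import torch


def _rename(item: Tuple[str, torch.Tensor]) -> Tuple[str, torch.Tensor]:
    """Remap one checkpoint weight name onto the draft model's namespace."""
    name, tensor = item
    return "model." + name, tensor


def load_weights(weights: Iterable[Tuple[str, torch.Tensor]]) -> None:
    # Before: renamed = {("model." + n): t for n, t in weights}
    # materializes every tensor at once, raising the loading-time memory peak.
    #
    # After: map() is lazy, so only the (name, tensor) pair currently being
    # copied into the model is alive; the rest of the checkpoint streams through.
    for name, tensor in map(_rename, weights):
        ...  # copy `tensor` into the parameter registered under `name`
```

The llama4_eagle.py issue flagged above is the failure mode of this pattern: a helper that returns a `(name, tensor)` tuple must be unpacked at the call site; otherwise the tuple itself flows into code that expects a bare name and raises at runtime.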

@candyzone force-pushed the dev branch 2 times, most recently from ec9edfa to 736b4a3, on September 10, 2025 at 13:03
@mgoin mgoin changed the title [Perf] Optimize memory peak during model loading. [Perf] Optimize memory peak during EAGLE model loading. Sep 10, 2025
Signed-off-by: Chen Ding <candy.dc@alibaba-inc.com>
@luccafong (Collaborator) left a comment

looks good to me

@luccafong luccafong enabled auto-merge (squash) September 18, 2025 16:08
@github-actions bot added the ready label (ONLY add when PR is ready to merge/full CI is needed) on Sep 18, 2025
@luccafong luccafong merged commit 1a0a04d into vllm-project:main Sep 19, 2025
48 checks passed
ywang96 pushed a commit to ywang96/vllm that referenced this pull request Sep 19, 2025
debroy-rh pushed a commit to debroy-rh/vllm that referenced this pull request Sep 19, 2025
FeiDaLI pushed a commit to FeiDaLI/vllm that referenced this pull request Sep 25, 2025
charlifu pushed a commit to ROCm/vllm that referenced this pull request Sep 25, 2025