
feat(model loader): add load format 'prefetch_auto' for parallel mmap… #19659

Open · wants to merge 2 commits into main

Conversation


@BraveY BraveY commented Jun 15, 2025

Introduce a new load format 'prefetch_auto' that performs concurrent mmap with MAP_POPULATE to prefetch safetensors or bin files into the page cache. This helps maximize storage bandwidth and improve model loading performance, especially on systems with high disk I/O capacity.
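
For illustration, the core idea can be sketched in a few lines of Python. This is a minimal sketch of the mechanism described above, not the PR's actual implementation; the helper names, the glob-based file discovery, and the worker count are assumptions (mmap.MAP_POPULATE is Linux-only and requires a recent Python).

# Minimal sketch: prefetch weight files into the page cache by mmap'ing
# them concurrently with MAP_POPULATE, then discarding the mappings.
import glob
import mmap
import os
from concurrent.futures import ThreadPoolExecutor


def _prefetch_one(path: str) -> None:
    # MAP_POPULATE asks the kernel to read the whole file into the page
    # cache while the mapping is created; the mapping itself is not needed
    # afterwards, so it is closed immediately.
    size = os.path.getsize(path)
    if size == 0:
        return
    with open(path, "rb") as f:
        mm = mmap.mmap(f.fileno(), size, prot=mmap.PROT_READ,
                       flags=mmap.MAP_SHARED | mmap.MAP_POPULATE)
        mm.close()


def prefetch_files(paths: list[str], workers: int = 8) -> None:
    # The work is I/O bound, so a thread pool is enough to keep
    # multiple NVMe drives busy in parallel.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(_prefetch_one, paths))


if __name__ == "__main__":
    # Illustrative path; point this at the directory holding the weights.
    files = glob.glob("/data/DeepSeek-R1-W8A8/*.safetensors")
    prefetch_files(files)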

Essential Elements of an Effective PR Description Checklist

  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing test command.
  • The test results, such as pasting the results comparison before and after, or e2e results
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.

Purpose

The purpose of this PR is to optimize the cold-start performance of the inference engine when model weights are already stored on local disk. The current model loading path does not fully utilize the available disk bandwidth during initial startup, resulting in suboptimal loading speeds. By adding a disk-bandwidth-optimized loading strategy for cold-start scenarios, we can significantly accelerate engine initialization. This directly improves Pod scaling efficiency and deployment speed in production environments, enabling faster resource provisioning.

Test Plan

Our test env:

# free -h
              total        used        free      shared  buff/cache   available
Mem:           2.0T         23G        1.3T         92M        640G        1.9T
Swap:            0B          0B          0B

lsblk result:

NAME            FSTYPE      LABEL    MOUNTPOINT   SIZE MODEL
nvme0n1                                           3.5T SAMSUNG MZQL23T8HCLS-00B7C
└─nvme0n1p1     LVM2_member                       3.5T
  └─vgdata-data ext4                 /data       10.5T
nvme3n1                                           3.5T SAMSUNG MZQL23T8HCLS-00B7C
└─nvme3n1p1     ext4                 /home        3.5T
sdb                                             447.1G SAMSUNG MZ7L3480
├─sdb4          iso9660     config-2             64.8M
├─sdb2          ext4                 /boot          1G
├─sdb3          ext4                 /           58.5G
└─sdb1          vfat                 /boot/efi    512M
nvme2n1                                           3.5T SAMSUNG MZQL23T8HCLS-00B7C
└─nvme2n1p1     LVM2_member                       3.5T
  └─vgdata-data ext4                 /data       10.5T
nvme1n1                                           3.5T SAMSUNG MZQL23T8HCLS-00B7C
└─nvme1n1p1     LVM2_member                       3.5T
  └─vgdata-data ext4                 /data       10.5T
sda                                             447.1G SAMSUNG MZ7L3480

The loaded model is DeepSeek-R1-W8A8, with a total weight file size of 655 GB. All weight files are stored under /data. Three 3.5 TB NVMe Samsung MZQL23T8HCLS-00B7C drives are configured in RAID 0 with LVM and mounted at /data, providing roughly 10.5 TB of total storage capacity.

We set tp=16 in the start command.
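
For reference, this is roughly how the new format would be selected from the offline API; the model path is illustrative and tensor_parallel_size mirrors the tp=16 used here (the PR itself only adds the 'prefetch_auto' value to the existing load-format option).

# Hypothetical usage of the new load format; the path is illustrative.
from vllm import LLM

llm = LLM(
    model="/data/DeepSeek-R1-W8A8",
    load_format="prefetch_auto",   # value added by this PR
    tensor_parallel_size=16,
)

The server form would likewise pass --load-format prefetch_auto via the existing CLI flag.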

Test Result

Model Loader                  Load Time (s)   Peak I/O Bandwidth (GB/s)   Average Bandwidth (GB/s)
Default Model Loader          148             7.73                        4.43
Prefetch Auto Model Loader    50              18.38                       16.36

Time Reduction: The prefetch loader reduces loading time from 148s to 50s (66% improvement).
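
As a rough cross-check, reading 655 GB in 148 s works out to about 4.4 GB/s, which matches the reported default-loader average; 655 GB in 50 s is about 13 GB/s end to end, so the 16.36 GB/s average presumably reflects the prefetch I/O phase rather than the full load time.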

(Optional) Documentation Update

… prefetching

Introduce a new load format 'prefetch_auto' that performs concurrent mmap
with MAP_POPULATE to prefetch safetensors or bin files into the page cache.
This helps maximize storage bandwidth and improve model loading performance,
especially on systems with high disk I/O capacity.

Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small, essential subset of tests to catch errors quickly. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge.

🚀

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Summary of Changes

Hello @BraveY, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

I've added a new model loading strategy designed to improve cold start performance by aggressively prefetching model weights into the page cache. This new 'prefetch_auto' format utilizes concurrent memory mapping with population to better leverage available disk bandwidth, leading to faster initialization times, especially on systems with fast storage.

Highlights

  • New Load Format: Introduced a new LoadFormat option called prefetch_auto. This format behaves like auto (trying safetensors first, then bin) but adds a prefetching step.
  • Parallel Prefetching: Implemented a mechanism to concurrently mmap weight files with the MAP_POPULATE flag. This aims to load file contents into the page cache before the actual model loading process begins, maximizing disk bandwidth.
  • Performance Improvement: Testing shows a significant reduction in model loading time (from 148s to 50s) and a substantial increase in peak and average I/O bandwidth when using the new prefetch_auto format on a system with high disk I/O capacity.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a new prefetch_auto load format that leverages concurrent mmap with MAP_POPULATE to improve model loading performance by prefetching weight files into the page cache. The implementation correctly handles distributed ranks by prefetching only local files and includes a barrier for synchronization. The use of a thread pool for concurrent mmap operations is appropriate for this I/O-bound task. Overall, the changes are well-structured and address the stated objective effectively.
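
To illustrate the rank handling mentioned above, a rough sketch (not the PR's code; the round-robin partitioning and helper name are assumptions, and it reuses the prefetch_files helper sketched earlier) might look like:

# Illustrative only: each rank prefetches a disjoint slice of the weight
# files, then all ranks synchronize before weight loading proceeds.
import torch.distributed as dist


def prefetch_for_rank(all_files: list[str], rank: int, world_size: int) -> None:
    local_files = sorted(all_files)[rank::world_size]  # simple round-robin split
    prefetch_files(local_files)  # concurrent mmap helper, as sketched earlier
    if dist.is_initialized():
        dist.barrier()  # wait until every rank has finished prefetching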

Comment on lines +502 to +511
def _mmap_single_file(st_file: str) -> None:
    with open(st_file, "rb") as f:
        file_size = os.path.getsize(st_file)
        mm = mmap.mmap(
            fileno=f.fileno(),
            length=file_size,
            prot=mmap.PROT_READ,
            flags=mmap.MAP_SHARED | mmap.MAP_POPULATE
        )
        mm.close()
Severity: medium

Adding a docstring to this function would improve clarity on its specific purpose within the prefetching process.

Suggested change
def _mmap_single_file(st_file: str) -> None:
    """Perform mmap with MAP_POPULATE for a single file."""
    with open(st_file, "rb") as f:
        file_size = os.path.getsize(st_file)
        mm = mmap.mmap(
            fileno=f.fileno(),
            length=file_size,
            prot=mmap.PROT_READ,
            flags=mmap.MAP_SHARED | mmap.MAP_POPULATE
        )
        mm.close()
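
One note on why this pattern works for prefetching: MAP_POPULATE faults the file's contents into the kernel page cache while the mapping is created, and those clean, file-backed pages remain cached after the mapping is closed, so the subsequent safetensors/bin reads during weight loading are served from memory rather than disk.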

…ight

Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>