feat(model loader): add load format 'prefetch_auto' for parallel mmap prefetching #19659
base: main
Conversation
Introduce a new load format 'prefetch_auto' that performs concurrent mmap with MAP_POPULATE to prefetch safetensors or bin files into the page cache. This helps maximize storage bandwidth and improve model loading performance, especially on systems with high disk I/O capacity. Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default; only a limited set of checks runs automatically. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. 🚀
Summary of Changes
Hello @BraveY, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
I've added a new model loading strategy designed to improve cold start performance by aggressively prefetching model weights into the page cache. This new 'prefetch_auto' format utilizes concurrent memory mapping with population to better leverage available disk bandwidth, leading to faster initialization times, especially on systems with fast storage.
Highlights
- New Load Format: Introduced a new `LoadFormat` option called `prefetch_auto`. This format behaves like `auto` (trying safetensors first, then bin) but adds a prefetching step.
- Parallel Prefetching: Implemented a mechanism to concurrently `mmap` weight files with the `MAP_POPULATE` flag. This aims to load file contents into the page cache before the actual model loading process begins, maximizing disk bandwidth (a sketch of this idea follows the list below).
- Performance Improvement: Testing shows a significant reduction in model loading time (from 148s to 50s) and a substantial increase in peak and average I/O bandwidth when using the new `prefetch_auto` format on a system with high disk I/O capacity.
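A minimal sketch of the prefetching idea described in the highlights, assuming Linux and Python 3.10+ (where `mmap.MAP_POPULATE` is exposed); the function names, thread-pool size, and file-discovery logic here are illustrative and not the PR's exact code:

```python
import glob
import mmap
import os
from concurrent.futures import ThreadPoolExecutor


def _prefetch_file(path: str) -> None:
    """Map one weight file with MAP_POPULATE so the kernel reads it into the page cache."""
    size = os.path.getsize(path)
    if size == 0:
        return
    with open(path, "rb") as f:
        mm = mmap.mmap(f.fileno(), size,
                       prot=mmap.PROT_READ,
                       flags=mmap.MAP_SHARED | mmap.MAP_POPULATE)
        mm.close()


def prefetch_weights(model_dir: str, max_workers: int = 8) -> None:
    """Warm the page cache for all weight shards before the real loader runs."""
    # Mirror the 'auto' behavior: prefer safetensors shards, fall back to .bin shards.
    files = (glob.glob(os.path.join(model_dir, "*.safetensors"))
             or glob.glob(os.path.join(model_dir, "*.bin")))
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        list(pool.map(_prefetch_file, files))
```

Issuing the `MAP_POPULATE` mappings from several threads keeps multiple large sequential reads in flight at once, which is what drives the higher disk utilization reported in the test results below.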
Code Review
This pull request introduces a new `prefetch_auto` load format that leverages concurrent `mmap` with `MAP_POPULATE` to improve model loading performance by prefetching weight files into the page cache. The implementation correctly handles distributed ranks by prefetching only local files and includes a barrier for synchronization. The use of a thread pool for concurrent mmap operations is appropriate for this I/O-bound task. Overall, the changes are well-structured and address the stated objective effectively.
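The rank-local prefetching and barrier mentioned above are not visible in the snippet below, so here is a hedged sketch of how they could fit together; the round-robin partitioning, helper names, and the `torch.distributed.barrier()` call are assumptions for illustration, not the PR's actual code:

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Callable

import torch.distributed as dist


def prefetch_for_rank(all_files: list[str], rank: int, world_size: int,
                      populate_one: Callable[[str], None],
                      max_workers: int = 8) -> None:
    """Prefetch only this rank's share of the weight files, then synchronize."""
    # Assumed round-robin split so each file is populated by exactly one rank.
    local_files = [f for i, f in enumerate(sorted(all_files))
                   if i % world_size == rank]
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # populate_one is a per-file MAP_POPULATE helper such as the
        # _mmap_single_file function shown below.
        list(pool.map(populate_one, local_files))
    if dist.is_initialized():
        # No rank should start deserializing weights before the cache is warm everywhere.
        dist.barrier()
```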
```python
def _mmap_single_file(st_file: str) -> None:
    with open(st_file, "rb") as f:
        file_size = os.path.getsize(st_file)
        mm = mmap.mmap(
            fileno=f.fileno(),
            length=file_size,
            prot=mmap.PROT_READ,
            flags=mmap.MAP_SHARED | mmap.MAP_POPULATE
        )
        mm.close()
```
Adding a docstring to this function would improve clarity on its specific purpose within the prefetching process.
Suggested change:

```python
def _mmap_single_file(st_file: str) -> None:
    """Perform mmap with MAP_POPULATE for a single file."""
    with open(st_file, "rb") as f:
        file_size = os.path.getsize(st_file)
        mm = mmap.mmap(
            fileno=f.fileno(),
            length=file_size,
            prot=mmap.PROT_READ,
            flags=mmap.MAP_SHARED | mmap.MAP_POPULATE
        )
        mm.close()
```
…ight Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
Introduce a new load format 'prefetch_auto' that performs concurrent mmap with MAP_POPULATE to prefetch safetensors or bin files into the page cache. This helps maximize storage bandwidth and improve model loading performance, especially on systems with high disk I/O capacity.
Purpose
The purpose of this PR is to optimize the cold start performance of the inference engine when model weights are already stored on the local disk. The current model loading approach fails to fully utilize available disk bandwidth during initial startup, resulting in suboptimal loading speeds. By implementing a disk bandwidth-optimized loading strategy for cold start scenarios, we can significantly accelerate the engine's initialization process. This improvement will directly enhance Pod scaling efficiency and deployment speed in production environments, enabling faster resource provisioning and workload handling capabilities.
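For context, here is a hypothetical way to opt into the new format, assuming it is accepted wherever the existing `load_format` engine argument is; the model path and parallelism settings below are placeholders taken from the test plan, not part of this PR:

```python
from vllm import LLM

# Hypothetical usage sketch; assumes "prefetch_auto" is accepted wherever
# load_format="auto" is accepted today.
llm = LLM(
    model="/data/DeepSeek-R1-W8A8",  # placeholder local path from the test plan
    load_format="prefetch_auto",     # the new format introduced by this PR
    tensor_parallel_size=16,         # matches the tp=16 used in the test plan
)
```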
Test Plan
Our test env:
lsblk result:
The loaded model is DeepSeek-R1-W8A8, with a total weight file size of 655 GB; all weight files are stored under /data. Three 3.5 TB NVMe Samsung MZQL23T8HCLS-00B7C drives are configured in RAID 0 with LVM and mounted at the /data directory, providing a total storage capacity of 10 TB.
We set `tp=16` in the start command.

Test Result
Time Reduction: The prefetch loader reduces loading time from 148s to 50s (66% improvement).
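As a rough back-of-envelope check (derived from the numbers above, not reported separately in the PR): reading roughly 655 GB in 50 s corresponds to an average of about 13 GB/s, versus about 4.4 GB/s at 148 s, which is consistent with the claimed increase in sustained I/O bandwidth.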
(Optional) Documentation Update