[Core] Refactor model loading code #4097
Conversation
See also partially related PR/discussion about properly supporting local HF cache in offline-only mode or when HF hub can't be reached: #3125
In general LGTM! Thanks for the refactoring! Left some small comments.
VLLM_USE_MODELSCOPE = os.environ.get("VLLM_USE_MODELSCOPE",
                                     "False").lower() == "true"
Is this more standard?
Suggested change:
-VLLM_USE_MODELSCOPE = os.environ.get("VLLM_USE_MODELSCOPE",
-                                     "False").lower() == "true"
+VLLM_USE_MODELSCOPE = os.environ.get("VLLM_USE_MODELSCOPE") is not None
I would say the first approach is more common (you want to be able to set a falsy value as well).
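For context, a minimal sketch (not part of the PR) contrasting the two parsing approaches discussed above:

import os

# Approach used in the PR: only the literal string "true" (case-insensitive)
# enables the flag, so VLLM_USE_MODELSCOPE=false or VLLM_USE_MODELSCOPE=0
# keeps it disabled.
use_modelscope = os.environ.get("VLLM_USE_MODELSCOPE", "False").lower() == "true"

# Suggested alternative: any set value enables the flag, so even
# VLLM_USE_MODELSCOPE=false would evaluate to True here.
use_modelscope_alt = os.environ.get("VLLM_USE_MODELSCOPE") is not None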
if isinstance(load_config.load_format, type):
    return load_config.load_format(load_config)
Are these two lines related to this optimization?
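For context, a minimal sketch of what that check enables (the BaseModelLoader/LoadConfig naming follows this PR; everything else here is illustrative, not the actual loader.py code): a caller can pass a loader class directly as load_format, and it gets instantiated with the LoadConfig instead of going through the string/enum dispatch.

class MyCustomLoader:  # hypothetical user-defined loader
    def __init__(self, load_config):
        self.load_config = load_config

    def load_model(self, *args, **kwargs):
        ...  # custom weight-loading logic


def get_model_loader(load_config):
    # If load_format is a class rather than a string/enum, instantiate it directly.
    if isinstance(load_config.load_format, type):
        return load_config.load_format(load_config)
    ...  # otherwise dispatch to the built-in loaders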
"""Gets a tokenizer for the given model name via Huggingface/modelscope.""" | ||
if VLLM_USE_MODELSCOPE: | ||
# download model from ModelScope hub, | ||
# lazy import so that modelscope is not required for normal use. | ||
# pylint: disable=C. | ||
from modelscope.hub.snapshot_download import snapshot_download | ||
|
||
# Only set the tokenizer here, model will be downloaded on the workers. | ||
if not os.path.exists(tokenizer_name): | ||
tokenizer_path = snapshot_download( | ||
model_id=tokenizer_name, | ||
cache_dir=download_dir, | ||
revision=tokenizer_revision, | ||
# Ignore weights - we only need the tokenizer. | ||
ignore_file_pattern=["*.pt", "*.safetensors", "*.bin"]) | ||
tokenizer_name = tokenizer_path | ||
|
Why didn't we need this code before, but need it now?
Previously, the entire model was downloaded on init of ModelConfig. Now the model will be downloaded by the workers instead, but we still need the tokenizer here.
This seems to have caused bug #4362.
Love the refactor! Btw, for the tensorizer extra args, should we document somewhere how to find them (e.g., add a link)? I think it will be challenging to find the relevant config.
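As a rough illustration of where such extra args would live (a sketch based on this PR's description of LoadConfig; the field name model_loader_extra_config and the tensorizer keys shown are assumptions, not documented API):

from dataclasses import dataclass, field
from typing import Any, Dict, Optional, Union

@dataclass
class LoadConfig:  # mirrors the object described in this PR; simplified
    load_format: Union[str, type] = "auto"
    download_dir: Optional[str] = None
    # Loader-specific options (e.g. tensorizer arguments) are carried here.
    model_loader_extra_config: Dict[str, Any] = field(default_factory=dict)

# A tensorizer-backed loader would read its arguments from that dict, e.g.:
load_config = LoadConfig(
    load_format="tensorizer",
    model_loader_extra_config={
        "tensorizer_uri": "s3://bucket/model.tensors",  # illustrative key/value
    },
)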
        return model.eval()


class DummyModelLoader(BaseModelLoader):
Is it for testing? Can you specify if that's the case?
This is already existing behavior in vLLM.
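For readers unfamiliar with it, a rough sketch of what a dummy loader does (the class name above matches the diff; this body is illustrative, not vLLM's actual implementation): it builds the model and fills parameters with placeholder values instead of reading checkpoints, which is useful for profiling and tests.

import torch
from torch import nn

class DummyModelLoaderSketch:
    """Illustrative only: initialize a model without loading real weights."""

    def __init__(self, load_config):
        self.load_config = load_config

    def load_model(self, model: nn.Module) -> nn.Module:
        # Fill every parameter with small random values instead of
        # downloading or reading checkpoint files.
        with torch.no_grad():
            for param in model.parameters():
                param.uniform_(-1e-3, 1e-3)
        return model.eval()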
This PR refactors the weight loading code.

- A ModelLoader interface is introduced, with implementations for all currently supported ways of loading weights (from disk, dummy, using tensorizer). It can be easily extended to support new loading methods.
- A LoadConfig object is introduced to hold generic model loader configuration.
- load_weights methods on model classes have been modified to simply take in an iterator over (name, weight tensor).

This PR contains many changes, but most of them are just code moved around or modified slightly (e.g. all of the model files have had the exact same modification). The main additions are in vllm/model_executor/model_loader/loader.py. There are no logic changes, aside from some very slight differences to how ModelScope is interacted with.
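To make the new structure concrete, a minimal sketch of the interface described above (class and method names follow the PR description; the signatures and the _initialize_model helper are assumptions, not the exact code in loader.py):

from abc import ABC, abstractmethod
from typing import Iterator, Tuple

import torch
from torch import nn


class BaseModelLoader(ABC):
    """Illustrative base class for the ModelLoader interface."""

    def __init__(self, load_config):
        self.load_config = load_config

    @abstractmethod
    def load_model(self, *, model_config, device_config) -> nn.Module:
        """Build the model and load its weights according to load_config."""
        ...


class DefaultModelLoader(BaseModelLoader):
    """Sketch of a disk-backed loader feeding weights to model.load_weights()."""

    def load_model(self, *, model_config, device_config) -> nn.Module:
        model = _initialize_model(model_config, device_config)  # hypothetical helper
        # Per the PR, load_weights now takes an iterator of (name, tensor) pairs.
        model.load_weights(self._iter_weights(model_config))
        return model.eval()

    def _iter_weights(self, model_config) -> Iterator[Tuple[str, torch.Tensor]]:
        ...  # yield (parameter_name, tensor) pairs read from checkpoint files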
PR Checklist (Click to Expand)
Thank you for your contribution to vLLM! Before submitting the pull request, please ensure the PR meets the following criteria. This helps vLLM maintain the code quality and improve the efficiency of the review process.
PR Title and Classification
Only specific types of PRs will be reviewed. The PR title is prefixed appropriately to indicate the type of change. Please use one of the following:
- [Bugfix] for bug fixes.
- [CI/Build] for build or continuous integration improvements.
- [Doc] for documentation fixes and improvements.
- [Model] for adding a new model or improving an existing model. Model name should appear in the title.
- [Frontend] for changes on the vLLM frontend (e.g., OpenAI API server, LLM class, etc.)
- [Kernel] for changes affecting CUDA kernels or other compute kernels.
- [Core] for changes in the core vLLM logic (e.g., LLMEngine, AsyncLLMEngine, Scheduler, etc.)
- [Hardware][Vendor] for hardware-specific changes. Vendor name should appear in the prefix (e.g., [Hardware][AMD]).
- [Misc] for PRs that do not fit the above categories. Please use this sparingly.

Note: If the PR spans more than one category, please include all relevant prefixes.
Code Quality
The PR needs to meet the following code quality standards:
- Please use format.sh to format your code.
- Please add documentation to docs/source/ if the PR modifies the user-facing behaviors of vLLM. It helps vLLM users understand and utilize the new features or changes.

Notes for Large Changes
Please keep the changes as concise as possible. For major architectural changes (>500 LOC excluding kernel/data/config/test), we would expect a GitHub issue (RFC) discussing the technical design and justification. Otherwise, we will tag it with rfc-required and might not go through the PR.

What to Expect for the Reviews
The goal of the vLLM team is to be a transparent reviewing machine. We would like to make the review process transparent and efficient and make sure no contributor feels confused or frustrated. However, the vLLM team is small, so we need to prioritize some PRs over others. Here is what you can expect from the review process:
- The reviewer will put an action-required label on the PR if there are changes required. The contributor should address the comments and ping the reviewer to re-review the PR.

Thank You
Finally, thank you for taking the time to read these guidelines and for your interest in contributing to vLLM. Your contributions make vLLM a great tool for everyone!