
[Feature]: Expose option to load new model weights from disk #12774

Open · 1 task done
edbeeching opened this issue Feb 5, 2025 · 3 comments
Labels: feature request (New feature or request)

🚀 The feature, motivation and pitch

In an async RL setting, we often want to perform fast generation with a vLLM endpoint on a separate node and occasionally sync model weights from disk. It would be good if this option were available on the vLLM endpoint.

Alternatives

SGLang already exposes this option: https://docs.sglang.ai/backend/native_api.html#Update-Weights-From-Disk
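
For reference, a minimal sketch of calling SGLang's documented endpoint (the route and `model_path` field come from the linked docs; the port and checkpoint path below are placeholders for a real deployment):

```python
import requests

# Sketch of SGLang's update-weights-from-disk call, based on the linked
# docs; 30000 is SGLang's default port, and the checkpoint path is a
# placeholder. Verify the payload against your SGLang version.
resp = requests.post(
    "http://localhost:30000/update_weights_from_disk",
    json={"model_path": "/checkpoints/policy-step-0042"},
)
print(resp.json())
```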

Additional context

No response

Before submitting a new issue...

- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.
edbeeching added the feature request label Feb 5, 2025
mgoin (Member) commented Feb 5, 2025

Hi @edbeeching, can you see if this feature achieves what you need? #12084

We have been actively working on adding new features to better support RL workflows.
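
For anyone landing here, a rough sketch of the `collective_rpc`-style weight sync that #12084 enables. The worker-side method name `load_weights_from_disk` is an illustrative assumption, not a built-in vLLM method: you would supply it yourself (e.g. via a worker extension) so each worker reloads its shard from the checkpoint directory.

```python
from vllm import LLM

# Rough sketch of weight syncing via collective_rpc (#12084).
# "load_weights_from_disk" is a hypothetical worker-side method you
# would implement yourself; it is not shipped by vLLM.
llm = LLM(model="/checkpoints/policy-step-0000")

# Broadcast the call to every worker in the engine.
llm.collective_rpc(
    "load_weights_from_disk",
    args=("/checkpoints/policy-step-0042",),
)
```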

danya0123 commented Mar 3, 2025

@mgoin it would be nice to allow unloading the model (to save GPU memory) and reloading it too.

#6566 can only unload LoRA adapters.
#3281 would require reworking the entire HTTP interface.
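
For the memory side of this, a hedged sketch of vLLM's sleep mode, which offloads weights and frees the KV cache without tearing the engine down. This assumes a version that ships `enable_sleep_mode`; check your installed release's docs.

```python
from vllm import LLM

# Sketch using vLLM's sleep mode (assumes a version that supports
# enable_sleep_mode; verify against your installed release).
llm = LLM(model="Qwen/Qwen2.5-0.5B-Instruct", enable_sleep_mode=True)

llm.sleep(level=1)  # offload weights to CPU RAM and discard the KV cache
# ... GPU memory is now free for other work, e.g. a training step ...
llm.wake_up()       # restore weights and reallocate the KV cache
```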

lewtun (Contributor) commented Mar 18, 2025

@mgoin thanks for the pointer to #12084!

What Ed is referring to is whether this collective op can be exposed in the OpenAI-compatible server as a dedicated endpoint. For context, we'd like to spin up a vLLM server on N nodes and run training on M nodes. At each training step, we'd like to synchronise the weights so that the vLLM server is generating from the current policy.

We did look at #12084, but it seems to require us to adopt Ray, which adds considerable complexity to trl.
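
To make the desired flow concrete, a sketch of the client side we have in mind, with a hypothetical `/update_weights_from_disk` route (not an existing vLLM endpoint; the route, payload, and helper are illustrative assumptions) on each inference node:

```python
import requests

# Hypothetical sync loop: training nodes write checkpoints to shared
# storage, then tell every vLLM inference node to reload from disk.
# The /update_weights_from_disk route does not exist in vLLM today;
# it mirrors SGLang's API and is the feature being requested here.
VLLM_SERVERS = [f"http://node-{i}:8000" for i in range(2)]


def train_one_step(step: int) -> str:
    """Placeholder for a real training step that writes a checkpoint."""
    return f"/checkpoints/policy-step-{step:04d}"


for step in range(100):
    checkpoint_dir = train_one_step(step)
    for server in VLLM_SERVERS:
        # Ask each inference node to reload weights from shared storage,
        # so generation samples from the current policy.
        requests.post(
            f"{server}/update_weights_from_disk",
            json={"model_path": checkpoint_dir},
        )
```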
