RL with vllm-native support (qwen3 converter) #3767
Merged
copybara-service[bot] merged 1 commit into main on May 1, 2026
Conversation
NicoGrande approved these changes on Apr 30, 2026
Description
Implement/refactor the MaxText to vLLM weight conversion into a reusable converter style:

- `BaseMaxTextToVLLMConverter` as the shared converter class, with the Qwen3-MOE implementation isolated into `torchax_converter/qwen3_moe.py`.
- `maxtext.integration.vllm.torchax_converter.validate_converter` for VM testing and generation-based weight-transfer checks.
- `use_standalone_converter` config flag so RL rollout can explicitly opt into the standalone MaxText to vLLM converter path.
- `MaxTextVllmRollout` with model-specific converter creation instead of importing converter logic from the bench script.
- `_make_fuse_all`.

Tests
- requirements (tpu-inference/vllm are critical)
- logs
- Standalone weight sync and decode on a v5p-8 VM:
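For context on the opt-in behavior tested above, the `use_standalone_converter` flag from the description could be wired roughly as follows. Only the flag name comes from the PR; the config class, field defaults, and function below are hypothetical illustrations.

```python
# Hypothetical sketch of how a rollout might gate on use_standalone_converter.
# Only the flag name appears in the PR description; the rest is illustrative.
from dataclasses import dataclass


@dataclass
class RolloutConfig:
    model_name: str = "qwen3-moe"
    use_standalone_converter: bool = False  # flag added by this PR


def select_weight_sync_path(config: RolloutConfig) -> str:
    if config.use_standalone_converter:
        # Explicitly opt into the standalone MaxText -> vLLM converter path.
        return "standalone_converter"
    # Otherwise fall back to the pre-existing conversion path.
    return "legacy"


print(select_weight_sync_path(RolloutConfig(use_standalone_converter=True)))
```

Making the new path opt-in keeps existing RL rollouts on their current behavior until a config explicitly flips the flag.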
Checklist
Before submitting this PR, please make sure (put X in square brackets):
`gemini-review` label.