
refactor: merge duplicate codes in SGLang/vLLM engines#445

Merged
rchardx merged 26 commits into main from merge-sglang-vllm on Oct 21, 2025
Conversation

@garrett4wade
Collaborator

@garrett4wade garrett4wade commented Oct 13, 2025

This pull request introduces a new staleness-aware rollout capacity controller, expands the API data structures for remote inference, and refactors the core module organization for better clarity and maintainability. It also updates test cases to reflect internal structure changes and fixes minor test logic issues.

Staleness Control and Core Refactor:

  • Added a new StalenessController class to manage rollout concurrency and staleness constraints, ensuring asynchronous rollouts in RL training do not exceed configured limits or become too off-policy.
  • Created a new areal/core module to house core components, including the new StalenessController and WorkflowExecutor, and updated imports accordingly.
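As a rough illustration of the staleness constraint (field and method names here are assumptions for illustration, not the actual AReaL API), the controller admits a new rollout only while concurrency is below the configured cap and the sample's weight version is close enough to the current one:

```python
from dataclasses import dataclass


@dataclass
class StalenessController:
    """Sketch of a staleness-aware rollout capacity controller."""
    max_concurrent_rollouts: int
    max_staleness: int       # largest allowed gap between weight versions
    current_version: int = 0 # version of the most recently published weights
    inflight: int = 0        # rollouts currently running

    def can_submit(self, sample_version: int) -> bool:
        # Capacity check: do not exceed the concurrency limit.
        within_capacity = self.inflight < self.max_concurrent_rollouts
        # Staleness check: reject samples generated by weights that are
        # too many versions behind the current policy.
        fresh_enough = (self.current_version - sample_version) <= self.max_staleness
        return within_capacity and fresh_enough
```

A controller like this would be consulted both before launching a rollout and before accepting its result into the training batch.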

API and Data Structure Enhancements:

  • Added new dataclasses in areal/api/io_struct.py for remote inference workflows: HttpRequest, HttpGenerationResult, and WeightUpdateRequests, enabling structured HTTP communication and weight update operations.
  • Implemented a copy method for the ModelRequest dataclass to facilitate safe duplication of model request objects.
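A minimal sketch of what such dataclasses might look like (all field names are illustrative guesses; see areal/api/io_struct.py for the real definitions):

```python
import copy
from dataclasses import dataclass, field


@dataclass
class HttpRequest:
    """Structured HTTP request to an inference server."""
    method: str
    url: str
    payload: dict = field(default_factory=dict)


@dataclass
class HttpGenerationResult:
    """Parsed generation result returned over HTTP."""
    output_tokens: list
    finish_reason: str = "stop"


@dataclass
class WeightUpdateRequests:
    """Batch of weight-update requests, one per server."""
    requests: list = field(default_factory=list)


@dataclass
class ModelRequest:
    rid: str
    input_ids: list = field(default_factory=list)

    def copy(self) -> "ModelRequest":
        # Deep copy so mutating the duplicate cannot affect the original.
        return copy.deepcopy(self)
```

The deep copy in ModelRequest.copy matters because shallow copies would share the mutable input_ids list between the original and the duplicate.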

Test Updates and Fixes:

  • Updated test cases to access the correct internal engine attributes after the core refactor, ensuring assertions check the correct workflow executor queue.
  • Fixed test logic for disk-based weight updates by properly creating temporary directories before use in both SGLang and vLLM engine tests.
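The disk-update fix boils down to creating the temporary directory before handing its path to the engine; a minimal sketch of the pattern (the engine call is shown only as a comment, since the real API is not reproduced here):

```python
import os
import tempfile

# Create the directory first, then pass its path to the engine under test;
# the directory and its contents are cleaned up when the context exits.
with tempfile.TemporaryDirectory() as tmp:
    weights_dir = os.path.join(tmp, "weights")
    os.makedirs(weights_dir, exist_ok=True)
    # engine.update_weights_from_disk(weights_dir) would go here
    assert os.path.isdir(weights_dir)
```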

@gemini-code-assist
Contributor

Summary of Changes

Hello @garrett4wade, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly refactors the remote inference engine architecture by extracting common functionalities into a new RemoteInfEngine and defining a RemoteInfBackendProtocol. This change allows for a more modular and maintainable codebase, as backend-specific logic for SGLang and vLLM is now isolated in dedicated backend implementations. The refactoring promotes code reuse and simplifies the integration of future inference backends by clearly separating generic engine operations from unique backend API interactions.

Highlights

  • Core Remote Inference Engine: Introduced a new RemoteInfEngine class and RemoteInfBackendProtocol interface in areal/core/remote_inf_engine.py. This centralizes common logic for remote inference engines, including server discovery, health checks, request scheduling, and weight updates, reducing duplication across different backend implementations.
  • SGLang Engine Refactoring: The RemoteSGLangEngine in areal/engine/sglang_remote.py has been refactored to use the new RemoteInfEngine via composition. SGLang-specific logic for building requests and parsing responses is now encapsulated in a new SGLangBackend class, adhering to the RemoteInfBackendProtocol.
  • vLLM Engine Refactoring: Similarly, the RemotevLLMEngine in areal/engine/vllm_remote.py has been refactored to leverage the RemoteInfEngine. A new VLLMBackend class handles vLLM-specific API interactions, implementing the RemoteInfBackendProtocol.
  • New I/O Structures: Added HttpRequest, HttpGenerationResult, and WeightUpdateRequests dataclasses to areal/api/io_struct.py to standardize the representation of HTTP requests, generation results, and weight update requests across different backends.
  • Test Updates: Updated existing test files (areal/tests/test_sglang_engine.py, areal/tests/test_vllm_engine.py) to reflect the new internal structure of the refactored engines, specifically how workflow_executor is accessed.
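The composition pattern described in the highlights can be sketched as follows (method names, payload shapes, and the send callable are assumptions for illustration, not the project's actual interfaces):

```python
from typing import Callable, Protocol


class RemoteInfBackendProtocol(Protocol):
    """Backend-specific request building and response parsing."""
    def build_generation_request(self, prompt: str) -> dict: ...
    def parse_generation_response(self, raw: dict) -> str: ...


class RemoteInfEngine:
    """Generic engine: owns transport and scheduling, delegates
    backend-specific details to an injected backend object."""
    def __init__(self, backend: RemoteInfBackendProtocol,
                 send: Callable[[dict], dict]):
        self.backend = backend
        self.send = send  # transport, e.g. an HTTP POST helper

    def generate(self, prompt: str) -> str:
        request = self.backend.build_generation_request(prompt)
        return self.backend.parse_generation_response(self.send(request))


class SGLangBackend:
    """One concrete backend; a VLLMBackend would differ only in the
    request/response shapes it produces and consumes."""
    def build_generation_request(self, prompt: str) -> dict:
        return {"text": prompt, "sampling_params": {}}

    def parse_generation_response(self, raw: dict) -> str:
        return raw["text"]
```

Because the engine holds the backend by composition rather than inheritance, adding a new inference backend means implementing the protocol, not subclassing the engine.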

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request is a significant and well-executed refactoring that merges duplicated code from the SGLang and vLLM remote inference engines into a common RemoteInfEngine. The introduction of the RemoteInfBackendProtocol and the use of composition over inheritance is a great design choice that improves maintainability and extensibility. My review focuses on a few areas for improvement, primarily converting some remaining synchronous HTTP calls to be asynchronous to match the rest of the async-heavy design, and some minor code simplification and documentation enhancements.
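The sync-to-async conversion the review asks for follows a standard pattern: either rewrite the call with an async HTTP client, or offload the blocking call so the event loop stays responsive. A minimal, self-contained sketch of the latter (the health-check function is a stand-in for a real HTTP call such as requests.get):

```python
import asyncio
import time


def blocking_health_check(server: str) -> bool:
    # Stand-in for a synchronous HTTP call that would block the event loop.
    time.sleep(0.01)
    return True


async def async_health_check(server: str) -> bool:
    # Offload the blocking call to a worker thread; in the real engine this
    # would instead be an awaited aiohttp request.
    return await asyncio.to_thread(blocking_health_check, server)


async def check_all(servers: list[str]) -> list[bool]:
    # Health-check every server concurrently rather than one at a time.
    return await asyncio.gather(*(async_health_check(s) for s in servers))
```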

@garrett4wade
Collaborator Author

/gemini review

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request is a significant and well-executed refactoring that merges duplicated code from the SGLang and vLLM remote inference engines into a common base class, RemoteInfEngine. The use of a RemoteInfBackendProtocol to abstract backend-specific logic is a great design choice that improves maintainability and extensibility. The overall structure is much cleaner. I've identified a critical bug in an assert statement that could crash the generation process, a high-severity resource leak, and a couple of medium-severity issues related to performance and code style. After addressing these points, the code will be in excellent shape.

@garrett4wade
Collaborator Author

/gemini review

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request is a well-executed refactoring that merges duplicated code from the SGLang and vLLM remote engines into a common RemoteInfEngine base. The use of a protocol-based backend is a great design choice that improves maintainability and clarity. My review focuses on a few key areas: a potential bug in a data-copying method, a resource leak due to an unclosed network session, and opportunities to make the new wrapper classes more robust. Overall, the changes are excellent and significantly improve the codebase.

@garrett4wade
Collaborator Author

/gemini review

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a significant and well-executed refactoring of the remote inference engines. By creating a base RemoteInfEngine and a RemoteInfBackendProtocol, you've successfully abstracted the common logic for SGLang and vLLM backends, which greatly improves code clarity, reduces duplication, and enhances maintainability. The introduction of new data structures like HttpRequest and HttpGenerationResult further standardizes the communication layer.

My review includes a few suggestions:

  • In areal/core/remote_inf_engine.py, I've pointed out an opportunity to improve performance in the helper functions (_update_weights_from_disk, _init_weights_update_group_remote, _update_weights_from_distributed) by creating and reusing a single aiohttp.ClientSession for network requests.
  • In areal/experimental/sglang_engine.py, I've noted a redundant import that can be cleaned up.
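The session-reuse suggestion above has this shape: construct one client session per batch of requests and pass it into each helper. The sketch below uses a counting stand-in for aiohttp.ClientSession so it runs without a network; the helper name mirrors the review comment, but its body is illustrative:

```python
import asyncio


class CountingSession:
    """Stand-in for aiohttp.ClientSession; counts constructions so the
    test can verify only one session is created per batch."""
    created = 0

    def __init__(self):
        CountingSession.created += 1

    async def post(self, url: str, payload: dict) -> dict:
        return {"url": url, "ok": True}


async def _update_weights_from_disk(session, url, payload):
    # Reuse the caller's session instead of constructing a new one per call.
    return await session.post(url, payload)


async def update_all(server_urls, payload):
    session = CountingSession()  # one session shared by the whole batch
    return await asyncio.gather(
        *(_update_weights_from_disk(session, u, payload) for u in server_urls)
    )
```

With aiohttp, the session would additionally be closed (or used as an async context manager) once the batch completes, since leaving it open is exactly the resource leak flagged earlier in the review.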

Overall, this is a high-quality contribution that strengthens the codebase. The new architecture is much cleaner and more extensible.

@garrett4wade garrett4wade marked this pull request as ready for review October 21, 2025 09:40
Collaborator

@rchardx rchardx left a comment


LGTM! cc @nuzant @dhh1995

@rchardx rchardx merged commit af64fa1 into main Oct 21, 2025
1 of 4 checks passed
@rchardx rchardx deleted the merge-sglang-vllm branch October 21, 2025 16:27