refactor: merge duplicate codes in SGLang/vLLM engines #445
Conversation
Summary of Changes

Hello @garrett4wade, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request significantly refactors the remote inference engine architecture by extracting common functionality into a new shared `RemoteInfEngine`.
Code Review
This pull request is a significant and well-executed refactoring that merges duplicated code from the SGLang and vLLM remote inference engines into a common RemoteInfEngine. The introduction of the RemoteInfBackendProtocol and the use of composition over inheritance is a great design choice that improves maintainability and extensibility. My review focuses on a few areas for improvement, primarily converting some remaining synchronous HTTP calls to be asynchronous to match the rest of the async-heavy design, and some minor code simplification and documentation enhancements.
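The composition-over-inheritance design praised above can be sketched roughly as follows. This is an illustrative sketch, not the PR's actual interface: the method names, payload fields, and the fake round trip are all assumptions.

```python
# Illustrative sketch (names are assumptions, not the PR's actual API):
# RemoteInfEngine owns a backend object satisfying RemoteInfBackendProtocol
# instead of each backend subclassing a shared engine.
from typing import Protocol


class RemoteInfBackendProtocol(Protocol):
    def build_generate_payload(self, prompt: str) -> dict: ...
    def parse_generate_response(self, response: dict) -> str: ...


class SGLangBackend:
    """Backend-specific logic lives in a small, swappable object."""

    def build_generate_payload(self, prompt: str) -> dict:
        return {"text": prompt, "sampling_params": {"max_new_tokens": 64}}

    def parse_generate_response(self, response: dict) -> str:
        return response["text"]


class RemoteInfEngine:
    """Backend-agnostic engine: all shared logic lives here."""

    def __init__(self, backend: RemoteInfBackendProtocol):
        self.backend = backend

    def generate(self, prompt: str) -> str:
        payload = self.backend.build_generate_payload(prompt)
        # A real engine would POST `payload` to the inference server; the
        # round trip is faked here to keep the sketch self-contained.
        fake_response = {"text": payload["text"] + " ...completion"}
        return self.backend.parse_generate_response(fake_response)


engine = RemoteInfEngine(SGLangBackend())
print(engine.generate("hello"))  # hello ...completion
```

A vLLM backend would be a second small class with the same two methods; the engine itself never changes.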
/gemini review
Code Review
This pull request is a significant and well-executed refactoring that merges duplicated code from the SGLang and vLLM remote inference engines into a common base class, RemoteInfEngine. The use of a RemoteInfBackendProtocol to abstract backend-specific logic is a great design choice that improves maintainability and extensibility. The overall structure is much cleaner. I've identified a critical bug in an assert statement that could crash the generation process, a high-severity resource leak, and a couple of medium-severity issues related to performance and code style. After addressing these points, the code will be in excellent shape.
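The resource leak flagged above is the classic unclosed-session problem. The sketch below uses a toy `Session` class standing in for something like `aiohttp.ClientSession` (a deliberate assumption, so the example stays dependency-free); only the lifecycle pattern matters: create one session lazily, reuse it across calls, and close it explicitly on shutdown.

```python
# Toy stand-in for a network session (e.g. aiohttp.ClientSession) showing
# the leak-free pattern: create once, reuse, close on shutdown.
import asyncio


class Session:
    def __init__(self):
        self.closed = False

    async def post(self, url: str, json: dict) -> dict:
        return {"ok": True, "url": url}  # fake network call

    async def close(self):
        self.closed = True


class RemoteEngine:
    def __init__(self):
        self._session = None

    @property
    def session(self) -> Session:
        # Lazily create one shared session instead of one per request.
        if self._session is None:
            self._session = Session()
        return self._session

    async def update_weights(self, path: str) -> dict:
        return await self.session.post("/update_weights_from_disk", json={"path": path})

    async def destroy(self):
        # Explicitly close the session so no connection is leaked.
        if self._session is not None:
            await self._session.close()


async def main():
    engine = RemoteEngine()
    r1 = await engine.update_weights("/tmp/ckpt")
    await engine.update_weights("/tmp/ckpt2")  # reuses the same session
    await engine.destroy()
    return r1["ok"], engine._session.closed


print(asyncio.run(main()))  # (True, True)
```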
/gemini review
Code Review
This pull request is a well-executed refactoring that merges duplicated code from the SGLang and vLLM remote engines into a common RemoteInfEngine base. The use of a protocol-based backend is a great design choice that improves maintainability and clarity. My review focuses on a few key areas: a potential bug in a data-copying method, a resource leak due to an unclosed network session, and opportunities to make the new wrapper classes more robust. Overall, the changes are excellent and significantly improve the codebase.
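A safe data-copying method of the kind flagged above typically needs to copy mutable fields rather than alias them. The sketch below is one plausible shape for such a method; the field names (`rid`, `input_ids`, `gconfig`) are invented for illustration and need not match the real `ModelRequest`.

```python
# Hypothetical ModelRequest with a copy() method that duplicates mutable
# fields, so mutating the copy never touches the original.
import copy
from dataclasses import dataclass, field


@dataclass
class ModelRequest:
    rid: str
    input_ids: list = field(default_factory=list)
    gconfig: dict = field(default_factory=dict)

    def copy(self) -> "ModelRequest":
        return ModelRequest(
            rid=self.rid,
            input_ids=list(self.input_ids),          # shallow list copy
            gconfig=copy.deepcopy(self.gconfig),     # nested config copy
        )


a = ModelRequest(rid="r1", input_ids=[1, 2], gconfig={"n": 1})
b = a.copy()
b.input_ids.append(3)
print(a.input_ids, b.input_ids)  # [1, 2] [1, 2, 3]
```

The common bug in such methods is returning `ModelRequest(**self.__dict__)`-style shallow copies, which share the underlying lists and dicts between original and copy.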
Force-pushed from d3c83e1 to c890aec (Compare)
/gemini review
Code Review
This pull request introduces a significant and well-executed refactoring of the remote inference engines. By creating a base RemoteInfEngine and a RemoteInfBackendProtocol, you've successfully abstracted the common logic for SGLang and vLLM backends, which greatly improves code clarity, reduces duplication, and enhances maintainability. The introduction of new data structures like HttpRequest and HttpGenerationResult further standardizes the communication layer.
My review includes a few suggestions:
- In `areal/core/remote_inf_engine.py`, I've pointed out an opportunity to improve performance in the helper functions (`_update_weights_from_disk`, `_init_weights_update_group_remote`, `_update_weights_from_distributed`) by creating and reusing a single `aiohttp.ClientSession` for network requests.
- In `areal/experimental/sglang_engine.py`, I've noted a redundant import that can be cleaned up.
Overall, this is a high-quality contribution that strengthens the codebase. The new architecture is much cleaner and more extensible.
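The `HttpRequest` and `HttpGenerationResult` structures mentioned in this review could plausibly look like the dataclasses below. The field names and defaults here are assumptions for illustration; the real definitions in `areal/api/io_struct.py` may differ.

```python
# Hypothetical shapes for the new HTTP data structures (fields are
# assumptions, not the actual areal/api/io_struct.py definitions).
from dataclasses import dataclass, field


@dataclass
class HttpRequest:
    endpoint: str                              # e.g. "/generate"
    payload: dict = field(default_factory=dict)
    method: str = "POST"


@dataclass
class HttpGenerationResult:
    text: str
    finish_reason: str = "stop"
    num_tokens: int = 0


req = HttpRequest(endpoint="/generate", payload={"text": "hi"})
res = HttpGenerationResult(text="hi there", num_tokens=2)
print(req.method, res.finish_reason)  # POST stop
```

Standardizing on typed structures like these lets the shared engine build and parse requests uniformly while each backend only fills in its own payload details.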
This pull request introduces a new staleness-aware rollout capacity controller, expands the API data structures for remote inference, and refactors the core module organization for better clarity and maintainability. It also updates test cases to reflect internal structure changes and fixes minor test logic issues.
Staleness Control and Core Refactor:
- Introduced a `StalenessController` class to manage rollout concurrency and staleness constraints, ensuring asynchronous rollouts in RL training do not exceed configured limits or become too off-policy.
- Created an `areal/core` module to house core components, including the new `StalenessController` and `WorkflowExecutor`, and updated imports accordingly. [1] [2]
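The admission logic such a controller enforces can be sketched as below. This is a minimal sketch under stated assumptions: the method names (`can_submit`, `submit`, `finish`) and the version-gap staleness measure are illustrative, not the actual `StalenessController` API.

```python
# Minimal sketch of staleness-aware admission control: a rollout is
# admitted only if concurrency is below a cap AND its weight version is
# not too far behind the trainer's current version (i.e. not too off-policy).
class StalenessController:
    def __init__(self, max_concurrent: int, max_staleness: int):
        self.max_concurrent = max_concurrent
        self.max_staleness = max_staleness
        self.running = 0           # rollouts currently in flight
        self.current_version = 0   # latest trainer weight version

    def can_submit(self, rollout_version: int) -> bool:
        too_many = self.running >= self.max_concurrent
        too_stale = self.current_version - rollout_version > self.max_staleness
        return not (too_many or too_stale)

    def submit(self, rollout_version: int) -> bool:
        if not self.can_submit(rollout_version):
            return False
        self.running += 1
        return True

    def finish(self):
        self.running -= 1


ctl = StalenessController(max_concurrent=2, max_staleness=1)
print(ctl.submit(0))   # True  (fresh, capacity available)
print(ctl.submit(0))   # True
print(ctl.submit(0))   # False (concurrency cap of 2 reached)
ctl.finish()
ctl.current_version = 3
print(ctl.submit(1))   # False (staleness 2 exceeds max_staleness 1)
print(ctl.submit(2))   # True  (staleness 1 is within the limit)
```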
API and Data Structure Enhancements:
- Added new data structures in `areal/api/io_struct.py` for remote inference workflows: `HttpRequest`, `HttpGenerationResult`, and `WeightUpdateRequests`, enabling structured HTTP communication and weight update operations.
- Added a `copy` method for the `ModelRequest` dataclass to facilitate safe duplication of model request objects.

Test Updates and Fixes: