
[misc] support structured_outputs_regex#7215

Merged
hjh0119 merged 4 commits into modelscope:main from hjh0119:guided_decode
Dec 26, 2025
Conversation


@hjh0119 hjh0119 commented Dec 26, 2025

No description provided.

@gemini-code-assist (Contributor) commented:

Summary of Changes

Hello @hjh0119, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the system by adding support for structured outputs through guided decoding, specifically for the vLLM inference engine. This enables more controlled and predictable model generation by allowing users to define a regular expression pattern that the output must adhere to. The changes involve updating core argument parsing, adapting the vLLM inference engine to utilize this new capability with backward compatibility, and ensuring all relevant documentation is up-to-date.

Highlights

  • New Feature: Structured Outputs (Guided Decoding): Introduced a new parameter, structured_outputs_regex, which allows users to constrain the model's generation to match a specified regular expression pattern. This is crucial for tasks requiring structured outputs from large language models.
  • vLLM Backend Integration: The structured_outputs_regex feature is specifically implemented for the vLLM inference backend, leveraging its capabilities for guided decoding. Compatibility logic is included to support different versions of vLLM (v0.12+ vs. older versions) by dynamically using structured_outputs or guided_decoding parameters.
  • Argument Propagation and Documentation: The new structured_outputs_regex argument has been added to GenerationArguments, RequestConfig, RolloutTrainerArgumentsMixin, and integrated into GRPO and RLHF trainers. Comprehensive documentation in both English and Chinese has been updated to reflect this new command-line parameter.
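To illustrate what the constraint described above means in practice, here is a minimal sketch using only Python's `re` module. The regex value and the `matches_constraint` helper are illustrative, not part of ms-swift or vLLM; with guided decoding enabled, the engine guarantees that every generation satisfies the pattern passed as `structured_outputs_regex`.

```python
import re

# Illustrative value a user might pass via --structured_outputs_regex:
# constrain generations to a single "key: value" line with an integer value.
structured_outputs_regex = r"[a-z_]+: \d+"

def matches_constraint(text: str, pattern: str) -> bool:
    """Return True if the entire generated text satisfies the regex constraint."""
    return re.fullmatch(pattern, text) is not None

print(matches_constraint("temperature: 42", structured_outputs_regex))   # True
print(matches_constraint("free-form prose", structured_outputs_regex))   # False
```

Unlike post-hoc validation shown here, vLLM's guided decoding enforces the pattern during sampling, so non-conforming tokens are never emitted in the first place.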



@gemini-code-assist gemini-code-assist bot left a comment
Code Review

This pull request adds support for structured_outputs_regex to enable guided decoding with the vLLM backend. The changes are well-integrated, covering documentation, argument parsing, and the inference engine logic. The implementation correctly handles different vLLM versions by checking for StructuredOutputsParams and falling back to GuidedDecodingParams. My main feedback is a suggestion to simplify a condition check in the vLLM engine for better readability and maintainability.


hjh0119 commented Dec 26, 2025

/gemini review

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request effectively integrates support for structured_outputs_regex to enable guided decoding with vLLM. The changes are well-propagated through argument parsing, configuration objects, and the inference engine. The implementation also correctly handles backward compatibility with older vLLM versions. I've identified one issue where the new argument is missing from the Megatron-specific argument classes, which would prevent the feature from working in that context. A code suggestion is provided to fix this.

Diff context (RLHFMegatronArgumentsMixin defaults):

top_k: int = 50
top_p: float = 0.9
repetition_penalty: float = 1.

@gemini-code-assist gemini-code-assist bot commented (severity: medium):

To enable structured_outputs_regex for the Megatron GRPO trainer, it needs to be added as an argument here in RLHFMegatronArgumentsMixin. Without this, getattr(args, 'structured_outputs_regex', None) in swift/megatron/trainers/grpo_trainer.py will always return None, preventing the feature from being used in this configuration.

Suggested change
structured_outputs_regex: Optional[str] = None

@hjh0119 hjh0119 merged commit 323adac into modelscope:main Dec 26, 2025
2 of 3 checks passed
@hjh0119 hjh0119 deleted the guided_decode branch December 26, 2025 08:39
meichangsu1 pushed a commit to tpx818/ms-swift that referenced this pull request Jan 22, 2026


Development

Successfully merging this pull request may close these issues.

GRPO training: support structured outputs with the vLLM backend · How to enable structured outputs when running inference with VllmEngine
