feat: add reward model + min_p speculative decode #19968
Conversation
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI, which starts only a small and essential subset of CI tests to quickly catch errors. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge. 🚀
Summary of Changes
Hello @jatery55555, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces the capability to integrate and serve a reward model within the vLLM framework. It primarily focuses on adding a new API endpoint (`/v1/score`) that enables users to query a loaded reward model for scoring prompt-response pairs. This involves creating a dedicated model class for QWen-based reward models and registering the new API route, enhancing vLLM's utility for reinforcement learning from human feedback (RLHF) applications.
Highlights
- New API Endpoint for Reward Scoring: I've added a new `/v1/score` API endpoint to the vLLM API server. This endpoint allows external clients to submit a `prompt` and `response` pair and receive a scalar reward `score` from a loaded reward model. This is particularly useful for applications requiring programmatic access to reward model evaluations.
- QWen Reward Model Integration: I've introduced a new `QWenRewardModel` class, which extends the existing `QWen` model. This class is specifically designed to load a pre-trained reward head (a small neural network) and compute a reward score. The scoring mechanism involves extracting the hidden state corresponding to the `eos_token` from the base QWen model and passing it through the loaded reward head (a minimal sketch of this mechanism follows this list).
- Robust Tokenizer Separator Handling: I've improved the logic for constructing prompt strings in cross-encoding scoring within `serving_score.py`. The change ensures that the separator token (`tokenizer.sep_token`) is handled gracefully, defaulting to an empty string if `tokenizer.sep_token` is `None`. This makes the system more robust to different tokenizer configurations.
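To make the scoring mechanism concrete, here is a minimal sketch of a reward head applied to the EOS hidden state. This is an illustration inferred from the summary above, not the PR's actual code; the class name `RewardHeadSketch` and the single `nn.Linear` architecture are assumptions.

```python
import torch
import torch.nn as nn

class RewardHeadSketch(nn.Module):
    """Sketch: map the hidden state at the EOS position to a scalar reward."""

    def __init__(self, hidden_size: int):
        super().__init__()
        # Assumed architecture: a single linear projection to one scalar.
        self.reward_head = nn.Linear(hidden_size, 1)

    def forward(self, hidden: torch.Tensor, eos_mask: torch.Tensor) -> torch.Tensor:
        # hidden: [B, T, H] from the base model; eos_mask: [B, T] bool, True at EOS.
        eos_index = eos_mask.int().argmax(dim=1)  # [B]
        eos_hidden = hidden[torch.arange(hidden.size(0), device=hidden.device),
                            eos_index]            # [B, H]
        return self.reward_head(eos_hidden).squeeze(-1)  # [B]
```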
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e., a comment on the pull request page) using either `/gemini <command>` or `@gemini-code-assist <command>`. Below is a summary of the supported commands.
| Feature | Command | Description |
|---|---|---|
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |
Customization
To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a `.gemini/` folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
Footnotes

1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution. ↩
Code Review
The pull request introduces a reward model and speculative decoding functionality. It adds a new `QWenRewardModel` class, a `/v1/score` API endpoint, and related utilities. The code appears well-structured and addresses the intended functionality. However, some improvements can be made regarding error handling, logging, and potential edge cases.
```python
if not hasattr(model, "score"):
    return {"score": -1.0}  # or raise an error
```
Returning a score of -1.0 when the model doesn't have a `score` attribute might be misleading. Consider raising an exception with a more informative error message, or providing a configuration option to disable the `/v1/score` endpoint if the model doesn't support it.

```python
if not hasattr(model, "score"):
    raise AttributeError("The model does not have a 'score' attribute.")
```
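For reference, a client call against the new endpoint might look like the sketch below. The request and response field names (`prompt`, `response`, `score`) are inferred from the PR summary and are assumptions, not confirmed against the diff.

```python
import requests

# Hypothetical request against a locally served vLLM instance with the
# reward model loaded; the payload shape may differ in the actual PR.
resp = requests.post(
    "http://localhost:8000/v1/score",
    json={
        "prompt": "What is the capital of France?",
        "response": "The capital of France is Paris.",
    },
)
print(resp.json())  # e.g. {"score": 0.87}
```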
```python
sep_token = tokenizer.sep_token if tokenizer.sep_token else ''
request_prompt = f"{t1}{sep_token}{t2}"
```
It's good to handle the case where `tokenizer.sep_token` might be `None`. However, consider logging a warning if `tokenizer.sep_token` is indeed `None`, as it might indicate an unexpected tokenizer configuration.

```python
sep_token = tokenizer.sep_token if tokenizer.sep_token else ''
if not tokenizer.sep_token:
    logger.warning("tokenizer.sep_token is None, using empty string as separator.")
request_prompt = f"{t1}{sep_token}{t2}"
```
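For context on why this fallback matters: encoder-style tokenizers typically define a separator (e.g. BERT's `[SEP]`), while most decoder-only tokenizers leave `sep_token` unset, so the empty-string path is the common case for models like QWen. A quick check, assuming the `transformers` library (model names are illustrative):

```python
from transformers import AutoTokenizer

bert_tok = AutoTokenizer.from_pretrained("bert-base-uncased")
print(bert_tok.sep_token)  # "[SEP]"

gpt_tok = AutoTokenizer.from_pretrained("gpt2")
print(gpt_tok.sep_token)   # None
```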
```python
if not os.path.exists(reward_head_path):
    raise FileNotFoundError(f"reward_head.pt not found: {reward_head_path}")
```
Consider logging a warning instead of raising a `FileNotFoundError` if the reward head parameters are not found. This would allow the model to run without the reward head, potentially in a different mode. Also, provide instructions on how to obtain or train the reward head.
```python
if not os.path.exists(reward_head_path):
    logger.warning(f"Reward head not found at {reward_head_path}. "
                   "Functionality depending on the reward model will be disabled.")
    self.reward_head = None  # or some other appropriate default
else:
    # load the reward head as before
    ...
```
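On the reviewer's last point, the instructions could be as simple as documenting how `reward_head.pt` is produced. A minimal sketch, assuming the reward head is a single `nn.Linear` saved as a plain state dict (both assumptions; the actual format depends on the PR):

```python
import torch
import torch.nn as nn

hidden_size = 4096  # assumed hidden size of the base QWen model
reward_head = nn.Linear(hidden_size, 1)
# ... train the head on preference data against frozen base-model hidden states ...
torch.save(reward_head.state_dict(), "reward_head.pt")

# Loading side, mirroring what QWenRewardModel presumably does:
reward_head.load_state_dict(torch.load("reward_head.pt", map_location="cpu"))
```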
```python
eos_index = eos_mask.int().argmax(dim=1)  # [B]
eos_hidden = hidden[torch.arange(input_ids.size(0)), eos_index]  # [B, H]
```
Consider adding a check for sequences that contain no EOS token. `argmax` over an all-False `eos_mask` returns 0, so for such sequences `eos_index` silently points at the first token's hidden state, which might not be what's intended.
```python
eos_index = eos_mask.int().argmax(dim=1)  # [B]
if not eos_mask.any(dim=1).all():
    logger.warning("EOS token not found in some inputs; falling back to index 0.")
eos_hidden = hidden[torch.arange(input_ids.size(0)), eos_index]  # [B, H]
```
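A quick demonstration of the `argmax` fallback behavior this check guards against (PyTorch returns the first index on ties, so an all-zero mask yields 0):

```python
import torch

mask = torch.zeros(1, 5, dtype=torch.bool)  # no EOS anywhere
print(mask.int().argmax(dim=1))  # tensor([0]), never the sequence length
```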
[Feat] Support min_p in speculative decoding
Essential Elements of an Effective PR Description Checklist
- (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.

Purpose
Test Plan
Test Result
(Optional) Documentation Update
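Since the template above is not yet filled in, one note on the feature itself for reviewers: min_p sampling keeps only tokens whose probability is at least `min_p` times that of the most likely token. A minimal illustrative sketch of the filtering step (not vLLM's actual implementation):

```python
import torch

def apply_min_p(logits: torch.Tensor, min_p: float) -> torch.Tensor:
    """Mask logits of tokens with prob < min_p * max_prob. logits: [B, V]."""
    probs = torch.softmax(logits, dim=-1)
    max_prob = probs.max(dim=-1, keepdim=True).values  # [B, 1]
    return logits.masked_fill(probs < min_p * max_prob, float("-inf"))
```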