
[Frontend] [Bugfix] Refactor tool parsers and simplify the tool parsing interface. #11554

Draft: wants to merge 1 commit into base: main
Conversation

@elementary-particle commented Dec 27, 2024

This is the PR for RFC #11522. We are currently building a draft of simpler tool parsers that use streaming JSON parsing libraries to reduce overhead and avoid bugs. Tests and commits will be added gradually.

FIX #11392.

This commit was signed with the committer’s verified signature.
…` for streaming outputs.

We only use `delta_token_ids` and `delta_text` in the streaming code.
This avoids overhead and errors.
Note that finish reasons other than end of stream aren't addressed yet.

Signed-off-by: elementary-particle <quantum.field@outlook.com>
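The approach in this commit, parsing only the incremental `delta_text` instead of re-scanning the full generated output on every token, can be illustrated with a minimal stdlib sketch. This is not the PR's actual implementation (which uses a streaming JSON library such as ijson); the class and method names below are hypothetical, and `json.JSONDecoder.raw_decode` stands in for a true incremental parser.

```python
import json


class StreamingToolCallBuffer:
    """Accumulates delta_text chunks and yields complete JSON tool calls.

    A simplified illustration of the streaming idea: only the newly
    arrived text is fed in, and complete objects are emitted as soon
    as the buffer contains them. (Hypothetical sketch, not vLLM code.)
    """

    def __init__(self):
        self._buffer = ""
        self._decoder = json.JSONDecoder()

    def feed_delta(self, delta_text: str):
        """Append one streamed delta; return any newly completed objects."""
        self._buffer += delta_text
        completed = []
        while True:
            stripped = self._buffer.lstrip()
            if not stripped:
                break
            try:
                obj, end = self._decoder.raw_decode(stripped)
            except json.JSONDecodeError:
                break  # JSON still incomplete; wait for more deltas
            completed.append(obj)
            self._buffer = stripped[end:]
        return completed


# Deltas arrive token by token, as delta_text does in the streaming path:
buf = StreamingToolCallBuffer()
calls = []
for delta in ['{"name": "get_wea', 'ther", "argum', 'ents": {"city": "Paris"}}']:
    calls.extend(buf.feed_delta(delta))
print(calls)  # [{'name': 'get_weather', 'arguments': {'city': 'Paris'}}]
```

A real streaming parser (e.g. ijson's push-style interfaces) additionally emits partial events inside an object, which is what lets argument fragments be forwarded to the client before the closing brace arrives.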

👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small and essential subset of CI tests to quickly catch errors. You can run the other CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can do one of these:

  • Add the ready label to the PR
  • Enable auto-merge.

🚀

@marcelodiaz558 commented Jan 31, 2025

@elementary-particle I have been testing your version of the Hermes tool parser with ijson, and it is working nicely, great job! It solved the issue I was facing here: #11279. I tested it on v0.7.0 and V1 mode.

Also, sometimes I encountered an issue regarding the JSON decode of arguments in the postprocessing step, but this small modification proposed by wangluyi fixed it: #9874 (comment).

I pushed a v0.7.0 Docker image with ijson installed and your commit 2f77b7b here: https://hub.docker.com/repository/docker/marcelodiaz/vllm-openai-hermes-fix.

07-02 EDIT: While testing a few models, I discovered that the issue I was facing might not be directly related to the Hermes tool parser. Instead, it happens when I set the tool_choice parameter to any value other than "auto", and it is reproducible with commit 2f77b7b of this PR. This issue was introduced in v0.6.5; I assume it is related to the guided decoding used to make the tool_choice parameter work.

Nevertheless, the hermes tool parser of this PR works nicely and looks way cleaner than the one currently in the main branch.
