
[BUGFIX] [FRONTEND] Correct chat logprobs #5029

Merged
merged 16 commits into vllm-project:main from br3no:5008-chat-logprobs on May 30, 2024

Conversation

br3no
Contributor

@br3no br3no commented May 24, 2024

FIX #5008 – correct logprobs format

BEFORE SUBMITTING, PLEASE READ THE CHECKLIST BELOW AND FILL IN THE DESCRIPTION ABOVE


PR Checklist

Thank you for your contribution to vLLM! Before submitting the pull request, please ensure the PR meets the following criteria. This helps vLLM maintain the code quality and improve the efficiency of the review process.

PR Title and Classification

Only specific types of PRs will be reviewed. The PR title should be prefixed appropriately to indicate the type of change. Please use one of the following:

  • [Bugfix] for bug fixes.
  • [CI/Build] for build or continuous integration improvements.
  • [Doc] for documentation fixes and improvements.
  • [Model] for adding a new model or improving an existing model. Model name should appear in the title.
  • [Frontend] for changes on the vLLM frontend (e.g., OpenAI API server, LLM class, etc.)
  • [Kernel] for changes affecting CUDA kernels or other compute kernels.
  • [Core] for changes in the core vLLM logic (e.g., LLMEngine, AsyncLLMEngine, Scheduler, etc.)
  • [Hardware][Vendor] for hardware-specific changes. Vendor name should appear in the prefix (e.g., [Hardware][AMD]).
  • [Misc] for PRs that do not fit the above categories. Please use this sparingly.

Note: If the PR spans more than one category, please include all relevant prefixes.

Code Quality

The PR needs to meet the following code quality standards:

  • We adhere to the Google Python style guide and Google C++ style guide.
  • Pass all linter checks. Please use format.sh to format your code.
  • The code needs to be well-documented to ensure future contributors can easily understand it.
  • Include sufficient tests to ensure the project stays correct and robust. This includes both unit tests and integration tests.
  • Please add documentation to docs/source/ if the PR modifies the user-facing behavior of vLLM. It helps vLLM users understand and utilize the new features or changes.

Notes for Large Changes

Please keep the changes as concise as possible. For major architectural changes (>500 LOC excluding kernel/data/config/test), we would expect a GitHub issue (RFC) discussing the technical design and justification. Otherwise, we will tag it with rfc-required and might not review the PR.

What to Expect for the Reviews

The goal of the vLLM team is to be a transparent reviewing machine. We would like to make the review process transparent and efficient and make sure no contributor feels confused or frustrated. However, the vLLM team is small, so we need to prioritize some PRs over others. Here is what you can expect from the review process:

  • After the PR is submitted, the PR will be assigned to a reviewer. Every reviewer will pick up the PRs based on their expertise and availability.
  • After the PR is assigned, the reviewer will provide status updates every 2-3 days. If the PR is not reviewed within 7 days, please feel free to ping the reviewer or the vLLM team.
  • After the review, the reviewer will put an action-required label on the PR if there are changes required. The contributor should address the comments and ping the reviewer to re-review the PR.
  • Please respond to all comments within a reasonable time frame. If a comment isn't clear or you disagree with a suggestion, feel free to ask for clarification or discuss the suggestion.

Thank You

Finally, thank you for taking the time to read these guidelines and for your interest in contributing to vLLM. Your contributions make vLLM a great tool for everyone!

@br3no
Contributor Author

br3no commented May 24, 2024

Seems the failing builds are only due to some infrastructure issues...

@DarkLight1337
Member

I think stop_reason was intentionally added in #2976. Better not remove it from the output.

@DarkLight1337
Member

Also, the existing code for _create_logprobs fails the type checker (at least when I'm developing on VSCode). Feel free to use my updated implementation.

@br3no
Contributor Author

br3no commented May 24, 2024

I think stop_reason was intentionally added in #2976. Better not remove it from the output.

Good catch.

But this means that some clients might break. I have to test this, but I think clients in statically typed languages will not work with this, since it’s not part of the official API.

@br3no
Contributor Author

br3no commented May 24, 2024

Also, the existing code for _create_logprobs fails the type checker (at least when I'm developing on VSCode). Feel free to use my updated implementation.

Can you share some details about what the type checker is complaining about? I'm not seeing any errors.

@DarkLight1337
Member

DarkLight1337 commented May 24, 2024

Also, the existing code for _create_logprobs fails the type checker (at least when I'm developing on VSCode). Feel free to use my updated implementation.

Can you share some details about what the type checker is complaining about? I'm not seeing any errors.

Currently, cross-directory checking is disabled (--follow-imports=skip), which suppresses most errors. @rkooo567 is working on properly type annotating the code before enabling cross-directory checking. To avoid introducing more errors, I've been developing with cross-directory checking enabled. Specifically, inside OpenAIServing._create_logprobs and OpenAIServingChat._get_top_logprobs, I'm getting the following errors on your branch:

mypy

vllm/entrypoints/openai/serving_engine.py:110: error: Key expression in dictionary comprehension has incompatible type "Optional[str]"; expected type "str"  [misc]

vllm/entrypoints/openai/serving_chat.py:415: error: Argument "token" to "ChatCompletionLogProb" has incompatible type "Optional[str]"; expected "str"  [arg-type]
vllm/entrypoints/openai/serving_chat.py:417: error: Item "None" of "Optional[str]" has no attribute "encode"  [union-attr]

pyright

vllm/entrypoints/openai/serving_engine.py:102:40 - error: Argument of type "str | None" cannot be assigned to parameter "object" of type "str" in function "append"
  Type "str | None" cannot be assigned to type "str"
    "None" is incompatible with "str" (reportArgumentType)
vllm/entrypoints/openai/serving_engine.py:107:50 - error: Argument of type "dict[str | None, float] | None" cannot be assigned to parameter "object" of type "Dict[str, float] | None" in function "append" (reportArgumentType)
vllm/entrypoints/openai/serving_engine.py:119:34 - error: Argument of type "str | None" cannot be assigned to parameter "obj" of type "Sized" in function "len"
  Type "str | None" cannot be assigned to type "Sized"
    "None" is incompatible with protocol "Sized"
      "__len__" is not present (reportArgumentType)

vllm/entrypoints/openai/serving_chat.py:415:23 - error: Argument of type "str | None" cannot be assigned to parameter "token" of type "str" in function "__init__"
  Type "str | None" cannot be assigned to type "str"
    "None" is incompatible with "str" (reportArgumentType)
vllm/entrypoints/openai/serving_chat.py:417:44 - error: "encode" is not a known member of "None" (reportOptionalMemberAccess)
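
These errors all boil down to Optional[str] token strings (tokens that may not have been decoded) being passed where a plain str is expected. Below is a minimal, illustrative sketch of the usual remedy — narrowing the Optional before building the response objects; the helper names and the empty-string fallback are hypothetical, not the actual vLLM fix:

from typing import Dict, List, Optional


def narrow_token(token: Optional[str]) -> str:
    # Explicitly handle the None case so mypy/pyright see a plain str.
    return token if token is not None else ""


def narrow_top_logprobs(
        entries: Dict[Optional[str], float]) -> Dict[str, float]:
    # Drop entries whose token could not be decoded, turning the key type
    # from Optional[str] into str.
    return {tok: lp for tok, lp in entries.items() if tok is not None}


tokens: List[str] = [narrow_token(t) for t in ["Hello", None]]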

@simon-mo
Collaborator

@DarkLight1337 if you can help review and shepherd this PR, that would be great (I'll merge upon your approval).

Regarding output compliance: if this turns out to be a widespread problem (can statically typed clients ignore extra fields? Otherwise forward compatibility would be difficult, I would imagine), we could maybe return it via extra headers instead. (cc @njhill)

@njhill
Member

njhill commented May 25, 2024

Re additional fields in outputs, I feel that sticking to only what's supported by the current OpenAI APIs would be too restrictive, just like how we have a bunch of extra request parameters (though I understand those don't have the same client compatibility concern). AFAIK clients can ignore unrecognized response fields in most cases.

If this turns out to be a problem, we could consider adding a "strict API response compatibility" config flag, and omit any extras if it's set?
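
A minimal sketch of what such a flag might look like, assuming pydantic models; the flag name and the simplified choice model below are hypothetical, not existing vLLM options or classes:

from typing import Optional

from pydantic import BaseModel


class ChoiceSketch(BaseModel):
    index: int
    # vLLM-specific extra field that is not part of the OpenAI schema.
    stop_reason: Optional[str] = None


def serialize(choice: ChoiceSketch, strict_openai_compat: bool) -> dict:
    # With the hypothetical strict-compatibility flag set, drop fields
    # that the official OpenAI response schema does not define.
    exclude = {"stop_reason"} if strict_openai_compat else None
    return choice.model_dump(exclude=exclude)


choice = ChoiceSketch(index=0, stop_reason="length")
print(serialize(choice, strict_openai_compat=False))  # keeps stop_reason
print(serialize(choice, strict_openai_compat=True))   # omits stop_reason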

@br3no
Contributor Author

br3no commented May 27, 2024

I have tested today with the Azure OpenAI client for Java (cf. https://platform.openai.com/docs/libraries/azure-openai-libraries) and it worked. So at least with this client the extra stop_reason is not a problem.

While this is not exhaustive proof, I'll add it back. Then I'll merge the changes from @DarkLight1337 into this PR.

@br3no
Contributor Author

br3no commented May 28, 2024

@DarkLight1337

Following your PR, I've made these changes:

  1. in protocol.py, top_logprobs is no longer required to be defined when logprobs is True
  2. LogProbs has been renamed to CompletionLogProbs
  3. improved tests
  4. fixed the code for the type checker

There are some parts in your PR that I disagree with:

  1. in protocol.py the format of the logprobs object for chat completion is not in accordance with the OpenAI API spec. You can see in the official API spec here that:
    1. the logprobs object contains a content array
    2. the content array contains objects with a token string, a logprob number, a bytes array of ints and a top_logprobs array of objects
    3. the top_logprobs array of objects contains a token string, a logprob number and a bytes array of ints
  2. also, when requesting "top_logprobs" : 2, the top_logprobs array contains the top token (redundantly) plus the second-best token, i.e. len(top_logprobs) == k. This can also be seen in the link above (a sketch of the expected structure follows below).
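
For reference, a minimal pydantic sketch of the structure just described; the field names follow the OpenAI spec, but the class names are only illustrative and need not match the ones in protocol.py:

from typing import List, Optional

from pydantic import BaseModel


class TopLogprob(BaseModel):
    token: str
    logprob: float
    bytes: Optional[List[int]] = None


class ChatTokenLogprob(BaseModel):
    token: str
    logprob: float
    bytes: Optional[List[int]] = None
    # With "top_logprobs": k, this list holds exactly k entries and
    # redundantly includes the chosen token itself (see point 2).
    top_logprobs: List[TopLogprob]


class ChatLogprobs(BaseModel):
    content: List[ChatTokenLogprob]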

@DarkLight1337
Member

DarkLight1337 commented May 28, 2024

There are some parts in your PR that I disagree with:

Thanks for going into depth regarding the API and raising these issues!

  1. in protocol.py the format of the logprobs object for chat completion is not in accordance with the OpenAI API spec. You can see in the official API spec here that:

    1. the logprobs object contains a content array
    2. the content array contains objects with a token string, a logprob number, a bytes array of ints and a top_logprobs array of objects
    3. the top_logprobs array of objects contains a token string, a logprob number and a bytes array of ints

My PR does follow this, as specified in vllm.entrypoints.openai.protocol.ChatCompletionLogProbs. Perhaps you missed it?

Edit: Note that the spec allows logprobs.content to be null.

  1. also, when requesting "top_logprobs" : 2, the top_logprobs array contains the top token (redundantly) plus the second-best token, i.e. len(top_logprobs) == k. This can also be seen in the link above.

I've looked through the docs again, and it indeed seems that len(top_logprobs) == k for the Chat Completions API. I'll edit my message in #5008 accordingly.

Edit: I have also updated my PR to test that the length of the output is exactly equal to top_logprobs for the Chat Completions API.
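
For concreteness, a sketch of such a check against a running OpenAI-compatible server, using the openai Python client; the base URL, API key, model name and prompt are placeholders, and this is not the actual test added in the PR:

import openai

# Assumption: a vLLM OpenAI-compatible server is running locally.
client = openai.OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

k = 2
completion = client.chat.completions.create(
    model="my-model",
    messages=[{"role": "user", "content": "Say hello"}],
    max_tokens=5,
    logprobs=True,
    top_logprobs=k,
)

logprobs = completion.choices[0].logprobs
assert logprobs is not None and logprobs.content is not None
for token_info in logprobs.content:
    # Per the spec, top_logprobs holds exactly k entries, including the
    # chosen token itself.
    assert len(token_info.top_logprobs) == k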

@br3no
Contributor Author

br3no commented May 28, 2024

@DarkLight1337 I'm confused. This is from your PR:
[screenshot of the ChatCompletionLogProbs, ChatCompletionLogProb and ChatCompletionTopLogprob definitions from the PR]
According to this, an object of type vllm.entrypoints.openai.protocol.ChatCompletionLogProbs contains, optionally, a list of vllm.entrypoints.openai.protocol.ChatCompletionLogProb. Each element in the list contains only one field, called top_logprobs, which is a list of vllm.entrypoints.openai.protocol.ChatCompletionTopLogprob with token, bytes and logprob.

This does not conform to an object like this (taken from the examples in the official OpenAI API spec):

"choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I assist you today?"
      },
      "logprobs": {
        "content": [
          {
            "token": "Hello",
            "logprob": -0.31725305,
            "bytes": [72, 101, 108, 108, 111],
            "top_logprobs": [
              {
                "token": "Hello",
                "logprob": -0.31725305,
                "bytes": [72, 101, 108, 108, 111]
              },
              {
                "token": "Hi",
                "logprob": -1.3190403,
                "bytes": [72, 105]
              }
            ]
          },
...

The "root" logprob object already contains the values for token, logprob and bytes for the top token (which, analogously to the Completions API, is redundantly repeated in the top_logprobs list).

Can you help me understand where we are misunderstanding each other?


Edit: Note that the spec allows logprobs.content to be null.

You're right. I think this is an error in the OpenAI spec, though. I don't see when this output would ever be generated, since logprobs itself can already be null:

"logprobs": {
        "content": null
}

Currently vLLM will only ever generate "logprobs" : null. Strictly speaking, by saying that content can never be null we're still compliant with the spec.

I'd prefer to keep things like this, as it reduces unnecessary complexity in the code.

Resolved review threads (now outdated) on vllm/entrypoints/openai/serving_engine.py, vllm/entrypoints/openai/serving_completion.py and vllm/entrypoints/openai/serving_chat.py.
@DarkLight1337
Member

DarkLight1337 commented May 28, 2024

According to this, an object of type vllm.entrypoints.openai.protocol.ChatCompletionLogProbs contains, optionally, a list of vllm.entrypoints.openai.protocol.ChatCompletionLogProb. Each element in the list contains only one field called top_logprobs,

Notice that ChatCompletionLogProb inherits from ChatCompletionTopLogProb.
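
In other words (a sketch using the class names mentioned in this thread, not necessarily the exact code that was merged): because of the inheritance, each content entry exposes token, logprob and bytes itself, in addition to its top_logprobs list, matching the JSON example above.

from typing import List, Optional

from pydantic import BaseModel


class ChatCompletionTopLogProb(BaseModel):
    token: str
    logprob: float
    bytes: Optional[List[int]] = None


# Inheriting means each content entry carries token/logprob/bytes of the
# chosen token as well as the list of top-k alternatives.
class ChatCompletionLogProb(ChatCompletionTopLogProb):
    top_logprobs: List[ChatCompletionTopLogProb] = []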

@DarkLight1337
Member

DarkLight1337 commented May 28, 2024

Currently vLLM will only ever generate "logprobs" : null. Strictly speaking, by saying that content can never be null we're still compliant with the spec.

I'd prefer to keep things like this, as it reduces unnecessary complexity in the code.

Yeah I think that is fine as well. I originally copied the types from OpenAI's Python library, feel free to remove the Optional annotation from content.

@br3no
Contributor Author

br3no commented May 28, 2024

@DarkLight1337 I believe all issues are now addressed. Thanks for your help!

@br3no
Contributor Author

br3no commented May 28, 2024

I'm having an issue with the formatting. It seems the formatter (yapf) wants the imports arranged differently than isort does...

@br3no
Contributor Author

br3no commented May 29, 2024

@simon-mo this is the formatting produced by isort after the yapf formatting.

When run locally, there is no check of whether yapf or isort changed the sources. In the CI environment, the fact that yapf changes them leads to build errors.

@simon-mo
Collaborator

@rkooo567 any suggestions for this?

@DarkLight1337
Member

DarkLight1337 commented May 29, 2024

There are a few other places where yapf is disabled via comments. Perhaps you also have to do that here.
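
For reference, a sketch of that technique — wrapping the conflicting import block in yapf's disable/enable marker comments so that isort's output is left untouched (the specific imports listed here are only an example):

# yapf and isort can disagree on how to wrap long import lists; wrapping
# the block in yapf's marker comments keeps isort's formatting intact.
# yapf: disable
from vllm.entrypoints.openai.protocol import (ChatCompletionLogProb,
                                              ChatCompletionLogProbs,
                                              ChatCompletionRequest,
                                              ChatCompletionResponse)
# yapf: enable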

@DarkLight1337
Member

While you're here, please also resolve the merge conflict.

@DarkLight1337
Member

@simon-mo can enable auto-merge

@simon-mo
Collaborator

#5031 was just merged, which seems to create a conflict.

@br3no
Contributor Author

br3no commented May 29, 2024

@simon-mo conflicts resolved. The build should be through soon.

@br3no
Contributor Author

br3no commented May 30, 2024

@simon-mo all tests green. We can merge!

@ywang96
Member

ywang96 commented May 30, 2024

@br3no I'll merge this on behalf of Simon - thank you for fixing this issue!

@ywang96 ywang96 merged commit 87d41c8 into vllm-project:main May 30, 2024
64 checks passed
@br3no br3no deleted the 5008-chat-logprobs branch May 30, 2024 19:14
blinkbear pushed a commit to blinkbear/vllm that referenced this pull request May 31, 2024
Co-authored-by: Breno Faria <breno.faria@intrafind.com>
dtrifiro pushed a commit to opendatahub-io/vllm that referenced this pull request May 31, 2024
Co-authored-by: Breno Faria <breno.faria@intrafind.com>
blinkbear pushed a commit to blinkbear/vllm that referenced this pull request Jun 6, 2024
Co-authored-by: Breno Faria <breno.faria@intrafind.com>
robertgshaw2-neuralmagic pushed a commit to neuralmagic/nm-vllm that referenced this pull request Jun 8, 2024
Co-authored-by: Breno Faria <breno.faria@intrafind.com>
joerunde pushed a commit to joerunde/vllm that referenced this pull request Jun 17, 2024
Co-authored-by: Breno Faria <breno.faria@intrafind.com>
robertgshaw2-neuralmagic pushed a commit to neuralmagic/nm-vllm that referenced this pull request Jul 14, 2024
Co-authored-by: Breno Faria <breno.faria@intrafind.com>
Temirulan pushed a commit to Temirulan/vllm-whisper that referenced this pull request Sep 6, 2024
Co-authored-by: Breno Faria <breno.faria@intrafind.com>

Successfully merging this pull request may close these issues.

[Bug]: OpenAI LogProbs format for Chat-Completion is incorrect
5 participants