
feat: adds logprobs to OpenAI response #1238

Open · wants to merge 1 commit into main

Conversation

eliassoares commented:

This PR solves #1228.

hyperlint-ai bot (Contributor) commented Mar 25, 2025

PR Change Summary

This PR adds logprobs to the OpenAI response handling and documents the change across multiple documentation files.

  • Added logprobs parameter to the response model in various documentation files.
  • Updated examples to include logprobs in the response structure (a usage sketch follows the file list below).

Modified Files

  • docs/agents.md
  • docs/message-history.md
  • docs/models.md
  • docs/tools.md
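
As a rough illustration of the kind of example these docs would include, here is a hypothetical usage sketch, assuming the logprobs field lands on ModelResponse as in the diff below (the model name and prompt are illustrative):

```python
from pydantic_ai import Agent
from pydantic_ai.messages import ModelResponse

agent = Agent('openai:gpt-4o')
result = agent.run_sync('What is the capital of France?')

# The last message in the run history is the model's response; with this PR
# it would carry the token logprobs returned by OpenAI.
response = result.all_messages()[-1]
assert isinstance(response, ModelResponse)
for lp in response.logprobs:
    print(lp.token, lp.logprob)
```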


@@ -465,6 +484,13 @@ class ModelResponse:
     kind: Literal['response'] = 'response'
     """Message type identifier, this is available on all parts as a discriminator."""

+    # TODO: implemented in OpenAI, but not in other models
+    logprobs: list[ChatCompletionTokenLogprob] = field(default_factory=lambda: [])
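
For reference, ChatCompletionTokenLogprob is the type the OpenAI Python SDK returns when per-token log probabilities are requested on a chat completion. A minimal sketch of where these values originate (assumes the openai package and an OPENAI_API_KEY in the environment):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Say hello"}],
    logprobs=True,  # ask OpenAI for per-token log probabilities
)

# completion.choices[0].logprobs.content is a list[ChatCompletionTokenLogprob],
# the same type the new field on ModelResponse stores.
for item in completion.choices[0].logprobs.content:
    print(item.token, item.logprob)
```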
@dmontagu (Contributor) commented Mar 27, 2025
Suggested change

-    logprobs: list[ChatCompletionTokenLogprob] = field(default_factory=lambda: [])
+    logprobs: list[ChatCompletionTokenLogprob] | None = field(default=None, repr=False)

Let's exclude this from the repr. I also feel like it's worth using None to indicate whether it was included in the response. I guess that will make the ergonomics a bit worse; I'd be okay always including it if it has no performance consequences for non-users and is empty if and only if logprobs weren't returned.
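
A sketch of the trade-off being discussed, using an illustrative dataclass that is not part of the PR:

```python
from dataclasses import dataclass, field

from openai.types.chat import ChatCompletionTokenLogprob


@dataclass
class ResponseSketch:
    # The suggestion above: None signals "no logprobs in this response", and
    # repr=False keeps the potentially long list out of debug output.
    logprobs: list[ChatCompletionTokenLogprob] | None = field(default=None, repr=False)


resp = ResponseSketch()
# With an Optional field, callers can distinguish "logprobs were not
# returned" (None) from "logprobs were returned but empty" ([]):
if resp.logprobs is None:
    print('no logprobs in this response')
```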

@dmontagu (Contributor) commented:

While I definitely would like to better support extracting arbitrary data from the vendor response into the ModelResponses, I am not totally sure the right way to do it is to add a field to ModelResponse which affects all users.

I think we need to think a bit about how this should work, but in the meantime, I'd be more keen on an approach that retained an (optional) reference to the underlying vendor-specific response object, which you could use to read the relevant data. Would that be an acceptable alternative? Or do you really need to retain the logprobs during serialization? (I think we could find a way to support that, but if it's not necessary, it may be significantly simpler to just have a way to retain references to the source vendor response.)
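
A minimal sketch of that alternative, with hypothetical names (vendor_response is not an existing field):

```python
from dataclasses import dataclass, field
from typing import Any


@dataclass
class ResponseWithVendorRef:
    kind: str = 'response'
    # Optional reference to the raw SDK object (e.g. an openai ChatCompletion).
    # Kept out of repr; a real implementation would also need to exclude it
    # from serialization, treating it as an in-memory escape hatch for
    # vendor-specific data such as logprobs.
    vendor_response: Any | None = field(default=None, repr=False)


def read_logprobs(response: ResponseWithVendorRef) -> Any | None:
    """Read logprobs off the raw vendor response, if one was retained."""
    if response.vendor_response is None:
        return None
    return response.vendor_response.choices[0].logprobs
```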
