feat: adds logprobs to OpenAI response #1238
base: main
Conversation
PR Change Summary
This PR introduces the addition of logprobs to the OpenAI response handling across multiple documentation files.

Modified Files

How can I customize these reviews?
Check out the Hyperlint AI Reviewer docs for more information on how to customize the review. If you just want to ignore it on this PR, you can add the …

Note: specifically for link checks, we only check the first 30 links in a file and we cache the results for several hours (for instance, if you just added a page, you might experience this). Our recommendation is to add …

What is Hyperlint?
Hyperlint is an AI agent that helps you write, edit, and maintain your documentation. Learn more about the Hyperlint AI reviewer and the checks that we can run on your documentation.
```diff
@@ -465,6 +484,13 @@ class ModelResponse:
     kind: Literal['response'] = 'response'
     """Message type identifier, this is available on all parts as a discriminator."""

+    # TODO: implemented in OpenAI, but not in other models
+    logprobs: list[ChatCompletionTokenLogprob] = field(default_factory=lambda: [])
```
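In isolation, the proposed addition looks roughly like the sketch below (a heavily simplified stand-in, not the real pydantic-ai class; only the `logprobs` line and its `ChatCompletionTokenLogprob` type come straight from the diff):

```python
from dataclasses import dataclass, field
from typing import Literal

from openai.types.chat import ChatCompletionTokenLogprob


@dataclass
class ModelResponse:
    """Heavily simplified stand-in for pydantic_ai's ModelResponse."""

    kind: Literal['response'] = 'response'

    # Field added by this PR: token logprobs as returned by OpenAI.
    # TODO: implemented in OpenAI, but not in other models.
    logprobs: list[ChatCompletionTokenLogprob] = field(default_factory=lambda: [])
```

With this default, a response from a model that doesn't supply logprobs is indistinguishable from one that returned an empty list, which is what the review below focuses on.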
Suggested change:
```diff
- logprobs: list[ChatCompletionTokenLogprob] = field(default_factory=lambda: [])
+ logprobs: list[ChatCompletionTokenLogprob] | None = field(default=None, repr=False)
```
Let's exclude it from the repr. I also feel like it's worth using `None` to indicate whether logprobs were included in the response. I guess that will make the ergonomics a bit worse; I'd be okay with always including it if that doesn't have performance consequences for non-users, with the list being empty if and only if logprobs weren't returned.
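To make the suggestion concrete, here is a small sketch of the `None`-default variant and how a caller would use it (again a stand-in class, not the actual pydantic-ai definition):

```python
from dataclasses import dataclass, field

from openai.types.chat import ChatCompletionTokenLogprob


@dataclass
class ModelResponse:
    # None means "the vendor response did not include logprobs", which an
    # empty-list default cannot express; repr=False keeps the potentially
    # long list out of the dataclass repr.
    logprobs: list[ChatCompletionTokenLogprob] | None = field(default=None, repr=False)


response = ModelResponse()
if response.logprobs is None:
    print("no logprobs in this response")
else:
    for lp in response.logprobs:
        print(lp.token, lp.logprob)
```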
While I definitely would like to better support extracting arbitrary data from the vendor response into the ModelResponses, I am not totally sure the right way to do it is to add a field to ModelResponse which affects all users. I think we need to think a bit about how this should work, but in the meantime, I'd be more keen on an approach that retained an (optional) reference to the underlying vendor-specific response object, which you could use to read the relevant data. Would that be an acceptable alternative? Or do you really need to retain the logprobs etc. during serialization? (I think we could find a way to support that, but if it's not necessary, then it may be significantly simpler to just have a way to retain references to the source vendor response.)
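That alternative could look something like the sketch below. The `vendor_response` field name and the helper function are hypothetical, purely to illustrate the idea of reading vendor data off a retained raw response:

```python
from dataclasses import dataclass, field
from typing import Any

from openai.types.chat import ChatCompletion, ChatCompletionTokenLogprob


@dataclass
class ModelResponse:
    # Hypothetical: retain an optional reference to the raw vendor-specific
    # response object rather than promoting vendor data to its own field.
    # Not serialized; only available while the object is in memory.
    vendor_response: Any | None = field(default=None, repr=False)


def openai_logprobs(resp: ModelResponse) -> list[ChatCompletionTokenLogprob] | None:
    """Hypothetical helper: read logprobs off a retained OpenAI response."""
    raw = resp.vendor_response
    if isinstance(raw, ChatCompletion) and raw.choices:
        logprobs = raw.choices[0].logprobs
        if logprobs is not None:
            return logprobs.content
    return None
```

This keeps ModelResponse vendor-agnostic while still letting OpenAI users reach the logprobs, at the cost of losing them on serialization.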
This PR solves #1228.
