
Conversation

@tom-doerr
Contributor

This is part of #879.
I tested it and it behaves the same as the other PR.

import dspy

lm = dspy.HFClientVLLM(model="NurtureAI/Meta-Llama-3-8B-Instruct-32k", port=38242, url="http://localhost", max_tokens=4)
test_text = "This is a test article."
output_normal = lm(test_text)
print("output_normal:", output_normal)

output_with_logprobs = lm(test_text, logprobs=2)
print("output_with_logprobs:", output_with_logprobs)
output_normal: [' It is a test']
output_with_logprobs: [{'text': ' It is a test', 'logprobs': {'text_offset': [0, 3, 6, 8], 'token_logprobs': [-1.7945036888122559, -0.6691504716873169, -1.303508996963501, -0.7093929052352905], 'tokens': [' It', ' is', ' a', ' test'], 'top_logprobs': [{' It': -1.7945036888122559, ' This': -1.7945036888122559}, {' is': -0.6691504716873169, ' will': -2.0441503524780273}, {' a': -1.303508996963501, ' not': -1.803508996963501}, {' test': -0.7093929052352905, ' sample': -3.20939302444458}]}}]

@okhat
Collaborator

okhat commented May 19, 2024

Hmm, this seems fine to me, but it's ad hoc... it would be specific to this one client.

@tom-doerr
Contributor Author

It already works for the OpenAI API as well: #999.
This feature wouldn't need to be documented until more clients are supported.
In my initial testing, logprob feedback was much better than binary feedback. In my case, none of the dataset examples passed and BootstrapFewShot gave up; with logprob feedback it was able to consistently improve over multiple iterations.
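
For context, a minimal sketch of what I mean by logprob feedback, reusing lm and test_text from the snippet above; the scoring function is my own illustration, not part of this PR:

def mean_logprob_score(completion):
    # Average per-token log-probability of the completion (higher is better).
    token_logprobs = completion["logprobs"]["token_logprobs"]
    return sum(token_logprobs) / len(token_logprobs)

output = lm(test_text, logprobs=2)
score = mean_logprob_score(output[0])  # roughly -1.12 for the example output above

A metric built on a continuous score like this can keep improving over iterations even when no example crosses a binary pass/fail threshold.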

@okhat
Collaborator

okhat commented May 19, 2024

Does it block you if we keep this open for a week or two? I need to think more about logprobs.

@tom-doerr
Contributor Author

No, it doesn't block me at all; I'm using my custom branch anyway.
If you are thinking about logprobs anyway, my personal wish list would be:

  • Access to prompt logprobs, not just completion logprobs
  • Access to unnormalized logprobs; this is especially useful when using logprobs to sort candidates (see the sketch below)

I know that not all backends can give us all of that; users would have to choose a backend that supports their use case.
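
To illustrate the sorting use case, here is a minimal sketch reusing lm and test_text from my first snippet above; the scoring choice (summing per-token logprobs) is just an illustration, not something any current backend guarantees:

def sequence_logprob(completion):
    # Sum of per-token logprobs for the whole completion; with unnormalized
    # logprobs this sum would not be squashed by the softmax over the full vocabulary.
    return sum(completion["logprobs"]["token_logprobs"])

completions = lm(test_text, logprobs=1)  # one or more candidate completions
best_first = sorted(completions, key=sequence_logprob, reverse=True)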

@ammirsm requested a review from okhat May 20, 2024 19:33
@arnavsinghvi11
Collaborator

@tom-doerr just wanted to add some logprobs-related issues here.

With the previous PR for OpenAI logprobs support merged, I realized that it only outputs logprobs for "direct" LM calls and is not compatible with configuring the OpenAI LM within a DSPy program.

For instance, the example provided outputs logprobs as intended:

import dspy

lm = dspy.OpenAI(model='gpt-3.5-turbo-instruct', max_tokens=6, api_key=config['openai']['secret_key'])
test_text = "This is a test article."
test_output = lm(test_text, logprobs=1)

but if we had some DSPy program and did dspy.settings.configure(lm=lm, ...) to have it run with that LM, the DSPy Completions logic does not output the logprobs in the response. This signals that we potentially need a larger refactor to integrate logprobs correctly. I might open a PR soon with some baseline code that handles this per client, but it's definitely good to think in the direction of non-client-specific functionality. @okhat
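
Roughly what I mean, as a sketch (the Predict signature below is only for illustration, and I haven't verified the exact behavior for every client):

import dspy

lm = dspy.OpenAI(model='gpt-3.5-turbo-instruct', max_tokens=6, api_key=config['openai']['secret_key'])
dspy.settings.configure(lm=lm)

direct = lm("This is a test article.", logprobs=1)    # logprobs come back here
predictor = dspy.Predict("question -> answer")
pred = predictor(question="This is a test article.")  # the Completions logic drops the logprobs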

@tom-doerr
Contributor Author

@arnavsinghvi11 Yes, I know what you mean; I planned to add support for DSPy components after support for the clients is merged. Since it's a bigger refactor, I would suggest adding support for it bit by bit.
Is this something I could help out with? If so, it might make sense to have a call.

@arnavsinghvi11
Collaborator

Sounds good! Yes, would love to connect @tom-doerr - truly appreciate the enthusiasm you've shown for DSPy :))

@okhat okhat closed this Feb 3, 2025