@tom-doerr you need to pass temperature to the LM, not the context or the predictor. That said, I think it would be nice if one could set the LM kwargs in easier ways.
@okhat can you please elaborate on how to do that?
Did you mean something like this:
How do I clear the cache for a specific response? It's not working for me.
In my opinion, a run-time solution is needed: sometimes I need to send the same prompt to different LLMs, but DSPy still returns the cached response without noticing that the LM changed.
I'm trying to disable the cache for an inference call.
I tried setting different temperature values, but DSPy still serves those calls from the cache.