
[Bug]: gpt-3.5-turbo-instruct not supported #411

Closed
StanGirard opened this issue Sep 20, 2023 · 9 comments
Labels: bug (Something isn't working)

Comments

@StanGirard

What happened?

Tried to use gpt-3.5-turbo-instruct on Quivr

Relevant log output

https://smith.langchain.com/public/7814d1a6-d387-4333-b893-35af5838d4ca/r


StanGirard added the bug label on Sep 20, 2023
@krrishdholakia (Contributor)

cc: @ishaan-jaff

@krrishdholakia (Contributor)

@StanGirard Acknowledging the bug; I'll pick this up and report back with a fix in a few hours.

@krrishdholakia (Contributor)

@StanGirard which version of litellm are you using? gpt-3.5-turbo-instruct was added a few days ago. If you could bump the version and let me know whether that works, it'd be great!

I think the issue is that the model list in the older version doesn't contain the instruct model name.
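
For reference, a minimal check after upgrading could look like the sketch below (assuming litellm's standard `completion` API, its OpenAI-style response shape, and an `OPENAI_API_KEY` in the environment; on versions whose bundled model list predates the instruct model, this is the call that fails):

```python
# pip install -U litellm  (0.1.714 or newer)
import litellm

# Requires OPENAI_API_KEY to be set in the environment.
response = litellm.completion(
    model="gpt-3.5-turbo-instruct",
    messages=[{"role": "user", "content": "Say hello"}],
)
# litellm normalizes responses to the OpenAI chat shape.
print(response.choices[0].message.content)
```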

@krrishdholakia (Contributor)

Sharing as proof: this works for me on 0.1.714.

[Two screenshots taken Sep 20, 2023 at 6:58 AM, showing successful gpt-3.5-turbo-instruct calls]

@krrishdholakia (Contributor)

As a potential improvement, we could decouple the model lists from the local package. This would make it easy to update lists and ensure everyone has the latest version.

Thoughts, @ishaan-jaff / @StanGirard?
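
A rough sketch of what that decoupling could look like (the URL and fallback schema here are hypothetical placeholders, not an existing litellm endpoint):

```python
# Sketch: fetch the model list from a remote JSON file at startup
# instead of bundling it with the package. URL is a placeholder.
import json
import urllib.request

MODEL_LIST_URL = "https://example.com/litellm/model_list.json"  # hypothetical

def load_model_list(fallback: dict) -> dict:
    """Prefer the remote list; fall back to the packaged copy on failure."""
    try:
        with urllib.request.urlopen(MODEL_LIST_URL, timeout=5) as resp:
            return json.load(resp)
    except OSError:
        return fallback  # offline / unreachable: use the bundled list

model_list = load_model_list(
    fallback={"gpt-3.5-turbo-instruct": {"mode": "completion"}},
)
```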

@StanGirard (Author)

I'm on litellm==0.1.531

That is probably why ;)

Yeah that would be amazing to decouple it.

@StanGirard (Author)

It works

@krrishdholakia (Contributor)

Sounds good. I'll close this issue and open a new one to track the decoupling idea.

@aleclarson

aleclarson commented Apr 12, 2024

In my PR for integrating LiteLLM with Aider, I update the backup on Aider startup whenever it is older than 12 hours. If the backup is newer than that, I set LITELLM_LOCAL_MODEL_COST_MAP to True (before importing LiteLLM) so the backup is used instead of hitting the network. I wonder if this approach should be merged into LiteLLM itself?

https://github.com/aleclarson/aider/blob/41ca019854a81cc49eb99135fc6977c0e0d03354/aider/main.py#L596-L629
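
Condensed, that startup logic looks roughly like this (paths and the fetch URL are placeholders; the actual wiring of the backup file into LiteLLM's map is in the linked `aider/main.py`):

```python
# Sketch of the 12-hour refresh described above. LITELLM_LOCAL_MODEL_COST_MAP
# is the env var mentioned in this thread; everything else is a placeholder.
import os
import time
import urllib.request

BACKUP_PATH = "model_prices_backup.json"          # placeholder path
COST_MAP_URL = "https://example.com/prices.json"  # placeholder URL
MAX_AGE = 12 * 60 * 60  # 12 hours, in seconds

def backup_is_fresh() -> bool:
    try:
        return (time.time() - os.path.getmtime(BACKUP_PATH)) < MAX_AGE
    except OSError:
        return False  # no backup on disk yet

if backup_is_fresh():
    # Make LiteLLM read the local map instead of hitting the network.
    os.environ["LITELLM_LOCAL_MODEL_COST_MAP"] = "True"
else:
    # Refresh the backup from the network on startup.
    with urllib.request.urlopen(COST_MAP_URL, timeout=10) as resp:
        with open(BACKUP_PATH, "wb") as f:
            f.write(resp.read())

import litellm  # imported only after the env var is set
```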
