Function calling (using food-search.ts) #27
Comments
Will take a look.
Can you confirm you're testing with the latest version? We fixed a bunch of stuff today and also added the new 3.5 Sonnet model.
@dosco Anthropic and Groq are still throwing the same errors. I'm having different issues with Cohere and Google, but I need to double-check that I'm not missing something. I like the new AxAI syntax, it's easier to follow!
Great, will take a look and fix this today. Thanks! As the API was growing, the proper prefix helps with autocomplete etc.
The latest release https://github.com/ax-llm/ax/releases/tag/9.0.9 has fixes for Anthropic, Cohere, and Gemini. I'm still looking into Groq, but I suspect it's more of a model issue there; I might bump up the default model choice to a bigger model.
@dosco Cool, I'll check it out! Regarding Groq, one thing to keep in mind: they limit llama3-70b to 6k tokens/min and llama3-8b to 30k tokens/min. I found myself hitting the limit with the larger model quite often, within a single run.
We probably need to add a rate limiter; we support those in the library. Maybe Groq needs one by default.
In the latest release I added a token-bucket-based rate limiter to Groq by default, to slow it down when needed.
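The token-bucket idea mentioned above can be sketched as follows. This is a minimal illustrative version, not the actual Ax implementation: the class name, numbers, and API are assumptions. The bucket holds up to `capacity` tokens, refills continuously at a fixed rate, and each request spends its estimated token count, sleeping when the bucket runs dry.

```typescript
// Minimal token-bucket rate limiter sketch (illustrative, not the Ax library's code).
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private capacity: number,   // e.g. 6000 for Groq's llama3-70b tokens/min cap
    private refillPerMs: number // e.g. 6000 / 60_000 tokens per millisecond
  ) {
    this.tokens = capacity; // start with a full bucket
    this.lastRefill = Date.now();
  }

  // Top up the bucket based on the time elapsed since the last refill.
  private refill(now: number): void {
    const elapsed = now - this.lastRefill;
    this.tokens = Math.min(this.capacity, this.tokens + elapsed * this.refillPerMs);
    this.lastRefill = now;
  }

  // Resolves once `cost` tokens are available, sleeping until refill catches up.
  async acquire(cost: number): Promise<void> {
    for (;;) {
      const now = Date.now();
      this.refill(now);
      if (this.tokens >= cost) {
        this.tokens -= cost;
        return;
      }
      const waitMs = Math.ceil((cost - this.tokens) / this.refillPerMs);
      await new Promise((resolve) => setTimeout(resolve, waitMs));
    }
  }
}
```

A caller would wrap each provider request with something like `await bucket.acquire(estimatedTokens)` before sending it, so bursts that would exceed the per-minute cap are smoothed out instead of triggering 429 errors.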
I'm submitting a ...
[x] bug report

Summary

I'm getting errors with:
- Anthropic
- Groq
- Cohere
- Google

Other observations:
- mistralai/Mixtral-8x7B-Instruct-v0.1
- mistralai/Mistral-7B-Instruct-v0.1
- togethercomputer/CodeLlama-34b-Instruct