Add gpt-4o metadata #3613
Conversation
```diff
@@ -9,6 +9,18 @@
     "mode": "chat",
     "supports_function_calling": true
 },
+"gpt-4o": {
```
Can we add an alias for `gpt-4o-2024-05-13` too?
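One way to handle the requested dated alias is to register the snapshot name with a copy of the base model's metadata, so both keys resolve to the same limits and pricing. This is only an illustration of the idea, not litellm's actual alias mechanism; the helper name is made up.

```python
# Hypothetical helper: register a dated snapshot name with a copy of the
# base model's metadata so both keys resolve identically.
def add_snapshot_alias(table: dict, base_name: str, snapshot_name: str) -> None:
    table[snapshot_name] = dict(table[base_name])

models = {
    "gpt-4o": {"mode": "chat", "supports_function_calling": True},
}
add_snapshot_alias(models, "gpt-4o", "gpt-4o-2024-05-13")

# Both names now carry the same metadata:
assert models["gpt-4o-2024-05-13"] == models["gpt-4o"]
```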
Thanks for working on this! A couple of observations:
How is `mode` handled with multimodal input?
I couldn't find docs supporting either explicit API limit, and I was unable to convince the model to produce more than 2048 tokens in some manual API testing. Updated this to 2048, since the limit is at least 2048 via the API (but possibly higher).
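The manual probing described above can be made systematic with a binary search over the `max_tokens` parameter. The sketch below stubs out the API call with a predicate; in real use, `accepts(n)` would issue a request with `max_tokens=n` and report whether it succeeded.

```python
def find_max_output_tokens(accepts, low=1, high=1 << 17):
    """Binary-search the largest max_tokens value the endpoint accepts.

    `accepts(n)` stands in for a real API call that returns True when a
    request with max_tokens=n succeeds. Here we test against a stub.
    """
    best = 0
    while low <= high:
        mid = (low + high) // 2
        if accepts(mid):
            best = mid
            low = mid + 1   # mid works; try a larger value
        else:
            high = mid - 1  # mid rejected; try a smaller value
    return best

# Stub endpoint that, like the behavior reported in this thread, rejects
# anything over 4096 output tokens:
limit = find_max_output_tokens(lambda n: n <= 4096)
assert limit == 4096
```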
Now my playground is showing 4096. Unfortunately, there's no mention of it in the docs.
LGTM - merging in, we'll take care of updating the max token limit based on testing |
So the question is: is it because of #3629?
@Undertone0809 `mode` is primarily used for Vertex AI, for routing between their different SDKs. We also started using it for health checks, as a way to know what kind of model it is. No impact on multimodal.
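The routing described above can be sketched as a dispatch on the entry's `mode` field: pick a handler by mode, falling back to the chat path. Handler names and the fallback choice are illustrative assumptions, not litellm's actual internals.

```python
# Minimal sketch of mode-based routing: dispatch on the metadata entry's
# "mode" field to pick a handler. The handler names are made up.
def route_request(model_metadata: dict) -> str:
    handlers = {
        "chat": "chat_completion_handler",
        "completion": "text_completion_handler",
        "embedding": "embedding_handler",
    }
    mode = model_metadata.get("mode", "chat")   # assume chat when unset
    return handlers.get(mode, "chat_completion_handler")

assert route_request({"mode": "chat"}) == "chat_completion_handler"
assert route_request({"mode": "embedding"}) == "embedding_handler"
```

A health check can use the same lookup: knowing the mode tells the checker which kind of request to issue against the model.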
#3612
Title
Relevant issues
Type
🆕 New Feature
🐛 Bug Fix
🧹 Refactoring
📖 Documentation
💻 Development Environment
🚄 Infrastructure
✅ Test
Changes
Testing
Notes
Pre-Submission Checklist (optional but appreciated):
OS Tests (optional but appreciated):