Model Parameter Not Functioning as Expected #54
Comments
gpt-4 might not know it's gpt-4; that's expected. Try comparing the response latency with gpt-3.5-turbo instead.
I was skeptical, especially since OpenAI's web UI seems to know which model it is, but the code doesn't lie, and if that's not enough, I think this is close enough to proof.
The ChatGPT web UI knows which model it is because the model name is specified in its system prompt (which has been leaked many times; you can find it online).
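To illustrate the point above (this is a sketch, not the project's actual code): a model's self-reported identity comes from whatever the system message tells it, using the standard OpenAI chat messages format. The system text below is illustrative, not the real leaked ChatGPT prompt.

```python
# Sketch: a model's self-reported identity comes from its system prompt,
# not from the underlying weights. The system text is illustrative only.
def build_messages(user_prompt: str, model_name: str = "GPT-4") -> list:
    """Build a chat payload whose system message declares the model's identity."""
    return [
        {"role": "system",
         "content": f"You are ChatGPT, based on the {model_name} architecture."},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("Are you chatgpt-4?")
print(messages[0]["content"])
```

Without such a system message, the model falls back on its training data, which for GPT-4 predates GPT-4's release — hence the "GPT-4 has not been released" replies.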
When using the command-line interface for ChatGPT, the --model parameter does not seem to work as intended. When the model is set to GPT-4, the application returns responses as if it were GPT-3.
Steps to Reproduce
```
% gpt --model gpt-4 -p "Are you chatgpt-4?"
As an AI model developed by OpenAI, I'm currently based on GPT-3. As of now, GPT-4 has not been released.
No, I'm an AI developed by OpenAI and currently known as ChatGPT-3. As of now, there is no ChatGPT-4.
```
Expected Behavior
The application should return responses from the model specified on the command line, or from the default assistant set in the config file.
Actual Behavior
The application returns responses as if it were GPT-3, irrespective of the model set in the command line or the config file.
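A more reliable check than asking the model its identity is to inspect the `model` field that the Chat Completions API includes in every response. A minimal sketch, assuming the standard response JSON shape (the payload below is illustrative, not a captured API reply):

```python
# Sketch: verify which model actually served a request by reading the
# "model" field of the Chat Completions response, instead of asking the
# model itself. The response dict below is illustrative only.
def served_model(response: dict) -> str:
    """Return the model name the API reports having used."""
    return response["model"]

example_response = {
    "id": "chatcmpl-example",        # hypothetical ID
    "object": "chat.completion",
    "model": "gpt-4-0613",           # the field that identifies the serving model
    "choices": [{"message": {"role": "assistant", "content": "..."}}],
}

print(served_model(example_response))
```

If the CLI logged this field from its real responses, it would show directly whether the --model parameter is being honored, independent of what the model says about itself.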