openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens #1767
Comments
I got the same error message today: `InvalidRequestError: This model's maximum context length is 4097 tokens. However, you requested 4245 tokens (1745 in the messages, 2500 in the completion). Please reduce the length of the messages or completion.` Clearly I will need an approach that clips the request text or the completion length to stop seeing this error.
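The numbers in that error point at one fix: the prompt already uses 1745 tokens, so a 2500-token completion overshoots the 4097-token window (1745 + 2500 = 4245 > 4097). One option is to cap `max_tokens` at whatever budget remains. A minimal sketch (the helper name and the safety margin are my own, not from the OpenAI SDK or LangChain):

```python
# Sketch: size the completion request to fit the model's context window.
# The 4097-token limit and the 1745 prompt tokens come from the error
# message above; the 50-token margin is an arbitrary cushion.

MODEL_CONTEXT_LIMIT = 4097

def completion_budget(prompt_tokens: int, margin: int = 50) -> int:
    """Largest max_tokens value that still fits in the context window."""
    return max(0, MODEL_CONTEXT_LIMIT - prompt_tokens - margin)

# With 1745 prompt tokens, asking for 2500 completion tokens fails;
# the budget below does not.
budget = completion_budget(1745)
print(budget)  # 2302
```

Passing a value like this as `max_tokens` avoids the overshoot, at the cost of shorter answers when the prompt is long.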
It was preceded by this warning: `openai.py:608: UserWarning: You are trying to use a chat model. This way of initializing it is no longer supported. Instead, please use:`
I already switched to ChatOpenAI, but it didn't help; I still get this error.
Use this LLM wherever you were calling the old one. I was getting the ChatOpenAI error too, but upgrading to the current version of LangChain and using ChatOpenAI fixed the issue for me.
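If it is the prompt itself that overflows (e.g. a long chat history stuffed into the messages), the other half of the error's advice, "reduce the length of the messages", also applies. A rough sketch of trimming the oldest messages first, assuming ~4 characters per token as a crude estimate (a real implementation would count tokens exactly, e.g. with `tiktoken`; the function names here are hypothetical):

```python
# Sketch: drop the oldest chat messages until the prompt fits a token
# budget. The 4-chars-per-token ratio is a rough heuristic, not exact.

def estimate_tokens(text: str) -> int:
    """Crude token estimate: roughly one token per 4 characters."""
    return max(1, len(text) // 4)

def trim_messages(messages: list[dict], budget: int) -> list[dict]:
    """Drop the oldest messages (keeping the first, typically the
    system prompt) until the estimated total fits within `budget`."""
    kept = list(messages)
    while len(kept) > 1 and sum(estimate_tokens(m["content"]) for m in kept) > budget:
        kept.pop(1)  # drop the oldest non-system message
    return kept

history = [
    {"role": "system", "content": "You answer questions about a speech."},
    {"role": "user", "content": "x" * 4000},  # ~1000 tokens of old context
    {"role": "user", "content": "Did he mention Stephen Breyer?"},
]
trimmed = trim_messages(history, budget=200)
```

Here the 4000-character filler message is dropped, leaving the system prompt and the latest question within budget.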
I faced the same issue, but initializing the index again solved it for me. What is the right solution to the problem?
I encountered a similar issue with LangChain's FAISS code, but changing the temperature to 0 resolved it. This suggests there may be a bug in the code that needs to be addressed by @hwchase17. In my experience, FAISS appears to be the most efficient local vector database for use with LangChain. I had index-loading issues with ChromaDB, so I decided to abandon it for now. The only drawback is the longer answers with temperature = 0, which may require creating new ideas from the given PDF file(s).
@ZohaibRamzan How did you do that? Would you share some details? I would like to know if there is any alternative to temperature = 0.
Seems related to #1349. |
Hi, @abdellahiheiballa! I'm Dosu, and I'm here to help the LangChain team manage their backlog. I wanted to let you know that we are marking this issue as stale.

Based on my understanding of the issue, you encountered an error message stating "openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens" when using the chat application and asking a specific question. Other users, such as @mattCLN2023 and @Jeru2023, have also experienced the same issue. Some suggested solutions include using the updated version of LangChain and initializing the index again. There is also a mention of a potential bug in the code that needs to be addressed.

Before we proceed, we would like to confirm if this issue is still relevant to the latest version of the LangChain repository. If it is, please let us know by commenting on this issue. Otherwise, feel free to close the issue yourself, or the issue will be automatically closed in 7 days.

Thank you for your understanding and cooperation. We look forward to hearing from you soon.
When using the chat application, I encountered an error message stating "openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens" when I asked a question like "Did he mention Stephen Breyer?".
![image](https://user-images.githubusercontent.com/25929712/226149401-8de39f47-6cd6-4ba4-9a74-f1a1eb389779.png)