openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens, however you requested 11836 tokens (11580 in your prompt; 256 for the completion). Please reduce your prompt; or completion length. #2333
Comments
Take a look at #2133 (comment).
This is a good job, but I don't know how to set `reduce_k_below_max_tokens=True`. Can you give me some examples?
Same issue here, have you solved it?
here is an example:
https://github.com/Laisky/HelloWorld/blob/master/py3/ailangchain/security.ipynb
God Bless You
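For anyone who can't open the notebook: `reduce_k_below_max_tokens` is a flag on retrieval chains such as `RetrievalQAWithSourcesChain` that drops retrieved documents until the prompt fits the model's context window. Here is a minimal sketch of the underlying idea (the document sizes and the 3375-token budget are illustrative only, not LangChain's actual implementation):

```python
def fit_docs_below_limit(docs, doc_token_counts, max_tokens_limit=3375):
    """Keep only as many leading docs as fit within max_tokens_limit.

    Mirrors the idea behind reduce_k_below_max_tokens: start from the
    full retrieved list and reduce k until the total token count fits.
    """
    total = 0
    kept = []
    for doc, n_tokens in zip(docs, doc_token_counts):
        if total + n_tokens > max_tokens_limit:
            break
        total += n_tokens
        kept.append(doc)
    return kept

# Example: four docs of 1500/1200/900/600 tokens against a 3375-token budget.
docs = ["doc1", "doc2", "doc3", "doc4"]
counts = [1500, 1200, 900, 600]
print(fit_docs_below_limit(docs, counts))  # only the first two fit
```

With the real chain you would instead pass `reduce_k_below_max_tokens=True` (and optionally `max_tokens_limit`) when constructing the chain, e.g. via `RetrievalQAWithSourcesChain.from_chain_type(...)`; check the exact parameter names against your installed LangChain version.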
If you are getting this error: `openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens, however you requested 4638 tokens (4382 in your prompt; 256 for the completion). Please reduce your prompt; or completion length.`

The solution is below (change your host and port accordingly). The trick is to include only the tables that you want. Here is an actual code snippet:

```python
import os

from sqlalchemy import create_engine
from langchain import OpenAI, SQLDatabase, SQLDatabaseChain

os.environ["OPENAI_API_KEY"] = 'XXXXXX'

engine = create_engine('mysql+pymysql://admin:admin@localhost:3307/wordpress1')

# This will only include the list of tables you want!!!
include_tables = ["wp_greetings"]
db = SQLDatabase(engine, include_tables=include_tables)

db_chain = SQLDatabaseChain(llm=OpenAI(temperature=0), database=db)
db_chain.run("Describe wp_greetings table")
```
I tried to limit the content length by using the following code:
But that didn't seem to work. I still got the following error:
Is something wrong with the way I'm using the agent, or is this a bug in LangChain? I'm using the SQL agent on a large database, so the problem may be in how I'm using it.
David, you can try including only the necessary tables, as I showed above. This will definitely decrease the number of tokens.
Thanks for the suggestion! I only have two tables in my database, but they have a ton of columns. In that case, should I try to build my own agent with the ability to summarize tables?
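One option for wide tables is to hand the chain a trimmed schema instead of the full auto-generated one. `SQLDatabase` accepts a `custom_table_info` mapping for this (verify the exact signature against your LangChain version). A sketch, with a hypothetical `posts` table and column list:

```python
def trimmed_table_info(table: str, columns: dict, keep: list) -> str:
    """Build a shortened CREATE TABLE description containing only
    the columns the LLM actually needs, to cut prompt tokens."""
    cols = ",\n    ".join(f"{name} {columns[name]}" for name in keep)
    return f"CREATE TABLE {table} (\n    {cols}\n)"

# Hypothetical wide table: keep 2 of its many columns.
columns = {"id": "INTEGER", "title": "TEXT", "body": "TEXT", "meta": "JSON"}
info = trimmed_table_info("posts", columns, keep=["id", "title"])
print(info)

# The result can then be passed to SQLDatabase, e.g.:
# db = SQLDatabase(engine, custom_table_info={"posts": info})
```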
Hi! I'm encountering the same issue as David. Is there a way to track token usage as the program runs? It would be very helpful to monitor what actually counts toward the token total and potentially find reductions that way.
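For rough monitoring without extra dependencies, you can estimate tokens with the common ~4-characters-per-token heuristic for English text. This is an approximation only; the `tiktoken` library gives exact counts for OpenAI models:

```python
def approx_token_count(text: str) -> int:
    # Rough rule of thumb for English text: ~4 characters per token.
    # This will under- or over-count for code and non-English text.
    return max(1, len(text) // 4) if text else 0

prompt = "Describe wp_greetings table"
print(approx_token_count(prompt))
```

For exact accounting on real API calls, LangChain also provides the `get_openai_callback()` context manager, which reports `total_tokens` after the calls complete; check your version's docs for the import path.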
Hi, @wen020! I'm Dosu, and I'm helping the LangChain team manage their backlog. I wanted to let you know that we are marking this issue as stale.

From what I understand, the issue is that the model's maximum context length is 4097 tokens, but you are requesting 11836 tokens. In the comments, users have provided suggestions and examples for solving this problem. Some suggest setting `reduce_k_below_max_tokens=True` to reduce the token length, while others suggest including only the necessary tables to decrease the number of tokens. Additionally, there is an open question about whether there is a method to track token usage during program execution.

Before we close this issue, we wanted to check whether it is still relevant to the latest version of the LangChain repository. If it is, please let us know by commenting on the issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days. Thank you for your understanding and contribution to the LangChain project!
This is my code for hooking up an LLM to answer questions over a database (remote Postgres):
![image](https://user-images.githubusercontent.com/54690997/229476251-547d91b8-39a1-4f43-812b-ea01688a1261.png)
but I get this error:
![image](https://user-images.githubusercontent.com/54690997/229476819-bfd96216-2b41-496c-9f24-ac36e787205f.png)
Can anyone give me some advice on how to solve this problem?