Feedback error while running langchain #3670
This is the standard error message when the provided text is too long; based on your max_tokens setting, you've also set the response length to be quite small. A couple of options to try — see what works best for your use case, but note you won't be able to override the model's limit:

- Shorten the text, or split it into smaller chunks before calling the model.
- Adjust the chunk_overlap value or the max_tokens parameter.
- Use the tiktoken library to count tokens if you need to do some sanity checking before making the API call.
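As a minimal sketch of such a pre-flight check (the names `estimate_tokens` and `fits_context` are hypothetical, and the ~4-characters-per-token ratio is only a rough heuristic for English text — use tiktoken for exact counts):

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    # For exact counts, use tiktoken instead:
    #   enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
    #   n_tokens = len(enc.encode(text))
    return max(1, len(text) // 4)

def fits_context(prompt: str, max_completion_tokens: int = 256,
                 context_limit: int = 4097) -> bool:
    # The prompt plus the reserved completion budget must both
    # fit inside the model's maximum context length.
    return estimate_tokens(prompt) + max_completion_tokens <= context_limit

short = "What is LangChain?"
long_text = "word " * 20000  # ~100k characters, far beyond the 4097-token limit
```

Running the check before the API call lets you split or truncate the input instead of waiting for the server to reject it.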
Hi, @WillLam123! I'm Dosu, and I'm here to help the LangChain team manage their backlog. I wanted to let you know that we are marking this issue as stale.

From what I understand, you encountered a feedback error while running langchain with the OpenAI API. The error message suggests that the maximum context length of the model is being exceeded and advises reducing the prompt or completion length. Veeeetzzzz provided some helpful suggestions, such as shortening the text, increasing the 'chunk_overlap' value, or increasing the max_token parameter. They also recommended using the tiktoken library to count tokens for sanity checking before making the API call.

Before we close this issue, we wanted to check if it is still relevant to the latest version of the LangChain repository. If it is, please let us know by commenting on the issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days. Thank you for your understanding and contribution to the LangChain project!
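The chunk_size/chunk_overlap idea mentioned above can be illustrated with a plain-Python sliding window (the helper `split_with_overlap` is hypothetical — it only mirrors, conceptually, what LangChain's text splitters do with those parameters):

```python
def split_with_overlap(text: str, chunk_size: int = 1000,
                       chunk_overlap: int = 200) -> list[str]:
    # Slide a window of chunk_size characters over the text,
    # stepping forward by chunk_size - chunk_overlap so that
    # adjacent chunks share chunk_overlap characters of context.
    step = chunk_size - chunk_overlap
    chunks = []
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(text):
            break
    return chunks
```

Smaller chunks keep each retrieved passage well under the model's context limit; the overlap preserves continuity across chunk boundaries.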
Trying to run LangChain with the OpenAI API: it works fine with short paragraphs, but when I tried longer ones I got this error:
openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens, however you requested 13214 tokens (12958 in your prompt; 256 for the completion). Please reduce your prompt; or completion length.
I don't know if I got the settings right or not; here is my code:

```python
import os
from langchain.document_loaders import TextLoader
from langchain.text_splitter import CharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chains.question_answering import load_qa_chain
from langchain.chat_models import ChatOpenAI

os.environ["OPENAI_API_KEY"] = "sk-xxxxxxxxxx"

def main():
    global db, chain, entry, output  # Add entry and output to the global variables
```
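Plugging the numbers from the error message into the arithmetic shows why the call fails and how large the prompt can be at most (the variable names are illustrative only):

```python
# Numbers taken from the InvalidRequestError above.
context_limit = 4097       # model's maximum context length
prompt_tokens = 12958      # tokens in the submitted prompt
completion_tokens = 256    # requested completion budget

requested = prompt_tokens + completion_tokens   # total tokens requested
overflow = requested - context_limit            # how far over the limit
max_prompt = context_limit - completion_tokens  # largest prompt that fits
```

The request asks for 13214 tokens in total, 9117 more than the model allows; with a 256-token completion reserved, the prompt itself must stay at or below 3841 tokens.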