
Feedback error while running langchain #3670

Closed
WillLam123 opened this issue Apr 27, 2023 · 2 comments

@WillLam123
Trying to run LangChain with the OpenAI API. It works fine with short paragraphs, but when I tried longer ones I got this error:

openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens, however you requested 13214 tokens (12958 in your prompt; 256 for the completion). Please reduce your prompt; or completion length.

I'm not sure whether I have the settings right; here is my code:

import os
from langchain.document_loaders import TextLoader
from langchain.text_splitter import CharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chains.question_answering import load_qa_chain
from langchain.chat_models import ChatOpenAI

os.environ["OPENAI_API_KEY"] = "sk-xxxxxxxxxx"

def main():
    global db, chain, entry, output  # entry and output are used elsewhere (not shown)

    file_path = r"F:\langchain\doc1.txt"
    loader = TextLoader(file_path, encoding='utf-8')
    documents = loader.load()

    text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
    docs = text_splitter.split_documents(documents)

    embeddings = OpenAIEmbeddings()

    db = FAISS.from_documents(docs, embeddings)

    # Note: the keyword is max_tokens, not max_token. The error above reports a
    # 256-token completion (the default), which suggests max_token was ignored.
    llm = ChatOpenAI(openai_api_key="sk-xxxxxxxxxx", model_name="gpt-3.5-turbo",
                     max_tokens=200)

    chain = load_qa_chain(llm, chain_type="stuff")
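For reference, the arithmetic behind the error is simply prompt tokens plus requested completion tokens measured against the model's context window. A minimal sketch, assuming the 4097-token limit reported in the traceback for gpt-3.5-turbo (`fits_context` is a hypothetical helper, not a LangChain or OpenAI API):

```python
def fits_context(prompt_tokens: int, completion_tokens: int,
                 context_limit: int = 4097) -> bool:
    """Return True if prompt + completion fit in the model's context window."""
    return prompt_tokens + completion_tokens <= context_limit

print(fits_context(12958, 256))  # the failing request above: 13214 > 4097 -> False
print(fits_context(3000, 256))   # a request that would fit -> True
```

With a "stuff" chain, every document passed in is concatenated into a single prompt, so this budget is easy to blow through when the whole file (rather than a few retrieved chunks) is handed to the chain.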
@Veeeetzzzz

This is the standard error message when the text provided is too long, and based on your max_tokens setting you've made the response length quite small as well.

A couple of options to try; see what works best for your use case, but note that you won't be able to override the model's context limit:

  1. Shorten the text; as you've tested, it works with smaller paragraphs.
  2. Increase the 'chunk_overlap' value.
  3. Increase the max_tokens parameter; you'll only get a short response with your current configuration.
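To make the chunking options concrete, here is a rough pure-Python sketch of how fixed-width splitting with chunk_size and chunk_overlap behaves. `split_plain` is a hypothetical stand-in for CharacterTextSplitter, which additionally tries to break on separators such as newlines:

```python
def split_plain(text: str, chunk_size: int = 1000, chunk_overlap: int = 0) -> list:
    """Naive fixed-width splitter; a simplified stand-in for
    CharacterTextSplitter, ignoring separator-aware splitting."""
    assert chunk_overlap < chunk_size
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

doc = "x" * 2500
print(len(split_plain(doc, 1000, 0)))    # 3 chunks: 1000 + 1000 + 500 chars
print(len(split_plain(doc, 1000, 200)))  # 4 chunks; overlap duplicates text
```

Note that overlap duplicates text between adjacent chunks, so the total amount of text across all chunks grows as chunk_overlap increases.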

Use the tiktoken library to count tokens if you need to do some sanity checking before making the API call.
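A hedged sketch of such a pre-flight count: it uses tiktoken's `encoding_for_model` when tiktoken is installed, and otherwise falls back to a rough characters-per-token heuristic (the fallback divisor of 4 is an assumption, not an exact rule):

```python
def count_tokens(text: str, model: str = "gpt-3.5-turbo") -> int:
    """Count tokens with tiktoken if available, else approximate."""
    try:
        import tiktoken
        enc = tiktoken.encoding_for_model(model)
        return len(enc.encode(text))
    except Exception:
        # Rough heuristic: roughly 4 characters per token for English text.
        return max(1, len(text) // 4)

prompt = "What does the document say about token limits?"
print(count_tokens(prompt))
```

Comparing count_tokens(prompt) + max_tokens against the model's context limit before calling the API catches the InvalidRequestError shown above ahead of time.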

dosubot bot commented Aug 31, 2023

Hi, @WillLam123! I'm Dosu, and I'm here to help the LangChain team manage their backlog. I wanted to let you know that we are marking this issue as stale.

From what I understand, you encountered an error while running LangChain with the OpenAI API. The error message indicates that the model's maximum context length is being exceeded and advises reducing the prompt or completion length. Veeeetzzzz provided some helpful suggestions, such as shortening the text, increasing the 'chunk_overlap' value, or increasing the max_tokens parameter. They also recommended using the tiktoken library to count tokens as a sanity check before making the API call.

Before we close this issue, we wanted to check if it is still relevant to the latest version of the LangChain repository. If it is, please let us know by commenting on the issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days.

Thank you for your understanding and contribution to the LangChain project!

dosubot added the "stale" label on Aug 31, 2023
dosubot closed this as not planned (won't fix, can't repro, duplicate, stale) on Sep 10, 2023
dosubot removed the "stale" label on Sep 10, 2023