Getting rate limited by OpenAI on ingestion #55
Comments
I think that vectorising the data in batches will help.
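A minimal sketch of what batched embedding could look like, assuming the pre-1.0 `openai` Python client; the batch size, pause, and function name are illustrative assumptions, not code from this repository:

```python
import time

import openai  # pre-1.0 client assumed


def embed_in_batches(texts, batch_size=100, pause=1.0):
    """Embed texts in fixed-size batches, pausing between requests
    so a large corpus doesn't burst past the provider's rate limit."""
    embeddings = []
    for i in range(0, len(texts), batch_size):
        batch = texts[i : i + batch_size]
        resp = openai.Embedding.create(
            input=batch, model="text-embedding-ada-002"
        )
        embeddings.extend(item["embedding"] for item in resp["data"])
        time.sleep(pause)  # crude throttle between batches
    return embeddings
```

Batching cuts the number of requests needed for the same corpus, which eases requests-per-minute limits; a tokens-per-minute limit can still be hit.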
Are you suggesting that changing the chunk size (line 94 of the ingest_rst_sphinx.py file) will help? I get the same error.
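For context, if the splitter around that line resembles the following (a sketch assuming LangChain's `RecursiveCharacterTextSplitter`; the actual splitter and values in ingest_rst_sphinx.py may differ, and the input path is hypothetical), then raising `chunk_size` means fewer chunks and therefore fewer embedding requests for the same corpus:

```python
from langchain.text_splitter import RecursiveCharacterTextSplitter

document_text = open("docs/index.rst").read()  # hypothetical input file

# Illustrative values only; the real settings in ingest_rst_sphinx.py
# may differ.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_text(document_text)
```

Fewer, larger chunks reduce requests-per-minute pressure, but each request then carries more tokens, so this alone may not avoid the limit.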
I use a method that seems too hacky:
Solved in #54. It will help avoid rate limits and save progress if the LLM provider has issues.
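The general idea, sketched under stated assumptions (the checkpoint file name and `embed_fn` parameter are hypothetical; this is not the actual #54 change), is to persist which chunks are already embedded so a rate-limit failure can be resumed instead of restarted:

```python
import json
import os

CHECKPOINT = "ingest_checkpoint.json"  # hypothetical checkpoint file


def ingest_with_checkpoints(chunks, embed_fn):
    """Embed chunks one at a time, recording finished indices so an
    interrupted run (e.g. a rate-limit error) resumes where it left off."""
    done = set()
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            done = set(json.load(f))
    for idx, chunk in enumerate(chunks):
        if idx in done:
            continue  # already embedded in a previous run
        embed_fn(chunk)  # may raise on rate limit; rerun to resume
        done.add(idx)
        with open(CHECKPOINT, "w") as f:
            json.dump(sorted(done), f)
```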
@dartpain probably not a fix, see #85. Copying from that pull request:
Another workaround is to use
Getting the following error when ingesting my documentation:
Additional context:
Number of Tokens = 538,902
I can't find any way to specify a backoff. I have a paid account, and apparently the rate limit is raised over time, but I would still like to be able to specify a rate limit myself.
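Until such a setting exists, a common client-side way to get backoff behaviour is to wrap the embedding call in retries. A sketch assuming the `tenacity` library and the pre-1.0 `openai` client, neither of which is necessarily part of this project's setup:

```python
import openai  # pre-1.0 client assumed
from tenacity import (
    retry,
    retry_if_exception_type,
    stop_after_attempt,
    wait_random_exponential,
)


@retry(
    retry=retry_if_exception_type(openai.error.RateLimitError),
    wait=wait_random_exponential(min=1, max=60),
    stop=stop_after_attempt(6),
)
def embed_with_backoff(batch):
    """Retry with exponential backoff when OpenAI returns a rate-limit error."""
    return openai.Embedding.create(
        input=batch, model="text-embedding-ada-002"
    )
```

This doesn't cap the request rate up front, but it absorbs rate-limit responses instead of failing the whole ingestion run.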