When Indexing, [TOO MANY REQUESTS] Keeps being thrown #6
Comments
Just got this email from OpenAI.
You're hitting up against the maximum number of requests that OpenAI allows. This can happen in large projects. We'll be adding improvements to handle this more gracefully in the near future. Edit: your OpenAI key is viewable in the terminal output you posted. You should delete your post and rotate your key immediately.
No way to throttle it to go sequentially?
Sequentially would take a very long time for large projects using slower models. Right now we parallelize up to 25 requests at a time, but it's not perfect. Check out RateLimit.ts in the project for more info. If you can improve this, I'll merge it right away.
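For reference, a minimal sketch of a concurrency limiter along these lines might look like the following. The class and field names (`APIRateLimit`, `maxConcurrentCalls`) follow later comments in this thread; the exact queueing logic in autodoc's RateLimit.ts may differ.

```typescript
// Sketch of a concurrency limiter: at most `maxConcurrentCalls` API calls run at once,
// the rest wait in a FIFO queue until a slot frees up.
export class APIRateLimit {
  private queue: (() => void)[] = [];
  private inProgress = 0;

  constructor(private maxConcurrentCalls: number = 25) {}

  callApi<T>(call: () => Promise<T>): Promise<T> {
    return new Promise<T>((resolve, reject) => {
      const run = async () => {
        this.inProgress++;
        try {
          resolve(await call());
        } catch (err) {
          reject(err);
        } finally {
          this.inProgress--;
          // Start the next queued call, if any.
          const next = this.queue.shift();
          if (next) next();
        }
      };

      if (this.inProgress < this.maxConcurrentCalls) {
        void run();
      } else {
        this.queue.push(run);
      }
    });
  }
}
```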
I face this issue even for smaller projects; how can this be solved? https://github.com/Prem95/DataSciencePortfolio was my test attempt.
I was getting this error immediately on a pretty small repository (136 files). I changed APIRateLimit manually (`this.maxConcurrentCalls = 1;`) and managed to index all files, though it took a few minutes.
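In terms of the limiter sketched above, that workaround amounts to lowering the concurrency to 1. For example (`files` and `summarizeFile` are hypothetical stand-ins for whatever autodoc actually feeds through the limiter):

```typescript
// One concurrent call = fully sequential requests: much slower, but stays under the rate limit.
const limiter = new APIRateLimit(1);
const summaries = await Promise.all(
  files.map((file) => limiter.callApi(() => summarizeFile(file))),
);
```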
Thanks for the tip! This is what I'm looking for.
Where exactly should I change this? Currently it is 50.
Even after adding the line, I still get the same error. Unsure what the issue is.
The problem with that is we need to build autodoc from source; we should let this value be configurable from the .autodoc config.
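As a sketch of that suggestion, the CLI could read an optional value from the user's config and fall back to the current default, reusing the limiter sketched earlier in the thread. The file name `autodoc.config.json` and the `maxConcurrentCalls` field are assumptions here, not existing options:

```typescript
import fs from 'node:fs';

// Hypothetical shape of the user config with a proposed maxConcurrentCalls field.
interface AutodocUserConfig {
  maxConcurrentCalls?: number;
  [key: string]: unknown;
}

const userConfig: AutodocUserConfig = JSON.parse(
  fs.readFileSync('./autodoc.config.json', 'utf8'),
);

// Fall back to the current default concurrency when the field isn't set.
const limiter = new APIRateLimit(userConfig.maxConcurrentCalls ?? 25);
```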
@ALL After switching to a pay-as-you-go API key from a free trial API key, I am able to bypass this 429 rate limit error. But this error might still affect us all if OpenAI changes its rate limits. Here is OpenAI's guide to implementing rate throttling based on a manually set limit: https://github.com/openai/openai-cookbook/blob/main/examples/How_to_handle_rate_limits.ipynb
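The cookbook notebook is in Python; a rough TypeScript equivalent of the retry-with-exponential-backoff idea it describes could look like this (the 429 check and delay constants are illustrative assumptions):

```typescript
// Retry a call with exponential backoff and jitter when the API responds with 429.
async function withBackoff<T>(
  call: () => Promise<T>,
  maxRetries = 5,
  baseDelayMs = 1000,
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await call();
    } catch (err: any) {
      const isRateLimit = err?.response?.status === 429 || err?.status === 429;
      if (!isRateLimit || attempt >= maxRetries) throw err;
      // Exponential backoff with random jitter so parallel callers don't retry in lockstep.
      const delay = baseDelayMs * 2 ** attempt * (1 + Math.random());
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```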
Yes, it is recommended to have a paid OpenAI account with GPT-4 access to use autodoc.
Indexing starts as usual; after the estimation step it runs for a bit, then keeps throwing this error: