
create issue fails with "GraphQL error: was submitted too quickly" #4801

Closed
mskyttner opened this issue Nov 24, 2021 · 33 comments
Labels
bug (Something isn't working), p3 (Affects a small number of users or is largely cosmetic), platform (Problems with the GitHub platform rather than the CLI client)

Comments

@mskyttner

Describe the bug

Using a bash script to create several issues (with 3 seconds between each) fails intermittently, sometimes after 10, 50, or 150 issues, with "GraphQL error: was submitted too quickly"

gh version 2.2.0 (2021-10-25)
https://github.com/cli/cli/releases/tag/v2.2.0

Steps to reproduce the behavior

  1. Use a bash script create_issues.sh with content similar to this, in order to add a batch of issues (obviously use more entries/records/issues):
#!/bin/bash
gh issue create --label 'enhancement' --title 'Author `Carlsson, Stefan` has no kthid' --body '`Carlsson, Stefan` - should it be kthid `u1agqz5q` and/or orcid `NA` affiliated with 177 5956 874100 879223 882651 879234 882650 879225? - appears on PIDs
- [ ] [1049836](https://kth.diva-portal.org/smash/record.jsf?pid=diva2:1049836)
'
sleep 3
gh issue create --label 'enhancement' --title 'Author `Carminati, Barbara` has no kthid' --body '`Carminati, Barbara` - should it be kthid `NA` and/or orcid `0000-0002-7502-4731` affiliated with 177 879223 882650 879232? - appears on PIDs
- [ ] [1256570](https://kth.diva-portal.org/smash/record.jsf?pid=diva2:1256570)
'
sleep 3
gh issue create --label 'enhancement' --title 'Author `Carosio, Federico` has no kthid' --body '`Carosio, Federico` - should it be kthid `u1jimame` and/or orcid `NA` affiliated with 177 5923 5940 5948 5954 879224 879315 879340? - appears on PIDs
- [ ] [1291470](https://kth.diva-portal.org/smash/record.jsf?pid=diva2:1291470)
'
  2. Run the script; after 10, 50, or 150 issues have been created, the error appears (it is hard to predict or understand when or why)

  3. See the error "GraphQL error: was submitted too quickly"

Expected vs actual behavior

Since I'm trying to "rate limit" myself by waiting 3 seconds between each new issue, I was not expecting to see this error.

I also ran gh api rate_limit when the error appeared; it doesn't seem to indicate I'm actually running into a rate limit, so I'm not sure that is what is causing the error to be reported:

{
  "resources": {
    "core": {
      "limit": 5000,
      "used": 0,
      "remaining": 5000,
      "reset": 1637749811
    },
    "search": {
      "limit": 30,
      "used": 0,
      "remaining": 30,
      "reset": 1637746271
    },
    "graphql": {
      "limit": 5000,
      "used": 6,
      "remaining": 4994,
      "reset": 1637749488
    },
    "integration_manifest": {
      "limit": 5000,
      "used": 0,
      "remaining": 5000,
      "reset": 1637749811
    },
    "source_import": {
      "limit": 100,
      "used": 0,
      "remaining": 100,
      "reset": 1637746271
    },
    "code_scanning_upload": {
      "limit": 500,
      "used": 0,
      "remaining": 500,
      "reset": 1637749811
    },
    "actions_runner_registration": {
      "limit": 10000,
      "used": 0,
      "remaining": 10000,
      "reset": 1637749811
    },
    "scim": {
      "limit": 15000,
      "used": 0,
      "remaining": 15000,
      "reset": 1637749811
    }
  },
  "rate": {
    "limit": 5000,
    "used": 0,
    "remaining": 5000,
    "reset": 1637749811
  }
}
@mskyttner added the bug label Nov 24, 2021
@mislav
Contributor

mislav commented Nov 24, 2021

Hi, that's really strange. If you wait an hour and try to run the script again (you can try without the sleep statements), exactly how many issues get created before it fails with the was submitted too quickly error? And, can you query the rate_limit endpoint at that exact moment and paste us what you get?

From your JSON payload above, it looks like you've only made 6 queries in an hour, which is odd because you wrote that you were able to create 10–150 issues, and so I'm expecting that the number of GraphQL queries made was higher by this point. 😕

@mskyttner
Author

Yes, I agree that it is odd. I just ran into the "timeout" again, and when I now run `gh api rate_limit` I get "used": 0 for everything, but I do get a number for GraphQL:

    "graphql": {
      "limit": 5000,
      "used": 466,
      "remaining": 4534,
      "reset": 1637749488
    },

Which obviously is under 5000, though....

My feeling is that after a batch of 150 issues (I have about 450 in total) it starts appearing and stays on for an extended period; I have now succeeded in adding 300 and about 150 remain.

Should the message say something like "GraphQL error: was submitted too quickly, please wait for one hour before trying again" to alleviate uncertainty around the unknown "cooling down" period? If I knew it was one hour for sure, I could put in a sleep 3600 after every 150 issues created.

@mislav
Contributor

mislav commented Nov 24, 2021

Okay, so I've spoken to people internally and it looks like, in addition to the general API rate limit for queries (which you are not hitting), there is a content-creation rate limit for resources like issues. This rate limit does not appear to be exposed to API consumers via response headers, but you are allowed only a fixed number of issue creations per minute and, beyond that, a fixed number per hour. You seem to be hitting that, so the only thing I can suggest for now is to wait an hour when you hit this error.

Yes, absolutely agreed that this should be somehow communicated in the error message. I'll follow up internally.

@mislav added the p3 and platform labels and removed the needs-user-input label Nov 24, 2021
@mskyttner
Author

Thanks for the info!

If the "issues per minute" and "issues per hour" rate limits are documented somewhere, I can try to make sure that my bash-script respects those rate limits? And it would be awesome if the error message contained the "issue creation rate limits" if one runs into those, like I do.

Thanks by the way for the "gh" command, very useful!

Wishing for a "gh create issues" batch operation that takes a CSV with columns for title, body, label and makes a transaction for several issues at once, but as long as I can avoid the rate limits, a custom bash script should do the job, I hope.

@mislav
Contributor

mislav commented Nov 24, 2021

If the "issues per minute" and "issues per hour" rate limits are documented somewhere, I can try to make sure that my bash-script respects those rate limits?

They do not seem to be documented anywhere. I'll see if they can be communicated publicly somehow. Right now they are an implementation detail for the platform to combat abuse. For now, just make your scripts respect the error message by stopping for an hour.

Wishing for a "gh create issues" batch operation that takes a CSV with columns for title, body, label and makes a transaction for several issues at once

That's been suggested as well: #4774 (comment). However, it's not likely that we will build this since there is no single "universal" file format that would satisfy everyone. If we supported CSV, then someone would ask for TSV, someone would want YAML, etc.

BTW, an easy bash script that takes a TSV of two columns (title and body):

while IFS=$'\t' read -r title body _; do
  gh issue create --title "$title" --body "$body"
done < myfile.tsv

@parthea

parthea commented Dec 28, 2021

by stopping for an hour.

I'm hitting the error Pull request creation failed. Validation failed: was submitted too quickly every time I bulk open PRs to ~100 repos within my org. Is it possible to reduce the wait time from 1 hour to something more reasonable?

@mislav changed the title from Using a bash script with gh create issue in order to open several issues with 3 second intervals in between fails (intermittently) with "GraphQL error: was submitted too quickly" to create issue fails with "GraphQL error: was submitted too quickly" Jan 14, 2022
@JacobMillward

I'm hitting this issue when bulk-transferring issues from one repo to another. The hour wait really is slowing me down.

@elibarzilay

@mislav -- re your suggestion:

BTW, an easy bash script that takes a TSV of two columns (title and body): [...]

IIUC, @mskyttner's point was the fact that such an ability would create multiple issues in a single call, thereby avoiding the "too quickly" problem... To that end, it doesn't really matter what the file format is, as long as there's some way to reduce the number of separate api calls.

@mislav
Contributor

mislav commented Jan 28, 2022

Hi, I'm pretty sure that the "submitted too quickly" limit is per creation, not per API request. You can't get around it by submitting multiple issues in a single request.

To demonstrate, I've crafted a single GraphQL request that creates 100 issues: https://gist.github.com/mislav/4aadf706139bbf47d6e68f6b6cf4baab

I can execute this GraphQL request only once. The next time I execute it within a short time span, I get a bunch of UNPROCESSABLE: was submitted too quickly errors and none of the issues are created.
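
For readers who don't open the gist, a minimal two-mutation sketch of that technique is below. This is illustrative only: REPO_ID and the titles are placeholders, not values from this thread. Each alias (i1, i2) gets its own entry under data, and failures are reported per alias under errors; as noted above, every aliased createIssue still counts against the creation limit.

```shell
# Sketch only: several createIssue mutations aliased into one GraphQL request.
query='
mutation {
  i1: createIssue(input: {repositoryId: "REPO_ID", title: "First issue"}) {
    issue { number }
  }
  i2: createIssue(input: {repositoryId: "REPO_ID", title: "Second issue"}) {
    issue { number }
  }
}'
# To actually send it (still subject to the per-creation limit):
#   gh api graphql -f query="$query"
printf '%s\n' "$query"
```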

@mskyttner
Author

@mislav if there had been 200 issues in your demo request, would the rate limit immediately kick in, and if so, would the whole transaction have failed and been rolled back?

@elibarzilay true; implicitly hidden here is a wish (perhaps somewhat of an iceberg feature request?) for batch-operation support in the API for adding several issues from the cli(ent), where the rate limiting happens per batch API call. Christmas is over now, but maybe something like "gh upload myissues.csv" which returns a job id.

An upload of several issues (a batch) would ideally be treated as one single atomic operation or transaction. Currently there is no such batch-operation support, or at least the rate limiting gets in the way at the record level. In practice, this means the client side has to treat the batch as individual transactions: if issue number x in a larger batch fails or rate limiting kicks in, the client must manage the transaction cleanup itself (roll back everything? retry until the timeout passes? but then when do you retry, since it is hard to know when you get out of rate-limit jail?), which is cumbersome to automate.

@mislav
Contributor

mislav commented Feb 2, 2022

would the whole transaction have failed and been rolled back?

No, because GraphQL mutations in a single request are not executed in a transaction. If some of them fail, the GraphQL response will individually report their failures under the errors part of the response payload, and the responses of successful mutations will be available under data as usual.

@mislav
Contributor

mislav commented Feb 3, 2022

Closing this because the "was submitted too quickly" error is an intentional restriction on the platform to combat abuse by automated actors. Unfortunately, this rate limit is internal (since it can vary dynamically), and it also affects people who just want to legitimately create a lot of objects at once. The solution is to scan for this error message in your programs and retry creation after a delay of some minutes, or up to an hour.
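
A minimal sketch of that advice in bash, under stated assumptions: create_issue_with_retry is a hypothetical wrapper name (not a gh command), and the RETRY_DELAY default of 300 seconds is a guess, since the real reset window is undocumented and may be up to an hour.

```shell
# Hypothetical helper, not part of gh: retries `gh issue create` when its
# output contains the secondary rate-limit message, and gives up on any
# other failure.
create_issue_with_retry() {
  local max_attempts=5 delay=${RETRY_DELAY:-300} attempt out
  for ((attempt = 1; attempt <= max_attempts; attempt++)); do
    if out=$(gh issue create "$@" 2>&1); then
      printf '%s\n' "$out"      # issue URL on success
      return 0
    fi
    if [[ $out == *"was submitted too quickly"* ]]; then
      echo "rate limited; sleeping ${delay}s (attempt $attempt)" >&2
      sleep "$delay"
    else
      printf '%s\n' "$out" >&2  # some other failure: give up
      return 1
    fi
  done
  return 1
}
```

It would be used as create_issue_with_retry --title "$title" --body "$body" in place of the bare gh call.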

@Cpt-Falcon

There should really be a batch-request feature, or at least the ability to increase the request limit, because this is definitely a problem.

@shem8

shem8 commented Apr 17, 2022

Is there a way to know what the internal rate limit is? At a high level, or better, via the rate-limit API.
Also, is the rate limit per token / user / app / repo / organization?

@thomasfowlerFIS

Is there a way to know what the internal rate limit is? At a high level, or better, via the rate-limit API. Also, is the rate limit per token / user / app / repo / organization?

My suspicion is they would prefer this to be unknown, because if there are bad actors on the platform, GH would want to catch them in the act rather than let those same bad actors fly under the radar. That's just speculation, but plausible.

@thomasfowlerFIS

thomasfowlerFIS commented Apr 25, 2022

There should really be a batch-request feature, or at least the ability to increase the request limit, because this is definitely a problem.

I like the idea of a rate limit increase on a case-by-case basis, as we have a need to do bulk forking and/or transferring of repositories, sometimes in the hundreds.

@jfachal

jfachal commented May 10, 2022

Closing this because the "was submitted too quickly" error is an intentional restriction on the platform to combat abuse by automated actors. Unfortunately, this rate limit is internal (since it can vary dynamically), and it also affects people who just want to legitimately create a lot of objects at once. The solution is to scan for this error message in your programs and retry creation after a delay of some minutes, or up to an hour.

Maybe, and only maybe, it would be a good idea to implement internal retry logic in the gh CLI with exponential backoff, with the server side queueing the operation as a backpressure mechanism. But if every client program implements its own custom retry logic, the end result would be worse, because the simplest solution is to retry until the operation succeeds, which hits the rate limit much harder and, as a side effect, increases the load on the server side.
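
For what it's worth, a client-side exponential backoff schedule can be sketched in a few lines of bash. The 60-second base and one-hour cap are assumptions for illustration; GitHub does not publish the real reset window.

```shell
# backoff_delay ATTEMPT -> seconds to wait before retry number ATTEMPT.
# Doubles a 60s base on each attempt, capped at one hour (both values
# are guesses, since the actual reset window is undocumented).
backoff_delay() {
  local base=60 cap=3600
  local d=$(( base << ($1 - 1) ))
  if (( d > cap )); then d=$cap; fi
  echo "$d"
}
# e.g. between failed gh calls: sleep "$(backoff_delay "$attempt")"
```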

@mislav
Contributor

mislav commented May 10, 2022

Maybe, and only maybe, it would be a good idea to implement internal retry logic in the gh CLI with exponential backoff, with the server side queueing the operation as a backpressure mechanism

An internal retry mechanism has been proposed before for dealing with rate limits (#3292), but I find it unacceptable that a gh operation on the command line would ever hang that long waiting for a retry to be cleared. I'm leaning towards continuing to error out (and potentially exiting with an error code reserved for rate limits) and letting the gh user decide how they would like to handle retries, rather than hanging for an unreasonable block of time.

@elibarzilay

elibarzilay commented May 13, 2022

@mislav

So the reasonable thing is to throw an "unknown limits; should kinda work normally, but if it's too much then implement more stuff yourself" error? Worse, on the user side this will either be addressed by exactly the same kind of busy-wait loop (so the same long hang), or it won't be addressed and we'll hurt more collective skulls with puzzled "why is stuff broken" head-scratching.

The whole purpose of a tool like gh is to have a central place where functionality is implemented once, instead of home-cooked solutions that people need to implement and re-implement themselves. Furthermore, since GitHub maintains it, it would allow you to handle non-abusers in a central way. One example is to implement a backoff that won't put too much load on the servers. Another is, if there's ever some way to demonstrate good behavior (like answering a captcha), that could be implemented here too.

Constructive proposal: (a) add some --maximum-time-I'm-willing-to-wait option with a possible value of infinity; (b) make a known error code just for this case, and then people can do things like

while ./gh issue create ...; [[ $? = 8 ]]; do sleep 1m; done

Hopefully it's obvious why a specific exit code is needed to make that work, and why that's much better than letting people grab the output and "parse" it.

@mislav
Contributor

mislav commented May 16, 2022

Constructive proposal: (a) add some --maximum-time-I'm-willing-to-wait option with a possible value of infinity; (b) make a known error code just for this case

I like both these proposals. We are already tracking (b) in #3292

@elibarzilay

@mislav That sounds good then. (FWIW machine-parsable stderr can exist, but is generally not reliable, and IMO it tends to be misused, especially for quick scripts)

@musabmasood

Somewhat related community/community#18662

@CMCDragonkai

This makes any issue migration pretty much impossible.

@dblock

dblock commented Jan 5, 2023

Came here to add that trying to use automation to create issues for legit reasons in a reasonably large project is painful because of these limits. I'm coming from opensearch-project/.github#121 where I'm creating PRs in 70+ repos.

@KarneeKarnay

KarneeKarnay commented Feb 5, 2023

I'll add that the reset time for this hidden limit is more than a minute sometimes, which implies something else is at play as well.

I have a loop that waits 61 seconds between errors before trying to create an issue.

    response = False
    while response is False:
        print("trying...")
        try:
            cim_response = client.execute(create_issue_mutation, variable_values={
                "repositoryId": repo_id,
                "title": title,
                "body": body
                }
            )
            response = True
        except Exception as ex:
            print(ex)
            if 'was submitted too quickly' in str(ex):
                print("Waiting a min for Git Content Creation Limit to reset")
                time.sleep(61)
This is the output between the last success and the next:
trying...
{'type': 'UNPROCESSABLE', 'path': ['createIssue'], 'locations': [{'line': 2, 'column': 3}], 'message': 'was submitted too quickly'}
Waiting a min for Git Content Creation Limit to reset
(the same error and 61-second wait repeated 29 more times)
trying...

This is my limit at the time:

"rateLimit": {
      "nodeCount": 0,
      "remaining": 4859,
      "used": 141,
      "cost": 1,
      "limit": 5000,
      "resetAt": "2023-02-05T13:24:20Z"
    }

@GMNGeoffrey

GMNGeoffrey commented Feb 15, 2023

It is indeed really annoying that these limits aren't documented (and that they exist). I'm following all the guidelines on avoiding secondary rate limits, plus my own exponential backoff, but still hitting these limits. GitHub really can't handle 0.3QPS of issue creation? Especially annoying that I can't give GitHub a single batch query (which I just spent a while constructing) and just have GitHub figure it out.

FWIW, I've reverse-engineered the limits (as of right now) to be 20 issues per minute and 150 per hour. That's just painfully low. I will not be able to accomplish my task here, which I'm only doing to work around another GitHub limitation (that there's no way to migrate an org-level project).

@mislav
Contributor

mislav commented Feb 15, 2023

GitHub really can't handle 0.3QPS of issue creation?

In purely technical terms, GitHub's infrastructure can of course handle this (and much more). The opaque limits were not put in place to combat DDoS (there are other systems for that), but to prevent other kinds of abuse via creation side effects: e.g. spamming a lot of notifications at once (since every issue or comment creation can @-mention people or teams).

Note that I am not on any platform teams so I am not privy to the original decision-making behind this. I have, however, forwarded them mine and others' feedback about the frustration that the opaqueness of these limits causes.

which I'm only doing to work around another GitHub limitation (that there's no way to migrate an org-level project).

It sucks that you've done all this work to prepare for a large migration between orgs, but that you're hindered by rate limits that aren't precisely documented. I don't think there are any built-in tools for Project migration between orgs, but if you want to unblock your scripts, you can consider writing to Support about your predicament and asking to be exempt from these rate limits for a limited period (e.g. for 24h). Then you could run all your scripts at once.

@GMNGeoffrey

GMNGeoffrey commented Feb 15, 2023

Ah triggering notifications! That makes much more sense as a reason for the really low rate limits. I was careful to ensure that these wouldn't trigger any notifications (unless someone decided to watch my test repository for some reason), but of course it would be hard for GitHub to know that a priori. It does seem like it could know that by the time it sends the notifications though. I wonder actually if the limit could be on notification triggering based on your actions. That seems just generally much more useful.

The issue of these limits being undocumented is also pretty critical. I checked on the rate limits before doing this to make sure it was feasible and concluded from the documentation that it was. I wouldn't have sunk hours into debugging this otherwise.

Thanks for the suggestion to contact support about getting a temporary exemption from the rate limits. That option is also something that would be helpful to document (although if the secondary limits had been documented I probably would've contacted support at the outset).

Anyway, sorry this is really off topic for the cli tool. I'm happy to direct these suggestions elsewhere. My experience with the community discussion forum is that no one ever responds and I'm just shouting into the void.

@mislav
Contributor

mislav commented Feb 15, 2023

Anyway, sorry this is really off topic for the cli tool. I'm happy to direct these suggestions elsewhere. My experience with the community discussion forum is that no one ever responds and I'm just shouting into the void.

No problem. This issue tracker often ends up as a place where we discuss general API bugs or feature requests. After all, we did originally create GitHub CLI partly for the exact kind of automation needs that you are now solving with your own hand-rolled scripts. But some time after the launch of gh, some platform teams put extra limits in place to deal with content creation at GitHub's scale. From our users' perspective, we both provided tools to do automation easily and shipped limits on how much you can automate 😞

I wish I had a satisfying solution for well-intentioned users, but all I can suggest now is either:

  1. Put a 3-second delay between POST requests; or
  2. If that is too slow and your needs are mission-critical, contact Support to get allow-listed for a short time period.

@GMNGeoffrey

Yeah, the 1-second delay is suggested in the docs, but is unfortunately insufficient for avoiding the secondary rate limits for issue creation.

@mislav
Contributor

mislav commented Feb 15, 2023

@GMNGeoffrey Ah, I see your reverse-engineering result above. I'll update my comment

@GMNGeoffrey

To avoid the per-hour rate limits you'd actually need to put a 24s delay between post requests 😕 The per-minute rate limit is low, but manageable. It's really the per-hour rate limit that scuttles the usability here for me.
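
Putting the thread's unofficial numbers (20 creations per minute, 150 per hour, as reverse-engineered above) in one place: the safe fixed delay is the larger of the two per-request intervals, which is where the 24-second figure comes from. A back-of-the-envelope sketch:

```shell
# Derive a pacing delay from the reverse-engineered creation limits
# reported above (unofficial numbers that may change at any time).
per_minute=20
per_hour=150
delay_minute=$(( (60 + per_minute - 1) / per_minute ))   # ceil(60/20)   = 3
delay_hour=$(( (3600 + per_hour - 1) / per_hour ))       # ceil(3600/150) = 24
delay=$(( delay_minute > delay_hour ? delay_minute : delay_hour ))
echo "$delay s between creations"
```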

@skeet70

skeet70 commented Mar 2, 2024

I was also bitten by this while trying to automate creating issues for vulnerabilities in a private tracking repository. Were either the backoff option or the specific exit code suggestions ever implemented? Can private/enterprise repositories or orgs turn off or expand the limit, or does it require support involvement?

The gh tool was otherwise a big speedup in automating this task, so thanks for it!
