Brief description
Helix rate limit buckets refill at 600 points per minute, which conflicts with the documentation and the other response headers.
Documentation
This section of the docs states: "The Twitch API has a rate limit of 800 points per minute."
Also, within the past year, the docs explicitly stated: "Each client ID has a point-refill rate of 800 points per minute per user and a bucket size of 800 points."
So third parties naturally expect a refill rate of 800/60. (I hope a silent change wasn't made here, as there was no third-party announcement or obvious changelog entry.)
Response headers
Logs near the bottom of this report show the key response headers from making many Helix API calls. Using a simple calculation (rate = (limit - remaining) * 60 / secondsUntilReset), we find that the headers imply a refill rate of 800/60.
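For concreteness, here is a minimal sketch of that calculation in TypeScript. It is not the code used in testing; it assumes the header names Ratelimit-Limit, Ratelimit-Remaining, and Ratelimit-Reset, and assumes Ratelimit-Reset carries a Unix timestamp in seconds.

```ts
// Sketch: compute the refill rate implied by the rate-limit headers
// of a single Helix response (assumed header names and semantics).
function impliedRefillPerMinute(res: Response): number {
  const limit = Number(res.headers.get("Ratelimit-Limit"));         // 800 in the logs below
  const remaining = Number(res.headers.get("Ratelimit-Remaining"));
  const resetAt = Number(res.headers.get("Ratelimit-Reset"));       // assumed Unix seconds
  const secondsUntilReset = Math.max(1, resetAt - Date.now() / 1000);

  // rate = (limit - remaining) * 60 / secondsUntilReset
  return ((limit - remaining) * 60) / secondsUntilReset;
}
```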
Also, the docs state: "Ratelimit-Limit — The rate at which points are added to your bucket" (and the value of Ratelimit-Limit is 800, as shown in the logs below).
However, we also document that using an 800/60 refill results in 429s (note: this is not due to one-off issues or network oddities; even a refill rate of 700/60 will eventually throw a 429 given a sufficient throughput of requests). Also, we document that bucket points are added at a constant flow rate, rather than in a lump at a long interval (ruling out explanations of needing to wait a full minute). Lastly, it is worth noting that this refill-rate asymmetry is perplexingly limited to Helix (the documented chat rate limits do not have an issue with the bucket algorithm we use).
How to reproduce
Try to make a couple thousand Helix API calls ("Get Users" is the endpoint used in the testing below) while abiding by 800/60. You will run into 429 errors.
If, instead, our rate-limiting code refills bucket points at 600/60, no 429s are thrown (note: 800 initial points is still correct).
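For reference, here is a minimal sketch (in TypeScript, not the exact limiter used in testing) of the kind of continuously refilling token bucket described above. Setting refillPerMinute to 800 eventually triggers 429s against Helix in practice, while 600 does not.

```ts
// Illustrative client-side token bucket with a continuous refill rate.
// Capacity of 800 matches the documented bucket size; the refill rate is
// the parameter under dispute (docs say 800/min, observed safe rate is 600/min).
class TokenBucket {
  private tokens: number;
  private lastRefill = Date.now();

  constructor(
    private readonly capacity: number = 800,
    private readonly refillPerMinute: number = 600,
  ) {
    this.tokens = capacity;
  }

  // Returns 0 if a request costing `cost` points may be sent now,
  // otherwise the number of milliseconds to wait first.
  take(cost = 1): number {
    const now = Date.now();
    this.tokens = Math.min(
      this.capacity,
      this.tokens + ((now - this.lastRefill) / 60000) * this.refillPerMinute,
    );
    this.lastRefill = now;

    if (this.tokens >= cost) {
      this.tokens -= cost;
      return 0;
    }
    return ((cost - this.tokens) / this.refillPerMinute) * 60000;
  }
}
```

The continuous refill here mirrors the documented behavior (points flow in steadily rather than resetting once per minute), which is why "wait a full minute" explanations do not account for the 429s.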
Expected behavior
One of the following should happen (preferably the former):
a) The bucket should refill at 800/60
b) The documentation and response headers should be changed to reflect a 600/60 refill rate
Additional context
Here are logs of the key rate-limit response headers from making 1600 Helix API requests using an algorithm that does not violate a 600/60 refill rate. Steady state is not reached until line 980, so I recommend scrolling down and making the following observations:
From line 980 to line 1600, 621 requests were made over 62 seconds. Doing 621 * 60 / 62 gives an implied rate of 600.97 points per 60 seconds (see the quick check after these observations).
During the period above, the Ratelimit-Remaining header (essentially) does not change. If Helix were refilling at 800/60, one would expect this header to be clearly growing.
Further, the seconds remaining until reset, according to the headers, is just under 60. Plugging that into the earlier formula implies a refill rate of 800/60. Since that refill rate is not accurate, the reset header is yielding bad data; the true time until the bucket is full would be greater than 60 seconds.
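As a quick check of the arithmetic above, using the steady-state numbers from the logged window:

```ts
// Steady-state window from the logs: 621 requests over 62 seconds.
const impliedPerMinute = (621 * 60) / 62;
console.log(impliedPerMinute.toFixed(2)); // "600.97", i.e. ~600/60 rather than the documented 800/60
```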
Others in the dev Discord have run into the same issue, and a staff member suggested I file this report.
I can confirm this issue is fixed; the Helix bucket now refills at the correct rate of 800 points per minute. (Staff informed me that a fix was deployed in Q4 2022.)