Ratelimits #273
Conversation
Handles the new ratelimit headers:
- X-RateLimit-Remaining
- X-RateLimit-Reset
- X-RateLimit-Global
Sorry about the big review, I really do appreciate this though, this looks like a huge improvement.
```go
bucketKey := ParseURL(path)
r.Lock()
```
Use `sync.RWMutex` and call `r.RLock`/`r.RUnlock` here.
Well getBucket will also modify the map if the bucket isn't there already
Ah, then the locking should be in there so the next time someone does r.getBucket() they don't need to replicate the correct locking scheme.
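The shape being agreed on here can be sketched as follows. This is a hypothetical reconstruction, not the merged code: `getBucket` owns the locking, so callers never replicate the locking scheme themselves.

```go
package main

import (
	"fmt"
	"sync"
)

// Bucket holds per-endpoint ratelimit state (simplified for this sketch).
type Bucket struct {
	Key string
}

// RateLimiter maps bucket keys to buckets.
type RateLimiter struct {
	sync.Mutex
	buckets map[string]*Bucket
}

// getBucket does its own locking, so no caller has to know the
// locking scheme; it lazily creates missing buckets.
func (r *RateLimiter) getBucket(key string) *Bucket {
	r.Lock()
	defer r.Unlock()

	b, ok := r.buckets[key]
	if !ok {
		b = &Bucket{Key: key}
		r.buckets[key] = b
	}
	return b
}

func main() {
	r := &RateLimiter{buckets: make(map[string]*Bucket)}
	b1 := r.getBucket("/guilds/")
	b2 := r.getBucket("/guilds/")
	fmt.Println(b1 == b2) // same key yields the same bucket
}
```

Because `getBucket` both reads and may write the map, a plain `sync.Mutex` is the simpler choice here; an `RWMutex` would only help if lookups vastly outnumbered bucket creation.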
```diff
@@ -17,6 +17,7 @@ import (
 	"sync"
 	"time"

+	"github.com/bwmarrin/discordgo/internal/ratelimit"
```
Just move ratelimit into the dgo directory, not sure we need to have an internal package.
```go
// RateLimiter holds all ratelimit buckets
type RateLimiter struct {
	sync.Mutex
	buckets map[string]*Bucket
}
```
Might be better to have:

```go
global *Bucket
```

instead of passing the rate limiter into a bucket; circular references feel wrong.
You mean each bucket having a reference to a global bucket instead of the ratelimiter?
Yeah exactly.
done
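The "global bucket" arrangement settled on above can be sketched like this (names are illustrative, not the exact merged code): every bucket shares a pointer to one global bucket instead of holding a back-reference to the `RateLimiter`.

```go
package main

import (
	"fmt"
	"sync"
)

// Bucket holds a pointer to the shared global bucket, avoiding a
// circular Bucket -> RateLimiter reference.
type Bucket struct {
	sync.Mutex
	Key    string
	global *Bucket
}

type RateLimiter struct {
	sync.Mutex
	global  *Bucket
	buckets map[string]*Bucket
}

func NewRatelimiter() *RateLimiter {
	global := &Bucket{Key: "global"}
	return &RateLimiter{
		global:  global,
		buckets: make(map[string]*Bucket),
	}
}

func (r *RateLimiter) getBucket(key string) *Bucket {
	r.Lock()
	defer r.Unlock()

	b, ok := r.buckets[key]
	if !ok {
		b = &Bucket{Key: key, global: r.global}
		r.buckets[key] = b
	}
	return b
}

func main() {
	r := NewRatelimiter()
	a := r.getBucket("/users/")
	b := r.getBucket("/guilds/")
	fmt.Println(a.global == b.global) // all buckets share one global bucket
}
```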
```go
// ParseURL parses the url, removing everything not relevant to identifying a bucket,
// such as minor variables.
func ParseURL(url string) string {
```
Is it possible to pass the endpoint into the ratelimiter in DGO? It seems unfortunate to construct a URL and then re-parse it with a regex to get the endpoint out.
You mean having request take another argument, the bucket, and have each request method be in charge of the locking of buckets?
It's a much bigger change, but I would add:

```go
RequestWithBucketID()
```

Doing that instead of changing `Request()` would be good for backwards compatibility's sake. That would allow you to modify the private `request()` to have a nilable bucket ID.
Was this how you meant?
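The backwards-compatible split described above could look roughly like this. Signatures are simplified (no HTTP plumbing) and illustrative only: the old `Request` derives a bucket ID from the URL, while `RequestWithBucketID` passes it explicitly to a shared private worker.

```go
package main

import (
	"fmt"
	"strings"
)

// request is the private worker; bucketID may be empty ("nilable").
func request(method, urlStr, bucketID string) string {
	if bucketID == "" {
		// Fall back to deriving the bucket from the URL path.
		bucketID = strings.SplitN(urlStr, "?", 2)[0]
	}
	return fmt.Sprintf("%s %s -> bucket %s", method, urlStr, bucketID)
}

// Request keeps the old signature so existing callers don't break.
func Request(method, urlStr string) string {
	return request(method, urlStr, "")
}

// RequestWithBucketID lets REST wrappers pass the bucket explicitly,
// avoiding the URL re-parse.
func RequestWithBucketID(method, urlStr, bucketID string) string {
	return request(method, urlStr, bucketID)
}

func main() {
	fmt.Println(Request("GET", "/users/123"))
	fmt.Println(RequestWithBucketID("GET", "/users/123", "/users/"))
}
```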
```diff
@@ -0,0 +1,131 @@
+package ratelimit
```
Thank you! 👍
```go
// If we ran out of calls and the reset time is still ahead of us
// then we need to take it easy and relax a little
for b.remaining < 1 && b.reset.After(time.Now()) {
	time.Sleep(b.reset.Sub(time.Now()))
```
I would like to figure out a way to rate limit without sleeping, however I realise that this is still an improvement, so let's keep that in mind for the future.
Hmm yeah making a ratelimiter based on channels instead sounds like a fun challenge :D
```go
// If we ran out of calls and the reset time is still ahead of us
// then we need to take it easy and relax a little
for b.remaining < 1 && b.reset.After(time.Now()) {
```
Does this need to be a for loop? It looks like you're sleeping enough to be fine.
it doesn't need to be a for loop, no
```go
}

// Lock and unlock to check for global ratelimits after sleeping
r.Lock()
```
Why is this dance required?
After sleeping, another request might have triggered a global ratelimit
Hmm, I worry about using mutexes for synchronization. Is there not a way to detect and sleep for max(bucketTime, globalTime)?
Well the problem is that while there may not be any global ratelimit when it started sleeping, there may be one after sleeping
```go
}

// Add a second to account for time desync and such
b.reset = time.Unix(unix, 0).Add(time.Second)
```
Also, can you play with this value? You mentioned a few milliseconds don't work; what about `100*time.Millisecond`, 200, or 500?
I've tried everything up to 700ms, and 1s was the only thing that worked 100%. It's strange because the retry-after duration is always below 50ms. I'm currently doing some tests with 800 and then trying 900.
Some data I've collected about this (a 429 roughly every N requests):

- with no added time: around every ~15 requests
- with 250ms: every ~100 requests
- with 500ms: every ~500 requests
- with 800ms: every ~1000 requests
- with 900ms: every ~2000 requests

Haven't encountered one with 1s yet; gonna let the test run for a while. These are just rough averages done in my head, not very accurate.

My thoughts on causes for this:

- system time desync between me and Discord, though my time is synced up using Arch's NTP servers
- unix time rounding on Discord's end (if they store the time in a higher precision than seconds, but round the time down to seconds when sending it to us)

I'll do some more testing on a VPS.
That's sufficient, 1s seems reasonable.
Now each request function will need to specify the bucket ID if the endpoint contains minor variables.
```diff
@@ -269,7 +251,7 @@ func (s *Session) Logout() (err error) {
 // userID : A user ID or "@me" which is a shortcut of current user ID
 func (s *Session) User(userID string) (st *User, err error) {

-	body, err := s.Request("GET", EndpointUser(userID), nil)
+	body, err := s.RequestWithBucketID("GET", EndpointUser(userID), nil, EndpointUser(""))
```
I would use `EndpointUsers` instead, same with `EndpointGuilds` etc. Then you don't need to split the strings or do any modifications.
I updated some of them to this. Did you mean to avoid the `strings.SplitN` call when using `Request` as well?

Latest commit makes it use the Date header instead of requiring synced time; also lowered the extra time padding down to 250ms with that.
Yeah exactly, just pass in the bucket ID directly, I suggest using …
So just to be 100% sure, replace stuff like

```go
body, err := s.Request("GET", EndpointGuild(guildID), nil)
```

with

```go
body, err := s.RequestWithBucketID("GET", EndpointGuild(guildID), nil, EndpointGuild(guildID))
```

?
```go
body, err := s.RequestWithBucketID("GET", EndpointGuild(guildID), nil, …
```
Then requests for different guilds would be rate limited with the same bucket; guildID is a major variable, so they should be rate limited on their own according to the docs.
I see, I'm trying to remove … If so, looks good to me :) 👍
The `strings.SplitN` is in `Request(..)` for backwards compatibility. You mean like `s.RequestWithBucketID("GET", EndpointGuild(guildID), nil, EndpointGuild(guildID))`? If so, yeah, that won't call `strings.SplitN`.
Yeah I don't think we can avoid it in Request.
dunso
Sweet dude
```go
type Bucket struct {
	Key string

	mu sync.Mutex
```
Do:

```go
type Bucket struct {
	sync.Mutex
}
```

Then you can simply call:

```go
bucket.Lock()
```
```go
}

// Check for global ratelimits
r.global.mu.Lock()
```
With the comment below, this can be changed to:

```go
r.global.Lock()
```
```go
// and locks up the whole thing in case there's a global ratelimit.
func (b *Bucket) Release(headers http.Header) error {

	defer b.mu.Unlock()
```
`b.Unlock()` with the comment above.
```go
// where n is the amount of requests that were going on
sleepTo := time.Now().Add(time.Duration(parsedAfter) * time.Millisecond)

b.global.mu.Lock()
```
etc. :)
```go
	time.Sleep(sleepDuration)
}

b.global.mu.Unlock()
```
etc.
Also done

Sorry dude, one last request, could you add Buckets for Reactions/Webhooks <3

finito

Phew! Thanks so much dude. This is awesome! <3

@Distortions81 where exactly is the issue in this change with respect to that?
* Added ratelimiter. Handles the new ratelimit headers: X-RateLimit-Remaining, X-RateLimit-Reset, X-RateLimit-Global
* Pad only reset time with a second
* Moved ratelimiter out of internal package
* Change for loop, move global ratelimit check inside sleep check
* Moved ratelimiter locking to getBucket
* Added global bucket
* Changed how bucket IDs are done. Now each request function will need to specify the bucket ID if the endpoint contains minor variables
* Allow empty bucketID in request
* Remove some unneeded Endpoint* function calls
* Added test for global ratelimits
* Fixed a silly little mistake causing incorrect ratelimits
* Update test comments, fixed treating an endpoint as 2 in ratelimiting
* Use Date header from Discord instead of relying on sys time sync
* Update all REST functions to use RequestWithBucketID
* Embed mutex into bucket
* Added webhook and reaction buckets
Handles the new ratelimit headers. I stuck it in an internal package, hope that's okay. This is probably not the best way to do the removal of minor URL variables; the alternative is to redo the whole endpoint system.

As for the reliability of this: after sending around 1k requests non-stop, I didn't encounter a single 429. I pad the reset time with 1 second because if I don't, it will still occasionally run into 429s of the size 1ms to 50ms (yeah, I got ratelimited for 1 ms a lot...); padding with anything less than 1 second would still give those tiny ratelimits sometimes.
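The minor-variable removal described above can be sketched like this. The regex and rules are illustrative, not the exact merged code: snowflake IDs are dropped from the path unless they follow a major variable (guilds, channels, webhooks), so requests that differ only by a minor ID share one bucket while each guild/channel gets its own.

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

var (
	// Major variables keep their IDs; everything else is a minor variable.
	majorVars = regexp.MustCompile(`^(guilds|channels|webhooks)$`)
	// Discord snowflake IDs are long numeric strings.
	snowflake = regexp.MustCompile(`^[0-9]{16,}$`)
)

// ParseURL strips minor-variable IDs from the path so it can serve
// as a ratelimit bucket key.
func ParseURL(path string) string {
	parts := strings.Split(path, "/")
	prev := ""
	for i, p := range parts {
		if snowflake.MatchString(p) && !majorVars.MatchString(prev) {
			parts[i] = "" // minor variable: drop the ID, keep the slash
		}
		prev = parts[i]
	}
	return strings.Join(parts, "/")
}

func main() {
	// Different users share a bucket; different guilds do not.
	fmt.Println(ParseURL("/users/123456789012345678"))
	fmt.Println(ParseURL("/guilds/123456789012345678/members/987654321098765432"))
}
```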