
Ratelimits #273

Merged
merged 17 commits into from
Nov 4, 2016
Conversation

jogramming
Contributor

Handles the new ratelimit headers

  • X-RateLimit-Remaining
  • X-RateLimit-Reset
  • X-RateLimit-Global

I stuck it in an internal package, hope that's okay.
This is probably not the best way to handle the removal of minor URL variables; the alternative is to redo the whole endpoint system.

As for reliability: after sending around 1k requests non-stop, I didn't encounter a single 429.

I pad the reset time with 1 second, because if I don't it will still occasionally run into 429s on the order of 1ms to 50ms (yeah, I got ratelimited for 1ms a lot...); padding with anything less than 1 second would still produce those tiny ratelimits sometimes.

Handles the new ratelimit headers
 - X-RateLimit-Remaining
 - X-RateLimit-Reset
 - X-RateLimit-Global
Collaborator

@iopred iopred left a comment


Sorry about the big review, I really do appreciate this though, this looks like a huge improvement.


bucketKey := ParseURL(path)

r.Lock()
Collaborator

Use sync.RWMutex and call r.RLock/r.RUnlock here

Contributor Author

Well getBucket will also modify the map if the bucket isn't there already

Collaborator

Ah, then the locking should be in there so the next time someone does r.getBucket() they don't need to replicate the correct locking scheme.
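The point about centralizing the locking might look roughly like this. `Bucket` and `RateLimiter` here are simplified stand-ins for the PR's types, not the actual code:

```go
package main

import (
	"fmt"
	"sync"
)

// Bucket and RateLimiter are simplified sketches of the types under
// review; the actual PR fields may differ.
type Bucket struct {
	sync.Mutex
	Key string
}

type RateLimiter struct {
	sync.Mutex
	buckets map[string]*Bucket
}

// getBucket does its own locking, so callers never replicate the
// lock/check/insert dance. A plain Mutex (not RWMutex) is used because
// the map is mutated on a miss.
func (r *RateLimiter) getBucket(key string) *Bucket {
	r.Lock()
	defer r.Unlock()
	if b, ok := r.buckets[key]; ok {
		return b
	}
	b := &Bucket{Key: key}
	r.buckets[key] = b
	return b
}

func main() {
	r := &RateLimiter{buckets: make(map[string]*Bucket)}
	fmt.Println(r.getBucket("/guilds") == r.getBucket("/guilds")) // true: same bucket per key
}
```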

@@ -17,6 +17,7 @@ import (
"sync"
"time"

"github.com/bwmarrin/discordgo/internal/ratelimit"
Collaborator

Just move ratelimit into the dgo directory, not sure we need to have an internal package.

// RateLimiter holds all ratelimit buckets
type RateLimiter struct {
sync.Mutex
buckets map[string]*Bucket
Collaborator

Might be better to have:
global *Bucket

instead of passing the rate limiter into a bucket, circular references feel wrong.

Contributor Author

You mean each bucket having a reference to a global bucket instead of the ratelimiter?

Collaborator

Yeah exactly.

Contributor Author

done
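The shape agreed on here — each bucket holding a pointer to one shared global bucket instead of a back-reference to the rate limiter — might be sketched as follows; field names are illustrative, not necessarily the PR's:

```go
package main

import (
	"fmt"
	"sync"
)

// Each Bucket points at one shared global bucket rather than back at
// the RateLimiter, avoiding the circular reference discussed above.
type Bucket struct {
	sync.Mutex
	Key    string
	global *Bucket
}

type RateLimiter struct {
	sync.Mutex
	global  *Bucket
	buckets map[string]*Bucket
}

func NewRatelimiter() *RateLimiter {
	return &RateLimiter{
		global:  &Bucket{Key: "global"},
		buckets: make(map[string]*Bucket),
	}
}

func (r *RateLimiter) getBucket(key string) *Bucket {
	r.Lock()
	defer r.Unlock()
	if b, ok := r.buckets[key]; ok {
		return b
	}
	b := &Bucket{Key: key, global: r.global}
	r.buckets[key] = b
	return b
}

func main() {
	r := NewRatelimiter()
	fmt.Println(r.getBucket("/users").global == r.getBucket("/guilds").global) // true: one shared global
}
```

A global 429 can then be recorded once on the shared bucket and observed by every route bucket.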


// ParseURL parses the url, removing everything not relevant to identifying a bucket.
// such as minor variables
func ParseURL(url string) string {
Collaborator

Is it possible to pass the endpoint into the ratelimiter in DGO, it seems unfortunate to construct a URL, and then re-parse it with a regex to get the endpoint out.

Contributor Author

You mean having request take another argument, the bucket, and have each request method be in charge of the locking of buckets?

Collaborator

It's a much bigger change, but I would add:
RequestWithBucketId()

Doing that instead of changing Request() would be good for backwards compatibility's sake.

That would allow you to modify the private request() to have a nilable bucket ID.

Contributor Author

Was this how you meant?
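The backwards-compatible split being discussed could look roughly like this; `doRequest` is a hypothetical stand-in for the library's private request helper, and the signatures are simplified:

```go
package main

import (
	"fmt"
	"strings"
)

// doRequest is a hypothetical stand-in for the private request helper;
// here it just reports which bucket would be used.
func doRequest(method, urlStr, bucketID string) string {
	return bucketID
}

// Request keeps the old behaviour: derive the bucket ID from the URL
// by stripping the query string.
func Request(method, urlStr string) string {
	return doRequest(method, urlStr, strings.SplitN(urlStr, "?", 2)[0])
}

// RequestWithBucketID takes an explicit bucket ID and falls back to
// the old derivation when it's empty.
func RequestWithBucketID(method, urlStr, bucketID string) string {
	if bucketID == "" {
		bucketID = strings.SplitN(urlStr, "?", 2)[0]
	}
	return doRequest(method, urlStr, bucketID)
}

func main() {
	fmt.Println(Request("GET", "/guilds/123?limit=5"))                // /guilds/123
	fmt.Println(RequestWithBucketID("GET", "/guilds/123", "/guilds")) // /guilds
}
```

Existing callers of Request keep working unchanged, while new call sites can name their bucket explicitly.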

@@ -0,0 +1,131 @@
package ratelimit
Collaborator

Thank you! 👍

// If we ran out of calls and the reset time is still ahead of us
// then we need to take it easy and relax a little
for b.remaining < 1 && b.reset.After(time.Now()) {
time.Sleep(b.reset.Sub(time.Now()))
Collaborator

I would like to figure out a way to rate limit without sleeping; however, I realise this is still an improvement, so let's keep that in mind for the future.

Contributor Author

Hmm yeah making a ratelimiter based on channels instead sounds like a fun challenge :D


// If we ran out of calls and the reset time is still ahead of us
// then we need to take it easy and relax a little
for b.remaining < 1 && b.reset.After(time.Now()) {
Collaborator

Does this need to be a for loop? It looks like you're sleeping enough to be fine.

Contributor Author

@jogramming jogramming Oct 19, 2016

it doesn't need to be a for loop, no

}

// Lock and unlock to check for global ratelimits after sleeping
r.Lock()
Collaborator

Why is this dance required?

Contributor Author

After sleeping, another request might have triggered a global ratelimit

Collaborator

Hmm, I worry about using mutexes for synchronization. Is there not a way to detect and sleep for max(bucketTime, globalTime)?

Contributor Author

Well, the problem is that while there may not be a global ratelimit when it starts sleeping, there may be one by the time it wakes up.

}

// Add a second to account for time desync and such
b.reset = time.Unix(unix, 0).Add(time.Second)
Collaborator

Also, can you play with this value.. You mentioned a few milliseconds don't work, what about 100*time.Millisecond, 200? 500?

Contributor Author

I've tried everything up to 700ms, and 1s was the only value that worked 100%. It's strange because the retry-after duration is always below 50ms.

I'm currently doing some tests with 800ms and will then try 900ms.

Contributor Author

Some data I've collected about this:

with no added time: a 429 roughly every 15 requests
with 250ms: every ~100 requests
with 500ms: every ~500 requests
with 800ms: every ~1000 requests
with 900ms: every ~2000 requests

I haven't encountered one with 1s yet; I'm going to let the test run for a while.

These are just rough averages done in my head, not very accurate.

My thoughts on causes for this:

  • system time desync between me and Discord, though my time is synced using Arch's NTP servers.
  • Unix time rounding on Discord's end (if they store the time at higher precision than seconds, but round down to seconds when sending it to us)

I'll do some more testing on a VPS.

@iopred
Collaborator

iopred commented Oct 19, 2016

That's sufficient, 1s seems reasonable.

On Wed, Oct 19, 2016, jonas747 commented on this pull request, in internal/ratelimit/ratelimit.go (#273), on the retry-after/reset parsing:

// Prefer retryafter because it's more accurate with time sync and whatnot
if retryAfter != "" {
	parsedAfter, err := strconv.ParseInt(retryAfter, 10, 64)
	if err != nil {
		return err
	}
	b.reset = time.Now().Add(time.Duration(parsedAfter) * time.Millisecond)
} else if reset != "" {
	unix, err := strconv.ParseInt(reset, 10, 64)
	if err != nil {
		return err
	}
	// Add a second to account for time desync and such
	b.reset = time.Unix(unix, 0).Add(time.Second)

Now each request function will need to specify the bucket id if the
endpoint contains minor variables
@@ -269,7 +251,7 @@ func (s *Session) Logout() (err error) {
// userID : A user ID or "@me" which is a shortcut of current user ID
func (s *Session) User(userID string) (st *User, err error) {

body, err := s.Request("GET", EndpointUser(userID), nil)
body, err := s.RequestWithBucketID("GET", EndpointUser(userID), nil, EndpointUser(""))
Collaborator

I would use EndpointUsers instead, same with EndpointGuilds etc. Then you don't need to split the strings or do any modifications.

Contributor Author

I updated some of them to this. Did you mean to avoid the strings.SplitN call when using Request as well?

@jogramming
Contributor Author

The latest commit makes it use the Date header instead of requiring synced system time; that also let me lower the extra time padding down to 250ms.

@iopred
Collaborator

iopred commented Oct 25, 2016

Yeah exactly, just pass in the bucket ID directly. I suggest using EndpointUsers, EndpointGuilds etc. as the IDs since they are already strings.


@jogramming
Contributor Author

so just to be 100% sure, replace stuff like

body, err := s.Request("GET", EndpointGuild(guildID), nil)

with

body, err := s.RequestWithBucketID("GET", EndpointGuild(guildID), nil, EndpointGuild(guildID))

?

@iopred
Collaborator

iopred commented Oct 26, 2016

body, err := s.RequestWithBucketID("GET", EndpointGuild(guildID), nil, EndpointGuilds)


@jogramming
Contributor Author

Then requests for different guilds would be rate limited with the same bucket; guildID is a major variable, so each guild should be rate limited on its own according to the docs.
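To make the major/minor variable distinction concrete: major variables (like guildID) stay in the bucket key, while minor ones (like a message ID) are dropped. The helpers below are illustrative only, not discordgo's API:

```go
package main

import "fmt"

// guildBucket keeps guildID in the key: guildID is a major variable,
// so each guild gets its own bucket.
func guildBucket(guildID string) string {
	return "/guilds/" + guildID
}

// messageBucket drops the message ID: it's a minor variable, so all
// messages in a channel share one bucket.
func messageBucket(channelID string) string {
	return "/channels/" + channelID + "/messages/"
}

func main() {
	fmt.Println(guildBucket("111") == guildBucket("222"))     // false: separate buckets per guild
	fmt.Println(messageBucket("333") == messageBucket("333")) // true: one bucket per channel
}
```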

@iopred
Collaborator

iopred commented Oct 26, 2016

I see. I'm trying to remove bucketID = strings.SplitN(urlStr, "?", 2)[0] from RequestWithBucketID; will that be sufficient if you pass EndpointGuild(guildID) in?

If so, looks good to me :) 👍

@jogramming
Contributor Author

The strings.SplitN is in Request(..) for backwards compatibility, since that function doesn't take a separate bucket ID.

you mean like

s.RequestWithBucketID("GET", EndpointGuild(guildID), nil, EndpointGuild(guildID))

? If so, yeah, that won't call strings.SplitN.

@iopred
Collaborator

iopred commented Oct 26, 2016

Yeah I don't think we can avoid it in Request.


@jogramming
Contributor Author

done

Collaborator

@iopred iopred left a comment

Sweet dude

type Bucket struct {
Key string

mu sync.Mutex
Collaborator

Do:

type Bucket struct {
sync.Mutex
}

Then you can simply call:

bucket.Lock()

}

// Check for global ratelimits
r.global.mu.Lock()
Collaborator

With the comment below this can be changed to:

r.global.Lock()

// and locks up the whole thing in case if there's a global ratelimit.
func (b *Bucket) Release(headers http.Header) error {

defer b.mu.Unlock()
Collaborator

b.Unlock() with the comment above

// where n is the amount of requests that were going on
sleepTo := time.Now().Add(time.Duration(parsedAfter) * time.Millisecond)

b.global.mu.Lock()
Collaborator

etc. :)

time.Sleep(sleepDuration)
}

b.global.mu.Unlock()
Collaborator

etc.

@jogramming
Contributor Author

Also done

@iopred
Collaborator

iopred commented Nov 4, 2016

Sorry dude, one last request, could you add Buckets for Reactions/Webhooks <3

@jogramming
Contributor Author

finito

@iopred iopred merged commit 56b1907 into bwmarrin:develop Nov 4, 2016
@iopred
Collaborator

iopred commented Nov 4, 2016

Phew! Thanks so much dude. This is awesome! <3

@bwmarrin bwmarrin added this to the v0.15.0 milestone Apr 18, 2017
@CarsonHoffman
Collaborator

@Distortions81 where exactly is the issue in this change with respect to that?

ErikMcClure pushed a commit to ErikMcClure/discordgo that referenced this pull request Aug 4, 2020
* Added ratelimiter

Handles the new ratelimit headers
 - X-RateLimit-Remaining
 - X-RateLimit-Reset
 - X-RateLimit-Global

* Pad only reset time with a second

* Moved ratelimiter out of internal package

* Change for loop, move global ratelimit check inside sleep check

* Moved ratelimiter locking to getBucket

* Added global bucket

* Changed how bucket id's are done

Now each request function will need to specify the bucket id if the
endpoint contains minor variables

* Allow empty bucketID in request

* Remove some unneeded Endpoint* function calls

* Added test for global ratelimits

* Fixed a silly little mistake causing incorrect ratelimits

* Update test comments, fixed treating an endpoint as 2 in ratelimiting

* Use date header from discord instead of relying on sys time sync

* Update all REST functions to use RequestWithBucketID

* Embed mutex into bucket

* Added webhook and reaction buckets
@ashishjh-bst ashishjh-bst deleted the ratelimits branch March 21, 2022 12:48