Rate limit outgoing JOIN messages #3115
Conversation
I tried using a leaky-bucket rate limiter (or whatever the exact name is). I have no idea if this is the best approach, but it's better than nothing as of right now.
Also, as advised, the bucket is now a member of AbstractIrcServer:

> 11:51:38.116 YungLPR: zneix, whenever you use new directly, think twice
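For reference, a minimal leaky-bucket sketch in Qt-style C++ (class and member names are illustrative, not necessarily what this PR uses): a queue holds pending JOINs, a budget counts how many we may still send, and each spent unit is refilled after a cooldown via a timer.

```cpp
#include <QList>
#include <QObject>
#include <QString>
#include <QTimer>
#include <functional>
#include <utility>

class RatelimitBucket : public QObject
{
public:
    RatelimitBucket(int budget, int cooldownMs,
                    std::function<void(QString)> emitFunction,
                    QObject *parent = nullptr)
        : QObject(parent)
        , budget_(budget)
        , cooldownMs_(cooldownMs)
        , emitFunction_(std::move(emitFunction))
    {
    }

    // Queue one channel; send immediately if we still have budget.
    void send(QString channel)
    {
        this->queue_.append(std::move(channel));
        if (this->budget_ > 0)
        {
            this->sendOne();
        }
    }

private:
    // Pop one queued item, spend one budget unit, and schedule that
    // unit's refill (plus a drain attempt) once the cooldown elapses.
    void sendOne()
    {
        if (this->queue_.isEmpty())
        {
            return;
        }

        this->budget_--;
        this->emitFunction_(this->queue_.takeFirst());

        QTimer::singleShot(this->cooldownMs_, this, [this] {
            this->budget_++;
            this->sendOne();  // drain anything still waiting
        });
    }

    QList<QString> queue_;
    int budget_;
    const int cooldownMs_;
    std::function<void(QString)> emitFunction_;
};
```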
This solution currently doesn't handle JOINs from when a user opens a new split - let's make sure we solve that in the same PR
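A fix could route the split-triggered JOIN through the same bucket; a hypothetical sketch, with the method and member names assumed rather than taken from the PR:

```cpp
// Hypothetical: every channel join, including ones triggered by opening
// a new split, goes through the rate-limit bucket instead of writing
// the JOIN to the socket immediately.
void AbstractIrcServer::joinChannel(const QString &channelName)
{
    this->joinBucket_->send(channelName);
}
```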
I also decided to move the bucket declaration to where AbstractIrcServer itself is initialized, to make sure this->bucket_ is initialized at all times.
Should be resolved in 61b0f39.
This means we will not have to think about deleting it in a Qt-safe manner; rather, it's done for us with the `DeleteLater` deleter.
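For illustration, a deleter along these lines is enough (a sketch, assuming the `DeleteLater` name from the comment above; QObject::deleteLater() defers destruction until control returns to the event loop):

```cpp
#include <QObject>
#include <memory>

// Sketch of a Qt-safe deleter: instead of deleting immediately, ask the
// event loop to delete the object once pending events are processed.
struct DeleteLater
{
    void operator()(QObject *obj) const
    {
        obj->deleteLater();
    }
};

// The bucket can then be owned like this (hypothetical member):
// std::unique_ptr<RatelimitBucket, DeleteLater> bucket_;
```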
As part of this change, `budget_` now contains the number of calls we can still make before we have to wait for a fill-up. Because of this, we no longer need to store `limit_`; instead, we just use it to initially "fill up" the budget.
Co-authored-by: Tal Neoran <talneoran@gmail.com>
Works fine for me.
Looks and works as expected, but I have a couple of nitpicks.
Seems to fix the issue well; tested with 30+ channels and a few edge cases.
When testing, I found it's possible for a duplicate JOIN to be queued and sent. This happens if, while a channel is queued and on cooldown to JOIN, the split containing it is closed and reopened, causing the channel to be inserted into the queue again.
In our offline discussion we decided this case is rare enough that it can be ignored, at least for now; I'm leaving this here for documentation.
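A guard against this could look like the following (a hypothetical sketch reusing the names from the bucket sketch above; not part of this PR, since the case was deemed ignorable for now):

```cpp
// Hypothetical guard: skip enqueueing a channel whose JOIN is already
// pending, so closing and reopening a split can't queue a duplicate.
void RatelimitBucket::send(QString channel)
{
    if (this->queue_.contains(channel))
    {
        return;  // a JOIN for this channel is already queued
    }

    this->queue_.append(std::move(channel));
    if (this->budget_ > 0)
    {
        this->sendOne();
    }
}
```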
I've been using this change and haven't run into any issues with it.
After testing with 80 channels, this works well for me.
We have some further UX work to do, but this is a great first step.
Changelog is good, functionality works as well, tested with 86 channels. 👍
Now we're on commit de4f6a9. Changes from upstream we've pulled:

- Major: Fixed constant disconnections with more than 20 channels by rate-limiting outgoing JOIN messages. (Chatterino#3112, Chatterino#3115)
Pull request checklist:

- CHANGELOG.md was updated, if applicable

Description
Added experimental rate-limiting for outgoing JOIN messages, imitating a leaky-bucket rate limiter. Huge thanks to @leon-richardt for helping me figure out the theory.
Tested it out and Twitch does not kill our connection: we conform to Twitch's limit of 20 JOIN messages per 10 s by sending at most 18 JOIN messages per 10.5 s.
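To make those numbers concrete, here's a sketch of how the bucket could be wired up (the member name, the call site, and `sendRawMessage` are assumptions for illustration, not necessarily this PR's exact code):

```cpp
// Sketch: staying under Twitch's documented 20 JOINs / 10 s by allowing
// at most 18 JOINs per 10.5 s window.
this->joinBucket_ = std::unique_ptr<RatelimitBucket, DeleteLater>(
    new RatelimitBucket(18, 10500, [this](QString channel) {
        this->sendRawMessage("JOIN #" + channel);
    }));
```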
This might not be the best solution, and I believe there could be a better one, but after about 24 hours of thinking this is the sanest approach I could come up with; given how important this bug is, I think it's better than nothing.
~~There are still debug prints in the code, to make it easier for reviewers to test. After the PR is approved I'll remove them.~~ Already removed.

Closes #3107