Add an implicit rate limiter to help with bulk-operation security (Tested) #327
Conversation
Fix serialized name for api key owner (#267)
Just saying, this is ready for review
Besides that, I think it's pretty good (idk about the code)
A more efficient way of doing this would take into account the number of requests to the Hypixel API left for the current minute, and the time it takes for the Hypixel API to reset its counters. Both values are returned by Hypixel when sending a request (#351). It would be less prone to bad requests caused by terminating and restarting a program within the same "Hypixel minute". Secondly, it would be more efficient because you wouldn't have to wait a full minute between the moment you go over your limit and the moment you can send again, since Hypixel resets its limit at a fixed time (the clocks would be synchronized).

This makes things a lot more complex, and I have tried to find a good solution to account for it. I used the Project Reactor library for scheduling and blocking, but all outside of the Hypixel API; this PR gives me a new idea of implementing it inside the API (I will have to publish that code as well). If you would like to find a solution for this as well, please go ahead. I am more than interested in what you would find!
I actually did consider doing this at first (the request-recorder idea, that is). Though I believe a better way of doing such a thing would be to make a preliminary background request to the API and strip the headers for the requests left and the time left. Then, with that information, we could initialize the RateLimiter with a starting period of however many seconds are left in the minute, and a starting "current minute value" using the number of requests left.
That would indeed be a better solution, provided the API stays consistent with its resets, of course (which I don't know we're guaranteed).
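For illustration, here is a rough sketch of that bootstrap idea: make one cheap priming request and read the remaining-request and reset headers to seed the limiter. The endpoint, the header names (`RateLimit-Remaining`, `RateLimit-Reset`), the fallback values, and the `RateLimiter` constructor at the end are all assumptions for this sketch, not code from this PR or guarantees about the Hypixel API.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hypothetical bootstrap: make one priming request and read the rate-limit
// headers so the limiter starts in sync with the API's own window.
public final class RateLimiterBootstrap {

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Assumes an API key is available in the environment; the endpoint and
        // header names below are assumptions made for this sketch.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.hypixel.net/key"))
                .header("API-Key", System.getenv("HYPIXEL_API_KEY"))
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // Fallbacks (120 requests, 60 s window) are assumptions, not documented values.
        int remaining = response.headers()
                .firstValue("RateLimit-Remaining").map(Integer::parseInt).orElse(120);
        int secondsUntilReset = response.headers()
                .firstValue("RateLimit-Reset").map(Integer::parseInt).orElse(60);

        System.out.printf("Seeding limiter: %d requests left, window resets in %ds%n",
                remaining, secondsUntilReset);
        // e.g. new RateLimiter(remaining, secondsUntilReset)  // hypothetical constructor
    }
}
```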
Looks good, but I'm not sure what the best approach will be for including this with the intended changes for 4.0.0 (see #399), since we'd want to include it with that release.
Maybe we add it as an option to the transporters? Honestly not sure what's best, so open to suggestions.
import static java.util.concurrent.TimeUnit.MINUTES;
public class RateLimiter {
Should make sure to format this class similarly to the other classes in the project; using 4 spaces for indentation instead of tabs is the main thing.
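For reviewers who haven't opened the diff, the class being discussed is essentially a fixed-window, permits-per-minute limiter. The sketch below is an illustration with assumed names (`SimpleRateLimiter`, a hard-coded one-minute window), not the PR's actual class:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;

import static java.util.concurrent.TimeUnit.MINUTES;

// Illustrative fixed-window limiter: allow `limit` permits per minute and
// block callers once the window is exhausted until the scheduled reset runs.
public class SimpleRateLimiter {

    private final int limit;
    private int used;
    private final Object lock = new Object();
    private final ScheduledExecutorService resetter =
            Executors.newSingleThreadScheduledExecutor();

    public SimpleRateLimiter(int limit) {
        this.limit = limit;
        // Reset the counter once per minute; real code would sync this with the API's clock.
        resetter.scheduleAtFixedRate(this::reset, 1, 1, MINUTES);
    }

    /** Blocks until a permit is available in the current one-minute window. */
    public void acquire() throws InterruptedException {
        synchronized (lock) {
            while (used >= limit) {
                lock.wait();
            }
            used++;
        }
    }

    private void reset() {
        synchronized (lock) {
            used = 0;
            lock.notifyAll();
        }
    }

    public void shutdown() {
        resetter.shutdownNow();
    }
}
```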
In #425 I wrote a transport that would fulfill just this purpose: after every reset, the first request is used to sync the clock. It also dynamically learns how many requests can be sent by sending a request and using the rate-limit response headers.
Please explain to me why the approach you created is better than simply refining the approach I've created, which makes an initial request to start the RateLimiter on. We could even use the field for seconds left to make the reset timer begin at a more synchronized time. This approach, if used as intended, doesn't even block.
I'm not saying my approach is better than yours; I realize only now that we made different things. My approach replaces getting the actual requests (intended for the 4.0.0 update), whilst yours is something built around getting the requests. So it's not the same, my bad. I do like the idea of automatic rate limiting like this implemented in the API, and you do have a good solution for it.

What I like so much about my approach, though, is that if I send a lot of requests that need to wait, I can cancel the ones I might no longer be interested in, adding to the efficiency. Something else I like about it is that you don't need to specify a limit; it will automatically deduce this from the headers. The staying-in-sync part is indeed close to over-engineering, but it is necessary for automatic limit deduction, especially if the code runs for a long time (which, again, I must admit wouldn't always happen). A final thing I could say about mine is that I can make it use CompletableFuture#cancel(boolean mayInterruptIfRunning).

It would be great if both could be implemented somehow. I'm definitely not saying they should prefer my "solution" over your solution, as we tackle different aspects of the same problem.
Canceling the CompletableFuture would still send the request to the API.
Not if you cancel it before a thread is freed to execute it in the thread pool, which shouldn't be an issue if you're making enough requests for them to begin blocking (and therefore have time to cancel). I really don't see why you'd want to cancel a request other than for timeout purposes (in which case the task would likely either still be frozen in the thread-pool queue or genuinely be taking too long to respond, two cases which would cancel safely), and I'm sure a programmer should be smart enough not to make requests they'll have to cancel immediately.
Canceling the CompletableFuture won't cancel the request from getting sent in the code; go over it yourself. It will just get sent whenever a thread becomes available.
That is true, but this is about limits and making sure the code is as efficient as possible at its limits, even if they may not be reached 90% of the time. And sometimes you really don't know whether you would be better off canceling something or not; it's mostly a convenience feature allowing the developer some flexibility.
Efficiency is fine and all, yes. In the end, though, your solution depends on an entirely new framework (which I didn't even know existed, so reading your code was fun), and creates a lot more objects.
Can we not put the
Isn't that why mine is implemented as a transport you choose to use or not? I'm much more in favour of your solution being in api-core by default (but toggleable). I like this library because I'm familiar with it.
Well, yes, but this isn't any less optional.
I agree, adding request-canceling support will be tricky though
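To make the cancellation point concrete, here is a small stand-alone demo (not code from this PR). With a plain `CompletableFuture.supplyAsync`, cancelling before a worker thread picks the task up prevents the supplier body from running at all (in practice, with the OpenJDK implementation), while `cancel(true)` never interrupts a supplier that has already started, since `CompletableFuture` ignores the `mayInterruptIfRunning` flag. Whether the actual HTTP request gets skipped therefore depends on how the future is wired to the transport.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class CancelDemo {

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();

        // Occupy the only worker so the next task has to wait in the queue.
        CompletableFuture<Void> busy =
                CompletableFuture.runAsync(CancelDemo::sleepTwoSeconds, pool);

        // This "request" is queued behind the busy task.
        CompletableFuture<String> queued = CompletableFuture.supplyAsync(() -> {
            System.out.println("queued supplier ran (the request would have been sent)");
            return "response";
        }, pool);

        // Cancel while it is still waiting in the queue; the supplier body above
        // never executes, so nothing would hit the API.
        queued.cancel(true);
        System.out.println("queued.isCancelled() = " + queued.isCancelled());

        busy.join();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }

    private static void sleepTwoSeconds() {
        try {
            Thread.sleep(2000);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```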
Do you have Discord? This is way too many notifications
You can manage GitHub notifications at https://github.com/notifications if they're too noisy for you!
No, I know. I was talking about everyone else we were spamming, and wanted to chat with him privately.
LGTM - Nothing seems out of the ordinary here
Fair enough, though in all fairness they should be controlling the notifications they receive themselves, and the point of comments is, well, commenting. That's a good point about privacy, though.
I'm not sure that having this always enabled and in the core is the best approach. Having new transports such as #425 with rate limiting built in, or having an option in the other transports, could make the most sense if this is something you still think could be useful. Feel free to re-open this PR if you update it to support the changes in 4.0, etc.
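As a sketch of what an opt-in transport option could look like (the `Transport` interface and the names below are hypothetical, purely to make the idea concrete; they are not this project's API):

```java
// Hypothetical minimal interface, only to make the sketch self-contained;
// the real project defines its own transport abstraction.
interface Transport {
    String send(String url) throws Exception;
}

// Decorator that adds opt-in rate limiting around any existing transport,
// reusing the SimpleRateLimiter sketch shown earlier in this thread.
class RateLimitedTransport implements Transport {

    private final Transport delegate;
    private final SimpleRateLimiter limiter;

    RateLimitedTransport(Transport delegate, int requestsPerMinute) {
        this.delegate = delegate;
        this.limiter = new SimpleRateLimiter(requestsPerMinute);
    }

    @Override
    public String send(String url) throws Exception {
        limiter.acquire();            // block until a permit is free in this window
        return delegate.send(url);    // then forward to the wrapped transport
    }
}
```

A caller who does not want rate limiting simply keeps using the undecorated transport.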
Issue #316
Felt like doing it myself
Edit: Class fully tested
I ran this test case multiple times. It is 100% consistent with my expectations.
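The test case itself is not reproduced in this thread. Purely as an illustration of the kind of check such a test might perform, here is a smoke test against the sketch limiter shown earlier (not the PR's actual test):

```java
// Illustrative smoke test, not the PR's actual test case: with a limit of
// 5 permits per minute, the 6th acquire() should block until the window resets.
public class RateLimiterSmokeTest {

    public static void main(String[] args) throws Exception {
        SimpleRateLimiter limiter = new SimpleRateLimiter(5);

        long start = System.nanoTime();
        for (int i = 0; i < 6; i++) {
            limiter.acquire();
        }
        long elapsedSeconds = (System.nanoTime() - start) / 1_000_000_000L;

        // The first five permits are immediate; the sixth waits for the scheduled reset,
        // so the elapsed time should be roughly the length of one window (about a minute).
        System.out.println("6 acquires took ~" + elapsedSeconds + "s");
        limiter.shutdown();
    }
}
```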