@HydroPage (Contributor) commented Nov 17, 2020

Issue #316

Felt like doing it myself

Edit: Class fully tested
I ran this test case multiple times. It is 100% consistent with my expectations.

// Imports assumed for this test (JUnit 4 shown; adjust if the project uses JUnit 5)
import static java.util.concurrent.Executors.newCachedThreadPool;
import static org.junit.Assert.assertEquals;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.atomic.AtomicInteger;

import org.junit.Test;

public class RateLimiterTest
{
    private static final int TEST_LIMIT = 120;
    private static final int TEST_DEVIATION = 30;
    private final ExecutorService testAsyncPool = newCachedThreadPool();

    private final RateLimiter testInstance = new RateLimiter(TEST_LIMIT);
    private final AtomicInteger actionPassCounter = new AtomicInteger();

    /**
     * If the action is attempted N times at once, where N < LIMIT, all should pass
     */
    @Test
    public void testAllPassBelowLimit() throws InterruptedException
    {
        int underLimit = TEST_LIMIT - TEST_DEVIATION;

        for (int i = 0; i < underLimit; i++)
        {
            testAsyncPool.submit(this::limitedIncrementCounter);
        }

        Thread.sleep(2000); // give the pool time to run all submitted tasks
        assertEquals(underLimit, actionPassCounter.intValue());
    }

    /**
     * If the action is attempted N times, where N > LIMIT, LIMIT of them should pass and N - LIMIT threads should block
     */
    @Test
    public void testNotAllPassOverLimit() throws InterruptedException
    {
        int overLimit = TEST_LIMIT + TEST_DEVIATION;

        for (int i = 0; i < overLimit; i++)
        {
            testAsyncPool.submit(this::limitedIncrementCounter);
        }

        Thread.sleep(2000); // give the pool time to run all submitted tasks
        assertEquals(TEST_LIMIT, actionPassCounter.intValue());
    }

    private void limitedIncrementCounter()
    {
        try
        {
            testInstance.beforeAction();
            actionPassCounter.incrementAndGet();
        }
        catch (InterruptedException e)
        {
            Thread.currentThread().interrupt();
        }
    }
}
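
For reference, here is a minimal sketch of a limiter that would satisfy the expectations above. This is not the class under review, just an illustrative Semaphore-based version with a once-per-minute reset:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

// Illustrative sketch only; the PR's actual RateLimiter may differ.
public class RateLimiter
{
    private final Semaphore permits;
    private final ScheduledExecutorService resetTimer =
            Executors.newSingleThreadScheduledExecutor();

    public RateLimiter(int limit)
    {
        this.permits = new Semaphore(limit);

        // Once per minute, discard any leftover permits and refill the quota
        resetTimer.scheduleAtFixedRate(() ->
        {
            permits.drainPermits();
            permits.release(limit);
        }, 1, 1, TimeUnit.MINUTES);
    }

    // Blocks the calling thread once the per-minute quota is exhausted
    public void beforeAction() throws InterruptedException
    {
        permits.acquire();
    }
}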

[screenshot: test results]

@HydroPage (Contributor, Author)

Just saying, this is ready for review

@byronbytes

Besides that, I think it's pretty good (idk about the code)

@HydroPage changed the title from "Add an implicit rate limiter to help with bulk-operation security" to "Add an implicit rate limiter to help with bulk-operation security (Tested)" on Dec 7, 2020
@GrizzlT (Contributor) commented Apr 22, 2021

A more efficient way of doing this would take into account the amount of requests to the Hypixel API left for the current minute, and the time it takes for the Hypixel API to reset its counters. These two values are both returned by Hypixel when sending a request (#351). It would be less prone to returning bad requests caused by terminating and restarting a program within the same "Hypixel minute". Secondly, it would be more efficient because it would not have to wait a full minute between the moment you go over your limit and the moment you can send again, since Hypixel resets their limit at a fixed time (the clocks would be synchronized).

This makes things a lot more complex, and I have tried to find a good solution to account for it. I used the Project Reactor library for scheduling and blocking, but all outside of the Hypixel API; this PR gives me a new idea of implementing it inside the API (I will have to publish that code as well). If you would like to find a solution for this as well, please go ahead. I am more than interested in what you would find!

@HydroPage (Contributor, Author)

A more efficient way of doing this would take into account the amount of...

I actually did consider doing this at first. The request recorder thing, that is. Though, I believe a better way of doing such a thing would be to make a primary background request to the API and strip the headers for the requests left and the time left. Then, with that information, we could initialize the RateLimiter with a starting period of however many seconds are left in the minute, and a starting "current minute value" using the number of requests left.
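
Roughly what I mean, as a sketch only; the header names and the two-argument constructor here are assumptions for illustration, not the library's actual API:

import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hypothetical bootstrap that seeds the limiter from one initial request's headers.
public final class RateLimiterBootstrap
{
    public static RateLimiter fromInitialRequest(HttpClient client, URI endpoint)
            throws IOException, InterruptedException
    {
        HttpResponse<Void> response = client.send(
                HttpRequest.newBuilder(endpoint).build(),
                HttpResponse.BodyHandlers.discarding());

        // Requests left in the current "Hypixel minute" (assumed header name)
        int requestsLeft = response.headers()
                .firstValue("RateLimit-Remaining").map(Integer::parseInt).orElse(120);

        // Seconds left before the counter resets (assumed header name)
        int secondsLeft = response.headers()
                .firstValue("RateLimit-Reset").map(Integer::parseInt).orElse(60);

        // Hypothetical two-argument constructor: start with the remaining quota
        // and a shortened first period to line up with Hypixel's reset
        return new RateLimiter(requestsLeft, secondsLeft);
    }
}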

@GrizzlT (Contributor) commented Apr 22, 2021

A more efficient way of doing this would take into account the amount of...

I actually did consider doing this at first. The request recorder thing, that is....

That would indeed be a better solution, if the API stays consistent with its resets of course (which I don't know that we're guaranteed).

@ConnorLinfoot (Member) left a comment

Looks good, but I'm not sure what the best approach will be for including this with the intended changes for 4.0.0 (see #399), since we'd want to include it with that release.

Maybe we add it as an option to the transporters? Honestly, I'm not sure what's best, so I'm open to suggestions.


import static java.util.concurrent.TimeUnit.MINUTES;

public class RateLimiter {

Member (inline review comment):

Should make sure to format this class similarly to the other classes in the project; the main thing is using 4 spaces for indentation instead of tabs.

@ConnorLinfoot added this to the v4.0.0 milestone on May 10, 2021
@GrizzlT (Contributor) commented May 11, 2021

In #425 I wrote a transport that would fulfill just this purpose: after every reset, the first request that follows is used to sync the clock. It also dynamically determines how many requests can be sent by sending a request and reading the ratelimit-remaining header. The advantage of my method is that whatever key the user is using, it'll support maximum efficiency.

@HydroPage (Contributor, Author) commented May 11, 2021

In #425 I wrote a transport that would fulfill just this purpose...

Please explain to me why the approach you created is better than simply refining this approach I've created to make an initial request to start the RateLimiter from. We could even use the seconds-left field to make the reset timer begin at a more synchronized time. This approach, if used as intended, doesn't even block: the beforeAction() method is called before the request is made inside Hypixel's get method for making requests, inside the CompletableFuture submission. Why can't we simply build this into the new system to do the same thing? Yes, your method is synchronized to Hypixel's clock using request headers, but I don't see why synchronization is inherently necessary as long as the rate of requesting is kept, and, like I said, my method can even be refactored to be synchronized to Hypixel with an initial request. Your approach seems overengineered for such a simple API.
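
To illustrate where that call sits, a rough sketch; the class and method names here are placeholders, not the library's actual transport code:

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Placeholder client showing the intended placement of beforeAction(); not the real transport.
public class RateLimitedClient
{
    private final RateLimiter rateLimiter = new RateLimiter(120);
    private final ExecutorService executorService = Executors.newCachedThreadPool();

    public CompletableFuture<String> get(String endpoint)
    {
        CompletableFuture<String> future = new CompletableFuture<>();

        executorService.submit(() ->
        {
            try
            {
                // Block inside the submitted task until we are under the per-minute limit
                rateLimiter.beforeAction();

                // Only now is the HTTP request actually sent
                future.complete(sendRequest(endpoint));
            }
            catch (InterruptedException e)
            {
                Thread.currentThread().interrupt();
                future.completeExceptionally(e);
            }
        });

        return future;
    }

    private String sendRequest(String endpoint)
    {
        return "...response body..."; // stand-in for the real HTTP call
    }
}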

@GrizzlT (Contributor) commented May 11, 2021

Please explain to me why the approach you created is better than simply refining this approach...

I'm not saying my approach is better than yours; I realize only now that we made different things: my approach replaces how the actual requests are made (intended for the 4.0.0 update), whilst yours is something that is built around making the requests. So it's not the same, my bad.

I do like the idea of automatic rate limiting like this implemented in the API, and you do have a good solution for it.

What I like so much about my approach, though, is that if I send a lot of requests that need to wait, I should be able to cancel the ones that I might no longer be interested in, adding to the efficiency. Something else I like about it is that you don't need to specify a limit; it will automatically deduce this based on the headers. The staying-in-sync part is indeed close to over-engineering, but it's necessary for automatic limit deduction, especially if the code runs for a long time (which, again, I must admit wouldn't always happen). A final thing I could say about mine is that I can make it use the Project Reactor library directly, making it more convenient to use in a Project Reactor based environment.

It would be great if both could be implemented somehow. I'm definitely not saying they should prefer my "solution" over your solution, as we tackle different aspects of the same problem.

@HydroPage (Contributor, Author)

I should be able to cancel the ones that I might no longer be interested in, adding to the efficiency

CompletableFuture#cancel(boolean mayInterruptIfRunning)

@GrizzlT (Contributor) commented May 11, 2021

canceling the CompletableFuture would still send the request to the api

@HydroPage (Contributor, Author) commented May 11, 2021

canceling the CompletableFuture would still send the request to the api

Not if you cancel it before a thread is freed to execute it in the thread pool. If you're making enough requests to have time to do that (enough requests to have them begin blocking), this shouldn't be an issue. I really don't see why you'd want to cancel a request other than for timeout purposes (in which case the thread would likely either be frozen anyway in the thread pool queue or genuinely be taking too long to respond; two cases which would cancel safely), and I'm sure a programmer should be smart enough not to make requests he'll have to cancel immediately.

@GrizzlT (Contributor) commented May 11, 2021

Not if you cancel it before a thread is freed to execute it in the thread pool

Canceling the CompletableFuture won't cancel the request from getting sent in the code; go over it yourself. It will just get sent whenever a thread becomes available.
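
To make that concrete, a standalone sketch (not the library's actual code) of the pattern being discussed, where a task submitted to the pool sends the request and completes the future manually:

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class CancelDemo
{
    public static void main(String[] args) throws InterruptedException
    {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        CompletableFuture<String> future = new CompletableFuture<>();

        // Occupy the only thread so the "request" task has to wait in the queue
        pool.submit(() -> sleepQuietly(1000));

        // The queued task neither knows nor checks whether the future was cancelled
        pool.submit(() ->
        {
            System.out.println("request sent"); // stands in for the real HTTP call
            future.complete("response");
        });

        // Cancel while the task is still queued...
        future.cancel(true);

        // ..."request sent" is still printed once a thread frees up, because cancelling
        // the future does not remove or interrupt the task sitting in the executor's queue.
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }

    private static void sleepQuietly(long millis)
    {
        try
        {
            Thread.sleep(millis);
        }
        catch (InterruptedException e)
        {
            Thread.currentThread().interrupt();
        }
    }
}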

If you're making enough requests to have time to do that (enough requests to have them begin blocking), this shouldn't be an issue.

That is true, but this is about limits and making sure the code is as efficient as possible at its limits, even if they may not be reached 90% of the time. And sometimes you really don't know whether you'd be better off canceling something or not; it's mostly a convenience feature allowing some flexibility for the developer.

@HydroPage (Contributor, Author) commented May 11, 2021

Efficiency is fine and all, yes. In the end, though, your solution depends on an entirely new framework (which I didn't even know existed, so reading your code was fun) and creates a lot more objects.
Also:

It will just get sent whenever a thread becomes available.

Can we not put the beforeAction() call somewhere else to change this?

@GrizzlT (Contributor) commented May 11, 2021

your solution depends on an entirely new framework

Isn't that why mine is implemented as a transport you choose to use or not? I'm much more for your solution being in the api-core by default (but toggleable). I like this library because I'm familiar with it.

@HydroPage (Contributor, Author)

Well, yes, but this isn't any less optional. If the beforeAction() call can be put in the right place in the default transport, it's just a matter of adding an enabling flag to the default transport to determine whether it should be called or not. If the flag is true, pass through beforeAction(); if it's off, skip it.
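
Something along these lines, roughly; a sketch only, with placeholder class and method names rather than the actual transport API:

// Sketch of an opt-in flag on a hypothetical default transport; names are placeholders.
public class DefaultTransport
{
    private final RateLimiter rateLimiter;
    private final boolean rateLimitingEnabled;

    public DefaultTransport(int requestsPerMinute, boolean rateLimitingEnabled)
    {
        this.rateLimiter = new RateLimiter(requestsPerMinute);
        this.rateLimitingEnabled = rateLimitingEnabled;
    }

    public String makeRequest(String endpoint) throws InterruptedException
    {
        if (rateLimitingEnabled)
        {
            // Only block when the user has opted in to implicit rate limiting
            rateLimiter.beforeAction();
        }
        return send(endpoint);
    }

    private String send(String endpoint)
    {
        return "...response body..."; // stand-in for the real HTTP call
    }
}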

@GrizzlT (Contributor) commented May 11, 2021

I agree; adding request-canceling support will be tricky, though.

@HydroPage (Contributor, Author)

Do you have Discord? This is way too many notifications

@CyberFlameGO

Do you have Discord? This is way too many notifications

You can manage GitHub notifications at https://github.com/notifications if they’re too noisy for you!

@HydroPage (Contributor, Author) commented Jul 4, 2021 via email

@CyberFlameGO left a comment

LGTM - Nothing seems out of the ordinary here

@CyberFlameGO commented Jul 4, 2021 via email

@ConnorLinfoot removed this from the v4.0.0 milestone on Jul 26, 2021
@ConnorLinfoot (Member) commented Jul 26, 2021

I'm not sure if having this always enabled and in the core is the best approach. Having new transports such as #425, which have rate limiting built in, or having an option in the other transports could make the most sense, if this is something you still think could be useful.

Feel free to re-open this PR if you update it to support the changes in 4.0, etc.
