✨ Design how to bucket rate limiting by users/requesters/etc #44
Labels: area:devexp, component:ratelimiter, type:documentation, type:enhancement
Hyx's current rate limiter APIs were inspired by resilience4j/Polly, so each rate limiter instance handles a single rate limit "shard" (i.e. the level at which you apply rate limiting: per user, per user/request route, etc). As you can imagine, this is a SUPER low-level implementation that is almost certainly insufficient for regular use cases 😌
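To make the pain point concrete, here is a minimal sketch of the current model: one limiter instance per shard, with the sharding left entirely to the caller. The `TokenBucket` class below is a stand-in for illustration, not Hyx's actual API.

```python
import time


class TokenBucket:
    """Plain token bucket: `rate` tokens per second, up to `capacity` tokens."""

    def __init__(self, rate: float, capacity: float) -> None:
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.updated_at = time.monotonic()

    def try_acquire(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.updated_at) * self.rate)
        self.updated_at = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False


# With one instance per shard, bucketing by user is the caller's problem:
limiters: dict[str, TokenBucket] = {}


def allow_request(user_id: str) -> bool:
    limiter = limiters.setdefault(user_id, TokenBucket(rate=5, capacity=10))
    return limiter.try_acquire()
```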
What I have found is that people really need an API that integrates easily into the API framework of their choice (e.g. Flask, Starlette, FastAPI) and supports many shards out of the box. With such an implementation in place, we would cover the most common and basic rate limiting needs.
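One possible shape for that higher-level API is sketched below: a keyed limiter that owns all shards internally and plugs into FastAPI as a dependency. The `KeyedTokenBucket` name, the key function, and the header used for the key are assumptions for illustration only, not an agreed-upon Hyx interface.

```python
import time
from collections import defaultdict

from fastapi import Depends, FastAPI, HTTPException, Request


class KeyedTokenBucket:
    """Keeps one token-bucket state per shard key (user, route, IP, ...)."""

    def __init__(self, rate: float, capacity: float) -> None:
        self.rate = rate
        self.capacity = capacity
        # key -> (tokens, last_refill_timestamp)
        self.shards: dict[str, tuple[float, float]] = defaultdict(
            lambda: (capacity, time.monotonic())
        )

    def try_acquire(self, key: str) -> bool:
        tokens, updated_at = self.shards[key]
        now = time.monotonic()
        tokens = min(self.capacity, tokens + (now - updated_at) * self.rate)
        if tokens >= 1:
            self.shards[key] = (tokens - 1, now)
            return True
        self.shards[key] = (tokens, now)
        return False


app = FastAPI()
limiter = KeyedTokenBucket(rate=5, capacity=10)


def rate_limit(request: Request) -> None:
    # Shard by user + route; any key function could be plugged in here
    key = f"{request.headers.get('x-user-id', 'anonymous')}:{request.url.path}"
    if not limiter.try_acquire(key):
        raise HTTPException(status_code=429, detail="Too Many Requests")


@app.get("/items", dependencies=[Depends(rate_limit)])
async def list_items() -> list[str]:
    return ["item1", "item2"]
```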
Now, to store a large number of shards, we will need to explore the use of probabilistic data structures, so we can "compress" exact per-shard limit quotas for the sake of a low memory footprint.
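As a rough sketch of that direction, a count-min sketch could approximate per-shard request counts within a fixed window, trading a small overcount for a memory footprint that does not grow with the number of shards. The sizing and hashing scheme below are illustrative assumptions only.

```python
import hashlib


class CountMinSketch:
    def __init__(self, width: int = 2048, depth: int = 4) -> None:
        self.width = width
        self.depth = depth
        self.table = [[0] * width for _ in range(depth)]

    def _indexes(self, key: str) -> list[int]:
        # Derive `depth` hash positions from one digest (illustrative, not tuned)
        digest = hashlib.sha256(key.encode()).digest()
        return [
            int.from_bytes(digest[4 * i : 4 * (i + 1)], "big") % self.width
            for i in range(self.depth)
        ]

    def increment(self, key: str) -> int:
        """Add one occurrence of `key` and return its (over)estimated count."""
        counts = []
        for row, idx in enumerate(self._indexes(key)):
            self.table[row][idx] += 1
            counts.append(self.table[row][idx])
        return min(counts)

    def reset(self) -> None:
        """Clear all counters, e.g. at the start of each rate-limit window."""
        self.table = [[0] * self.width for _ in range(self.depth)]


# Within each window, a shard is throttled once its estimate exceeds the quota
sketch = CountMinSketch()
QUOTA_PER_WINDOW = 100


def allow(shard_key: str) -> bool:
    return sketch.increment(shard_key) <= QUOTA_PER_WINDOW
```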
Definition of Done
References