Description
Discussed in https://github.com/discordjs/discord.js/discussions/8124
Originally posted by didinele June 19, 2022
Preface: this RFC (Request For Comments) is meant to gather feedback and opinions on the feature set and some implementation details for an abstraction of how `@discordjs/rest` stores rate limit data.

The goal is to give the end-user a way to control how rate limit data is stored, enabling things like using Redis to share rate limits across `REST` instances (i.e. across different services/processes).
1. The public API
`RESTOptions` would take a new `store?: IRateLimitStore` parameter, meaning you'd now be able to do `new REST({ store: new YourCustomStore() });`

Currently, I imagine stores as simple key-value storage with a few methods:
```ts
// name up for discussion
interface IRateLimitStore {
  has(key: string): Awaitable<boolean>;
  get(key: string): Awaitable<RateLimitState>;
  set(key: string, data: RateLimitState): Awaitable<void>;
  delete(key: string): Awaitable<void>;
}
```

where `type Awaitable<T> = T | Promise<T>;` and

```ts
interface RateLimitState {
  reset: number;
  remaining: number;
  limit: number;
  referenceCount: number;
}
```
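To make the shape concrete, here is a minimal sketch of what an implementation could look like; `InMemoryRateLimitStore` is a hypothetical name, and the interface and types are repeated so the snippet stands alone:

```typescript
type Awaitable<T> = T | Promise<T>;

interface RateLimitState {
  reset: number;
  remaining: number;
  limit: number;
  referenceCount: number;
}

interface IRateLimitStore {
  has(key: string): Awaitable<boolean>;
  get(key: string): Awaitable<RateLimitState>;
  set(key: string, data: RateLimitState): Awaitable<void>;
  delete(key: string): Awaitable<void>;
}

// Hypothetical in-memory store — roughly what a REST instance could
// default to when no external (e.g. Redis-backed) store is passed in.
class InMemoryRateLimitStore implements IRateLimitStore {
  private readonly states = new Map<string, RateLimitState>();

  public has(key: string): boolean {
    return this.states.has(key);
  }

  public get(key: string): RateLimitState {
    // Assumes callers check `has` first, as in the flow described below
    const state = this.states.get(key);
    if (!state) throw new Error(`No rate limit state stored for ${key}`);
    return state;
  }

  public set(key: string, data: RateLimitState): void {
    this.states.set(key, data);
  }

  public delete(key: string): void {
    this.states.delete(key);
  }
}
```

A Redis-backed store would implement the same four methods, which is why they are all allowed to return promises.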
2. Implementation details
First off, what do we use as the key in our little store? The same key we use to store our `IHandler`s. This will come in handy in a bit.

From here on out, each `IHandler` implementation can get its state using `await this.manager.rateLimitStore.get(this.id);`
And lastly, we need to figure out how to sweep this stuff! The main issue is that an interval-driven sweep of the store via some `lastUsed` property wouldn't be very efficient, especially since, in this case, every `REST` instance using our particular store (e.g. a Redis server) would redundantly try to sweep it.
This is where our `referenceCount` property comes into play. Let's assume we have 2 microservices, each one with its own `REST` instance, both using a Redis store.

Now, let's assume a route `/some/route` that has never been hit before. Our first microservice tries to fire up the request, and it eventually needs to create a handler, which we can see being done here:
discord.js/packages/rest/src/lib/RequestManager.ts, lines 328 to 335 in 358c3f4
When we create our `SequentialHandler`, we would call `IRateLimitStore#set` with `queue.id` and `{ ...initialState, referenceCount: 1 }`, where the initial state is the current jazz (e.g. `limit: -1`).
Notice how we set our `referenceCount` property to 1. Once our second microservice tries to make the same request, we'll check if state already exists using `IRateLimitStore#has` (which it does), after which we'll simply increment the `referenceCount` property to `2`.
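The create-or-increment step above can be sketched as follows; `acquireState` and the `-1` sentinel values are illustrative assumptions, not the actual `@discordjs/rest` internals:

```typescript
type Awaitable<T> = T | Promise<T>;

interface RateLimitState {
  reset: number;
  remaining: number;
  limit: number;
  referenceCount: number;
}

interface IRateLimitStore {
  has(key: string): Awaitable<boolean>;
  get(key: string): Awaitable<RateLimitState>;
  set(key: string, data: RateLimitState): Awaitable<void>;
  delete(key: string): Awaitable<void>;
}

// Hypothetical helper run when a handler is created for `key` (the same
// id used to store IHandlers). The first creator anywhere seeds initial
// state with referenceCount: 1; later creators just bump the count.
async function acquireState(store: IRateLimitStore, key: string): Promise<RateLimitState> {
  if (await store.has(key)) {
    const state = await store.get(key);
    const updated = { ...state, referenceCount: state.referenceCount + 1 };
    await store.set(key, updated);
    return updated;
  }
  // "The current jazz": sentinel values meaning no headers seen yet
  const initial: RateLimitState = { reset: -1, remaining: -1, limit: -1, referenceCount: 1 };
  await store.set(key, initial);
  return initial;
}
```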
Finally, a few hours later the sweeper will start getting to our handlers:

discord.js/packages/rest/src/lib/RequestManager.ts, lines 256 to 266 in 358c3f4
When this happens, we'll query the state for the handler and decrement the reference count by 1. If it's dropped to 0, it means there are currently no active handlers, and therefore the state can be dropped, leading to an `IRateLimitStore#delete` call.
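Correspondingly, the sweep path could decrement and clean up like this; `releaseState` is again a hypothetical helper sketching the idea, not the real sweeper code:

```typescript
type Awaitable<T> = T | Promise<T>;

interface RateLimitState {
  reset: number;
  remaining: number;
  limit: number;
  referenceCount: number;
}

interface IRateLimitStore {
  has(key: string): Awaitable<boolean>;
  get(key: string): Awaitable<RateLimitState>;
  set(key: string, data: RateLimitState): Awaitable<void>;
  delete(key: string): Awaitable<void>;
}

// Hypothetical helper run when a REST instance sweeps an inactive
// handler: drop our reference, and delete the shared state only once
// no instance references it any more.
async function releaseState(store: IRateLimitStore, key: string): Promise<void> {
  if (!(await store.has(key))) return;
  const state = await store.get(key);
  const referenceCount = state.referenceCount - 1;
  if (referenceCount <= 0) {
    // No active handlers anywhere: the state can be dropped entirely
    await store.delete(key);
    return;
  }
  await store.set(key, { ...state, referenceCount });
}
```

Note that this read-modify-write is not atomic; a real Redis store would likely want to decrement and delete atomically (e.g. in a transaction or script), which is a detail this RFC leaves open.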