RFC(rest): ✨ the abstraction of ratelimit data storage ✨ #8125

@vladfrangu

Description

Discussed in https://github.com/discordjs/discord.js/discussions/8124

Originally posted by didinele June 19, 2022
Preface: this RFC (Request For Comments) is meant to gather feedback and opinions for the feature set and some implementation details for an abstraction of how @discordjs/rest stores ratelimit data.

The goal is to give the end-user a way to control how ratelimit data is stored, enabling things like using Redis to share ratelimits across REST instances (i.e. across different services/processes).


1. The public API

RESTOptions would take a new store?: IRateLimitStore parameter, meaning you'd now be able to do new REST({ store: new YourCustomStore() });

Currently, I imagine stores as simple key-value storage with a few methods:

// name up for discussion
interface IRateLimitStore {
  has(key: string): Awaitable<boolean>;
  get(key: string): Awaitable<RateLimitState>;
  set(key: string, data: RateLimitState): Awaitable<void>;
  delete(key: string): Awaitable<void>;
}

where type Awaitable<T> = T | Promise<T>; and

interface RateLimitState {
  reset: number;
  remaining: number;
  limit: number;
  referenceCount: number;
}
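To make the shape concrete, here is a minimal in-memory implementation of the proposed interface. This is only a sketch for illustration (the class name MemoryRateLimitStore and the "unknown state" fallback in get are my assumptions, not part of the proposal):

```typescript
// A minimal in-memory reference implementation of the proposed IRateLimitStore.
type Awaitable<T> = T | Promise<T>;

interface RateLimitState {
  reset: number;
  remaining: number;
  limit: number;
  referenceCount: number;
}

interface IRateLimitStore {
  has(key: string): Awaitable<boolean>;
  get(key: string): Awaitable<RateLimitState>;
  set(key: string, data: RateLimitState): Awaitable<void>;
  delete(key: string): Awaitable<void>;
}

class MemoryRateLimitStore implements IRateLimitStore {
  private readonly states = new Map<string, RateLimitState>();

  public has(key: string): boolean {
    return this.states.has(key);
  }

  public get(key: string): RateLimitState {
    // Assumed fallback: keys never seen return the "unknown" state (limit: -1)
    return this.states.get(key) ?? { reset: -1, remaining: -1, limit: -1, referenceCount: 0 };
  }

  public set(key: string, data: RateLimitState): void {
    this.states.set(key, data);
  }

  public delete(key: string): void {
    this.states.delete(key);
  }
}
```

A Redis-backed store would implement the same four methods, returning promises instead of plain values, which is exactly what Awaitable<T> allows for.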

2. Implementation details

First off, what do we use as the key in our little store? The same key we use to store our IHandlers. This will come in handy in a bit.

this.handlers.get(`${hash.value}:${routeId.majorParameter}`) ??
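For illustration, a concrete key would look like this (both values below are made up; in practice the hash comes from Discord's rate limit headers and the major parameter is e.g. a guild or channel id):

```typescript
// Illustrative only: hypothetical values showing the key format.
const hashValue = 'abcd1234efgh5678';       // hypothetical rate limit hash
const majorParameter = '81384788765712384'; // hypothetical channel id
const key = `${hashValue}:${majorParameter}`;
// key === 'abcd1234efgh5678:81384788765712384'
```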

From here on out, each IHandler implementation can get its state using await this.manager.rateLimitStore.get(this.id);
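As a hedged sketch (not the actual discord.js implementation), a handler could use that state to decide how long to wait before firing a request. The helper name computeDelay and the reduced store type are assumptions for illustration:

```typescript
type Awaitable<T> = T | Promise<T>;

interface RateLimitState {
  reset: number;      // assumed: unix timestamp (ms) when the bucket resets
  remaining: number;  // requests left in the current window
  limit: number;      // bucket size; -1 means "not seen yet"
  referenceCount: number;
}

interface IRateLimitStore {
  get(key: string): Awaitable<RateLimitState>;
}

// Hypothetical helper: how long (ms) a handler should wait before sending.
async function computeDelay(store: IRateLimitStore, handlerId: string, now = Date.now()): Promise<number> {
  const state = await store.get(handlerId);
  // No known limit yet, or requests remaining: fire immediately
  if (state.limit === -1 || state.remaining > 0) return 0;
  // Bucket exhausted: wait until it resets
  return Math.max(0, state.reset - now);
}
```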

And lastly, we need to figure out how to sweep this stuff! The main issue is that an interval-driven sweep of the store via some lastUsed property wouldn't be very efficient, especially since every REST instance using our particular store (e.g. a Redis server) would redundantly try to sweep the same data.

This is where our referenceCount property comes into play. Let's assume we have 2 microservices, each with its own REST instance, both using the same Redis store.

Now, let's assume a route /some/route that has never been hit before. Our first microservice tries to fire up the request, and it eventually needs to create a handler, which we can see being done here:

private createHandler(hash: string, majorParameter: string) {
  // Create the async request queue to handle requests
  const queue = new SequentialHandler(this, hash, majorParameter);
  // Save the queue based on its id
  this.handlers.set(queue.id, queue);
  return queue;
}

When we create our SequentialHandler, we would call IRateLimitStore#set with queue.id and { ...initialState, referenceCount: 1 }, where the initial state is the current jazz (e.g. limit: -1).

Notice how we set our referenceCount property to 1. Once our second microservice tries to make the same request, we'll check if state already exists using IRateLimitStore#has (which it does), after which we'll simply increment the referenceCount property to 2.
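This create-or-increment flow could be sketched as follows, assuming the IRateLimitStore shape from section 1 (acquireState is a hypothetical helper name, not part of the proposal):

```typescript
type Awaitable<T> = T | Promise<T>;

interface RateLimitState {
  reset: number;
  remaining: number;
  limit: number;
  referenceCount: number;
}

interface IRateLimitStore {
  has(key: string): Awaitable<boolean>;
  get(key: string): Awaitable<RateLimitState>;
  set(key: string, data: RateLimitState): Awaitable<void>;
  delete(key: string): Awaitable<void>;
}

// Hypothetical helper, called when a REST instance creates a handler for `id`.
async function acquireState(store: IRateLimitStore, id: string): Promise<void> {
  if (await store.has(id)) {
    // Another REST instance already created state for this route: bump the count
    const state = await store.get(id);
    await store.set(id, { ...state, referenceCount: state.referenceCount + 1 });
  } else {
    // First handler for this route anywhere: seed the "unknown" initial state
    await store.set(id, { reset: -1, remaining: -1, limit: -1, referenceCount: 1 });
  }
}
```

Note that with a shared store like Redis, a real implementation would want the has/get/set sequence to be atomic (e.g. via a Lua script or an atomic increment) to avoid two services racing on the count.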

Finally, a few hours later the sweeper will start getting to our handlers:

this.handlers.sweep((v, k) => {
  const { inactive } = v;
  // Collect inactive handlers
  if (inactive) {
    sweptHandlers.set(k, v);
    this.emit(RESTEvents.Debug, `Handler ${v.id} for ${k} swept due to being inactive`);
  }
  return inactive;
});

When this happens, we'll query the state for the handler and decrement the reference count by 1. If it's dropped to 0, it means there are currently no active handlers, and therefore the state can be dropped, leading to an IRateLimitStore#delete call.
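The decrement-and-delete counterpart to the sweep could be sketched like this (again, releaseState is a hypothetical helper name, assuming the IRateLimitStore shape from section 1):

```typescript
type Awaitable<T> = T | Promise<T>;

interface RateLimitState {
  reset: number;
  remaining: number;
  limit: number;
  referenceCount: number;
}

interface IRateLimitStore {
  get(key: string): Awaitable<RateLimitState>;
  set(key: string, data: RateLimitState): Awaitable<void>;
  delete(key: string): Awaitable<void>;
}

// Hypothetical helper, called when a handler is swept for being inactive.
async function releaseState(store: IRateLimitStore, id: string): Promise<void> {
  const state = await store.get(id);
  const referenceCount = state.referenceCount - 1;
  if (referenceCount <= 0) {
    // No REST instance holds a handler for this route anymore: drop the state
    await store.delete(id);
  } else {
    await store.set(id, { ...state, referenceCount });
  }
}
```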
