What problem does this feature proposal attempt to solve?
A clear, point-based rate limiting scheme that is fairer than a throttle-based one (which can only retrieve node x once every x window), because it takes into account the amount of (nested) queries a request needs to execute. Read on for more details.
Which possible solutions should be considered?
I have found that GitHub has a very nice approach to rate limiting, at least for queries. I will not try to explain it better than they do, but instead point to their very nice documentation about how the calculation is handled.
This is the schema they use to query rate limit information for the user. We could consider injecting it into the schema when the rate limiting features are enabled:
```graphql
type Query {
  "The client's rate limit information."
  rateLimit(
    "If true, calculate the cost for the query without evaluating it."
    dryRun: Boolean = false
  ): RateLimit
}

"Represents the client's rate limit."
type RateLimit {
  "The point cost for the current query counting against the rate limit."
  cost: Int!

  "The maximum number of points the client is permitted to consume in a 60 minute window."
  limit: Int!

  "The maximum number of nodes this query may return."
  nodeCount: Int!

  "The number of points remaining in the current rate limit window."
  remaining: Int!

  "The time at which the current rate limit window resets in UTC epoch seconds."
  resetAt: DateTime!

  "The number of points used in the current rate limit window."
  used: Int!
}
```
The nice thing is that I believe the current complexity system can be re-used for a large part, or maybe it already provides enough information. It already calculates complexity based on the amount of objects requested, and this value might be all we need for the cost of a query (we could still divide by 100 like GitHub does to keep the numbers smaller and more rounded).
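If the complexity score can indeed be reused, mapping it to a cost could be as small as this sketch (`complexityToCost` is a hypothetical helper, not an existing Lighthouse API; the divide-by-100 and the minimum of 1 follow the GitHub-style scheme described above):

```php
<?php

/**
 * Hypothetical mapping from the complexity score that the underlying
 * GraphQL library already computes to a rate limit cost in points.
 */
function complexityToCost(int $complexity): int
{
    // Divide by 100 like GitHub does to keep the numbers small,
    // but always charge at least 1 point, even for tiny queries.
    return max(1, intdiv($complexity, 100));
}

echo complexityToCost(42);   // 1
echo complexityToCost(2551); // 25
```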
I also believe we can lean heavily on the Laravel rate limiting features, or at least re-use a large part of its backbone, to actually implement the limiting.
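A minimal sketch of how a point window could sit on top of Laravel's cache, assuming a cache store that supports atomic increments (the function and key names are placeholders, not an existing Lighthouse API):

```php
<?php

use Illuminate\Support\Facades\Cache;

/**
 * Hypothetical point-based limiter: spend $cost points from the
 * client's budget for the current window, or refuse the query.
 */
function spendPoints(string $clientKey, int $cost, int $limit, int $windowSeconds): bool
{
    $key = "graphql-rate-limit:{$clientKey}";

    // Create the counter with the window's TTL if it does not exist yet;
    // Cache::add is a no-op when the key is already present.
    Cache::add($key, 0, $windowSeconds);

    // Atomically charge the query's cost against the window.
    $used = Cache::increment($key, $cost);

    return $used <= $limit;
}
```

Note that Laravel's built-in `RateLimiter::hit()` counts one attempt at a time, so charging a variable per-query cost may need a custom counter like the above rather than a 1:1 reuse.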
Features to consider:
Disabled by default and enabled by a config option (lighthouse.rateLimit.enabled = true).
Change the rate limit for guests and logged in users by having a dynamic resolver for the rate limit key and max points per window, so the user can decide based on auth status, role and/or plan, for example (Laravel offers similar flexibility: https://laravel.com/docs/8.x/routing#rate-limiting)
Possibly a dynamic window based on the auth status, but I think defining a window in the settings would also suffice, since I won't expect that to be tweaked as much; still, if it is easy, it could be a nice addition (lighthouse.rateLimit.window = 60)
Dry run is a really cool feature which basically parses the query but doesn't execute anything except for the rateLimit node
Cost should always be at least 1, no matter how small the query was or whether dryRun was used
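Pulling the options above together, the configuration could look something like this (a sketch only; the key names mirror the lighthouse.rateLimit.* options mentioned above, and the resolver class is a made-up example):

```php
<?php

// config/lighthouse.php (hypothetical rateLimit section)
return [
    'rateLimit' => [
        // Disabled by default.
        'enabled' => false,

        // Window length in minutes.
        'window' => 60,

        // Resolver deciding the rate limit key and max points per window,
        // e.g. based on auth status, role or plan.
        'resolver' => App\GraphQL\RateLimitResolver::class,
    ],
];
```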
Things to research
How to handle mutations and what their cost is
How to handle subscriptions and what their cost is
If and how much of the Laravel rate limiting code can be re-used for our purpose
If and how much the current complexity calculation can be re-used to calculate the query cost
I did try to make a little start on this, but I do not have enough experience or time at the moment. I also believe there is no feature in Lighthouse yet that injects a query into the schema with its own internal resolver, so I got stuck there already... happy to discuss ideas and see if we can make some headway on this if there is interest in having this in the core.
I just found another approach which is equally interesting. I want to add it to the discussion for future reference because it is possibly simpler than the GitHub model to implement.
Possibly a mix between the two is the "right" way for Lighthouse.
I think this is simpler because the underlying GraphQL lib already performs query complexity calculations, so that might be a value that can be used 1:1.