Do you want to request a feature or report a bug?
Feature
What did you expect to see?
The way rate limits are currently implemented is a bit lackluster. I hope this can be improved upon.
Currently, when you define your rate limits in the .yml or .toml file as suggested, you can specify a duration, an average, and a burst, as well as an expression the limit should trigger on.
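For reference, a rate limit definition in the TOML file looks roughly like this. This is a sketch based on the v1 frontend syntax; `frontend1` and `rateset1` are placeholder names, and key names beyond `period`/`average`/`burst` may differ in your version:

```toml
# Sketch of a Traefik v1-style rate limit section (placeholder names).
[frontends.frontend1.ratelimit]
  extractorfunc = "client.ip"
  [frontends.frontend1.ratelimit.rateset.rateset1]
    period = "10s"   # the duration
    average = 100    # average allowed requests per period
    burst = 200      # maximum burst size
```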
Now, if you hit one of those rate limits in the load balancer, it returns a plain string that says the following:
https://github.com/containous/traefik/blob/599b699ac95fa055a6c09ce6fc10a0484cfb4ad0/vendor/github.com/vulcand/oxy/ratelimit/tokenlimiter.go#L174-L176
While this may be acceptable, the only headers that get set are:
https://github.com/containous/traefik/blob/599b699ac95fa055a6c09ce6fc10a0484cfb4ad0/vendor/github.com/vulcand/oxy/ratelimit/tokenlimiter.go#L181-L189
For users who want to build around those rate limits and throttle based on how close they are to hitting them, it would be helpful to always send rate limit headers. Something along these lines, on every request:

X-Ratelimit-Limit: 50
X-Ratelimit-Remaining: 20 (or 0 once you hit the rate limit)
X-Ratelimit-Reset: Unix timestamp of the next reset
This way, users interfacing with any service behind the proxy could properly integrate their applications with those rate limits.
Additionally, it would be nice to have a configurable response body. As mentioned above, it currently returns a plain string, which can throw off any client of a JSON-based REST API, since the response body would not match the expected JSON format.
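For example, a configurable response could let a JSON API return a structured body instead of a bare string. The exact shape here is purely illustrative, not an existing Traefik option:

```json
{
  "error": "rate limit exceeded",
  "retry_after": 30
}
```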