This repository has been archived by the owner on Apr 19, 2024. It is now read-only.
I tried the leaky bucket algorithm, but the tokens don't seem to leak from the bucket at a consistent rate while new hits are being added. When no new hits are added, everything is fine: the number of remaining tokens increases at the expected rate.
Maybe I misunderstood the leaky bucket algorithm, or there is something strange going on.
An example may make this more explicit.
I use a leaky bucket that allows 10 requests per 30 seconds and add a new hit every second. Since one token should leak back every 3 seconds, I expect the remaining token count to increase by one at t+3s, t+6s, t+9s, and so on, but that does not seem to be the case.
You can reproduce this problem with the following script:
for i in 0 1 2 3 4 5 6 7 8 9 10; do curl -X POST http://localhost:9080/v1/GetRateLimits --data '{"requests":[{"name":"test", "unique_key": "testkey1", "hits":1, "duration":30000, "limit":10, "algorithm":"LEAKY_BUCKET"}]}'; echo; sleep 1; done
{"responses":[{"limit":"10","remaining":"9","reset_time":"3000","metadata":{"owner":"192.168.16.5:81"}}]}
{"responses":[{"limit":"10","remaining":"8","reset_time":"1603905514100","metadata":{"owner":"192.168.16.5:81"}}]}
{"responses":[{"limit":"10","remaining":"7","reset_time":"1603905515137","metadata":{"owner":"192.168.16.5:81"}}]}
{"responses":[{"limit":"10","remaining":"6","reset_time":"1603905516170","metadata":{"owner":"192.168.16.5:81"}}]}
{"responses":[{"limit":"10","remaining":"5","reset_time":"1603905517204","metadata":{"owner":"192.168.16.5:81"}}]}
{"responses":[{"limit":"10","remaining":"4","reset_time":"1603905518238","metadata":{"owner":"192.168.16.5:81"}}]}
{"responses":[{"limit":"10","remaining":"3","reset_time":"1603905519270","metadata":{"owner":"192.168.16.5:81"}}]}
{"responses":[{"limit":"10","remaining":"2","reset_time":"1603905520307","metadata":{"owner":"192.168.16.5:81"}}]}
{"responses":[{"limit":"10","remaining":"1","reset_time":"1603905521341","metadata":{"owner":"192.168.16.5:81"}}]}
{"responses":[{"limit":"10","reset_time":"1603905522373","metadata":{"owner":"192.168.16.5:81"}}]}
{"responses":[{"status":"OVER_LIMIT","limit":"10","reset_time":"1603905523406","metadata":{"owner":"192.168.16.5:81"}}]}
We can see that the number of remaining tokens always decreases, and that the first "reset_time" isn't a valid Unix timestamp.
If I run the same test but wait 4 seconds between hits, the remaining token count is replenished correctly between calls.
for i in 0 1 2 3 4; do curl -X POST http://localhost:9080/v1/GetRateLimits --data '{"requests":[{"name":"test", "unique_key": "testkey2", "hits":1, "duration":30000, "limit":10, "algorithm":"LEAKY_BUCKET"}]}'; echo; sleep 4; done
{"responses":[{"limit":"10","remaining":"9","reset_time":"3000","metadata":{"owner":"192.168.16.4:81"}}]}
{"responses":[{"limit":"10","remaining":"9","reset_time":"1603905601743","metadata":{"owner":"192.168.16.4:81"}}]}
{"responses":[{"limit":"10","remaining":"9","reset_time":"1603905605775","metadata":{"owner":"192.168.16.4:81"}}]}
{"responses":[{"limit":"10","remaining":"9","reset_time":"1603905609808","metadata":{"owner":"192.168.16.4:81"}}]}
{"responses":[{"limit":"10","remaining":"9","reset_time":"1603905613840","metadata":{"owner":"192.168.16.4:81"}}]}
Don't hesitate to ask if you need more information. I'm running the docker-compose platform with the latest Docker image.
We can see that the number of remaining tokens always decreases, and that the first "reset_time" isn't a valid Unix timestamp.
That is definitely a bug!
I see what you mean; this does look like a bug. I'll add a test in functional_test.go to exercise these bugs and see if we can get them fixed for the 1.0.0 release.