Rare failure mode #6
A possible solution might be to first check if the key exists and, if it doesn't, create it with a count of zero and set the TTL. |
Maybe we should use the transaction commands described in https://aioredis.readthedocs.io/en/v1.3.0/mixins.html#transaction-commands instead of a pipeline. |
It looks to me like that would work - assuming aioredis exposes the |
Issue is still present right? Is this being closed as "won't fix"? |
Fixed in f7d5720 |
As far as I can tell that change doesn't address the issue. It's the second redis call that sets the ttl on the key (iff num is 1). If that second transaction is missed due to computers being computers, the rate limiter is forever going to deny all future requests once the per period limit is reached. |
Could you give a PR and tests? |
I can think of a few ways to improve the protocol such that the failure mode becomes "in rare circumstances could let one extra request into a period" rather than "in rare circumstances will enter a state where all future requests are denied".
So the current behavior is this:

[sequence diagram of the current protocol — image not reproduced here]

Proposal 1 adds an additional redis query of "does the key exist":

```python
if not redis.exists(key):
    redis.psetex(key, period_in_milliseconds, 0)
tr = redis.multi_exec()
tr.incrby(key, 1)
tr.pttl(key)
num, expiry = await tr.execute()
```

Which gives something like this:

[sequence diagram of the proposed protocol — image not reproduced here]

There may be a method without an extra roundtrip to redis, and there could very well still be issues with that proposal. Ideally there would be a published protocol we could use, or we could make a TLA+ model of the protocol. |
@long2ice do you want to reopen this issue? Hopefully my diagrams help to explain how a service using fastapi-limiter can get into an irrecoverable state. Do you have any thoughts on my proposed solution? |
Could get exists by |
I'm not sure what you mean? In the current protocol or in my proposal? |
How about setting the TTL every time in the transaction? That should be safe and doesn't need the extra roundtrip. Maybe I'm missing something, but doesn't it make sense to reset the TTL for every request? Not familiar with aioredis, but something like this:

```python
tr = redis.multi_exec()
tr.incrby(key, 1)
tr.pexpire(key, self.milliseconds)
num, _ = await tr.execute()
```
|
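A small simulation shows why refreshing the TTL on every request changes the semantics. Again `FakeRedis`, its `now_ms` clock, and the numbers (`times=2`, a 1000 ms period, one request every 600 ms) are hypothetical, purely for illustration:

```python
class FakeRedis:
    """Hypothetical in-memory stand-in for the two Redis commands used."""
    def __init__(self):
        self.store = {}  # key -> counter value
        self.ttls = {}   # key -> absolute expiry time in ms

    def _sweep(self, now_ms):
        # Drop keys whose TTL has elapsed, as Redis would.
        for k in [k for k, exp in self.ttls.items() if now_ms >= exp]:
            del self.store[k], self.ttls[k]

    def incrby(self, key, n, now_ms):
        self._sweep(now_ms)
        self.store[key] = self.store.get(key, 0) + n
        self.ttls.setdefault(key, float("inf"))
        return self.store[key]

    def pexpire(self, key, ms, now_ms):
        self.ttls[key] = now_ms + ms

def refresh_every_time(r, key, times, period_ms, now_ms):
    # The variant above: INCR and PEXPIRE on every single request.
    num = r.incrby(key, 1, now_ms)
    r.pexpire(key, period_ms, now_ms)
    return num <= times  # True means the request is allowed

r = FakeRedis()
# times=2, period=1000 ms, one request every 600 ms: every request renews
# the TTL, so the counter never expires, and the client is denied from the
# third request onward even though it averages about one request per period.
results = [refresh_every_time(r, "u", 2, 1000, t) for t in range(0, 6000, 600)]
print(results)  # [True, True, False, False, ...]
```

Under this scheme a client must go completely idle for a full period before its counter resets, which is a different (and much stricter) policy than "at most `times` requests per period".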
That would prevent the failure condition... but it would stop being a rate limiter :-P |
Oh, that's right 😄. What's missing is that one has to use a separate key for each time interval, as in https://redis.io/commands/incr/#pattern-rate-limiter-1 |
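The linked pattern gives each time window its own key, so the TTL only ever needs setting once at key creation and stale counters expire on their own. A minimal sketch of the key derivation (the `rate:` key format and the `window_key` name are illustrative, not from fastapi-limiter):

```python
import time

def window_key(identifier, period_s, now=None):
    """Derive a per-interval counter key, as in the redis.io INCR
    rate-limiter pattern: all requests in the same window share one
    counter, and the next window automatically starts from a fresh key."""
    now = time.time() if now is None else now
    return f"rate:{identifier}:{int(now // period_s)}"

# With a 60 s period, seconds 0-59 hit one counter, second 61 a new one:
print(window_key("alice", 60, now=0))   # rate:alice:0
print(window_key("alice", 60, now=59))  # rate:alice:0
print(window_key("alice", 60, now=61))  # rate:alice:1
```

Because the old window's key is never touched again, a missed EXPIRE on it can at worst leak one soon-idle counter, never wedge the limiter.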
Looking at the code in fastapi_limiter/depends.py#L35-L42:

In the extremely rare case that the process fails between incrementing the value with `p.execute()` and setting the TTL with `redis.pexpire`, the key won't actually have a time to live set. The next request will increment the count, but as `num` will already be greater than 1 the expiry won't get set... so after `self.times` requests all following requests will count as exceeding the rate limit.