Description
Currently, we assume that enough memory is always available. Usually, processing the data from Redis into the database should be fairly fast and the queue should not take up much space. However, if there is e.g. a problem with tracking, the queue might just collect requests in Redis and never remove them, for example when it is not possible to acquire a lock (see #22 and #24). In such cases it is possible to run out of memory over time.
We should think about ways to make the queue handle such problems better.
- Maybe we can detect when no more memory is available and print or log a clear error message.
- Also, when enabling the queue and when testing it via "queuedtracking::test", we should check whether the noeviction or allkeys-lru eviction policy is activated (these are OK). Other policies might cause problems when memory is low and will most likely always release our lock.
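The policy check above could be sketched roughly as follows. This is a minimal sketch, not plugin code: the helper name `is_safe_eviction_policy` and the warning text are assumptions; only the policy names and the redis-py `config_get` call are from the Redis documentation.

```python
# Sketch: classify a Redis maxmemory-policy as safe or unsafe for the queue.
# noeviction fails writes instead of evicting keys, and allkeys-lru at least
# does not single out keys with an expire set (like our lock key).
SAFE_POLICIES = {"noeviction", "allkeys-lru"}

def is_safe_eviction_policy(policy: str) -> bool:
    """Return True if the configured eviction policy is OK for the queue."""
    return policy.strip().lower() in SAFE_POLICIES

# Against a live server (requires the redis-py package) the check could be:
#
#   import redis
#   r = redis.Redis()
#   policy = r.config_get("maxmemory-policy")["maxmemory-policy"]
#   if not is_safe_eviction_policy(policy):
#       print("Warning: eviction policy %r may evict the queue lock" % policy)
```

Running the same check both when the queue is enabled and inside the test command would catch misconfiguration early, before memory pressure ever occurs.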
Background:
If no more memory is available and e.g. volatile-lru is set, it would always evict our lock key first, as it is probably the only key with an expire set. The same goes for volatile-random etc.
From http://redis.io/topics/lru-cache:
- volatile-lru: evict keys trying to remove the less recently used (LRU) keys first, but only among keys that have an expire set, in order to make space for the new data added.
- allkeys-lru: evict keys trying to remove the less recently used (LRU) keys first, in order to make space for the new data added.
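With noeviction set, running out of memory does not evict the lock; instead, write commands fail with an error reply starting with "OOM" (e.g. "OOM command not allowed when used memory > 'maxmemory'."). That gives us a hook for the clear error message mentioned above. A minimal sketch; the helper names and the message wording are assumptions:

```python
# Sketch: recognize Redis' out-of-memory error reply so the queue can log a
# clear, actionable message instead of a generic failure.

def is_oom_error(message: str) -> bool:
    """True if a Redis error reply indicates the server ran out of memory."""
    return message.startswith("OOM")

def describe_error(message: str) -> str:
    """Turn a raw Redis error reply into a log message for the user."""
    if is_oom_error(message):
        return ("Redis reports it is out of memory; the tracking queue cannot "
                "accept new requests. Check the maxmemory setting and whether "
                "the queue is being processed.")
    return "Redis error: " + message
```

With redis-py, such replies surface as `redis.exceptions.ResponseError`; catching that around queue writes and passing `str(exc)` to `describe_error` would let us log the clear message instead of silently failing.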
I was also thinking about using two databases, one for the lock etc. and one for the actual tracking requests, but this doesn't solve much. We could have a small database just for the lock, which wouldn't need much space (maybe 1MB). This way we would make sure the lock key is never evicted, but I think it is not really needed, as it makes configuration more difficult.
Maybe there are other things we can do too?