Current item count in Redis cache keeps increasing - Understanding Metadata Storage and Cache Management in Asynq with AWS ElastiCache (Redis) #875
Unanswered
DipinVasuHPE asked this question in Q&A
I've built a straightforward Go application comprising producer and consumer processes that communicate via AWS ElastiCache (Redis). While the server doesn't explicitly define any caching mechanisms, I've noticed that Asynq, the library I'm using for background job processing, stores metadata in Redis, and the number of stored items appears to keep increasing over time. The current item count in the Redis cache rises as task volume or traffic grows.
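For reference, here is a minimal sketch (not the actual application; the Redis endpoint `my-elasticache-endpoint:6379`, the default queue, and the `email:welcome` task type are placeholders) of this kind of producer/consumer setup. The two places it touches Redis are the producer's `Enqueue` call, which persists the task payload and queue bookkeeping, and the worker server, which also maintains heartbeat and statistics keys while running:

```go
package main

import (
	"context"
	"log"

	"github.com/hibiken/asynq"
)

const redisAddr = "my-elasticache-endpoint:6379" // hypothetical ElastiCache endpoint

// produce enqueues a single task; at this point Asynq persists the task
// payload plus its queue bookkeeping (pending list, task hash, etc.) in Redis.
func produce() {
	client := asynq.NewClient(asynq.RedisClientOpt{Addr: redisAddr})
	defer client.Close()

	task := asynq.NewTask("email:welcome", []byte(`{"user_id": 42}`))
	info, err := client.Enqueue(task)
	if err != nil {
		log.Fatalf("enqueue failed: %v", err)
	}
	log.Printf("enqueued task id=%s queue=%s", info.ID, info.Queue)
}

// consume runs a worker server; while it processes tasks, Asynq also keeps
// server/worker heartbeat and statistics keys in Redis.
func consume() {
	srv := asynq.NewServer(
		asynq.RedisClientOpt{Addr: redisAddr},
		asynq.Config{Concurrency: 10},
	)
	mux := asynq.NewServeMux()
	mux.HandleFunc("email:welcome", func(ctx context.Context, t *asynq.Task) error {
		log.Printf("processing payload: %s", t.Payload())
		return nil
	})
	if err := srv.Run(mux); err != nil {
		log.Fatalf("server stopped: %v", err)
	}
}

func main() {
	produce()
	consume()
}
```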
**We need to confirm our assumption that Asynq uses the Redis cache for its own storage when used as a job queue.**
I'm concerned about the scalability implications as task volume or traffic grows. Is there a TTL (time-to-live) defined for the data Asynq stores in Redis? Is there an expiry mechanism in place, or any other measures to manage and reset this data under increasing load? Any insights or guidance on this matter would be greatly appreciated!
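While waiting for a definitive answer, one way to watch what is accumulating is Asynq's `Inspector`. A hedged sketch, assuming a placeholder endpoint; the delete calls at the end are optional housekeeping for finished tasks, not something Asynq requires:

```go
package main

import (
	"log"

	"github.com/hibiken/asynq"
)

func main() {
	inspector := asynq.NewInspector(asynq.RedisClientOpt{Addr: "my-elasticache-endpoint:6379"}) // placeholder address
	defer inspector.Close()

	queues, err := inspector.Queues()
	if err != nil {
		log.Fatalf("listing queues: %v", err)
	}
	for _, q := range queues {
		info, err := inspector.GetQueueInfo(q)
		if err != nil {
			log.Printf("queue %q: %v", q, err)
			continue
		}
		// Size is the total number of tasks Asynq currently stores for this queue.
		log.Printf("queue=%s size=%d completed=%d archived=%d",
			info.Queue, info.Size, info.Completed, info.Archived)

		// Optional cleanup if retained completed/archived tasks are the source of growth.
		if n, err := inspector.DeleteAllCompletedTasks(q); err == nil {
			log.Printf("deleted %d completed tasks from %q", n, q)
		}
		if n, err := inspector.DeleteAllArchivedTasks(q); err == nil {
			log.Printf("deleted %d archived tasks from %q", n, q)
		}
	}
}
```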
For example, running `KEYS *` against the instance shows the growing set of keys (output not included here).
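To reproduce that observation without running `KEYS *` against a busy ElastiCache node, here is a sketch using `SCAN` via the go-redis client; it assumes Asynq's keys share the `asynq:` prefix and uses a placeholder endpoint:

```go
package main

import (
	"context"
	"log"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "my-elasticache-endpoint:6379"}) // placeholder address
	defer rdb.Close()

	// SCAN instead of KEYS so the check is safe to run against a live instance.
	var cursor uint64
	count := 0
	for {
		keys, next, err := rdb.Scan(ctx, cursor, "asynq:*", 1000).Result()
		if err != nil {
			log.Fatalf("scan failed: %v", err)
		}
		count += len(keys)
		cursor = next
		if cursor == 0 {
			break
		}
	}
	log.Printf("asynq-related keys: %d", count)
}
```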
CC: @hibiken
Replies: 1 comment · 3 replies

- Yes, see the retention period option. Check the wiki for more details.
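For context on that reply, here is a minimal sketch assuming it refers to the `asynq.Retention` task option: when set, a successfully processed task is kept in Redis for the given duration and then removed, so completed-task data does not pile up indefinitely. The 24-hour window, task type, and endpoint are illustrative:

```go
package main

import (
	"log"
	"time"

	"github.com/hibiken/asynq"
)

func main() {
	client := asynq.NewClient(asynq.RedisClientOpt{Addr: "my-elasticache-endpoint:6379"}) // placeholder address
	defer client.Close()

	task := asynq.NewTask("email:welcome", []byte(`{"user_id": 42}`))

	// Keep the completed task around for 24 hours, then let Asynq drop it.
	info, err := client.Enqueue(task, asynq.Retention(24*time.Hour))
	if err != nil {
		log.Fatalf("enqueue failed: %v", err)
	}
	log.Printf("enqueued %s with retention", info.ID)
}
```

Note that retention only governs completed tasks; pending, scheduled, retry, and archived tasks are tracked separately, so it is worth checking which of those sets is actually growing.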