This repository has been archived by the owner on Dec 13, 2023. It is now read-only.
add workflow cleaner to clean expired workflows periodically #1609
Hi, I ran into a problem while using Redis persistence that has already been discussed in some issues, such as #1315: Redis memory usage kept growing, and random data, including some metadata, got evicted.
So I added a workflow cleaner to RedisExecutionDAO. When a workflow is created, it records the timestamp and workflowId in a zset (timestamp as score, workflowId as value). A background thread periodically finds and cleans expired workflows; the expiry threshold is set by the config workflow.cleaner.expire.seconds, and the period between runs by workflow.cleaner.period.seconds. If there are too many expired workflows, the cleaner processes them batch by batch, with the batch size set by workflow.cleaner.batch.size.
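To illustrate the idea, here is a minimal sketch of the expiry and batching logic described above. It uses an in-memory TreeMap in place of the Redis zset so it is self-contained; in Redis this would map to ZADD on creation and a ZRANGEBYSCORE/ZREM pair per cleaner pass. The class and method names are hypothetical and not taken from this PR's actual implementation.

```java
import java.util.*;

// Sketch only: TreeMap<score, members> stands in for the Redis zset.
// Class/method names are assumptions, not Conductor's real code.
class WorkflowCleanerSketch {
    private final TreeMap<Long, List<String>> zset = new TreeMap<>();
    private final long expireSeconds; // analogous to workflow.cleaner.expire.seconds
    private final int batchSize;      // analogous to workflow.cleaner.batch.size

    WorkflowCleanerSketch(long expireSeconds, int batchSize) {
        this.expireSeconds = expireSeconds;
        this.batchSize = batchSize;
    }

    // Equivalent of ZADD at workflow creation: timestamp as score, id as value.
    void recordWorkflow(String workflowId, long createdAtSeconds) {
        zset.computeIfAbsent(createdAtSeconds, k -> new ArrayList<>()).add(workflowId);
    }

    // One cleaner pass: collect at most batchSize expired ids (like
    // ZRANGEBYSCORE with a COUNT), drop them from the zset (like ZREM),
    // and return them so the caller can delete the workflow records.
    List<String> cleanExpired(long nowSeconds) {
        long cutoff = nowSeconds - expireSeconds;
        List<String> expired = new ArrayList<>();
        Iterator<Map.Entry<Long, List<String>>> it =
                zset.headMap(cutoff, true).entrySet().iterator();
        while (it.hasNext() && expired.size() < batchSize) {
            List<String> ids = it.next().getValue();
            while (!ids.isEmpty() && expired.size() < batchSize) {
                expired.add(ids.remove(0));
            }
            if (ids.isEmpty()) {
                it.remove(); // score fully drained; remove the zset entry
            }
        }
        return expired;
    }

    public static void main(String[] args) {
        // One-hour expiry, batches of two.
        WorkflowCleanerSketch cleaner = new WorkflowCleanerSketch(3600, 2);
        cleaner.recordWorkflow("wf1", 1000);
        cleaner.recordWorkflow("wf2", 2000);
        cleaner.recordWorkflow("wf3", 9000);
        // At t=10000 the cutoff is 6400, so wf1 and wf2 are expired
        // and fill one batch; wf3 is still fresh.
        System.out.println(cleaner.cleanExpired(10000));
    }
}
```

In the real DAO the background thread would repeat cleanExpired until a pass returns fewer ids than the batch size, so a large backlog is drained without one huge Redis call.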
The cleaner is not meant to clean data in IndexDAO or other DAO implementations, because disk-based storage may not have this problem. It works well in my environment, and I hope it helps someone else.