[manager/orchestrator/reaper] Clean out the task reaper dirty set at the end of tick() #2675
Addresses the following from #2672:
The task reaper dirty set is not cleaned up in various scenarios at the end of tick(), which causes high CPU usage because each additional task causes tick() to be called again.
For testing, we should get coverage for the cases where tasks are not expected to be removed from the task reaper dirty set, e.g. the service has already been deleted, or the dirty set holds 1000 entries but the per-slot retention limit has not been reached.
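The cleanup described above can be sketched in Go. This is a hedged, simplified illustration, not swarmkit's actual code: `Task`, `cleanDirty`, `serviceExists`, and `slotCount` are hypothetical stand-ins for the reaper's real state. The idea is that at the end of tick(), entries that cannot be reaped (service already deleted, or the slot is still under its retention limit) are dropped from the dirty set rather than left to churn CPU on every subsequent tick.

```go
package main

import "fmt"

// Task is a minimal stand-in for swarmkit's api.Task; only the fields
// needed for this sketch are included.
type Task struct {
	ID        string
	ServiceID string
	Slot      uint64
}

// cleanDirty drops from the dirty set every entry that tick() cannot act
// on: tasks whose service was deleted (handled by service cleanup, not
// the reaper) and tasks on slots still under the retention limit (a
// future task creation on the slot will re-mark them dirty).
func cleanDirty(dirty map[string]Task, serviceExists map[string]bool, slotCount map[uint64]int, taskHistory int) {
	for id, t := range dirty {
		switch {
		case !serviceExists[t.ServiceID]:
			// Service deleted: never reapable here, so drop it.
			delete(dirty, id)
		case slotCount[t.Slot] <= taskHistory:
			// Retention limit not reached: nothing to reap yet.
			delete(dirty, id)
		}
	}
}

func main() {
	dirty := map[string]Task{
		"t1": {ID: "t1", ServiceID: "svc-gone", Slot: 1}, // service deleted
		"t2": {ID: "t2", ServiceID: "svc", Slot: 2},      // under retention limit
	}
	serviceExists := map[string]bool{"svc": true}
	slotCount := map[uint64]int{1: 5, 2: 1}
	cleanDirty(dirty, serviceExists, slotCount, 3)
	fmt.Println(len(dirty)) // both entries dropped in this toy example
}
```

Deleting map entries while ranging over the map is safe in Go, which keeps the cleanup pass a single loop.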
Please sign your commits following these rules:
$ git clone -b "reaper" firstname.lastname@example.org:anshulpundir/swarmkit.git somewhere
$ cd somewhere
$ git rebase -i HEAD~842354429024
# editor opens; change each 'pick' to 'edit', save the file and quit
$ git commit --amend -s --no-edit
$ git rebase --continue   # and repeat the amend for each commit
$ git push -f
Amending updates the existing PR. You DO NOT need to open a new one.
@@            Coverage Diff             @@
##           master    #2675      +/-   ##
==========================================
+ Coverage   62.06%   62.18%   +0.11%
==========================================
  Files         134      134
  Lines       21740    21742       +2
==========================================
+ Hits        13494    13520      +26
+ Misses       6794     6775      -19
+ Partials     1452     1447       -5
My only concern is that we might end up leaking tasks that match the condition here.
One would assume that EventUpdateTask gets fired when the task state changes. I am not sure if this is the case, but the commit comment below makes it sound like it does not, which is why the author removed the reset-map logic.
EventUpdateTask ends up in the cleanup set, not in the dirty map, so we need to handle this use case.
If this is the case, I suggest deleting all tasks from the dirty list except the ones that fall under the corner case/condition above.
Assuming that by 'leak' you mean 'never be able to clean up', I don't think we'll leak those tasks. They will be cleaned up whenever a new task gets created for that slot.
Having said that, I also think it's OK to keep track of those tasks and not remove them from the dirty set, but that leads to a slightly inelegant implementation.
EventUpdateTask needs updating to mark tasks as dirty.
Right, the task will eventually be cleaned up when a task-creation event comes in; but unlike other tasks, which skip this condition and get processed, here we might temporarily end up with a number of tasks larger than taskHistory.
If we might end up retaining more task history than configured (> taskHistory), I think it's worth keeping track of those tasks.
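The corner case being debated can be reduced to a one-line predicate. This is a hypothetical helper under assumed names, not swarmkit's actual code: keep a dirty entry only when its slot currently holds more tasks than taskHistory allows, so the reaper does not forget work it still owes.

```go
package main

import "fmt"

// shouldKeepDirty sketches the trade-off discussed above: an entry is
// worth keeping in the dirty set when its slot still holds more tasks
// than the configured history, since dropping it would leave the excess
// tasks unreaped until a future event re-marks the slot as dirty.
func shouldKeepDirty(tasksOnSlot, taskHistory int) bool {
	return tasksOnSlot > taskHistory
}

func main() {
	fmt.Println(shouldKeepDirty(4, 3)) // slot over its history limit: keep
	fmt.Println(shouldKeepDirty(2, 3)) // under the limit: safe to drop
}
```

Keeping these few entries costs a little elegance, as noted above, but avoids ever reporting more retained history than the operator configured.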