[17.09] Fix reapTime logic in NetworkDB + handle cleanup DNS for attachable container #2017
Merged: fcrisciani merged 6 commits into moby:bump_17.09 from thaJeztah:17.09-backport-netdb-fix-reap on Nov 20, 2017
Commits on Nov 20, 2017
- Fix reapTime logic in NetworkDB
  - Added a remainingReapTime field to the table event. Without it, a node that had no state for the element was marking the element for deletion with the maximum reapTime. This made it possible for the entry to keep being resynced between nodes forever, defeating the purpose of the reap time itself.
  - On broadcast of the table event, the owner was rewritten with the local node name. This was not correct: the owner should remain the original sender of the message.
  Signed-off-by: Flavio Crisciani <flavio.crisciani@docker.com> (cherry picked from commit 10cd98c) Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
  Commit bcc968c
- Changed the loop to run per network. The previous implementation took a ReadLock to update the reapTime, but now that the residual reap time exists, bulkSync uses the same ReadLock, creating possible issues with concurrent reads and updates of the value. The new logic fetches the list of networks and proceeds with the cleanup network by network, locking the database and releasing it after each network. This ensures fair locking and avoids keeping the database blocked for too long.
  Note: the ticker does not guarantee that the reap logic runs precisely every reapTimePeriod; the documentation states that ticks are skipped if the routine takes too long. If the process itself slows down, the lifetime of deleted entries may increase. This should not be a serious problem, because the residual reap time is now propagated among all the nodes: a slower node will let the deleted entry be re-propagated multiple times, but the state will remain consistent.
  Signed-off-by: Flavio Crisciani <flavio.crisciani@docker.com> (cherry picked from commit 3feb3aa) Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
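The per-network locking pattern described above can be sketched like this. The types and names here are illustrative, not the actual NetworkDB implementation: the point is snapshotting the network IDs, then taking and releasing the lock once per network instead of holding it across the whole sweep.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// Illustrative stand-ins for NetworkDB state.
type entry struct {
	deleting bool          // tombstoned, waiting to be purged
	reapTime time.Duration // residual lifetime
}

type netDB struct {
	sync.Mutex
	networks map[string]map[string]*entry // network ID -> key -> entry
}

const reapPeriod = 5 * time.Second // how often the reap loop runs

func (d *netDB) reapTableEntries() {
	// Snapshot the network IDs under the lock, then release it.
	d.Lock()
	nids := make([]string, 0, len(d.networks))
	for nid := range d.networks {
		nids = append(nids, nid)
	}
	d.Unlock()

	// Lock per network so concurrent readers (e.g. bulkSync) are not
	// blocked for the duration of the whole sweep.
	for _, nid := range nids {
		d.Lock()
		for key, e := range d.networks[nid] {
			if !e.deleting {
				continue
			}
			e.reapTime -= reapPeriod
			if e.reapTime <= 0 {
				delete(d.networks[nid], key) // residual lifetime expired
			}
		}
		d.Unlock()
	}
}

func main() {
	d := &netDB{networks: map[string]map[string]*entry{
		"net1": {"a": {deleting: true, reapTime: 4 * time.Second}},
	}}
	d.reapTableEntries()
	fmt.Println(len(d.networks["net1"])) // the expired entry is gone
}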
  Commit 1cde6d6
- Avoid alignment of reapNetwork and tableEntries
  Make sure that the network is garbage collected after the entries: entries to be deleted require the network to still be present.
  Signed-off-by: Flavio Crisciani <flavio.crisciani@docker.com> (cherry picked from commit fbba555) Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
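The ordering constraint above can be expressed in a tiny sketch (names are illustrative, not the real NetworkDB functions): within a reap pass, tombstoned entries must be swept before their networks, because an entry's deletion depends on its network still existing.

```go
package main

import "fmt"

// reapPass runs the two sweeps in dependency order: tombstoned table
// entries first, then networks, once no entries depend on them.
func reapPass(reapEntries, reapNetworks func()) {
	reapEntries()
	reapNetworks()
}

func main() {
	var order []string
	reapPass(
		func() { order = append(order, "entries") },
		func() { order = append(order, "networks") },
	)
	fmt.Println(order)
}
```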
  Commit 7931758
- Fix comparison against wrong constant
  The comparison was against the wrong constant value. As described in the comment, the check is there to guarantee that events related to stale deleted elements are not propagated.
  Signed-off-by: Flavio Crisciani <flavio.crisciani@docker.com> (cherry picked from commit 6f11d29) Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
  Commit 775f944
- Handle cleanup DNS for attachable container
  Attachable containers are tasks with no associated service. Their cleanup was not done properly, so it was possible to leak their name resolution if that was the last container on the network. cleanupServiceBindings was not able to do the cleanup because there is no service, and the delete notification arrives only after the network is already being cleaned up.
  Signed-off-by: Flavio Crisciani <flavio.crisciani@docker.com> (cherry picked from commit 1c04e19) Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
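The shape of the fix can be sketched as follows. This is a hypothetical simplification, not the libnetwork API: since an attachable container has no service for the service-binding cleanup path to key on, its name bindings must be removed explicitly when it detaches.

```go
package main

import "fmt"

// Illustrative network state: name resolution entries for containers.
type network struct {
	dnsBindings map[string]string // container name -> IP
}

// detach removes a container from the network. Service-backed tasks are
// handled by the service-binding cleanup path; an attachable container has
// no service, so its name resolution must be removed here directly.
func (n *network) detach(name string, hasService bool) {
	if hasService {
		return // cleanupServiceBindings-equivalent path handles this case
	}
	delete(n.dnsBindings, name) // attachable container: clean up explicitly
}

func main() {
	n := &network{dnsBindings: map[string]string{"web": "10.0.0.2"}}
	n.detach("web", false)
	fmt.Println(len(n.dnsBindings)) // no leaked name resolution
}
```

Without the explicit branch, the last attachable container on a network would leave its DNS entry behind, which is the leak the commit describes.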
  Commit 87b75a4
- Commit 42f9e55