
[17.09] Fix reapTime logic in NetworkDB + handle cleanup DNS for attachable container #2017

Merged: 6 commits on Nov 20, 2017

Commits on Nov 20, 2017

  1. Fix reapTime logic in NetworkDB

    - Added remainingReapTime field in the table event.
      Without it, a node that did not have a state for the element
      was marking the element for deletion with the maximum reapTime.
      This made it possible for the entry to keep being resynced
      between nodes forever, defeating the purpose of the reap time
      itself.
    
    - On broadcast of the table event, the node owner was rewritten
      with the local node name. This was not correct: the owner
      should remain the original one of the message.
    
    Signed-off-by: Flavio Crisciani <flavio.crisciani@docker.com>
    (cherry picked from commit 10cd98c)
    Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
    Flavio Crisciani authored and thaJeztah committed Nov 20, 2017
    Commit: bcc968c
  2. Changed ReapTable logic

    - Changed the loop to be per network. The previous implementation
      took a ReadLock to update the reapTime, but now, with the
      residualReapTime, the bulkSync also uses the same ReadLock,
      creating possible issues with concurrent reads and updates of
      the value.
      The new logic fetches the list of networks and proceeds with the
      cleanup network by network, locking the database and releasing
      it after each network. This ensures fair locking and avoids
      keeping the database blocked for too long.
    
      Note: the ticker does not guarantee that the reap logic runs
      precisely every reapTimePeriod; the documentation states that
      ticks are skipped if the routine takes too long. If the process
      itself slows down, the lifetime of deleted entries may increase.
      This should still not be a big problem because the residual
      reapTime is now propagated among all the nodes: a slower node
      will cause the deleted entry to be repropagated multiple times,
      but the state will remain consistent.
    
    Signed-off-by: Flavio Crisciani <flavio.crisciani@docker.com>
    (cherry picked from commit 3feb3aa)
    Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
    Flavio Crisciani authored and thaJeztah committed Nov 20, 2017
    Commit: 1cde6d6
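The per-network locking pattern described above can be sketched as below. The `networkDB` type, its fields, and `reapNetworks` are illustrative stand-ins, not the real NetworkDB API: the idea shown is snapshotting the network list under a read lock, then taking and releasing the write lock once per network so no single sweep holds the database for its whole duration.

```go
package main

import (
	"fmt"
	"sync"
)

// networkDB is a toy stand-in for NetworkDB; real field names differ.
type networkDB struct {
	sync.RWMutex
	networks map[string][]string // network ID -> entries (illustrative)
}

// reapNetworks mirrors the new locking pattern: first snapshot the list
// of network IDs under a read lock, then lock and unlock once per network
// while cleaning it, keeping each critical section short and fair.
func (db *networkDB) reapNetworks(reapOne func(nid string)) {
	db.RLock()
	nids := make([]string, 0, len(db.networks))
	for nid := range db.networks {
		nids = append(nids, nid)
	}
	db.RUnlock()

	for _, nid := range nids {
		db.Lock() // held only for the duration of one network's cleanup
		reapOne(nid)
		db.Unlock()
	}
}

func main() {
	db := &networkDB{networks: map[string][]string{"n1": nil, "n2": nil}}
	count := 0
	db.reapNetworks(func(string) { count++ })
	fmt.Println(count) // one cleanup pass per network
}
```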
  3. Avoid alignment of reapNetwork and tableEntries

    Make sure that the network is garbage collected after
    the entries. Deleting entries requires that the network
    is still present.
    
    Signed-off-by: Flavio Crisciani <flavio.crisciani@docker.com>
    (cherry picked from commit fbba555)
    Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
    Flavio Crisciani authored and thaJeztah committed Nov 20, 2017
    Commit: 7931758
  4. Fix comparison against wrong constant

    The comparison was against the wrong constant value.
    As described in the comment, the check is there to guarantee
    that events related to stale deleted elements are not propagated.
    
    Signed-off-by: Flavio Crisciani <flavio.crisciani@docker.com>
    (cherry picked from commit 6f11d29)
    Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
    Flavio Crisciani authored and thaJeztah committed Nov 20, 2017
    Commit: 775f944
  5. Handle cleanup DNS for attachable container

    Attachable containers are tasks with no associated service.
    Their cleanup was not done properly, so it was possible to
    leak their name resolution if theirs was the last container
    on the network.
    cleanupServiceBindings was not able to do the cleanup because
    there is no service, and the delete notification arrives only
    after the network is already being cleaned up.
    
    Signed-off-by: Flavio Crisciani <flavio.crisciani@docker.com>
    (cherry picked from commit 1c04e19)
    Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
    Flavio Crisciani authored and thaJeztah committed Nov 20, 2017
    Commit: 87b75a4
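A minimal sketch of the kind of cleanup involved, under stated assumptions: the `controller` type and the record layout below are hypothetical, and only the shape of `cleanupServiceDiscovery` (drop the name-resolution records of one network, or of all networks when no ID is given) follows the description in the commit message.

```go
package main

import "fmt"

// controller is a toy model: per-network service-discovery records,
// mapping network ID -> container name -> IP. Illustrative only.
type controller struct {
	svcRecords map[string]map[string]string
}

// cleanupServiceDiscovery drops the name-resolution records of a network,
// which is what prevents the leak when the last attachable container
// (a task with no service) leaves it. An empty ID wipes all records.
func (c *controller) cleanupServiceDiscovery(nid string) {
	if nid == "" {
		c.svcRecords = make(map[string]map[string]string)
		return
	}
	delete(c.svcRecords, nid)
}

func main() {
	c := &controller{svcRecords: map[string]map[string]string{
		"n1": {"web": "10.0.0.2"},
	}}
	c.cleanupServiceDiscovery("n1")
	fmt.Println(len(c.svcRecords)) // prints 0: records for n1 removed
}
```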
  6. Add test for cleanupServiceDiscovery

    Unit test for cleanupServiceDiscovery,
    a follow-up to PR moby#1985.
    
    Signed-off-by: Flavio Crisciani <flavio.crisciani@docker.com>
    (cherry picked from commit 52a9ab5)
    Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
    Flavio Crisciani authored and thaJeztah committed Nov 20, 2017
    Commit: 42f9e55