Fix reapTime logic in NetworkDB #1944

Merged
3 commits merged on Sep 22, 2017

Commits on Sep 21, 2017

  1. Fix reapTime logic in NetworkDB

    - Added a remainingReapTime field to the table event.
      Without it, a node that had no state for the element was
      marking the element for deletion with the maximum reapTime.
      This made it possible for the entry to keep being resynced
      between nodes forever, defeating the purpose of the reap
      time itself (a sketch of the idea follows this commit entry).

    - On re-broadcast of the table event, the node owner was being
      rewritten with the local node name. This was not correct: the
      owner should remain the original sender of the message.
    
    Signed-off-by: Flavio Crisciani <flavio.crisciani@docker.com>
    Flavio Crisciani committed Sep 21, 2017 · commit 10cd98c
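The sketch below illustrates, in simplified Go, the behaviour this first commit describes: a receiver that has no state for a deleted element inherits the sender's residual reap time instead of resetting it to the maximum. All names here (tableEventSketch, dbSketch, handleDeleteEvent, reapEntryInterval) are illustrative assumptions, not the actual NetworkDB types or values.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// reapEntryInterval stands in for the maximum lifetime of a deleted entry;
// the real NetworkDB constant and field names may differ.
const reapEntryInterval = 30 * time.Minute

// tableEventSketch is a hypothetical, simplified table event that carries
// the sender's remaining reap time for a deleted entry.
type tableEventSketch struct {
	NodeName         string // original owner; not rewritten on re-broadcast
	TableName, Key   string
	ResidualReapTime time.Duration // time left before the entry is purged
}

// entrySketch is a hypothetical local view of a deleted entry.
type entrySketch struct {
	deleting bool
	reapTime time.Duration
}

type dbSketch struct {
	sync.Mutex
	entries map[string]*entrySketch
}

// handleDeleteEvent shows the idea of the fix: a node with no prior state
// for the element inherits the sender's residual reap time instead of
// restarting from the maximum, so resyncs cannot keep the entry alive forever.
func (db *dbSketch) handleDeleteEvent(ev tableEventSketch) {
	db.Lock()
	defer db.Unlock()

	id := ev.TableName + "/" + ev.Key
	e, ok := db.entries[id]
	if !ok {
		e = &entrySketch{}
		db.entries[id] = e
	}
	e.deleting = true
	if ev.ResidualReapTime > 0 && ev.ResidualReapTime < reapEntryInterval {
		e.reapTime = ev.ResidualReapTime // inherit the remaining lifetime
	} else {
		e.reapTime = reapEntryInterval // fall back to the full interval
	}
}

func main() {
	db := &dbSketch{entries: map[string]*entrySketch{}}
	db.handleDeleteEvent(tableEventSketch{
		NodeName:         "node-1",
		TableName:        "endpoint_table",
		Key:              "ep1",
		ResidualReapTime: 5 * time.Minute,
	})
	fmt.Println(db.entries["endpoint_table/ep1"].reapTime) // 5m0s, not the full 30m
}
```

Run as-is, the example prints 5m0s: the locally stored reap time matches the sender's residual value rather than restarting from the 30-minute maximum.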
  2. Changed ReapTable logic

    - Changed the loop to operate per network. The previous
      implementation took a ReadLock to update the reapTime, but
      now that the residualReapTime exists, bulkSync uses the same
      ReadLock, which could cause issues with concurrent reads and
      updates of the value.
      The new logic fetches the list of networks and proceeds with
      the cleanup network by network, locking the database and
      releasing it after each network. This should ensure fair
      locking and avoid keeping the database blocked for too long
      (a sketch follows this commit entry).

      Note: the ticker does not guarantee that the reap logic runs
      precisely every reapTimePeriod; the documentation states that
      ticks are skipped if the routine takes too long. If the
      process itself slows down, the lifetime of deleted entries
      may increase. This should still not be a big problem, because
      the residual reap time is now propagated among all the nodes:
      a slower node will cause the deleted entry to be re-propagated
      multiple times, but the state will remain consistent.
    
    Signed-off-by: Flavio Crisciani <flavio.crisciani@docker.com>
    Flavio Crisciani committed Sep 21, 2017 · commit 3feb3aa
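The per-network reap loop described in this commit can be pictured as follows: snapshot the network IDs under the lock, then lock and release around each network's cleanup so the database is never held for the entire pass. The types and names below (netDB, reapTableEntries, reapPeriod) are assumptions for illustration, not the actual networkdb implementation.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// reapPeriod stands in for how often the reap ticker fires.
const reapPeriod = 5 * time.Second

type netEntry struct {
	key      string
	reapTime time.Duration
}

type netDB struct {
	sync.Mutex
	networks map[string][]*netEntry
}

func (db *netDB) reapTableEntries() {
	// Snapshot the network IDs under the lock, then release it.
	db.Lock()
	ids := make([]string, 0, len(db.networks))
	for id := range db.networks {
		ids = append(ids, id)
	}
	db.Unlock()

	// Clean up one network at a time, holding the lock only for that network.
	for _, id := range ids {
		db.Lock()
		entries := db.networks[id]
		remaining := entries[:0]
		for _, e := range entries {
			e.reapTime -= reapPeriod
			if e.reapTime > 0 {
				remaining = append(remaining, e) // not expired yet, keep it
			}
		}
		db.networks[id] = remaining
		db.Unlock() // release before moving to the next network
	}
}

func main() {
	db := &netDB{networks: map[string][]*netEntry{
		"net1": {
			{key: "a", reapTime: 3 * time.Second},
			{key: "b", reapTime: time.Minute},
		},
	}}
	db.reapTableEntries()
	fmt.Println(len(db.networks["net1"])) // 1: entry "a" expired and was dropped
}
```

The point of the structure is the lock scope: no single pass ever holds the database mutex for longer than one network's worth of work, which is the fairness property the commit message argues for.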

Commits on Sep 22, 2017

  1. Avoid alignment of reapNetwork and tableEntries

    Make sure that the network is garbage collected after its
    entries: deleting an entry requires that its network is still
    present (a sketch follows this commit entry).
    
    Signed-off-by: Flavio Crisciani <flavio.crisciani@docker.com>
    Flavio Crisciani committed Sep 22, 2017 · commit fbba555
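A minimal way to picture this commit's intent: keep the network's reap deadline strictly later than the deadlines of its entries, so the two never align and entries are always purged while their network still exists. The interval values and names below are illustrative assumptions, not the actual libnetwork constants.

```go
package main

import (
	"fmt"
	"time"
)

const (
	reapPeriod        = 5 * time.Second
	reapEntryInterval = 30 * time.Minute
	// Offsetting the network's lifetime keeps the two reap deadlines from
	// aligning, so entries always expire while their network still exists.
	reapNetworkInterval = reapEntryInterval + 3*reapPeriod
)

func main() {
	now := time.Now()
	entryDeadline := now.Add(reapEntryInterval)
	networkDeadline := now.Add(reapNetworkInterval)
	fmt.Println(networkDeadline.After(entryDeadline)) // true: the network outlives its entries
}
```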