
reader-writer file locks #278

Merged: 3 commits merged on Feb 15, 2019

Conversation

@vrothberg (Member) commented on Jan 31, 2019

ready for review :^)

Signed-off-by: Valentin Rothberg <rothberg@redhat.com>

@vrothberg force-pushed the rw-lock branch 3 times, most recently from ad9b760 to 7702c69 on January 31, 2019, 15:04
@vrothberg force-pushed the rw-lock branch 3 times, most recently from 2375bf5 to 8dc3973 on February 1, 2019, 16:25
@vrothberg (Member, Author)

@nalind @rhatdan PTAL. This, in combination with the latest containers/image, allows parallel pushes with a ~20 percent improvement.

However, the progress bars of image/copy are a bit confused now, which I will address next week :)

@rhatdan (Member) commented on Feb 1, 2019

Nice work.

@rhatdan (Member) commented on Feb 1, 2019

@vrothberg One thing you might want to try would be to vendor this into CRI-O in a test build and then fire off the tests against it. That might give us a better feeling that we will not have issues.

@vrothberg (Member, Author)

For the record, I opened the following CI PRs to check for potential regressions:

@vrothberg (Member, Author)

Podman and Buildah are happy; CRI-O seems to be hampered by CI flakes.

@vrothberg changed the title from WIP: reader-writer file locks to reader-writer file locks on Feb 4, 2019
@giuseppe (Member) left a comment

LGTM

@vrothberg force-pushed the rw-lock branch 5 times, most recently from e706e38 to e7d00a0 on February 11, 2019, 09:07
@vrothberg (Member, Author)

Alright. I fixed some issues where using defer x.Unlock() on loop variables caused Unlock() to always be called on the last item of the slice, which led to various problems. I further polished the reader-writer locks and did some more optimizations on the code (mostly avoiding the fcntl(2) syscall from within the same process space), so a single push is now 30 percent faster on my notebook. I will update the CI PRs, but consider this PR done now (unless CI disagrees).

@rhatdan @nalind @giuseppe Can you have a look at the three commits of this PR?

@vrothberg (Member, Author)

Podman and Buildah are green. CRI-O is green except for the e2e tests, which have a regression from upstream (as do all other PRs for CRI-O).

@rhatdan (Member) left a comment

LGTM, but a couple of nit questions.

@rhatdan (Member) commented on Feb 13, 2019

LGTM
@nalind WDYT?

@nalind (Member) commented on Feb 14, 2019

Good catch! We also need the locking loop fix added to store.ContainerSize() and store.LayerSize(). LGTM with a couple of pieces added.

Deferring method calls on loop variables must be avoided by all means, as
the deferred calls will be invoked on the last item of the loop.

The intermediate fix used in this commit is to allocate a new variable
on the heap for each loop iteration.  An example transformation is:

FROM:
for _, x := range x_slice {
	x.Lock()
	defer x.Unlock()
}

TO:
for _, x_itr := range x_slice {
	x := x_itr
	x.Lock()
	defer x.Unlock()
}

Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
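
To make the transformation above concrete, here is a minimal, runnable Go sketch of the per-iteration rebinding pattern; the names (lockAll, locks) are illustrative and not taken from the repository:

package main

import (
	"fmt"
	"sync"
)

// lockAll acquires every lock in locks and releases all of them when the
// surrounding function returns.  Rebinding the loop variable per iteration
// (l := l) ensures that each deferred Unlock() operates on its own element,
// mirroring the transformation described in the commit message above.
func lockAll(locks []sync.Locker) {
	for _, l := range locks {
		l := l // new variable per iteration
		l.Lock()
		defer l.Unlock()
	}
	fmt.Println("all locks held; they are released when lockAll returns")
}

func main() {
	lockAll([]sync.Locker{&sync.Mutex{}, &sync.Mutex{}, &sync.Mutex{}})
}
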
Implement reader-writer locks to allow multiple readers to hold the
lock in parallel.

* The locks are still based on fcntl(2).

* Changing the lock from a reader to a writer and vice versa will block
  on the syscall.

* A writer lock can be held only by one process.  To protect against
  concurrent accesses by goroutines within the same process space, use a
  writer mutex.

* Extend the Locker interface with the `RLock()` method to acquire a
  reader lock.  If the lock is set to be read-only, all calls to
  `Lock()` will be redirected to `RLock()`.  A reader lock is only
  released via fcntl(2) when all goroutines within the same process
  space have unlocked it.  This is done via an internal counter which is
  protected (among other things) by an internal state mutex.

* Panic on violations of the lock protocol, namely when calling
  `Unlock()` on an unlocked lock.  This helps detect violations in the
  code but also protects the storage from corruption.  Doing this has
  revealed some bugs that were fixed in earlier commits.

Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
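
The counting scheme described above can be sketched roughly as follows. The type and field names (lockfile, stateMutex, counter) are assumptions for illustration, and the intra-process writer mutex is omitted, so this is not the actual containers/storage implementation:

package lockfile

import (
	"sync"

	"golang.org/x/sys/unix"
)

type lockfile struct {
	stateMutex sync.Mutex // protects counter
	counter    int64      // readers currently holding the lock in this process
	fd         uintptr    // descriptor of the open lock file
}

func (l *lockfile) fcntl(lockType int16) error {
	lk := unix.Flock_t{
		Type:   lockType,
		Whence: 0, // SEEK_SET
		Start:  0,
		Len:    0, // zero length means "the whole file"
	}
	// F_SETLKW blocks until the requested lock can be granted.
	return unix.FcntlFlock(l.fd, unix.F_SETLKW, &lk)
}

// RLock acquires a shared (reader) lock.  Only the first reader in this
// process issues the fcntl(2) syscall; later readers merely bump the counter.
func (l *lockfile) RLock() {
	l.stateMutex.Lock()
	defer l.stateMutex.Unlock()
	if l.counter == 0 {
		if err := l.fcntl(unix.F_RDLCK); err != nil {
			panic(err)
		}
	}
	l.counter++
}

// Unlock releases the reader lock.  The fcntl(2) unlock only happens once the
// last in-process reader unlocks; unlocking an unlocked lock panics, as the
// commit message describes.
func (l *lockfile) Unlock() {
	l.stateMutex.Lock()
	defer l.stateMutex.Unlock()
	if l.counter == 0 {
		panic("calling Unlock on an unlocked lock")
	}
	l.counter--
	if l.counter == 0 {
		if err := l.fcntl(unix.F_UNLCK); err != nil {
			panic(err)
		}
	}
}
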
@vrothberg (Member, Author)

> Good catch! We also need the locking loop fix added to store.ContainerSize() and store.LayerSize(). LGTM with a couple of pieces added.

Thanks for the review, @nalind! I updated the commits accordingly and repushed.

@giuseppe (Member) left a comment

just a minor thing, LGTM

Enable parallel `GetBlob()` executions in containers/image by using
reader-lock acquisitions in `ImageBigData()` and `Diff()`.

Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
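
On the read paths the change boils down to taking the shared lock instead of the exclusive one. A hedged illustration of the pattern follows; rwLocker and withReadLock are hypothetical names, not the actual ImageBigData()/Diff() code:

package example

// rwLocker is a minimal stand-in for the Locker interface extended in this PR.
type rwLocker interface {
	Lock()
	RLock()
	Unlock()
}

// withReadLock shows the read-path pattern: acquire the shared lock so that
// parallel GetBlob() calls no longer serialize on the store lock.
func withReadLock(l rwLocker, read func() error) error {
	l.RLock()        // shared reader lock; concurrent readers may proceed
	defer l.Unlock() // the fcntl(2) unlock fires once the last in-process reader is done
	return read()
}
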
@rhatdan (Member) commented on Feb 15, 2019

LGTM
