
devmapper: Increase sleep times and unlock while sleeping #4504

Merged
merged 2 commits into moby:master from devicemapper-waits on Mar 11, 2014

Conversation

alexlarsson (Contributor)

We've seen some cases in the wild where waiting for unmount/deactivate
of devmapper devices takes a long time (several seconds). So, we increase
the sleeps to wait up to 10 seconds before we time out. For instance:

#4389

But, in order not to keep other processes blocked, we unlock the global
dm lock while waiting, so that other devices can continue working.

Docker-DCO-1.1-Signed-off-by: Alexander Larsson <alexl@redhat.com> (github: alexlarsson)
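
For reference, a minimal sketch of the waiting pattern described above, with hypothetical names (DeviceSet, waitRemoved, the check callback); this is an illustration of the idea, not the actual driver code:

```go
package devmapper

import (
	"fmt"
	"sync"
	"time"
)

// DeviceSet stands in for the driver's device registry, guarded by a single
// global lock (hypothetical, for illustration only).
type DeviceSet struct {
	sync.Mutex
}

// waitRemoved polls until check() reports the device is gone, for up to
// roughly 10 seconds (1000 iterations of 10ms). The point of the change:
// the global lock is released while sleeping so other devices keep working.
// The caller holds devices.Lock() on entry, and it is held again on return.
func (devices *DeviceSet) waitRemoved(check func() (bool, error)) error {
	for i := 0; i < 1000; i++ {
		removed, err := check()
		if err != nil {
			return err
		}
		if removed {
			return nil
		}
		devices.Unlock()
		time.Sleep(10 * time.Millisecond)
		devices.Lock()
	}
	return fmt.Errorf("timed out waiting for device removal")
}
```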

We currently use a global lock to protect global data (like the
Devices map) as well as device data itself and access to
(non-threadsafe) libdevmapper.

This commit also adds a per-device lock, which will allow per-device
operations to temporarily release the global lock while e.g. waiting.
The per-device lock will make sure that nothing else accesses that
device while we're operating on it.

Docker-DCO-1.1-Signed-off-by: Alexander Larsson <alexl@redhat.com> (github: alexlarsson)
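
A sketch of how the two locks can be combined, again with hypothetical names rather than the exact code from this PR: look the device up under the global lock, take the per-device lock without holding the global lock, then re-take the global lock for the actual work, so a wait loop that drops and re-takes the global lock cannot deadlock with another caller:

```go
package devmapper

import "sync"

type DevInfo struct {
	Name string
	lock sync.Mutex // per-device lock: serializes work on this one device
}

type DeviceSet struct {
	sync.Mutex                     // global lock: Devices map, libdevmapper access
	Devices map[string]*DevInfo
}

// RemoveDevice is a public entry point (hypothetical). With the per-device
// lock held, helpers such as a wait loop may temporarily drop the global
// lock while sleeping without letting anything else touch this device.
func (devices *DeviceSet) RemoveDevice(name string) error {
	devices.Lock()
	info := devices.Devices[name]
	devices.Unlock()
	if info == nil {
		return nil
	}

	// Take the per-device lock before re-taking the global lock, so we
	// never wait for a device lock while holding the global lock.
	info.lock.Lock()
	defer info.lock.Unlock()

	devices.Lock()
	defer devices.Unlock()

	// ... per-device work; a wait helper may Unlock()/Lock() the global
	// lock around its sleeps (sketch elides re-checking the Devices map).
	return nil
}
```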
@creack (Contributor) commented Mar 11, 2014

LGTM

-	time.Sleep(5 * time.Millisecond)
+	devices.Unlock()
+	time.Sleep(10 * time.Millisecond)
+	devices.Lock()
Contributor

Should we unlock here before returning?

alexlarsson (Contributor, Author)

No, the lock is held when we enter. It's taken at all the public entry points. We should leave it held when we return.
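
That is, the invariant being described: public entry points take the global lock, and internal helpers run with it held and must re-acquire it before returning even if they drop it around a sleep. A minimal sketch with hypothetical names:

```go
package devmapper

import (
	"sync"
	"time"
)

type DeviceSet struct {
	sync.Mutex
}

// DeactivateDevice is a public entry point: it takes the global lock and
// the deferred Unlock releases it on return.
func (devices *DeviceSet) DeactivateDevice(name string) error {
	devices.Lock()
	defer devices.Unlock()
	return devices.deactivateDevice(name)
}

// deactivateDevice is an internal helper: the caller already holds the
// lock. It may drop the lock around a sleep, but it re-acquires it before
// returning so the caller's deferred Unlock stays balanced.
func (devices *DeviceSet) deactivateDevice(name string) error {
	devices.Unlock()
	time.Sleep(10 * time.Millisecond)
	devices.Lock()
	return nil
}
```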

@crosbymichael (Contributor)

LGTM

crosbymichael added a commit that referenced this pull request Mar 11, 2014
devmapper: Increase sleep times and unlock while sleeping
crosbymichael merged commit b55a79a into moby:master Mar 11, 2014
alexlarsson deleted the devicemapper-waits branch Mar 13, 2014
unclejack added this to the 0.9.1 milestone Mar 13, 2014
alexlarsson added a commit to alexlarsson/docker that referenced this pull request Mar 18, 2014
As reported in moby#4389, we're currently seeing timeouts in waitClose on some systems. We already bumped the timeout in waitRemove() in moby#4504.

Docker-DCO-1.1-Signed-off-by: Alexander Larsson <alexl@redhat.com> (github: alexlarsson)
unclejack pushed a commit to unclejack/moby that referenced this pull request Mar 18, 2014
shykes pushed a commit to shykes/docker-dev that referenced this pull request Oct 2, 2014