docker rm performance drop since 1.8.2 #16281
Comments:
Maybe this is due to bumping up the allocated space from 10GB to 100GB?
Bumping up the allocated space to 100G is causing delays in `mkfs.xfs` on top of loop devices, but I have not heard anything w.r.t. device deletion. Can you set up a fresh instance, pass the option `--storage-opt dm.basesize=10G` to make sure a 10G loop device is created, and try again?
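For context, a minimal sketch of what that suggestion looks like on a Docker 1.8-era daemon (the `docker daemon` subcommand and flags below match that release; note `dm.basesize` only takes effect on a fresh `/var/lib/docker`, which is why a fresh instance is suggested):

```shell
# Start the daemon with a 10G base device instead of the 100G default.
# dm.basesize is only honored when the devicemapper storage is first
# initialized, so /var/lib/docker must not already exist.
docker daemon --storage-driver devicemapper --storage-opt dm.basesize=10G
```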
@rhvgoyal A side-effect of having to wait: if you run these two commands in parallel (one of which does not remove a container), both take
Running only the second command takes
Is this standard behavior, or is there anything that can be done?
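The timing comparison discussed in this thread can be reproduced with a sketch like the following (`busybox` is just an example image; any small image works):

```shell
# Create a throwaway container, then time its removal.
# On loopback devicemapper with a 100G base device, the `docker rm`
# step is where the slowdown reported in this issue shows up.
cid=$(docker create busybox true)
time docker rm "$cid"
```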
Ok, I think it is taking more time to discard the blocks because the device size is 100G. This discard happens only in the case of loop devices; if you move away from loop devices, I think you will not see this performance penalty. Anyway, the 100G default base size is not working well: `mkfs.xfs` takes more time on top of loop devices, discards take more time, and container removal is slow. I think we should revert the patch and keep the base size at either 10G or 20G, and then implement the functionality to specify container size at run time, so that we can have different-sized containers to meet the needs of people for whom 10G is not sufficient.
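As an aside, the discard-on-delete behavior described above can also be switched off via a devicemapper storage option; a sketch, assuming the daemon is restarted against the same storage setup (this trades faster `docker rm` for the loop file no longer shrinking when containers are deleted):

```shell
# Skip blkdiscard when deleting devices. On loopback devicemapper this
# avoids the slow discard pass at container removal, at the cost of the
# backing loop file not returning space to the host filesystem.
docker daemon --storage-driver devicemapper --storage-opt dm.blkdiscard=false
```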
After switching to thin-pool, the time for removal of a container is
Just an idea: maybe it would help to have a separate section in the docs listing all the things to keep in mind for production systems, similar to what MongoDB does with
When you say "switching to thin-pool", I am assuming you mean using a thin-pool which is on real block devices and not on top of loop devices?
Yes, exactly.
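For anyone following along, a sketch of moving off loop devices onto an LVM thin pool on a real block device (`/dev/xvdb` and the volume group name `docker` are assumptions; sizes and chunk size are illustrative, not tuned recommendations):

```shell
# Build an LVM thin pool on a dedicated block device and point the
# daemon at it instead of the default loopback files.
pvcreate /dev/xvdb
vgcreate docker /dev/xvdb
lvcreate --wipesignatures y -n thinpool docker -l 95%VG
lvcreate --wipesignatures y -n thinpoolmeta docker -l 1%VG
lvconvert -y --zero n -c 512K \
  --thinpool docker/thinpool --poolmetadata docker/thinpoolmeta

# Restart the daemon against the thin pool (fresh /var/lib/docker).
docker daemon --storage-driver devicemapper \
  --storage-opt dm.thinpooldev=/dev/mapper/docker-thinpool
```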
Same problem here on CentOS 7.1: device-mapper default storage with an image of about 2GB (Tutum CentOS + some tar files I copied...). The other open shell is frozen because the CPU is at 100%; I see this from the portal.
Hey there, I am having the same problem as @ceecko: deleting is way slower than before and sometimes results in Docker saying that it can't delete the container because the "Device is busy":
Switching `dm.basesize` to 10GB seems to be fixing the issue so far. Maybe it would be worth reverting the default to 10GB instead of 100GB, or even specifying this option at the creation of the container, as requested in this issue: #14678

`docker info` output (truncated):
Containers: 5
Hi,
I noticed that after an upgrade from docker `1.7.1` to `1.8.2` the performance of the `docker rm` command has dropped significantly. I managed to reproduce the behavior on two identical AWS machines (m4.large):

On `v1.7.1` it takes `~1.2s` to remove a container.
On `v1.8.2` it takes `~6.6s` to remove a container.

The logs from `v1.8.2` don't show any errors. Is this an expected behavior of `v1.8.2` due to some changes?