
bugfix: fix update container rootfs disk quota recursively timeout #2922

Merged
merged 1 commit on Jun 24, 2019

Conversation

zjumoon01
Contributor

Updating the disk quota of a running container sets the quota ID on all files
and directories under UpperDir in SetQuotaForDir, which can take a long
while because much of the time is spent in getMountpointFstype,
CheckRegularFile and filepath.Walk.

Call SetQuotaForDir asynchronously when updating the disk quota.

Signed-off-by: Wang Rui <baijia.wr@antfin.com>

Ⅰ. Describe what this PR did

This patch improves the efficiency of updating a container's disk quota. SetQuotaForDir may take a long while if the container has many new files and directories under UpperDir.

So we change the code to call SetQuotaForDir in a goroutine.
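For review convenience, here is a minimal, self-contained sketch of the pattern this PR adopts: run the recursive quota walk in a goroutine so the update call returns without waiting for it. The `setQuotaForDir` stub, the `updateDiskQuota` wrapper and their parameters are simplified stand-ins for illustration, not PouchContainer's actual signatures.

```go
package main

import (
	"log"
	"os"
	"path/filepath"
	"time"
)

// setQuotaForDir is a stand-in for quota.SetQuotaForDir: it walks every file
// and directory under dir, which is the expensive part when UpperDir holds
// many entries. The real implementation would apply the quota ID to each path.
func setQuotaForDir(dir string, quotaID uint32) error {
	return filepath.Walk(dir, func(path string, info os.FileInfo, err error) error {
		return err // real code would set quotaID on each path here
	})
}

// updateDiskQuota shows the "after" behaviour: the walk is kicked off in a
// goroutine and errors are only logged, so the caller is not blocked.
func updateDiskQuota(upperDir string, quotaID uint32) error {
	go func() {
		if err := setQuotaForDir(upperDir, quotaID); err != nil {
			log.Printf("set quota %d for %s failed: %v", quotaID, upperDir, err)
		}
	}()
	return nil
}

func main() {
	_ = updateDiskQuota(os.TempDir(), 16777221)
	time.Sleep(100 * time.Millisecond) // let the background walk run in this demo
}
```

Note that once the call is asynchronous, an error from the recursive walk can no longer be returned to the API caller and has to be logged instead.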

Ⅱ. Does this pull request fix one issue?

NONE

Ⅲ. Why don't you add test cases (unit test/integration test)? (Do you really think no tests are needed?)

Ⅳ. Describe how to verify it

Step 1. Run many containers with disk quota so that /proc/mounts has a lot of lines.
Step 2. Make a running container create many files and directories (see the sketch below for one way to generate such a tree).
Step 3. Before this patch, updating the disk quota of this container takes a long while; after this patch, it returns much faster.
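For step 2, a throwaway generator like the following can populate a directory tree with many entries. It is a hypothetical helper, not part of this PR; point it at a path inside the running container, or at the container's UpperDir on the host.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// Creates 1000 directories with 100 small files each (100k entries total),
// enough to make a recursive filepath.Walk noticeably slow.
func main() {
	root := os.Args[1] // e.g. a path inside the container's writable layer
	for i := 0; i < 1000; i++ {
		dir := filepath.Join(root, fmt.Sprintf("dir-%04d", i))
		if err := os.MkdirAll(dir, 0755); err != nil {
			panic(err)
		}
		for j := 0; j < 100; j++ {
			f := filepath.Join(dir, fmt.Sprintf("file-%04d", j))
			if err := os.WriteFile(f, []byte("x"), 0644); err != nil {
				panic(err)
			}
		}
	}
}
```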

Ⅴ. Special notes for reviews

Updating the disk quota of a running container sets the quota ID on all files
and directories under UpperDir in SetQuotaForDir, which can take a long
while because much of the time is spent in getMountpointFstype,
CheckRegularFile and filepath.Walk.

Call SetQuotaForDir asynchronously when updating the disk quota.

Signed-off-by: Wang Rui <baijia.wr@antfin.com>
@codecov

codecov bot commented Jun 21, 2019

Codecov Report

Merging #2922 into master will decrease coverage by 0.08%.
The diff coverage is 12.5%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master    #2922      +/-   ##
==========================================
- Coverage   68.23%   68.14%   -0.09%     
==========================================
  Files         291      291              
  Lines       18328    18330       +2     
==========================================
- Hits        12506    12491      -15     
- Misses       4368     4374       +6     
- Partials     1454     1465      +11
Flag                 Coverage Δ
#criv1alpha2_test    34.79% <12.5%>  (-0.04%) ⬇️
#integration_test_0  36.08% <12.5%>  (+0.01%) ⬆️
#integration_test_1  35.56% <12.5%>  (+0.06%) ⬆️
#integration_test_2  36.1%  <12.5%>  (+0.06%) ⬆️
#integration_test_3  35.6%  <12.5%>  (-0.04%) ⬇️
#node_e2e_test       34.15% <12.5%>  (-0.2%)  ⬇️
#unittest            27.98% <0%>     (ø)      ⬆️
Impacted Files                   Coverage Δ
storage/quota/quota.go           6.45%  <0%>      (-0.07%) ⬇️
daemon/mgr/container.go          59.93% <0%>      (-0.42%) ⬇️
daemon/mgr/container_storage.go  54.9%  <33.33%>  (ø)      ⬆️
cri/ocicni/netns.go              58.1%  <0%>      (-2.71%) ⬇️
ctrd/container.go                52%    <0%>      (-2.3%)  ⬇️
ctrd/supervisord/daemon.go       49.32% <0%>      (-1.36%) ⬇️
cri/ocicni/cni_manager.go        61.32% <0%>      (-0.95%) ⬇️
cri/v1alpha2/cri.go              63.53% <0%>      (-0.51%) ⬇️
daemon/mgr/container_utils.go    76.76% <0%>      (+0.5%)  ⬆️
... and 3 more

Labels
kind/bug (This is a bug report for the project), size/S