support quota set on overlay snapshot #5859
base: main
Conversation
Hi @yylt. Thanks for your PR. I'm waiting for a containerd member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Build succeeded.
See the License for the specific language governing permissions and
limitations under the License.
*/
This file seems copied from Moby. Please clarify the origin of the code and the license in code comments.
snapshots/quota/projectquota.go
Outdated
#define Q_XGETPQUOTA QCMD(Q_XGETQUOTA, PRJQUOTA)
#endif
*/
import "C"
Can we avoid cgo
Done!
Quota handling is a topic we should discuss at a high level first, and possibly a good topic for a future community meeting. However, getting library support for setting project quotas across different backing filesystems makes sense to do now. As an end feature, we need to discuss how it will be integrated with Kubernetes and how we can use it from our own tooling and Go libraries.

It seems backwards to have the CRI layer here enable the quota and then the snapshotter set a static quota. I would think the quota would be set by the client adding a quota, and the snapshotter enabling it via configuration and erroring out if the backing filesystem couldn't support it when enabled. The difficult part is figuring out, from the client's perspective, whether quota is enabled so it knows whether it can avoid expensive ephemeral storage accounting (such as in the kubelet).
pkg/cri/config/config.go
Outdated
@@ -266,6 +266,9 @@ type PluginConfig struct {
	// of being placed under the hardcoded directory /var/run/netns. Changing this setting requires
	// that all containers are deleted.
	NetNSMountsUnderStateDir bool `toml:"netns_mounts_under_state_dir" json:"netnsMountsUnderStateDir"`
	// SupportSetQuota indicates whether to set a quota on the container's read-write snapshot; when the
	// snapshotter does not support setting quotas, it does nothing.
	SupportSetQuota bool `toml:"support_set_quota" json:"supportSetQuota"`
Maybe EnableRootfsQuota or EnableSetQuota sounds better.
In Kubernetes, there is a feature gate and a check method
Sorry for replying so late. That is a good suggestion. The quota setter now focuses on overlay, in pure Go, and the snapshotter changes are small, adding a new option such as:
I tried building containerd to do some testing. Too many quotas are created in my env.
[root@daocloud ~]# xfs_quota -xc "report -a -p -h" | head -n 100
Project quota on /var/lib/kubelet (/dev/sdc)
Blocks
Project ID Used Soft Hard Warn/Grace
---------- ---------------------------------
#0 820K 0 0 00 [------]
volume1048590 0 8E 8E 00 [------]
volume1048605 0 8E 8E 00 [------]
volume1048607 0 8E 8E 00 [------]
volume1048610 0 8E 8E 00 [------]
Project quota on /var/lib/containerd (/dev/sdb)
Blocks
Project ID Used Soft Hard Warn/Grace
---------- ---------------------------------
#0 10.2G 0 0 00 [------]
#2 0 0 0 00 [------]
#3 0 15G 15G 00 [------]
#4 12K 15G 15G 00 [------]
#5 0 15G 15G 00 [------]
#6 0 15G 15G 00 [------]
#7 0 15G 15G 00 [------]
#8 0 15G 15G 00 [------]
#9 0 15G 15G 00 [------]
#10 0 15G 15G 00 [------]
#11 0 15G 15G 00 [------]
#12 0 15G 15G 00 [------]
#13 0 15G 15G 00 [------]
#14 0 15G 15G 00 [------]
#15 0 15G 15G 00 [------]
#16 0 15G 15G 00 [------]
#17 0 15G 15G 00 [------]
#18 0 15G 15G 00 [------]
#19 0 15G 15G 00 [------]
#20 0 15G 15G 00 [------]
#21 0 15G 15G 00 [------]
#22 0 15G 15G 00 [------]
#23 0 15G 15G 00 [------]
#24 4K 15G 15G 00 [------]
#25 0 15G 15G 00 [------]
#26 0 15G 15G 00 [------]
#27 0 15G 15G 00 [------]
#28 0 15G 15G 00 [------]
#29 4K 15G 15G 00 [------]
#30 0 15G 15G 00 [------]
#31 0 15G 15G 00 [------]
#32 0 15G 15G 00 [------]
#33 0 15G 15G 00 [------]
#34 0 15G 15G 00 [------]
#35 4K 15G 15G 00 [------]
#36 4K 15G 15G 00 [------]
#37 0 15G 15G 00 [------]
#38 540K 15G 15G 00 [------]
#39 0 15G 15G 00 [------]
#40 0 15G 15G 00 [------]
#41 0 15G 15G 00 [------]
#42 0 15G 15G 00 [------]
#43 0 10G 10G 00 [------]
#44 0 10G 10G 00 [------]
#45 0 10G 10G 00 [------]
#46 0 10G 10G 00 [------]
#47 0 10G 10G 00 [------]
#48 0 10G 10G 00 [------]
#49 0 10G 10G 00 [------]
#50 0 10G 10G 00 [------]
#51 0 10G 10G 00 [------]
#52 0 10G 10G 00 [------]
#53 0 10G 10G 00 [------]
#54 0 10G 10G 00 [------]
#55 0 10G 10G 00 [------]
#56 0 10G 10G 00 [------]
#57 0 10G 10G 00 [------]
#58 0 10G 10G 00 [------]
#59 0 10G 10G 00 [------]
#60 0 10G 10G 00 [------]
#61 0 10G 10G 00 [------]
#62 0 10G 10G 00 [------]
#63 0 10G 10G 00 [------]
I set the size to 10G at first and 15G later.
[root@daocloud ~]# xfs_quota -xc "report -a -p -h" | wc -l
1956507
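The report above suggests project IDs are being allocated monotonically and never reclaimed when snapshots are removed, so entries accumulate without bound. One way to avoid that (a sketch under my own assumptions, not the PR's code) is to track IDs in use and always hand out the smallest free one, so IDs freed by removed snapshots get reused:

```go
package main

import "fmt"

// nextProjectID returns the smallest project ID >= min that is not in
// use, so that IDs freed when snapshots are removed are reused instead
// of growing without bound. Illustrative helper, not the PR's code.
func nextProjectID(used map[uint32]bool, min uint32) uint32 {
	id := min
	for used[id] {
		id++
	}
	return id
}

func main() {
	// IDs 2, 3 and 5 are in use; 4 is the first reusable gap.
	used := map[uint32]bool{2: true, 3: true, 5: true}
	fmt.Println(nextProjectID(used, 2)) // 4
	fmt.Println(nextProjectID(used, 6)) // 6
}
```

Clearing the project's limits (and its ID on the directory) at snapshot-removal time would then keep the `xfs_quota` report from growing into the millions of lines seen above.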
snapshots/overlay/plugin/plugin.go
Outdated
@@ -57,6 +62,15 @@ func init() {
	oOpts = append(oOpts, overlay.WithUpperdirLabel)
}

ic.Meta.Exports["root"] = root
dup with line 74.
done
snapshots/overlay/overlay.go
Outdated
fsMagic, err := overlayutils.GetFSMagic(root)
if err != nil {
	return nil, err
}
if config.quotaSize > 0 {
Suggested change (move the GetFSMagic call inside the quota check):
- fsMagic, err := overlayutils.GetFSMagic(root)
- if err != nil {
- 	return nil, err
- }
- if config.quotaSize > 0 {
+ if config.quotaSize > 0 {
+ 	fsMagic, err := overlayutils.GetFSMagic(root)
+ 	if err != nil {
+ 		return nil, err
+ 	}
snapshots/overlay/overlay.go
Outdated
} else {
	size = config.quotaSize
}
return quotaCtl.SetAllQuota(size, targets...)
Setting the quota may need some logging here, probably at debug level.
Done
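A minimal sketch of the fallback-and-log behavior discussed above (hypothetical names; the actual PR code lives in snapshots/overlay/overlay.go): pick the per-snapshot size if one was requested, otherwise fall back to the snapshotter-wide default, and log the decision at debug level.

```go
package main

import (
	"fmt"
	"log"
)

// resolveQuotaSize picks the per-snapshot size if set, otherwise falls
// back to the snapshotter-wide default from config, logging the choice.
// Hypothetical names; illustrative of the fallback in this PR.
func resolveQuotaSize(perSnapshot, defaultSize uint64) uint64 {
	size := perSnapshot
	if size == 0 {
		size = defaultSize
	}
	log.Printf("debug: setting project quota, size=%d bytes", size)
	return size
}

func main() {
	fmt.Println(resolveQuotaSize(0, 15<<30))      // no per-snapshot size: default wins
	fmt.Println(resolveQuotaSize(10<<30, 15<<30)) // per-snapshot size wins
}
```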
Yes, that is because the quota was set on the committed layer too, which is not expected.
I got some new errors with the latest code.
And the quota is not set as expected.
(force-pushed from f72d83c to ed16c4b)
I tried the command and used bpftrace to debug the syscall; finally, I used the below to translate:
/retest
@yylt: Cannot trigger testing until a trusted user reviews the PR and leaves an `/ok-to-test` message. In response to this:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Is it necessary to use the new version of the package in this way to use fs magic? This looks quite ugly.
Can this be done in a separate PR?
It doesn't seem appropriate, because the
}, nil
default:
	log.G(context.TODO()).WithError(err).Warnf("filesystem %s does not support set quota", backingFs)
	return nil, fmt.Errorf("filesystem %s does not support set quota", backingFs)
For file systems that do not support the quota mount option (ext4, etc.), should we not return an err here?
if config.enableQuota {
	quotaSet, err = findQuotaSetter(root)
	if err != nil {
		return nil, err
For file systems that do not support the quota mount option (ext4, etc.), should we not return an err here?
I'm not quite sure what you're trying to express, but an error has been returned and the process has ended here.
I mean: do not return err here; just print a warning log and go on, so as to prevent reporting errors on file systems that do not support quota. Like:
root@localhost.localdomain:/home/workspace$ df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 30G 0 30G 0% /dev
tmpfs 30G 0 30G 0% /dev/shm
tmpfs 30G 18M 30G 1% /run
tmpfs 30G 0 30G 0% /sys/fs/cgroup
/dev/mapper/cl-root 70G 9.9G 61G 15% /
/dev/vda1 1014M 257M 758M 26% /boot
/dev/mapper/cl-home 400G 16G 385G 4% /home
tmpfs 5.9G 32K 5.9G 1% /run/user/0
/dev/loop0 976M 6.1M 903M 1% /mnt/myext4
shm 64M 0 64M 0% /mnt/myext4/state/io.containerd.grpc.v1.cri/sandboxes/26944b67479fc70fd12d1e74f675f11b349cd1490189948362c42db885ffb40e/shm
overlay 976M 6.1M 903M 1% /mnt/myext4/state/io.containerd.runtime.v2.task/k8s.io/26944b67479fc70fd12d1e74f675f11b349cd1490189948362c42db885ffb40e/rootfs
overlay 976M 6.1M 903M 1% /mnt/myext4/state/io.containerd.runtime.v2.task/k8s.io/52a1271cc6869d843aa87264986711fc8b69fc61680d57dcc065499f9546bd57/rootfs
This doesn't seem to be the recommended approach. In principle, if configured quotas are not supported, that should be reported and the process should exit.

Additionally, it's possible to determine whether quota is supported at install time and to plan the containerd root directory in the configuration file accordingly.
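Determining support up front could be as simple as checking the mount options of the containerd root's mount point. The sketch below is my own illustration of that idea, not the PR's code: it parses /proc/self/mounts-style content and looks for the prjquota (or pquota) option.

```go
package main

import (
	"fmt"
	"strings"
)

// hasPrjQuota reports whether the given /proc/self/mounts content shows
// mountPoint mounted with the prjquota (or pquota) option. Fields per
// line: device, mount point, fstype, comma-separated options, ...
// Illustrative helper, not the PR's code.
func hasPrjQuota(mounts, mountPoint string) bool {
	for _, line := range strings.Split(mounts, "\n") {
		fields := strings.Fields(line)
		if len(fields) < 4 || fields[1] != mountPoint {
			continue
		}
		for _, opt := range strings.Split(fields[3], ",") {
			if opt == "prjquota" || opt == "pquota" {
				return true
			}
		}
	}
	return false
}

func main() {
	sample := "/dev/sdb /var/lib/containerd xfs rw,relatime,prjquota 0 0\n" +
		"/dev/sda1 / ext4 rw,relatime 0 0\n"
	fmt.Println(hasPrjQuota(sample, "/var/lib/containerd")) // true
	fmt.Println(hasPrjQuota(sample, "/"))                   // false
}
```

In a real check, the content would come from reading /proc/self/mounts and the mount point would be resolved from the configured containerd root.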
Signed-off-by: Yang Yang <yang8518296@163.com>
support set quota on overlay snapshot
as discussed in issue #3329
Fixes #3329
enable quota set as follows:
config.toml like this
check containerd root mount info
check that a quota has been set on the container
Signed-off-by: Yang Yang yang8518296@163.com
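For reference, a hedged sketch of what the config.toml mentioned in the commit message might look like. Only `support_set_quota` appears verbatim in this PR's CRI diff; the overlay snapshotter keys (`enable_quota`, `quota_size`) are hypothetical names inferred from the `config.enableQuota`/`config.quotaSize` fields seen in the review hunks.

```toml
# Hypothetical sketch; only support_set_quota is taken from this PR's diff.
[plugins."io.containerd.grpc.v1.cri"]
  support_set_quota = true

[plugins."io.containerd.snapshotter.v1.overlayfs"]
  # enable_quota / quota_size are illustrative key names.
  enable_quota = true
  quota_size = "15G"  # default per-snapshot hard limit
```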