Fix handling of quota on volumes #975
Conversation
Originally containers/storage only did quota on the rootfs; now we want to support it on volumes as well.
Is that a forthcoming PR? Right now I can only see a single overlay caller. Assuming so…
(I know nothing about project quotas, and I haven’t yet taken the time to learn; I’m blindly assuming that the current implementation is correct and does what it says.) As an absolute newbie to this, the current “take an allocated value and just keep incrementing” approach already feels like a hack^Wcompromise, and layering it under another unreliable compromise feels unnecessarily unreliable, especially for a security mechanism.
As an initial absolutely uneducated guess, why not make the project allocator a per-c/storage.Store state (it might even keep its current initialization mechanism of starting with a project ID associated with the driver directory), and give out single project IDs for both the existing overlay-driven projects, and the new (volume?) ones from that allocator, instead of trying to blindly partition space that isn’t ours at all?
drivers/quota/projectquota.go
Outdated
// ids. This function attempts to allocate at least minProjectIDs (10000)
// unique project IDs, based on the inode of the basepath.
func generateUniqueProjectID(path string) (uint32, error) {
	const minProjectIDs = 10000
If this is a global tuning parameter, make it more prominent, at least a top-level constant on the top of the file.
And isn’t it more of a projectIDsAllocatedPerQuotaHome or something like that? At least it’s more of a “max”ProjectIDs than a “min”projectIDs.
AFAICS this doesn’t even try to prevent overlap. Shouldn’t it?
I am not sure of the best way to do this. This was just a stab at it, hoping someone would have a silver bullet.
containers/podman#10973 uses containers/storage/drivers/quota on volumes.
Force-pushed from 1787741 to 4d58b8a
Thanks for the documentation update.
Ideally I’d prefer to just extend the existing code, with the single allocation range, across both the image store and volumes associated with a single c/storage.Store, but if that is not wanted, this implementation looks nice.
Force-pushed from e6db1d2 to 3e7c6dc
drivers/quota/projectquota.go
Outdated
// In that case, the storage directory project id will be used as a "start offset"
// and all containers will be assigned larger project ids (e.g. >= 100000).
// the volumes directory project id will be used as a "start offset"
// and all volumes will be assigned larger project ids (e.g. >= 200000).
If you update the comments above, please use the same wording here too.
This patch fixes the handling of inodes and sizes: currently, if the user sets inodes and sizes together, quota is only set on sizes. A second problem with quota is that we have to have unique project IDs for each directory. Originally containers/storage only did quota on the rootfs; now we want to support it on volumes as well. We need to be able to get unique project IDs for these two different parent directories. The added function attempts to maintain at least 10,000 unique IDs based on the inode of the parent directory. I know that this is not perfect and we have a potential for overlap. If you have a better algorithm, I would love to use it. Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
@mtrmac @TomSweeneyRedHat @nalind @saschagrunert @giuseppe @vrothberg @haircommander @mrunalp PTAL Need to get this in, so that CRI-O can use it.
Code LGTM, given the design; I’m not 100% happy about potentially taking over all of the uint32 project ID space but if it is opt-in like containers/podman#10973, it’s acceptable (and good enough for RHCOS).
LGTM