Better calculate required space for LVM cache #207
Conversation
Looks good to me.
Hmm, python2's …
Force-pushed from bff32c6 to 40fa2db
Should be fixed now.
Glad to see the recently added tests catching changes. They need to be updated.
Force-pushed from 40fa2db to dba2a8c
Metadata size is useful information when doing calculations with LVs' sizes, for example when growing LVs in a VG. We need to report the actual metadata size (if the cache exists) or the planned one (desired or default for a non-existing cache).

However, instead of complicating many places in the codebase by adding extra metadata space for requested caches, we just subtract the default metadata size from the requested size of the cache, so that the whole cache (data + metadata) fits into the available space or requested size. If both the size and the metadata size are specified, we respect them.

Please note that this adds the metadata size parameter only to the internal LVMCache class. Full support for letting user code set the metadata size would/will require more changes in multiple places. This is, however, the necessary first step.
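A minimal sketch of the idea described above, assuming MiB-valued sizes; the names (cache_sizes, DEFAULT_MD_SIZE) are illustrative stand-ins, not blivet's actual API:

```python
# Illustrative sketch (not blivet's real code): reserve metadata space
# INSIDE the requested cache size instead of adding it on top.

DEFAULT_MD_SIZE = 8  # MiB; stand-in for the default cache metadata size


def cache_sizes(requested_size, md_size=None):
    """Split a requested cache size into (data, metadata) parts.

    If the caller specifies both sizes, respect them as given; otherwise
    subtract the default metadata size from the requested size so that
    the whole cache (data + metadata) fits into the requested space.
    """
    if md_size is not None:
        return requested_size, md_size
    return requested_size - DEFAULT_MD_SIZE, DEFAULT_MD_SIZE


# A 1024 MiB cache request becomes 1016 MiB of data + 8 MiB of metadata,
# so the cache as a whole still consumes exactly the requested 1024 MiB.
print(cache_sizes(1024))       # (1016, 8)
print(cache_sizes(1024, 256))  # (1024, 256) -- both values respected
```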
That's what we do in all other places in blivet.
This is a generic concept for requests that require some extra space that shouldn't be included in their base size, because it is extra/external space taken from the same Chunk. The only use case right now is LVRequests of cached LVs reserving space for their caches in a nicer and cleaner way, but more are expected to come in the future, because many devices require some fixed extra space that is not part of their real/usable size (usually metadata space).
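A rough illustration of the concept, with hypothetical Request/Chunk classes standing in for blivet's real grow-request machinery:

```python
# Hypothetical illustration of per-request "extra space": the reserved
# amount is taken from the same chunk (e.g. the VG) but is not part of
# the request's base (usable) size.

class Request(object):
    def __init__(self, base_size, reserved_space=0):
        self.base_size = base_size            # usable size of the device
        self.reserved_space = reserved_space  # e.g. space for an LV's cache


class Chunk(object):
    def __init__(self, total, requests):
        self.total = total
        self.requests = requests

    @property
    def free(self):
        # Every request consumes its base size plus any reserved extra space.
        used = sum(r.base_size + r.reserved_space for r in self.requests)
        return self.total - used


vg_chunk = Chunk(10240, [Request(4096, reserved_space=1024),  # cached LV
                         Request(2048)])                      # plain LV
print(vg_chunk.free)  # 3072 -- the cache's space is accounted for
```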
Shrinking means that there is an LV that fits into the VG (otherwise it couldn't exist), and thus we can skip the check of the maximum size based on the current size and the VG's free space.
May come in handy in the future.
The 'pmspare' LV is created by LVM behind our back and serves as temporary space for some metadata reorganizations/cleanups, so its size is equal to the size of the biggest metadata LV in the VG. If we ignore it, we lack free space in the VG later when creating LVs.
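For illustration, accounting for pmspare when computing the usable free space might look like this (a hypothetical helper, not blivet's code):

```python
# Hypothetical sketch: the pmspare LV's size equals the size of the
# biggest metadata LV in the VG, so that much space must be set aside
# when computing how much room is really left for new LVs.

def effective_free_space(vg_free, metadata_lv_sizes):
    pmspare_size = max(metadata_lv_sizes) if metadata_lv_sizes else 0
    return vg_free - pmspare_size


# A VG with 2048 MiB free and metadata LVs of 8 and 256 MiB:
print(effective_free_space(2048, [8, 256]))  # 1792
```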
Force-pushed from dba2a8c to c6ad606
Should be fixed now. I also checked that Anaconda's kickstart test with a fixed-size LV and two growable cached LVs respects the requested size of the fixed-size LV and leaves no extra space in the VG.
Please note that the test failure is caused by a workflow issue in Jenkins this time. All tests passed.
```diff
 if not self.exists and \
    not isinstance(self, LVMThinLogicalVolumeDevice) and \
-   size > self.vg.freeSpace + self.vgSpaceUsed:
+   size > self.size and size > self.vg.freeSpace + self.vgSpaceUsed:
```
This is for the case when you are reducing the size of a planned/non-existent LV at a point where the VG's model is messed up to the point of thinking it has too little free space even for a reduced version of an LV it already contains?
The problem is that when setting an LV's size to a smaller value, it doesn't make sense to compare the requested size with self.vgSpaceUsed + self.vg.freeSpace, because self.vgSpaceUsed uses the original size. And this check is getting triggered when we try to set the LV's size to a smaller value after finding out that the original size wouldn't fit into an existing VG (when creating the LV).
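To make that concrete, here is a numeric illustration of the old vs. new check from the diff above (all values invented):

```python
# Invented numbers illustrating the check above: shrinking a planned
# (non-existent) LV whose original size no longer fits the VG model.

vg_free_space = 1024  # MiB free in the VG's current model
vg_space_used = 4096  # what the LV's ORIGINAL planned size occupies
current_size = 8192   # the original (too big) requested size
new_size = 6144       # the smaller size we now want to set

# Old check: rejects any size above freeSpace + vgSpaceUsed, so even
# this shrink is rejected (6144 > 1024 + 4096).
old_rejects = new_size > vg_free_space + vg_space_used

# New check: a shrink (new_size <= current_size) is always allowed;
# only a grow beyond what fits is rejected.
new_rejects = (new_size > current_size
               and new_size > vg_free_space + vg_space_used)

print(old_rejects, new_rejects)  # True False
```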
Better calculate required space for LVM cache
This pull request has three purposes (of descending importance):

1. It fixes the space calculation for cached LVs. Right now, a kickstart layout with a fixed-size swap LV and growable cached LVs results in a successful installation, but the swap LV ends up being 8 MiB smaller. It's not the end of the world, but if the metadata size for the cache was 256 MiB, it would be 256 MiB smaller, which is quite a big difference. And even with the 8 MiB it's just wrong, because the swap LV is the only one that has an exact requested size, whereas the other LVs are requested to grow and fill in the remaining space.
2. It brings us closer to the target of issue Cleanup/update LVM size calculations and expectations #31, where similar fixes/improvements/cleanup should happen for other types of LVs too.
3. It brings us closer to supporting LVM cache metadata size specification by users/user code.