
Better calculate required space for LVM cache #207

Merged
merged 6 commits on Aug 31, 2015

Conversation

vpodzime
Contributor

This pull request has three purposes (in descending order of importance):

  1. It fixes a hidden issue discovered by anaconda's kickstart tests, where the following kickstart snippet
logvol swap --name=swap --vgname=fedora --size=500 --fstype=swap
logvol / --name=root --vgname=fedora --size=4000 --grow --fstype=ext4 --cachepvs=pv.2 --cachesize=1000 --cachemode=writethrough
logvol /home --name=home --vgname=fedora --size=1000 --grow --fstype=ext4 --cachepvs=pv.2 --cachesize=1000 --cachemode=writeback

results in a successful installation, but the swap LV ends up 8 MiB smaller than requested. That's not the end of the world, but if the cache metadata size were 256 MiB, the LV would be 256 MiB smaller, which is quite a big difference. And even at 8 MiB it's simply wrong, because the swap LV is the only one with an exact requested size, whereas the other LVs are requested to grow and fill the remaining space.

  2. It brings us closer to the target of issue Cleanup/update LVM size calculations and expectations #31, where similar fixes/improvements/cleanup should happen for other types of LVs too.

  3. It brings us closer to supporting LVM cache metadata size specification by users/user code.
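To make the first point concrete, here is a simplified, single-cache arithmetic sketch of the bug; all numbers are illustrative MiB values, not taken from the actual test run:

```python
# Simplified, single-cache illustration of the bug described above.
# All numbers are MiB and purely illustrative.

VG_FREE = 6508      # space the VG has available for LVs
CACHE_MD = 8        # cache-pool metadata that LVM allocates on its own
SWAP_REQ = 500      # fixed-size request

# Buggy calculation: the growable LVs are given everything except swap,
# with no space reserved for the cache metadata.
grow_total = VG_FREE - SWAP_REQ                 # 6008

# LVM still carves the metadata out of the VG, so the fixed-size LV
# (created from whatever remains) comes up short:
swap_actual = VG_FREE - grow_total - CACHE_MD
print(swap_actual)   # -> 492, i.e. 8 MiB less than requested

# Fixed calculation: reserve the metadata before distributing space.
grow_total_fixed = VG_FREE - SWAP_REQ - CACHE_MD
swap_fixed = VG_FREE - grow_total_fixed - CACHE_MD
print(swap_fixed)    # -> 500
```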

@jkonecny12
Contributor

Looks good to me.
I think this should go to f23-branch too.

@vpodzime
Contributor Author

Hmm, python2's max() doesn't have the same kwargs as python3's max().
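For context: Python 3's max() accepts a default= keyword that is returned for an empty iterable, while Python 2's max() only knows key=, so passing default= there raises a TypeError. A version-agnostic sketch of the same behavior (the helper name is made up for illustration):

```python
# Python 3 only:  max([], default=0)  -> 0
# Python 2's max() accepts only the `key` keyword, so the same call
# raises TypeError.  A 2/3-compatible equivalent:

def max_with_default(iterable, default):
    """Return max(iterable), or `default` if the iterable is empty."""
    items = list(iterable)
    if not items:
        return default
    return max(items)

print(max_with_default([], 0))         # -> 0
print(max_with_default([3, 1, 2], 0))  # -> 3
```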

@vpodzime
Contributor Author

Should be fixed now.

@vpodzime
Contributor Author

Glad to see the recently added tests catching changes. They need to be updated.

Metadata size is useful information when doing calculations with LVs' sizes,
for example when growing LVs in a VG. We need to report the actual metadata
size (if the cache exists) or the planned one (desired or default for a
non-existing cache). However, instead of complicating many places in the
codebase by adding extra metadata space for requested caches, we just subtract
the default metadata size from the requested size of the cache so that the
whole cache (data + metadata) fits into the requested space. If both the size
and the metadata size are specified, we respect both.
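The subtraction described in the commit message above can be sketched as follows; the function and constant names are illustrative, not blivet's actual API, and sizes are plain MiB integers for simplicity:

```python
DEFAULT_MD_SIZE = 8  # MiB; illustrative default cache-pool metadata size

def cache_data_size(requested_size, md_size=None):
    """Split a requested cache size into (data, metadata) parts.

    If the caller gives only a total size, carve the default metadata
    size out of it so that data + metadata fits into the requested
    space.  If the caller gives both values, respect both as-is.
    """
    if md_size is None:
        return requested_size - DEFAULT_MD_SIZE, DEFAULT_MD_SIZE
    return requested_size, md_size

# A 1000 MiB cache request becomes 992 MiB of data + 8 MiB of metadata,
# so the whole cache still occupies exactly the requested 1000 MiB.
print(cache_data_size(1000))      # -> (992, 8)
print(cache_data_size(1000, 16))  # -> (1000, 16)
```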

Please note that this adds the metadata size parameter only to the internal
LVMCache class. Full support for letting user code set the metadata size
would/will require more changes in multiple places. This is, however,
definitely the compulsory first step.
That's what we do in all other places in blivet.
This is a generic concept for requests that require some extra space that
shouldn't be included in their base size because it is extra/external space
taken from the same Chunk. The only usage/case right now is LVRequests of
cached LVs reserving space for their caches in a nicer and cleaner way, but
more are expected to come in the future, because many devices require some
fixed extra space that is not part of their real/usable size (usually some
metadata space).
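The reserved-space concept can be sketched roughly like this (the class and attribute names here are hypothetical, not blivet's actual Chunk/Request API):

```python
class Request(object):
    """A growth request that may reserve extra space (e.g. for a cache)
    taken from the same chunk but not counted in its usable size."""

    def __init__(self, base_size, reserved=0):
        self.base_size = base_size
        self.reserved = reserved  # extra/external space, e.g. an LV cache

    @property
    def required(self):
        # The chunk must account for the base size plus reserved space.
        return self.base_size + self.reserved


def chunk_free_after(chunk_size, requests):
    """Space left in a chunk once all requests, including their
    reserved extra space, are satisfied."""
    return chunk_size - sum(r.required for r in requests)


reqs = [Request(4000, reserved=1000), Request(500)]
print(chunk_free_after(6000, reqs))  # -> 500
```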
Shrinking means that there is an LV that fits into the VG (otherwise it cannot
exist) and thus we can skip the check for maximum size according to the current
size and VG's free space.
May come in handy in the future.
The 'pmspare' LV is created by LVM behind our back and it serves as a temporary
space for some metadata reorganizations/cleanup/... so its size is equal to the
size of the biggest metadata LV in the VG. If we ignore it, we lack free space
in the VG later when creating LVs.
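A minimal sketch of accounting for 'pmspare' when estimating free space; the helper names are hypothetical and sizes are MiB integers:

```python
def pmspare_size(metadata_lv_sizes):
    """LVM's hidden 'pmspare' LV is as big as the largest metadata LV
    in the VG (0 if there are no metadata LVs)."""
    return max(metadata_lv_sizes) if metadata_lv_sizes else 0

def effective_free_space(vg_free, metadata_lv_sizes):
    # If we ignore pmspare we overestimate free space and only fail
    # later, when the LVs are actually being created.
    return vg_free - pmspare_size(metadata_lv_sizes)

print(effective_free_space(10000, [8, 16]))  # -> 9984
```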
@vpodzime
Contributor Author

Should be fixed now. I also checked that anaconda's kickstart test with a fixed-size LV and two growable cached LVs respects the requested size of the fixed-size LV and leaves no extra space in the VG.

@vpodzime
Contributor Author

Please note that the test failure is caused by a workflow issue in Jenkins this time. All tests passed.

  if not self.exists and \
     not isinstance(self, LVMThinLogicalVolumeDevice) and \
-    size > self.vg.freeSpace + self.vgSpaceUsed:
+    size > self.size and size > self.vg.freeSpace + self.vgSpaceUsed:
Contributor

This is for the case when you are reducing the size of a planned/non-existent LV at a point where the VG's model is messed up to the point of thinking it has too little free space even for a reduced version of an LV it already contains?

Contributor Author

The problem is that when setting an LV's size to a smaller value, it doesn't make sense to compare the requested size with self.vgSpaceUsed + self.vg.freeSpace, because self.vgSpaceUsed uses the original size. And it gets triggered when we try to set the LV's size to a smaller value after finding out that the original size wouldn't fit in an existing VG (when creating the LV).

vpodzime added a commit that referenced this pull request Aug 31, 2015
Better calculate required space for LVM cache
@vpodzime vpodzime merged commit 07fa419 into storaged-project:master Aug 31, 2015