Update 110_include_lvm_code.sh to make sure vgremove is called before recreating the VG #2564
Conversation
@rmetrich Yes, I encountered the same issue and fixed it the same way via a Chef deployment on >20k systems ;-)
I am not a sufficient LVM expert to actually review it, but from
#2514 (comment)
I guess that in practice lvm vgremove --force is OK in general.

But in
#2514 (comment)
I described my concerns with removing LVM stuff in general, from my non-LVM-expert point of view. Therein I described my basic concern as:

But deconstructing LVM stuff ... gets impossible when there are LVs that belong to VGs that contain disks that should not be wiped.

I.e. I wonder if it might happen that, with an unusual LVM setup, lvm vgremove --force could possibly destroy things on a disk that must not be touched by "rear recover". In
#2514 (comment)
see my offhanded example of such an unusual LVM setup where a VG has a PV or LV on a disk that must not be touched by "rear recover", and then I wonder if lvm vgremove --force of that VG could do something bad on that disk that must not be touched by "rear recover"?
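The safety test missing from this discussion could, as a rough sketch, look like the following. This is not ReaR code: the function name vg_is_safe_to_remove and the prefix-match heuristic are assumptions for illustration only.

```shell
#!/bin/bash
# Hypothetical guard (NOT actual ReaR code): only allow an enforced
# vgremove when every PV of the VG sits on a disk that "rear recover"
# is going to recreate anyway.
vg_is_safe_to_remove() {
    # $1: space-separated PV devices of the VG (e.g. "/dev/sdb1 /dev/sdc1")
    # $2: space-separated disks that may be wiped (e.g. "/dev/sdb /dev/sdc")
    local pv disk ok
    for pv in $1 ; do
        ok=0
        for disk in $2 ; do
            # crude heuristic: /dev/sdb1 is considered part of /dev/sdb
            case "$pv" in
                "$disk"*) ok=1 ;;
            esac
        done
        test "$ok" = 1 || return 1
    done
    return 0
}

# Example: the second VG has a PV on /dev/sdd, which must not be touched
vg_is_safe_to_remove "/dev/sdb1 /dev/sdc1" "/dev/sdb /dev/sdc" && echo "safe"
vg_is_safe_to_remove "/dev/sdb1 /dev/sdd1" "/dev/sdb /dev/sdc" || echo "unsafe"
```

A real implementation would have to map PVs to their underlying disks properly (e.g. via lvm pvs and lsblk) rather than by name prefix, which mismatches on partitioned device-mapper or NVMe names.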
@rmetrich
@@ -140,6 +140,7 @@ EOF
 cat >> "$LAYOUT_CODE" <<EOF
 if [ \$create_volume_group -eq 1 ] ; then
     LogPrint "Creating LVM VG '$vg'; Warning: some properties may not be preserved..."
+    lvm vgremove --force $vg >&2 || true
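For context, this hunk appends to the script that "rear recover" later executes. A standalone sketch of that generation step follows; the example values of vg and LAYOUT_CODE and the closing fi are assumptions, only the patched line is from the hunk:

```shell
#!/bin/bash
# Sketch of how the patched hunk lands in the generated layout script.
# LogPrint is a ReaR function; vg and LAYOUT_CODE get example values here.
vg=vgraid
LAYOUT_CODE=$(mktemp)

cat >> "$LAYOUT_CODE" <<EOF
if [ \$create_volume_group -eq 1 ] ; then
    LogPrint "Creating LVM VG '$vg'; Warning: some properties may not be preserved..."
    # remove any leftover VG first so the following vgcreate cannot fail
    lvm vgremove --force $vg >&2 || true
fi
EOF

cat "$LAYOUT_CODE"
```

Note that $vg is expanded while the layout script is generated, whereas \$create_volume_group stays literal and is only evaluated when the generated script runs during recovery.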
In #2514 (comment)
@gdha suggests lvm vgremove $vg --force --yes, i.e. additionally a --yes, and in
https://github.com/rear/rear/blob/b98687ae9c61084b74d996179a0550abc887c005/usr/share/rear/layout/recreate/default/150_wipe_disks.sh
I even had to use a double force plus yes, --force --force -y (but in my case it was pvremove), see the comment in my code.
Hello, for the case of a VG having a PV on a disk that should not be touched, I would consider it a bug/limitation: it's not possible to re-create such a VG in that case at all.
But if I am right with my concerns then I am missing a test that a VG ...
Or I am wrong with my concerns and it is safe to do an enforced 'vgremove' ...
I would be happy if I am wrong with my concerns because that would simplify ...
My concern is not when a VG has a PV on a disk that should not be touched ...
My concern is that an unconditioned enforced ...
My concern is also not that such an enforced ...
My concern is only possible damage / loss of data / whatever problem ...
I approve it "bona fide".
@jsmeix I did test your code, but it did not remove the VG in question. Only when I used a similar line to the one @rmetrich provides did it work perfectly, and I have tested a real DR several times with real servers used by developers.
@gdha Some code is there and it had worked for my tests, but I commented it out because of my concern that enforced removal of VGs and/or LVs might result ...
What needs to be tested is an LVM setup with VGs that spread their PVs ...
Did you test such an LVM setup?
In short - yes ;-)
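The kind of setup under discussion (a VG that spreads its PVs over several disks, while only some of those disks get recreated) could be sketched like this. All device and VG names are placeholders, and DRY_RUN=1 (the default) only prints the commands instead of running them:

```shell
#!/bin/bash
# Placeholder sketch of a test layout, not an actual tested setup.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ] ; then echo "$*" ; else "$@" ; fi ; }

run pvcreate /dev/sdb1 /dev/sdc1
run vgcreate testvg /dev/sdb1 /dev/sdc1   # one VG spanning two disks
run lvcreate -L 1G -n testlv testvg
# Then recreate only /dev/sdb with "rear recover" and check what
# "lvm vgremove --force testvg" does to the PV left on /dev/sdc.
```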
Pull Request Details:
Type: Bug Fix
Impact: Normal
How was this pull request tested? Tested on RHEL7 with an LVM RAID
Brief description of the changes in this pull request:
Make sure we delete the volume group before re-creating it.
The issue happens in Migration mode when ReaR is not trying to use vgcfgrestore.
Reproducer:
Install a system
Add 2 additional disks that will be used to host an LVM VG
Create a RAID volume
Build a rescue image and perform a recovery
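The reproducer steps above could be sketched as commands. Disk names, the VG/LV names, and the sizes are assumptions, and DRY_RUN=1 (the default) only prints what would run:

```shell
#!/bin/bash
# Hedged reproducer sketch (placeholder devices /dev/sdb and /dev/sdc).
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ] ; then echo "$*" ; else "$@" ; fi ; }

# the 2 additional disks host the LVM VG
run pvcreate /dev/sdb /dev/sdc
run vgcreate vgraid /dev/sdb /dev/sdc
# a RAID volume inside the VG, as in the RHEL7 test
run lvcreate --type raid1 -m 1 -L 1G -n lvraid vgraid
run mkfs.xfs /dev/vgraid/lvraid
# then build a rescue image and perform a recovery (in Migration mode,
# so that ReaR does not use vgcfgrestore)
run rear -v mkrescue
```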
Error is shown below:
Log excerpt: