Hi, I'm trying the latest GitHub version of rear, to be able to restore EL6 systems (the released version gave me issues with /dev/md0, /dev/md1, and parted).
I got this before backup/restore:
# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 vda2[0] vdb2[1]
31235520 blocks super 1.1 [2/2] [UU]
bitmap: 1/1 pages [4KB], 65536KB chunk
md0 : active raid1 vda1[2] vdb1[3]
205056 blocks super 1.0 [2/2] [UU]
with /dev/md0 mounted as /boot and the rest on LVM logical volumes:
# mount
/dev/mapper/vg_00-lv_root on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
/dev/md0 on /boot type ext4 (rw)
/dev/mapper/vg_00-lv_home on /home type ext4 (rw)
/dev/mapper/vg_00-lv_opt on /opt type ext4 (rw)
/dev/mapper/vg_00-lv_tmp on /tmp type ext4 (rw)
/dev/mapper/vg_00-lv_usr on /usr type ext4 (rw)
/dev/mapper/vg_00-lv_usropenv on /usr/openv type ext4 (rw)
/dev/mapper/vg_00-lv_var on /var type ext4 (rw)
/dev/mapper/vg_00-lv_varlog on /var/log type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
Now, after the reboot, I got this (for an unrelated reason I also had to remove an empty line at the top of /etc/fstab and remove the /boot mount):
# cat /proc/mdstat
Personalities : [raid1]
md126 : active (auto-read-only) raid1 vda1[0] vdb1[1]
255936 blocks super 1.0 [2/2] [UU]
md127 : active raid1 vda2[0] vdb2[1]
31183872 blocks super 1.1 [2/2] [UU]
unused devices: <none>
As you can see, md0/md1 have been renamed to md126/md127 (I'm guessing /etc/mdadm.conf is not in the regenerated initramfs). The blkid output is also weird: the UUIDs of vda2 and vdb2 are the same (which I guess is normal, since they are members of an md RAID device), but the UUIDs of vda1 and vdb1 differ, and md126 has the same UUID as vda1.
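A likely cause of the renaming is indeed a missing /etc/mdadm.conf inside the regenerated initramfs: without it, the arrays are auto-assembled under the md126/md127 fallback names. As a sketch (not verified on this system), the usual EL6 remedy is to regenerate the file with `mdadm --detail --scan > /etc/mdadm.conf` and then rebuild the initramfs with `dracut --force /boot/initramfs-$(uname -r).img $(uname -r)`, which copies the file into the image. The regenerated file looks roughly like this (the UUIDs below are placeholders, not values from this system):

```
# /etc/mdadm.conf -- sketch; UUIDs are placeholders, not from this system
ARRAY /dev/md0 metadata=1.0 UUID=00000000:00000000:00000000:00000000
ARRAY /dev/md1 metadata=1.1 UUID=11111111:11111111:11111111:11111111
```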
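To make the UUID comparison concrete, here is a small sketch that flags duplicate filesystem UUIDs in blkid-style output. The sample lines and UUIDs are hypothetical stand-ins, not values from the affected system; on a live system you would pipe the output of `blkid` itself:

```shell
# Print any filesystem UUID that appears more than once in blkid-style
# output (sample data below is illustrative, with made-up UUIDs).
printf '%s\n' \
  '/dev/vda1: UUID="aaaa-1111" TYPE="ext4"' \
  '/dev/md126: UUID="aaaa-1111" TYPE="ext4"' \
  '/dev/vdb1: UUID="bbbb-2222" TYPE="ext4"' |
  sed -n 's/.*UUID="\([^"]*\)".*/\1/p' |  # extract each UUID value
  sort | uniq -d                          # keep only UUIDs seen twice
```

(For what it's worth, with 1.0 metadata the RAID superblock sits at the end of the member, so a member partition and the assembled array can expose the same filesystem UUID; that may explain the md126/vda1 match.)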
This makes me a bit uncomfortable about restoring a system, of course ...
@gdha Hi Gratien. Yep, long time no hear :-)
On the production system I had md0 and md1 before using rear, so that was stable ... And I did see md0 and md1 during the restore, just not after the reboot.
If you need any info, I can give it to you on Monday.
Gratien: I downloaded the latest git version, reinstalled the test system, took a full backup, and ran rear on it: this time it seems to be working just fine! More testing to come, but for now this can be closed.