mdraid restore issues #554

Closed
liedekef opened this issue Feb 27, 2015 · 4 comments

Hi, I'm trying the latest GitHub version of rear so I can restore EL6 systems (the released version gave me issues with /dev/md0, /dev/md1 and parted).
I got this before backup/restore:

# cat /proc/mdstat 
Personalities : [raid1] 
md1 : active raid1 vda2[0] vdb2[1]
      31235520 blocks super 1.1 [2/2] [UU]
      bitmap: 1/1 pages [4KB], 65536KB chunk

md0 : active raid1 vda1[2] vdb1[3]
      205056 blocks super 1.0 [2/2] [UU]

with /dev/md0 being mounted as /boot and the rest in lv-partitions:

# mount
/dev/mapper/vg_00-lv_root on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
/dev/md0 on /boot type ext4 (rw)
/dev/mapper/vg_00-lv_home on /home type ext4 (rw)
/dev/mapper/vg_00-lv_opt on /opt type ext4 (rw)
/dev/mapper/vg_00-lv_tmp on /tmp type ext4 (rw)
/dev/mapper/vg_00-lv_usr on /usr type ext4 (rw)
/dev/mapper/vg_00-lv_usropenv on /usr/openv type ext4 (rw)
/dev/mapper/vg_00-lv_var on /var type ext4 (rw)
/dev/mapper/vg_00-lv_varlog on /var/log type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)

Now, after the restore and reboot, I got this (I had to remove an empty line at the top of /etc/fstab and remove the /boot mount, for an unrelated reason):

# cat /proc/mdstat 
Personalities : [raid1] 
md126 : active (auto-read-only) raid1 vda1[0] vdb1[1]
      255936 blocks super 1.0 [2/2] [UU]

md127 : active raid1 vda2[0] vdb2[1]
      31183872 blocks super 1.1 [2/2] [UU]

unused devices: <none>

As you can see, md0/md1 have been renamed to md126/md127 (I'm guessing /etc/mdadm.conf is not in the regenerated initramfs). The blkid output is also weird:

/dev/vda1: UUID="284ef52b-53eb-43cf-850a-3d56ef840f90" TYPE="ext4"
/dev/vda2: UUID="9806afe8-83df-e9a0-7b0b-af725015303f" UUID_SUB="59b55793-937d-f1a5-c621-43e367d90fd0" LABEL="btcab0001ap:1" TYPE="linux_raid_member"
/dev/vdb1: UUID="533101b7-358c-49c0-ee1c-54e794f3b32d" UUID_SUB="83f0c4d4-73c5-bc0b-f5f1-3b141eb81d42" LABEL="btcab0001ap:0" TYPE="linux_raid_member" 
/dev/vdb2: UUID="9806afe8-83df-e9a0-7b0b-af725015303f" UUID_SUB="ec8a00b6-83db-f783-f65b-7e4494c66c6b" LABEL="btcab0001ap:1" TYPE="linux_raid_member" 
/dev/mapper/vg_00-lv_swap: UUID="b7a1746e-fafa-4d54-8452-13ac74436205" TYPE="swap" 
/dev/mapper/vg_00-lv_root: UUID="327e7579-89bc-4774-bd4e-1da5317ea0da" TYPE="ext4" 
/dev/mapper/vg_00-lv_home: UUID="60a9e1f2-871f-4834-be84-1c8da286d322" TYPE="ext4" 
/dev/mapper/vg_00-lv_usr: UUID="dc44b6ac-8070-4fa0-b9bb-761937b172fe" TYPE="ext4" 
/dev/mapper/vg_00-lv_usropenv: UUID="97d45071-9ff7-4bac-b4c1-c69ffb4a46db" TYPE="ext4" 
/dev/mapper/vg_00-lv_tmp: UUID="6f1347f9-fe3e-43a0-a736-631a5e6c58d6" TYPE="ext4" 
/dev/mapper/vg_00-lv_opt: UUID="bba66e3e-3567-4795-b343-df949f8e3cfb" TYPE="ext4" 
/dev/mapper/vg_00-lv_varlog: UUID="6d6fdb24-67f4-41be-9fc3-79e0a21a6b07" TYPE="ext4" 
/dev/md127: UUID="pmVwqW-wfUi-Ufjd-lXL0-0bUx-ouJO-g8crvV" TYPE="LVM2_member" 
/dev/md126: UUID="284ef52b-53eb-43cf-850a-3d56ef840f90" TYPE="ext4" 

As you can see, the UUIDs of vda2 and vdb2 are the same (which is normal, I guess, since they are members of the same md-raid device). But the UUIDs of vda1 and vdb1 differ, and the md126 UUID is the same as vda1's.
That makes me a bit uncomfortable about restoring a system, of course ...
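
For what it's worth, md0 uses 1.0 metadata (see "super 1.0" in /proc/mdstat above), which stores the RAID superblock at the end of each member partition; blkid can therefore still see the ext4 signature at the start of vda1 and report the filesystem UUID instead of the raid-member one, and which signature it picks can differ per member. A quick way to confirm what is actually on the members (a sketch, assuming mdadm is available on the restored system):

# mdadm --examine /dev/vda1
# mdadm --examine /dev/vdb1

mdadm --examine reads the member superblocks directly and prints the array UUID and name each partition belongs to, independent of what blkid guesses.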

gdha self-assigned this Feb 27, 2015

gdha commented Feb 27, 2015

@liedekef Hi Franky - that was a long time ago (from mkcdrec times, I guess)

  • when you say md0/md1 have been renamed to md126/md127, was this on the production system itself? You need to get md stable first, I would think ...
  • rear does save the mdadm.conf in its rescue image (see the check sketched below)
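
To check that mdadm.conf actually ended up in the initramfs of the restored system, one can list the image contents; a minimal sketch, assuming a dracut-built EL6 initramfs (lsinitrd ships with dracut):

# lsinitrd /boot/initramfs-$(uname -r).img | grep mdadm.conf

If it is missing there, regenerating the file and rebuilding the initramfs would normally pin the md0/md1 names again:

# mdadm --detail --scan > /etc/mdadm.conf
# dracut -f /boot/initramfs-$(uname -r).img $(uname -r)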

liedekef commented

@gdha Hi Gratien. Yep, long time no hear :-)
On the prod system I had md0 and md1 before using rear, so that was stable ... And I did see md0 and md1 during the restore, just not after the reboot.
If you need any more info, I can give it to you on Monday.


gdha commented Mar 4, 2015

@liedekef could you paste the /var/lib/rear/layout/disklayout.conf file? And the blkid output of the original system.
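
For readers following along: the raid arrays are described in disklayout.conf as one-line "raid" entries. A hypothetical pair for the layout above might look roughly like this (illustrative values only, not taken from this system; <array-uuid> is a placeholder):

raid /dev/md0 metadata=1.0 level=raid1 raid-devices=2 uuid=<array-uuid> name=btcab0001ap:0 devices=/dev/vda1,/dev/vdb1
raid /dev/md1 metadata=1.1 level=raid1 raid-devices=2 uuid=<array-uuid> name=btcab0001ap:1 devices=/dev/vda2,/dev/vdb2

rear replays these entries during "rear recover" to recreate the arrays before the filesystems are restored.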


liedekef commented Mar 5, 2015

Gratien: I downloaded the latest git version, re-installed the test system, took a full backup, and ran rear on it: this time it seems to be working just fine! More testing to come, but for now this can be closed.

liedekef closed this as completed Mar 5, 2015