
Does REAR support DRBD on LVM? #368

Closed
dragon299 opened this issue Feb 18, 2014 · 12 comments
@dragon299 (Contributor)

Hello,

I am currently trying to find out whether REAR supports DRBD on top of an LVM volume. I know that LVM on DRBD works perfectly, but I have never tried it the other way around.

Has anyone tried this already?

Thank you.

jhoekx (Contributor) commented Feb 18, 2014

We implemented and tested it in both ways a few years ago. It should work...

@dragon299 (Contributor, Author)

I have now tried it with a very special (maybe stupid) configuration (LVM on DRBD on LVM) and got the following "error":

Disk configuration is identical, proceeding with restore.
No code has been generated to restore device pv:/dev/drbd0 (lvmdev).
Please add code to /var/lib/rear/layout/diskrestore.sh to manually install it or choose abort.
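For reference, the kind of code the message asks for would have to bring the DRBD device up before the physical volume on top of it can be recreated. A rough sketch of what could be appended to /var/lib/rear/layout/diskrestore.sh, assuming a hypothetical resource name "r0" and that the backing LV has already been restored:

```shell
# Hypothetical fragment for /var/lib/rear/layout/diskrestore.sh.
# The resource name "r0" is an assumption, not taken from this layout.
drbdadm create-md r0          # write DRBD metadata on the backing LV
drbdadm up r0                 # attach the backing device and start the resource
drbdadm primary --force r0    # promote without a connected peer (fresh restore)
pvcreate /dev/drbd0           # recreate the LVM physical volume on top of DRBD
```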

gdha added the support label Feb 18, 2014
bbeaver commented Aug 9, 2014

Any updates or solutions for this yet? I've run into the same issue. While testing a rear recovery of a system that includes a DRBD device on an LVM volume, I received this error:

No code has been generated to restore device /dev/drbd0 (drbd). Please add code to /var/lib/rear/layout/diskrestore.sh to manually install it or choose abort.

My local.conf looks like this:

OUTPUT=ISO
BACKUP=NETFS
BACKUP_OPTIONS="nfsvers=3,nolock"
BACKUP_URL="nfs://x.x.x.x/nfs01/"
BACKUP_TYPE=incremental
FULLBACKUPDAY="Mon"

Thanks

gdha (Member) commented Aug 11, 2014

@bbeaver Personally I do not have experience with DRBD. If you could run savelayout in debug mode, that would be a nice start.

bbeaver commented Aug 11, 2014

Please see savelayout info below:

disklayout.conf:

disk /dev/sda 320072933376 msdos
part /dev/sda 524288000 1048576 primary boot /dev/sda1
part /dev/sda 319547244544 525336576 primary lvm /dev/sda2
lvmdev /dev/vg_centos01 /dev/sda2 j2PzQ3-tTlb-T6bz-1yal-mkS8-9Eif-oyBaWf 624115712
lvmgrp /dev/vg_centos01 4096 76185 312053760
lvmvol /dev/vg_centos01 lv_root 12800 104857600 
lvmvol /dev/vg_centos01 lv_swap 976 7995392 
lvmvol /dev/vg_centos01 lv_home 37500 307200000 
lvmvol /dev/vg_centos01 lvol0 2560 20971520 
lvmvol /dev/vg_centos01 lvol1 256 2097152 
fs /dev/mapper/vg_centos01-lv_root / ext4 uuid=541142d0-5cf6-4c55-af1d-75fc55b9b5f2 label=_CentOS-6.5-x86_ blocksize=4096 reserved_blocks=0% max_mounts=-1 check_interval=0d bytes_per_inode=16384 default_mount_options=user_xattr,acl options=rw
fs /dev/sda1 /boot ext4 uuid=b83eaf94-8194-4af2-9da6-0f3bbc16353c label= blocksize=1024 reserved_blocks=5% max_mounts=-1 check_interval=0d bytes_per_inode=4095 default_mount_options=user_xattr,acl options=rw
fs /dev/mapper/vg_centos01-lv_home /home ext4 uuid=1fe2fb96-7334-4a6e-9432-92784cd4c0a8 label= blocksize=4096 reserved_blocks=5% max_mounts=-1 check_interval=0d bytes_per_inode=16382 default_mount_options=user_xattr,acl options=rw
fs /dev/drbd0 /drbd0 ext4 uuid=cf5c0b71-ea5b-4e87-8747-da2d65f38cd2 label= blocksize=4096 reserved_blocks=4% max_mounts=24 check_interval=180d bytes_per_inode=16383 options=rw
swap /dev/mapper/vg_centos01-lv_swap uuid=ef75bca1-63f8-438c-90d9-0696266b2a16 label=
drbd /dev/drbd0 clusterdb /dev/vg_centos01/lvol0
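The last line is the interesting one for this issue: it records the DRBD device, its resource name, and the backing logical volume. A small sketch that pulls that dependency out of the layout file (the field order is inferred from the sample above, not from ReaR's documented format):

```python
# Parse a "drbd" entry from a ReaR disklayout.conf.
# Field order (device, resource, backing disk) is inferred from the
# sample layout above, not from ReaR's documented format.
def parse_drbd_line(line):
    keyword, device, resource, disk = line.split()
    if keyword != "drbd":
        raise ValueError("not a drbd entry: %r" % line)
    return {"device": device, "resource": resource, "disk": disk}

entry = parse_drbd_line("drbd /dev/drbd0 clusterdb /dev/vg_centos01/lvol0")
print(entry)  # /dev/drbd0 is backed by an LV, i.e. DRBD on LVM
```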

disktodo.conf:

todo /dev/sda disk
todo /dev/sda1 part
todo /dev/sda2 part
todo pv:/dev/sda2 lvmdev
todo /dev/vg_centos01 lvmgrp
todo /dev/mapper/vg_centos01-lv_root lvmvol
todo /dev/mapper/vg_centos01-lv_swap lvmvol
todo /dev/mapper/vg_centos01-lv_home lvmvol
todo /dev/mapper/vg_centos01-lvol0 lvmvol
todo /dev/mapper/vg_centos01-lvol1 lvmvol
todo fs:/ fs
todo fs:/boot fs
todo fs:/home fs
todo fs:/drbd0 fs
todo swap:/dev/mapper/vg_centos01-lv_swap swap
todo /dev/drbd0 drbd

diskdeps.conf:

/dev/sda1 /dev/sda
/dev/sda2 /dev/sda
/dev/vg_centos01 pv:/dev/sda2
pv:/dev/sda2 /dev/sda2
/dev/mapper/vg_centos01-lv_root /dev/vg_centos01
/dev/mapper/vg_centos01-lv_swap /dev/vg_centos01
/dev/mapper/vg_centos01-lv_home /dev/vg_centos01
/dev/mapper/vg_centos01-lvol0 /dev/vg_centos01
/dev/mapper/vg_centos01-lvol1 /dev/vg_centos01
fs:/ /dev/mapper/vg_centos01-lv_root
fs:/boot /dev/sda1
fs:/boot fs:/
fs:/home /dev/mapper/vg_centos01-lv_home
fs:/home fs:/
fs:/drbd0 /dev/drbd0
fs:/drbd0 fs:/
swap:/dev/mapper/vg_centos01-lv_swap /dev/mapper/vg_centos01-lv_swap
/dev/drbd0 /dev/vg_centos01/lvol0
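Each line above reads "component dependency": the item on the left can only be restored once the item on the right exists, so the restore order is a topological sort of this graph. A minimal sketch over a subset of the pairs above:

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Each diskdeps.conf line is "<component> <dependency>": the left item
# can only be restored after the right item exists.
deps_text = """\
/dev/sda1 /dev/sda
/dev/sda2 /dev/sda
/dev/vg_centos01 pv:/dev/sda2
pv:/dev/sda2 /dev/sda2
/dev/drbd0 /dev/vg_centos01/lvol0
"""

graph = {}
for line in deps_text.splitlines():
    child, parent = line.split()
    graph.setdefault(child, set()).add(parent)

# static_order() yields dependencies before their dependents:
# the disk before its partitions, the PV before the volume group, etc.
order = list(TopologicalSorter(graph).static_order())
print(order)
```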

Thanks

bbeaver commented Aug 16, 2014

After performing additional tests, I believe REAR handles full NETFS tar restores of DRBD nodes properly by simply skipping over the error mentioned above. I have been able to run full rear restore tests on each of my two DRBD cluster nodes successfully. After confirming I had a full rear backup for each of the DRBD nodes, I manually destroyed the primary node using the dd utility, then performed a rear recovery and chose "Continue" when the "No code has been generated to restore device /dev/drbd0 (drbd). Please add code to /var/lib/rear/layout/diskrestore.sh to manually install it or choose abort" error appeared. I did not add any code to /var/lib/rear/layout/diskrestore.sh. REAR restored the node perfectly, and DRBD automatically re-synced on its own from the remaining online node.

gdha (Member) commented Aug 18, 2014

By any chance, do you have the script /var/lib/rear/layout/diskrestore.sh at hand so we could review it?

bbeaver commented Aug 18, 2014

I do not have the diskrestore.sh file. Apparently it is not created by default, and I did not create one.

Thanks

gdha (Member) commented Jun 3, 2015

@bbeaver the diskrestore.sh script is auto-generated during the rear recover session; that is why you do not have it by default. You should copy it over to another system at the moment rear stops working.
Furthermore, running rear -vD recover will generate a full debug log, which is useful for seeing where and why it fails.
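For reference, the debug invocation and the usual location of the resulting log (the exact path may differ between ReaR versions):

```shell
rear -vD recover                          # verbose run with full debug output
less /var/log/rear/rear-$(hostname).log   # inspect the debug log afterwards
```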

@dragon299 (Contributor, Author)

A few days ago I was able to do a restore with DRBD on LVM using REAR 1.17.2 on Red Hat 7.2 without any issues. Maybe the problem is already fixed, or it does not exist on all platforms.

gdha (Member) commented Sep 7, 2016

@dragon299 That is good news. You can always preview the diskrestore.sh script - see http://www.it3.be/2016/06/08/rear-diskrestore/

jsmeix (Member) commented Sep 21, 2016

According to #368 (comment), I think we can close this as "fixed" - at least for now - it can be reopened if needed.

jsmeix closed this as completed Sep 21, 2016