Rear recovery does not start due to "No code has been generated to recreate pv:/dev/sda2 (lvmdev)." #1952
Comments
@chrismorgan240
@chrismorgan240 perhaps also add the output of …
This is a df from a very similar machine (only IP / hostname differences):

```
[root@dc1dsydb206 ~]# df
Filesystem                          1K-blocks      Used  Available Use% Mounted on
/dev/mapper/vg_root-lv_root          10225664   3796740    6428924  38% /
devtmpfs                            264025896       156  264025740   1% /dev
tmpfs                               264036996   1302528  262734468   1% /dev/shm
tmpfs                               264036996      2096  264034900   1% /run
tmpfs                               264036996         0  264036996   0% /sys/fs/cgroup
/dev/sda1                              503040    421928      81112  84% /boot
/dev/mapper/vg_ora-lv_adm            19912448  17478176    2434272  88% /syncscp
/dev/mapper/vg_ora-lv_exp            52402944   5370688   47032256  11% /oraexp
/dev/mapper/vg_ora-lv_log            31441664   4771876   26669788  16% /oralogs
/dev/mapper/vg_ora-lv_u01           157235200 127819172   29416028  82% /u01
/dev/mapper/vg_root-lv_usrlocal       1015040     49876     965164   5% /usr/local
/dev/mapper/vg_root-lv_home           1015040    117740     897300  12% /home
/dev/mapper/vg_root-lv_opt            1015040    498632     516408  50% /opt
/dev/mapper/vg_root-lv_tmp           10471424     57972   10413452   1% /tmp
/dev/mapper/vg_root-lv_var            5134336   1023916    4110420  20% /var
/dev/mapper/vg_root-lv_varcrash      62879744     33328   62846416   1% /var/crash
/dev/mapper/vg_ora-lv_oem            26201344   1980064   24221280   8% /opt/oem
/dev/mapper/vg_root-lv_varlog         6259968   1127840    5132128  19% /var/log
/dev/mapper/vg_root-lv_varlogaudit    1560576    884308     676268  57% /var/log/audit
/dev/asm/mysqldev-459                10485760    721720    9764040   7% /mysqldev
/dev/asm/backup_test-459             68157440  52844856   15312584  78% /backup_test
/dev/asm/mysqltest-459               10485760    728324    9757436   7% /mysqltest
/dev/asm/mysqluat-459                10485760    722000    9763760   7% /mysqluat
/dev/asm/syncd01-459                209715200 188641768   21073432  90% /syncd01
/dev/asm/synct01-459                367001600 136423028  230578572  38% /synct01
dc1dsydbcl-tacfs-vip:/synct01       367001600 136422400  230579200  38% /synctnfs01
dc1dsydbcl-dacfs-vip:/syncd01       209715200 188641280   21073920  90% /syncdnfs01
/dev/asm/acfsreptest-79             157286400   4535864  152750536   3% /acfsreptest
/dev/asm/syncu01-289               1022361600   2278556 1020083044   1% /syncu01
```
@chrismorgan240 Your disklayout.conf.txt contains for lvmdev:

```
lvmdev /dev/vg_root /dev/sda2 tqugIk-a3nI-mS8g-mz8P-JJ4X-CMT8-0Buqe3 584843860
fs /dev/sda1 /boot xfs uuid=cd3460e8-d7fb-4a3c-a993-aafe3ae91860 label= options=rw,relatime,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota
logicaldrive /dev/sda 0|A|1 raid=1 drives=1I:1:1,1I:1:2, spares= sectors=32 stripesize=256
```

where an XFS filesystem should be created on '/dev/sda1'. Because you use HP SmartArray and multipath, in particular regarding … In your initial description the root VG is a local disk, but your disklayout.conf.txt looks as if your root VG is perhaps … See …
@chrismorgan240 You should check which HP RAID tool you are using. You need to dig into that script (use debug mode) to find out what could have gone wrong and get a clear view.
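To make the debug run easier to sift through, here is a small sketch (the log path is the assumed ReaR default, and the helper name is hypothetical) that pulls out exactly the "No code has been generated" lines from the most recent debug log:

```shell
# Hypothetical helper: grep a ReaR debug log for every layout component
# that the restore-code generation skipped. The default log location
# /var/log/rear/rear-<hostname>.log is an assumption based on ReaR's
# usual naming; pass an explicit path if yours differs.
extract_layout_errors() {
    local log="${1:-/var/log/rear/rear-$(hostname).log}"
    grep -n 'No code has been generated' "$log"
}

# Usage (after a failed 'rear -D recover'):
#   extract_layout_errors
#   extract_layout_errors /var/log/rear/rear-dc1dsydb106.log
```

Each reported component should then have matching `disk`/`part`/`lvmdev` lines in `/var/lib/rear/layout/disklayout.conf`.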
@gdha define_HPSSACLI at the beginning of layout/prepare/GNU/Linux/170_include_hpraid_code.sh:

```shell
function define_HPSSACLI() {
    # HP Smart Storage Administrator CLI is either hpacucli, hpssacli or ssacli
    if has_binary hpacucli ; then
        HPSSACLI=hpacucli
    elif has_binary hpssacli ; then
        HPSSACLI=hpssacli
    elif has_binary ssacli ; then
        HPSSACLI=ssacli
    fi
}
```

where the …
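As a quick check in the rescue shell, a sketch along these lines (the function name is mine, not ReaR's; it mirrors define_HPSSACLI's search order using plain `command -v` instead of ReaR's `has_binary`) reports which HP CLI is actually present:

```shell
# Hypothetical helper mirroring ReaR's define_HPSSACLI selection order:
# print the first HP Smart Storage CLI found on PATH, or fail if none is.
find_hp_cli() {
    local t
    for t in hpacucli hpssacli ssacli; do
        if command -v "$t" >/dev/null 2>&1; then
            printf '%s\n' "$t"
            return 0
        fi
    done
    return 1    # none installed: HPSSACLI would stay unset in ReaR too
}

# Usage:
#   find_hp_cli || echo "no HP Smart Storage CLI found in the rescue system"
```

If none of the three binaries is present in the rescue system, ReaR leaves HPSSACLI empty, which is worth ruling out before digging deeper.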
I added entries for /dev/sda from another similar server:

```
disk /dev/sda 299966445568 msdos
part /dev/sda 524288000 2097152 primary boot /dev/sda1
part /dev/sda 299440056320 526389248 primary lvm /dev/sda2
```

This allows the restore to proceed:

```
RESCUE dc1dsydb106:/var/lib/rear/layout # rear -D recover
Relax-and-Recover 2.3-git.3007.056bfdb.master.changed / 2018-06-05
Using log file: /var/log/rear/rear-dc1dsydb106.log
Running workflow recover within the ReaR rescue/recovery system
Starting required daemons for NFS: RPC portmapper (portmap or rpcbind) and rpc.statd if available.
Started RPC portmapper 'rpcbind'.
RPC portmapper 'rpcbind' available.
RPC status rpc.statd available.
Started rpc.idmapd.
Using backup archive '/tmp/rear.yWwNxSWLLaYl7K0/outputfs/dc1dsydb106/backup.tar.gz'
Will do driver migration (recreating initramfs/initrd)
Calculating backup archive size
Backup archive size is 3.4G /tmp/rear.yWwNxSWLLaYl7K0/outputfs/dc1dsydb106/backup.tar.gz (compressed)
Comparing disks
Device sda has expected (same) size 299966445568 (will be used for recovery)
Disk configuration looks identical
UserInput -I DISK_LAYOUT_PROCEED_RECOVERY needed in /usr/share/rear/layout/prepare/default/250_compare_disks.sh line 146
Proceed with recovery (yes) otherwise manual disk layout configuration is enforced
(default 'yes' timeout 30 seconds)
yes
UserInput: No choices - result is 'yes'
User confirmed to proceed with recovery
Start system layout restoration.
Creating partitions for disk /dev/sda (msdos)
Creating LVM PV /dev/sda2
Restoring LVM VG 'vg_root'
Sleeping 3 seconds to let udev or systemd-udevd create their devices...
Creating filesystem of type xfs with mount point / on /dev/mapper/vg_root-lv_root.
Mounting filesystem /
Creating filesystem of type xfs with mount point /home on /dev/mapper/vg_root-lv_home.
Mounting filesystem /home
Creating filesystem of type xfs with mount point /opt on /dev/mapper/vg_root-lv_opt.
Mounting filesystem /opt
Creating filesystem of type xfs with mount point /tmp on /dev/mapper/vg_root-lv_tmp.
Mounting filesystem /tmp
Creating filesystem of type xfs with mount point /usr/local on /dev/mapper/vg_root-lv_usrlocal.
Mounting filesystem /usr/local
Creating filesystem of type xfs with mount point /var on /dev/mapper/vg_root-lv_var.
Mounting filesystem /var
Creating filesystem of type xfs with mount point /var/crash on /dev/mapper/vg_root-lv_varcrash.
Mounting filesystem /var/crash
Creating filesystem of type xfs with mount point /var/log on /dev/mapper/vg_root-lv_varlog.
Mounting filesystem /var/log
Creating filesystem of type xfs with mount point /var/log/audit on /dev/mapper/vg_root-lv_varlogaudit.
Mounting filesystem /var/log/audit
Creating filesystem of type xfs with mount point /boot on /dev/sda1.
Mounting filesystem /boot
Creating swap on /dev/mapper/vg_root-lv_swap
Disk layout created.
Restoring from '/tmp/rear.yWwNxSWLLaYl7K0/outputfs/dc1dsydb106/backup.tar.gz' (restore log in /var/lib/rear/restore/recover.backup.tar.gz.9126.restore.log) ...
Restored 9106 MiB [avg. 158059 KiB/sec]
OK
Restored 9243 MiB in 60 seconds [avg. 157757 KiB/sec]
Restoring finished (verify backup restore log messages in /var/lib/rear/restore/recover.backup.tar.gz.9126.restore.log)
Recreating directories (with permissions) from /var/lib/rear/recovery/directories_permissions_owner_group
Running mkinitrd...
Updated initrd with new drivers for kernel 3.10.0-123.el7.x86_64.
Running mkinitrd...
Updated initrd with new drivers for kernel 3.10.0-514.26.2.el7.x86_64.
Skip installing GRUB Legacy boot loader because GRUB 2 is installed (grub-probe or grub2-probe exist).
Installing GRUB2 boot loader
Finished recovering your system. You can explore it under '/mnt/local'.
Exiting rear recover (PID 9126) and its descendant processes
Running exit tasks
You should also rm -Rf /tmp/rear.yWwNxSWLLaYl7K0
```

Exploring /mnt/local shows all LVs / filesystems in vg_root.
/dev/mapper looks OK:

```
RESCUE dc1dsydb106:/mnt/local # ls -l /dev/mapper
total 0
crw------- 1 root root 10, 236 Nov 7 13:30 control
lrwxrwxrwx 1 root root       7 Nov 7 13:30 vg_root-lv_home -> ../dm-1
lrwxrwxrwx 1 root root       7 Nov 7 13:30 vg_root-lv_opt -> ../dm-6
lrwxrwxrwx 1 root root       7 Nov 7 13:30 vg_root-lv_root -> ../dm-0
lrwxrwxrwx 1 root root       7 Nov 7 13:30 vg_root-lv_swap -> ../dm-2
lrwxrwxrwx 1 root root       8 Nov 7 13:30 vg_root-lv_temp -> ../dm-10
lrwxrwxrwx 1 root root       7 Nov 7 13:30 vg_root-lv_tmp -> ../dm-5
lrwxrwxrwx 1 root root       7 Nov 7 13:30 vg_root-lv_usrlocal -> ../dm-3
lrwxrwxrwx 1 root root       7 Nov 7 13:30 vg_root-lv_var -> ../dm-4
lrwxrwxrwx 1 root root       7 Nov 7 13:30 vg_root-lv_varcrash -> ../dm-9
lrwxrwxrwx 1 root root       7 Nov 7 13:30 vg_root-lv_varlog -> ../dm-7
lrwxrwxrwx 1 root root       7 Nov 7 13:30 vg_root-lv_varlogaudit -> ../dm-8
```

After reboot, the server goes to rescue mode.
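For anyone hitting the same "No code has been generated to recreate pv:..." error, the manual fix above (adding the missing `disk`/`part` lines) can be sanity-checked with a small sketch. This is my own helper, not part of ReaR; it assumes the stock disklayout.conf field order (`part … <device>` last field, `lvmdev <vg> <pvdevice> …` third field):

```shell
# Hypothetical sanity check: flag every LVM PV in a disklayout.conf that
# has no matching 'part' line, which is the situation that makes ReaR
# refuse to generate restore code for the PV.
check_pv_parents() {
    local layout="${1:-/var/lib/rear/layout/disklayout.conf}"
    awk '
        $1 == "part"   { parts[$NF] = 1 }   # last field: partition device
        $1 == "lvmdev" { pvs[NR]  = $3 }    # third field: PV device
        END {
            for (i in pvs)
                if (!(pvs[i] in parts))
                    print "missing part entry for PV " pvs[i]
        }
    ' "$layout"
}

# Usage in the rescue shell:
#   check_pv_parents /var/lib/rear/layout/disklayout.conf
```

No output means every PV has a parent partition described, so the layout stage should have everything it needs.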
@chrismorgan240 Do you mean you end up in GRUB rescue mode?
@jsmeix Here is a df after the restore:

```
RESCUE dc1dsydb106:/var/lib/rear/layout # df -h
```

```
RESCUE dc1dsydb106:/mnt/local/dev/vg_root # find /mnt/local/dev/vg_root
VG #PV #LV #SN Attr VSize VFree
```

rear -D recover:

Above are boot messages; this looks like it could be the issue to me. I would appreciate advice.
@chrismorgan240 Try a manual relabel? It seems the SELinux relabel was not performed, for one reason or another.
@chrismorgan240 Have a look at https://access.redhat.com/solutions/24845 to fix your situation.
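If the problem is indeed a missing SELinux relabel, one common way to force it (this is a general RHEL mechanism, not something ReaR does for you; the helper name is mine) is to drop an `/.autorelabel` flag file into the restored root while it is still mounted:

```shell
# Hypothetical helper: mark a restored root tree for a full SELinux
# relabel on its next boot. init/systemd sees /.autorelabel, relabels
# the whole filesystem, removes the flag file, and reboots.
schedule_autorelabel() {
    local root="${1:?usage: schedule_autorelabel <mounted-root>}"
    touch "$root/.autorelabel"
}

# In the ReaR rescue shell the restored system is mounted at /mnt/local:
#   schedule_autorelabel /mnt/local
```

Alternatively, once the system boots at all, `restorecon -Rv /` (or `fixfiles onboot`) achieves the same end.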
Thanks for your help. |
Relax-and-Recover (ReaR) Issue Template
Fill in the following items before submitting a new issue
(quick response is not guaranteed with free support):
ReaR version ("/usr/sbin/rear -V"):
Relax-and-Recover 2.3-git.3007.056bfdb.master.changed / 2018-06-05
OS version ("cat /etc/rear/os.conf" or "lsb_release -a" or "cat /etc/os-release"):
Hardware (PC or PowerNV BareMetal or ARM) or virtual machine (KVM guest or PowerVM LPAR):
PC
System architecture (x86 compatible or PPC64/PPC64LE or what exact ARM device):
x86
Firmware (BIOS or UEFI or Open Firmware) and bootloader (GRUB or ELILO or Petitboot):
BIOS / GRUB
Storage (local disk or SSD) and/or SAN (FC or iSCSI or FCoE) and/or multipath (DM or NVMe):
root VG is local disk
Description of the issue (ideally so that others can reproduce it):
rear recover does not start; it fails with the "No code has been generated to recreate pv:/dev/sda2 (lvmdev)." message.
Selecting option 4 to continue then displays … Selecting 4 again then shows the same message for the vg_root LVs.
disklayout.conf does have entries for the above.
Workaround, if any:
None - we have had a fatal root filesystem failure on this host and are trying to recover it using rear.
Attachments, as applicable ("rear -D mkrescue/mkbackup/recover" debug log files):