
No code has been generated to recreate pv:/dev/mapper/mpathc_part2 (lvmdev) #1796

Closed
bern66 opened this issue May 4, 2018 · 85 comments
Labels: external tool (the issue depends on other software, e.g. third-party backup tools), fixed / solved / done, support / question


bern66 commented May 4, 2018

I am in the process of testing ReaR. To test the restore, I use the same machine where "rear mkrescue" was run. In the output of "rear -D recover" below you can see the following messages:

No code has been generated to recreate pv:/dev/mapper/mpathc_part2 (lvmdev).
    To recreate it manually add code to /var/lib/rear/layout/diskrestore.sh or abort.

UserInput -I ADD_CODE_TO_RECREATE_MISSING_PVDEVMAPPERMPATHCPART2LVMDEV needed in /usr/share/rear/layout/prepare/default/600_show_unprocessed.sh line 33

Manually add code that recreates pv:/dev/mapper/mpathc_part2 (lvmdev)

Do I really have to add code to complete a restore? Or did I miss something while configuring site.conf for my environment?

I really hope I will not have to add code at restore time, because we have hundreds of systems. In a DR situation it would be one more problem on the pile.

RESCUE tstinf01:~ # rear -D recover
Relax-and-Recover 2.3 / 2018-04-20
Using log file: /var/log/rear/rear-tstinf01.log
Running workflow recover within the ReaR rescue/recovery system
Will do driver migration (recreating initramfs/initrd)
IBM Spectrum Protect
Command Line Backup-Archive Client Interface
  Client Version 8, Release 1, Level 2.0
  Client date/time: 05/04/18   11:07:05
(c) Copyright by IBM Corporation and other(s) 1990, 2017. All Rights Reserved.

Node Name: STSTINF01
Session established with server GISPA: Linux/x86_64
  Server Version 8, Release 1, Level 4.000
  Server date/time: 05/04/18   07:07:12  Last access: 05/04/18   07:06:39

Domain Name               : QAASBA
Activated Policy Set Name : QAASBA
Activation date/time      : 03/26/18   10:14:56
Default Mgmt Class Name   : QAASBA
Grace Period Backup Retn. : 30 day(s)
Grace Period Archive Retn.: 365 day(s)


MgmtClass Name                  : HDB
Description                     : Management Class for HANA Database


MgmtClass Name                  : HDBNL
Description                     : Management Class for HANA DB No Limit Retention


MgmtClass Name                  : HLOG
Description                     : Management Class for HANA Logs


MgmtClass Name                  : QAASBA
Description                     : MGMT Class default pour QAAS

TSM restores by default the latest backup data. Alternatively you can specify
a different date and time to enable Point-In-Time Restore. Press ENTER to
use the most recent available backup
Enter date/time (YYYY-MM-DD HH:mm:ss) or press ENTER [30 secs]:
Skipping Point-In-Time Restore, will restore most recent data.

The TSM Server reports the following for this node:
                  #     Last Incr Date          Type    Replication       File Space Name
                --------------------------------------------------------------------------------
                  1     01-05-2018 22:16:40     BTRFS   Current           /
                  2     01-05-2018 22:10:59     BTRFS   Current           /.snapshots
                  3     01-05-2018 22:11:12     BTRFS   Current           /boot/grub2/powerpc-ieee1275
                  4     01-05-2018 22:11:26     XFS     Current           /home
                  5     01-05-2018 22:11:38     BTRFS   Current           /opt
                  6     01-05-2018 22:11:12     BTRFS   Current           /srv
                  7     01-05-2018 22:11:20     BTRFS   Current           /usr/local
                  8     01-05-2018 22:11:14     BTRFS   Current           /var/cache
                  9     01-05-2018 22:11:24     BTRFS   Current           /var/crash
                 10     01-05-2018 22:11:03     BTRFS   Current           /var/lib/libvirt/images
                 11     01-05-2018 22:11:12     BTRFS   Current           /var/lib/machines
                 12     01-05-2018 22:11:03     BTRFS   Current           /var/lib/mailman
                 13     01-05-2018 22:11:12     BTRFS   Current           /var/lib/mariadb
                 14     01-05-2018 22:11:24     BTRFS   Current           /var/lib/mysql
                 15     01-05-2018 22:11:03     BTRFS   Current           /var/lib/named
                 16     01-05-2018 22:11:12     BTRFS   Current           /var/lib/pgsql
                 17     01-05-2018 22:11:12     BTRFS   Current           /var/log
                 18     01-05-2018 22:11:24     BTRFS   Current           /var/opt
                 19     01-05-2018 22:11:26     BTRFS   Current           /var/spool
                 20     01-05-2018 22:11:26     BTRFS   Current           /var/tmp
Please enter the numbers of the filespaces we should restore.
Pay attention to enter the filesystems in the correct order
(like restore / before /var/log)
(default: 1 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20): [30 secs]
The following filesystems will be restored:
/
/boot/grub2/powerpc-ieee1275
/home
/opt
/srv
/usr/local
/var/cache
/var/crash
/var/lib/libvirt/images
/var/lib/machines
/var/lib/mailman
/var/lib/mariadb
/var/lib/mysql
/var/lib/named
/var/lib/pgsql
/var/log
/var/opt
/var/spool
/var/tmp
Is this selection correct ? (Y|n) [30 secs]
Setting up multipathing
Activating multipath
multipath activated
Listing multipath device found
mpathc  (254, 0)
Comparing disks
Device dm-0 has expected (same) size 107374182400 (will be used for recovery)
Disk configuration looks identical
UserInput -I DISK_LAYOUT_PROCEED_RECOVERY needed in /usr/share/rear/layout/prepare/default/250_compare_disks.sh line 146
Proceed with recovery (yes) otherwise manual disk layout configuration is enforced
(default 'yes' timeout 30 seconds)
yes
UserInput: No choices - result is 'yes'
User confirmed to proceed with recovery
No code has been generated to recreate pv:/dev/mapper/mpathc_part2 (lvmdev).
    To recreate it manually add code to /var/lib/rear/layout/diskrestore.sh or abort.
UserInput -I ADD_CODE_TO_RECREATE_MISSING_PVDEVMAPPERMPATHCPART2LVMDEV needed in /usr/share/rear/layout/prepare/default/600_show_unprocessed.sh line 33
Manually add code that recreates pv:/dev/mapper/mpathc_part2 (lvmdev)
1) View /var/lib/rear/layout/diskrestore.sh
2) Edit /var/lib/rear/layout/diskrestore.sh
3) Go to Relax-and-Recover shell
4) Continue 'rear recover'
5) Abort 'rear recover'
(default '4' timeout 300 seconds)
1
UserInput: Valid choice number result 'View /var/lib/rear/layout/diskrestore.sh'
#!/bin/bash

LogPrint "Start system layout restoration."

mkdir -p /mnt/local
if create_component "vgchange" "rear" ; then
    lvm vgchange -a n >/dev/null
    component_created "vgchange" "rear"
fi

set -e
set -x

if create_component "/dev/mapper/mpathc" "multipath" ; then
# Create /dev/mapper/mpathc (multipath)
LogPrint "Creating partitions for disk /dev/mapper/mpathc (msdos)"
my_udevsettle
parted -s /dev/mapper/mpathc mklabel msdos >&2
my_udevsettle
my_udevsettle
parted -s /dev/mapper/mpathc mkpart 'primary' 1048576B 8225279B >&2
my_udevsettle
my_udevsettle
parted -s /dev/mapper/mpathc set 1 boot on >&2
my_udevsettle
my_udevsettle
parted -s /dev/mapper/mpathc set 1 prep on >&2
my_udevsettle
my_udevsettle
parted -s /dev/mapper/mpathc mkpart 'primary' 8225280B 107002667519B >&2
my_udevsettle
my_udevsettle
parted -s /dev/mapper/mpathc set 2 lvm on >&2
my_udevsettle
sleep 1
if ! partprobe -s /dev/mapper/mpathc >&2 ; then
    LogPrint 'retrying partprobe /dev/mapper/mpathc after 10 seconds'
    sleep 10
    if ! partprobe -s /dev/mapper/mpathc >&2 ; then
        LogPrint 'retrying partprobe /dev/mapper/mpathc after 1 minute'
        sleep 60
        if ! partprobe -s /dev/mapper/mpathc >&2 ; then
            LogPrint 'partprobe /dev/mapper/mpathc failed, proceeding bona fide'
        fi
    fi
fi
component_created "/dev/mapper/mpathc" "multipath"
else
    LogPrint "Skipping /dev/mapper/mpathc (multipath) as it has already been created."
fi

if create_component "/dev/mapper/mpathc1" "part" ; then
# Create /dev/mapper/mpathc1 (part)
component_created "/dev/mapper/mpathc1" "part"
else
    LogPrint "Skipping /dev/mapper/mpathc1 (part) as it has already been created."
fi

if create_component "/dev/mapper/mpathc2" "part" ; then
# Create /dev/mapper/mpathc2 (part)
component_created "/dev/mapper/mpathc2" "part"
else
    LogPrint "Skipping /dev/mapper/mpathc2 (part) as it has already been created."
fi


set +x
set +e

LogPrint "Disk layout created."

UserInput -I ADD_CODE_TO_RECREATE_MISSING_PVDEVMAPPERMPATHCPART2LVMDEV needed in /usr/share/rear/layout/prepare/default/600_show_unprocessed.sh line 33
Manually add code that recreates pv:/dev/mapper/mpathc_part2 (lvmdev)
1) View /var/lib/rear/layout/diskrestore.sh
2) Edit /var/lib/rear/layout/diskrestore.sh
3) Go to Relax-and-Recover shell
4) Continue 'rear recover'
5) Abort 'rear recover'
(default '4' timeout 300 seconds)
5
UserInput: Valid choice number result 'Abort 'rear recover''
ERROR: User chose to abort 'rear recover' in /usr/share/rear/layout/prepare/default/600_show_unprocessed.sh
Aborting due to an error, check /var/log/rear/rear-tstinf01.log for details
You should also rm -Rf /tmp/rear.nVsRkyuhN0xgWT6
Terminated

RESCUE tstinf01:~ # rear -V
Relax-and-Recover 2.3 / 2018-04-20

tstinf01:~ # arch
ppc64le

Booting via SMS

tstinf01:~ # lsb_release -a
LSB Version: n/a
Distributor ID: SUSE
Description: SUSE Linux Enterprise Server for SAP Applications 12 SP2
Release: 12.2
Codename: n/a

tstinf01:~ # cat site.conf
OUTPUT=ISO
OUTPUT_URL=nfs://tstinf02/exports/rear/iso
ISO_PREFIX="$HOSTNAME-rear-$( date "+%y%m%d" )"
ISO_VOLID=$HOSTNAME
REAR_INITRD_COMPRESSION=lzma
AUTOEXCLUDE_MULTIPATH=n
BOOT_OVER_SAN=y
BACKUP=TSM
COPY_AS_IS_TSM=( /etc/$HOSTNAME /opt/tivoli/tsm/client/ba/bin/dsmc /opt/tivoli/tsm/client/ba/bin/tsmbench_inclexcl /opt/tivoli/tsm/client/ba/bin/dsm.sys /opt/tivoli/tsm/client/ba/bin/dsm.opt /opt/tivoli/tsm/client/api/bin64/libgpfs.so /opt/tivoli/tsm/client/api/bin64/libdmapi.so /opt/tivoli/tsm/client/ba/bin/EN_US/dsmclientV3.cat /usr/local/ibm/gsk8* )
COPY_AS_IS_EXCLUDE_TSM=( )
PROGS_TSM=(dsmc)
TSM_LD_LIBRARY_PATH="/opt/tivoli/tsm/client/ba/bin:/opt/tivoli/tsm/client/api/bin64:/opt/tivoli/tsm/client/api/bin:/opt/tivoli/tsm/client/api/bin64/cit/bin"
TSM_RESULT_FILE_PATH=/opt/tivoli/tsm/rear
TSM_RESULT_SAVE=n
TSM_ARCHIVE_MGMT_CLASS=qaasba
TSM_RM_ISOFILE=y

Thanks,
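
For anyone landing here with the same prompt: what ReaR could not generate is the LVM recreation step. As a rough hand-written sketch (based on this system's lvmdev/lvmgrp/lvmvol values; this is NOT the code ReaR actually emits), the missing part of diskrestore.sh would look something like:

```shell
# Sketch only, not ReaR-generated code: recreate the PV, VG and LVs that
# 'rear recover' complains about. UUID and extent counts are taken from
# the reporter's disklayout.conf. DRY_RUN=1 (the default here) only
# prints the commands instead of running lvm against real disks.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

recreate_lvm() {
    # lvmdev /dev/system /dev/mapper/mpathc_part2 RWHsoG-... 208973520
    run lvm pvcreate -ff --yes --norestorefile \
        --uuid "RWHsoG-C5a3-78M5-FmaZ-xMvD-dN5N-jSsKXJ" /dev/mapper/mpathc_part2
    # lvmgrp /dev/system 4096 25509 104484864  (4 MiB extents)
    run lvm vgcreate --physicalextentsize 4096k system /dev/mapper/mpathc_part2
    # lvmvol entries: extent counts 5120 (home), 7589 (root), 12800 (swap)
    run lvm lvcreate -l 5120 -n home system
    run lvm lvcreate -l 7589 -n root system
    run lvm lvcreate -l 12800 -n swap system
    run lvm vgchange --available y system
}

recreate_lvm
```

(As the thread goes on to show, the real fix is in ReaR's layout code, so treat this only as the shape of what the prompt asks for.)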


jsmeix commented May 4, 2018

PPC and MULTIPATH (plus TSM and a special way to boot SMS)
looks very much as if only @schabrolles might actually help here...


jsmeix commented May 4, 2018

@bern66
it seems you use SLES12-SP2 with its default btrfs structure
but I do not see the usual config variables in your etc/rear/local.conf
(or etc/rear/site.conf) that are needed for the SLE12 btrfs structure,
cf. the example config files in usr/share/rear/conf/examples/
(there is also one for SLE12 with SAP HANA).

@bern66
Copy link
Author

bern66 commented May 4, 2018

Thanks @jsmeix! I didn't know about the btrfs details for SLES12. I'll add those configuration elements to my environment.


schabrolles commented May 4, 2018

@bern66, can you also show your /var/lib/rear/layout/disklayout.conf?

Just for reference, here is an example of a configuration file that is working for me (Power8 LPAR, SLES12, TSM, PXE boot server instead of ISO):

# Default is to create Relax-and-Recover rescue media as ISO image
# set OUTPUT to change that
# set BACKUP to activate an automated (backup and) restore of your data
# Possible configuration values can be found in /usr/share/rear/conf/default.conf
#
# This file (local.conf) is intended for manual configuration. For configuration
# through packages and other automated means we recommend creating a new
# file named site.conf next to this file and to leave the local.conf as it is.
# Our packages will never ship with a site.conf.

AUTOEXCLUDE_MULTIPATH=n
BOOT_OVER_SAN=y
REAR_INITRD_COMPRESSION=lzma

OUTPUT=PXE
OUTPUT_PREFIX_PXE=rear/$HOSTNAME
PXE_CONFIG_GRUB_STYLE=y
PXE_CONFIG_URL="nfs://{{ PXE_SERVER_IP }}/var/lib/tftpboot/boot/grub2/powerpc-ieee1275"
PXE_CREATE_LINKS=IP
PXE_REMOVE_OLD_LINKS=y
PXE_TFTP_URL="nfs://{{ PXE_SERVER_IP }}/var/lib/tftpboot"
OUTPUT_OPTIONS="nfsvers=4,nolock"

BACKUP=TSM
COPY_AS_IS_TSM=( /etc/adsm/TSM.PWD /opt/tivoli/tsm/client/ba/bin/dsmc /opt/tivoli/tsm/client/ba/bin/tsmbench_inclexcl /opt/tivoli/tsm/client/ba/bin/dsm.sys /opt/tivoli/tsm/client/ba/bin/dsm.opt /opt/tivoli/tsm/client/api/bin64/libgpfs.so /opt/tivoli/tsm/client/api/bin64/libdmapi.so /opt/tivoli/tsm/client/ba/bin/EN_US/dsmclientV3.cat /usr/local/ibm/gsk8* )
TSM_RESULT_SAVE=n

## SLES12
BACKUP_OPTIONS="nfsvers=4,nolock"
REQUIRED_PROGS=( "${REQUIRED_PROGS[@]}" snapper chattr lsattr )
COPY_AS_IS=( "${COPY_AS_IS[@]}" /usr/lib/snapper/installation-helper /etc/snapper/config-templates/default )

for subvol in $(findmnt -n -r -t btrfs | cut -d ' ' -f 1 | grep -v '^/$' | egrep -v 'snapshots|crash') ; do
    BACKUP_PROG_INCLUDE=( "${BACKUP_PROG_INCLUDE[@]}" "$subvol" )
done

POST_RECOVERY_SCRIPT=( 'if snapper --no-dbus -r $TARGET_FS_ROOT get-config | grep -q "^QGROUP.*[0-9]/[0-9]" ; then snapper --no-dbus -r $TARGET_FS_ROOT set-config QGROUP= ; snapper --no-dbus -r $TARGET_FS_ROOT setup-quota && echo snapper setup-quota done || echo snapper setup-quota failed ; else echo snapper setup-quota not used ; fi' )
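
The BACKUP_PROG_INCLUDE loop above can be sanity-checked without a live btrfs system by feeding it canned findmnt-style output; the sample mountpoints below are illustrative only, not from the reporter's machine:

```shell
# Simulate the filtering done by the 'for subvol in $(findmnt ...)' loop:
# take the first (mountpoint) column, drop the root filesystem '/', and
# drop anything matching 'snapshots' or 'crash', exactly as the egrep does.
sample='/ /dev/mapper/system-root
/.snapshots /dev/mapper/system-root
/var/crash /dev/mapper/system-root
/var/log /dev/mapper/system-root
/opt /dev/mapper/system-root'

BACKUP_PROG_INCLUDE=()
for subvol in $(printf '%s\n' "$sample" | cut -d ' ' -f 1 | grep -v '^/$' | egrep -v 'snapshots|crash') ; do
    BACKUP_PROG_INCLUDE=( "${BACKUP_PROG_INCLUDE[@]}" "$subvol" )
done
printf '%s\n' "${BACKUP_PROG_INCLUDE[@]}"
```

Only /var/log and /opt survive the filter here, which matches the intent: snapshots and crash dumps are not backed up.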


bern66 commented May 4, 2018

Of course I can, even more so if it helps you help me. The disklayout.conf below is from a test server. Our production servers will have more LUNs, just in case that could matter somehow. Here it is:

tstinf01:~ # cat /var/lib/rear/layout/disklayout.conf
lvmdev /dev/system /dev/mapper/mpathc_part2 RWHsoG-C5a3-78M5-FmaZ-xMvD-dN5N-jSsKXJ 208973520
lvmgrp /dev/system 4096 25509 104484864
lvmvol /dev/system home 5120 41943040
lvmvol /dev/system root 7589 62169088
lvmvol /dev/system swap 12800 104857600
# Filesystems (only ext2,ext3,ext4,vfat,xfs,reiserfs,btrfs are supported).
# Format: fs <device> <mountpoint> <fstype> [uuid=<uuid>] [label=<label>] [<attributes>]
fs /dev/mapper/system-home /home xfs uuid=71d6e654-92a0-4bc2-b152-e3a5bab13f9f label=/home  options=rw,relatime,attr2,inode64,noquota
fs /dev/mapper/system-root / btrfs uuid=97590a87-5390-44f0-826f-a9425d42e396 label= options=rw,relatime,space_cache,subvolid=5,subvol=/
# Btrfs default subvolume for /dev/mapper/system-root at /
# Format: btrfsdefaultsubvol <device> <mountpoint> <btrfs_subvolume_ID> <btrfs_subvolume_path>
btrfsdefaultsubvol /dev/mapper/system-root / 5 /
# Btrfs snapshot subvolumes for /dev/mapper/system-root at /
# Btrfs snapshot subvolumes are listed here only as documentation.
# There is no recovery of btrfs snapshot subvolumes.
# Format: btrfssnapshotsubvol <device> <mountpoint> <btrfs_subvolume_ID> <btrfs_subvolume_path>
#btrfssnapshotsubvol /dev/mapper/system-root / 669 @/.snapshots/255/snapshot
#btrfssnapshotsubvol /dev/mapper/system-root / 670 @/.snapshots/256/snapshot
#btrfssnapshotsubvol /dev/mapper/system-root / 700 @/.snapshots/279/snapshot
#btrfssnapshotsubvol /dev/mapper/system-root / 701 @/.snapshots/280/snapshot
#btrfssnapshotsubvol /dev/mapper/system-root / 702 @/.snapshots/281/snapshot
#btrfssnapshotsubvol /dev/mapper/system-root / 703 @/.snapshots/282/snapshot
#btrfssnapshotsubvol /dev/mapper/system-root / 704 @/.snapshots/283/snapshot
#btrfssnapshotsubvol /dev/mapper/system-root / 705 @/.snapshots/284/snapshot
# Btrfs normal subvolumes for /dev/mapper/system-root at /
# Format: btrfsnormalsubvol <device> <mountpoint> <btrfs_subvolume_ID> <btrfs_subvolume_path>
# Btrfs subvolumes that belong to snapper are listed here only as documentation.
# Snapper's base subvolume '/@/.snapshots' is deactivated here because during 'rear recover'
# it is created by 'snapper/installation-helper --step 1' (which fails if it already exists).
# Furthermore any normal btrfs subvolume under snapper's base subvolume would be wrong.
# See https://github.com/rear/rear/issues/944#issuecomment-238239926
# and https://github.com/rear/rear/issues/963#issuecomment-240061392
# how to create a btrfs subvolume in compliance with the SLES12 default brtfs structure.
# In short: Normal btrfs subvolumes on SLES12 must be created directly below '/@/'
# e.g. '/@/var/lib/mystuff' (which requires that the btrfs root subvolume is mounted)
# and then the subvolume is mounted at '/var/lib/mystuff' to be accessible from '/'
# plus usually an entry in /etc/fstab to get it mounted automatically when booting.
# Because any '@/.snapshots' subvolume would let 'snapper/installation-helper --step 1' fail
# such subvolumes are deactivated here to not let 'rear recover' fail:
#btrfsnormalsubvol /dev/mapper/system-root / 258 @/.snapshots
btrfsnormalsubvol /dev/mapper/system-root / 257 @
btrfsnormalsubvol /dev/mapper/system-root / 259 @/boot/grub2/powerpc-ieee1275
btrfsnormalsubvol /dev/mapper/system-root / 260 @/opt
btrfsnormalsubvol /dev/mapper/system-root / 261 @/srv
btrfsnormalsubvol /dev/mapper/system-root / 262 @/tmp
btrfsnormalsubvol /dev/mapper/system-root / 263 @/usr/local
btrfsnormalsubvol /dev/mapper/system-root / 264 @/var/cache
btrfsnormalsubvol /dev/mapper/system-root / 265 @/var/crash
btrfsnormalsubvol /dev/mapper/system-root / 266 @/var/lib/libvirt/images
btrfsnormalsubvol /dev/mapper/system-root / 267 @/var/lib/machines
btrfsnormalsubvol /dev/mapper/system-root / 268 @/var/lib/mailman
btrfsnormalsubvol /dev/mapper/system-root / 269 @/var/lib/mariadb
btrfsnormalsubvol /dev/mapper/system-root / 270 @/var/lib/mysql
btrfsnormalsubvol /dev/mapper/system-root / 271 @/var/lib/named
btrfsnormalsubvol /dev/mapper/system-root / 272 @/var/lib/pgsql
btrfsnormalsubvol /dev/mapper/system-root / 273 @/var/log
btrfsnormalsubvol /dev/mapper/system-root / 274 @/var/opt
btrfsnormalsubvol /dev/mapper/system-root / 275 @/var/spool
btrfsnormalsubvol /dev/mapper/system-root / 276 @/var/tmp
# All mounted btrfs subvolumes (including mounted btrfs default subvolumes and mounted btrfs snapshot subvolumes).
# Determined by the findmnt command that shows the mounted btrfs_subvolume_path.
# Format: btrfsmountedsubvol <device> <subvolume_mountpoint> <mount_options> <btrfs_subvolume_path>
btrfsmountedsubvol /dev/mapper/system-root / rw,relatime,space_cache,subvolid=5,subvol=/ /
btrfsmountedsubvol /dev/mapper/system-root /var/log rw,relatime,space_cache,subvolid=273,subvol=/@/var/log @/var/log
btrfsmountedsubvol /dev/mapper/system-root /var/lib/mysql rw,relatime,space_cache,subvolid=270,subvol=/@/var/lib/mysql @/var/lib/mysql
btrfsmountedsubvol /dev/mapper/system-root /var/lib/pgsql rw,relatime,space_cache,subvolid=272,subvol=/@/var/lib/pgsql @/var/lib/pgsql
btrfsmountedsubvol /dev/mapper/system-root /var/lib/mariadb rw,relatime,space_cache,subvolid=269,subvol=/@/var/lib/mariadb @/var/lib/mariadb
btrfsmountedsubvol /dev/mapper/system-root /var/lib/libvirt/images rw,relatime,space_cache,subvolid=266,subvol=/@/var/lib/libvirt/images @/var/lib/libvirt/images
btrfsmountedsubvol /dev/mapper/system-root /var/lib/named rw,relatime,space_cache,subvolid=271,subvol=/@/var/lib/named @/var/lib/named
btrfsmountedsubvol /dev/mapper/system-root /var/crash rw,relatime,space_cache,subvolid=265,subvol=/@/var/crash @/var/crash
btrfsmountedsubvol /dev/mapper/system-root /var/lib/machines rw,relatime,space_cache,subvolid=267,subvol=/@/var/lib/machines @/var/lib/machines
btrfsmountedsubvol /dev/mapper/system-root /.snapshots rw,relatime,space_cache,subvolid=258,subvol=/@/.snapshots @/.snapshots
btrfsmountedsubvol /dev/mapper/system-root /opt rw,relatime,space_cache,subvolid=260,subvol=/@/opt @/opt
btrfsmountedsubvol /dev/mapper/system-root /usr/local rw,relatime,space_cache,subvolid=263,subvol=/@/usr/local @/usr/local
btrfsmountedsubvol /dev/mapper/system-root /tmp rw,relatime,space_cache,subvolid=262,subvol=/@/tmp @/tmp
btrfsmountedsubvol /dev/mapper/system-root /var/cache rw,relatime,space_cache,subvolid=264,subvol=/@/var/cache @/var/cache
btrfsmountedsubvol /dev/mapper/system-root /var/tmp rw,relatime,space_cache,subvolid=276,subvol=/@/var/tmp @/var/tmp
btrfsmountedsubvol /dev/mapper/system-root /var/lib/mailman rw,relatime,space_cache,subvolid=268,subvol=/@/var/lib/mailman @/var/lib/mailman
btrfsmountedsubvol /dev/mapper/system-root /var/spool rw,relatime,space_cache,subvolid=275,subvol=/@/var/spool @/var/spool
btrfsmountedsubvol /dev/mapper/system-root /var/opt rw,relatime,space_cache,subvolid=274,subvol=/@/var/opt @/var/opt
btrfsmountedsubvol /dev/mapper/system-root /srv rw,relatime,space_cache,subvolid=261,subvol=/@/srv @/srv
btrfsmountedsubvol /dev/mapper/system-root /boot/grub2/powerpc-ieee1275 rw,relatime,space_cache,subvolid=259,subvol=/@/boot/grub2/powerpc-ieee1275 @/boot/grub2/powerpc-ieee1275
# Mounted btrfs subvolumes that have the 'no copy on write' attribute set.
# Format: btrfsnocopyonwrite <btrfs_subvolume_path>
# Swap partitions or swap files
# Format: swap <filename> uuid=<uuid> label=<label>
swap /dev/mapper/system-swap uuid=10bd73bf-48b3-46d6-8608-7ce9972ea4ab label=
multipath /dev/mapper/mpathc 107374182400 /dev/sda,/dev/sdb,/dev/sdc,/dev/sdd,/dev/sde,/dev/sdf,/dev/sdg,/dev/sdh
part /dev/mapper/mpathc 7176704 1048576 primary boot,prep /dev/mapper/mpathc1
part /dev/mapper/mpathc 106994442240 8225280 primary lvm /dev/mapper/mpathc2

Thanks,

@schabrolles
Member

@bern66
I don't see why pv:/dev/mapper/mpathc_part2 is not recreated. I would need to have a look at the log file in debug mode.

Try again with rear -d recover and send me the generated log file.

@schabrolles
Member

@bern66,

I think I am beginning to understand what happens here... but I don't have the root cause yet.
It seems that your SLES12 created the partitions on the multipath device in an unusual way (for a SLES12):
partitions are named /dev/mapper/mpathc2 while usually it is /dev/mapper/mpathc_part2.
(@jsmeix, what do you think? Do you know why partitions are named like on RedHat here? This multipath partition naming convention will kill me...)


bern66 commented May 4, 2018

@schabrolles
The requested log file is attached.

Thanks
rear-tstinf01-partial-2018-05-04T09_57_36-04_00.log.gz


bern66 commented May 4, 2018

@schabrolles
Regarding the naming convention, we have actually been advised by SuSE not to use friendly names. As a test, at some point I changed it to user_friendly_names yes to try to fix my problem, but that did not help. I'll have to switch it back to no.

For example:

tstinf02:~ # head /etc/multipath.conf
# Default multipath.conf file created for install boot

# Used mpathN names
defaults {
user_friendly_names no
}
devices {
    device {
        vendor "IBM"


bern66 commented May 4, 2018

Switching to user_friendly_names no changes the names from this:

tstinf01:~ # ls -l /dev/mapper/
total 0
crw------- 1 root root  10, 236 May  4 11:22 control
brw-r----- 1 root disk 254,   0 May  4 11:22 mpathc
brw-r----- 1 root disk 254,   1 May  4 11:22 mpathc1
brw-r----- 1 root disk 254,   2 May  4 11:22 mpathc2
lrwxrwxrwx 1 root root        7 May  4 11:22 mpathc_part1 -> ../dm-1
lrwxrwxrwx 1 root root        7 May  4 11:22 mpathc_part2 -> ../dm-2
lrwxrwxrwx 1 root root        7 May  4 11:22 system-home -> ../dm-5
lrwxrwxrwx 1 root root        7 May  4 11:22 system-root -> ../dm-3
lrwxrwxrwx 1 root root        7 May  4 11:22 system-swap -> ../dm-4

to:

tstinf01:~ # ls -l /dev/mapper/
total 0
brw-r----- 1 root disk 254,   0 May  4 11:24 3600507680c800450b80000000000093e
brw-r----- 1 root disk 254,   1 May  4 11:24 3600507680c800450b80000000000093e1
brw-r----- 1 root disk 254,   2 May  4 11:24 3600507680c800450b80000000000093e2
lrwxrwxrwx 1 root root        7 May  4 11:24 3600507680c800450b80000000000093e_part1 -> ../dm-1
lrwxrwxrwx 1 root root        7 May  4 11:24 3600507680c800450b80000000000093e_part2 -> ../dm-2
crw------- 1 root root  10, 236 May  4 11:24 control
lrwxrwxrwx 1 root root        7 May  4 11:24 system-home -> ../dm-5
lrwxrwxrwx 1 root root        7 May  4 11:24 system-root -> ../dm-3
lrwxrwxrwx 1 root root        7 May  4 11:24 system-swap -> ../dm-4

@schabrolles
Member

@bern66
From what I see, it looks good with or without friendly names for a SLES12... (XXXXX_part2)
What I don't understand is why your disklayout.conf reports /dev/mapper/mpathc2 and not mpathc_part2.

If that is the case, it means the problem is during the "mkrescue" part and not during the restore.
Could you also run the following command, rear -d mkrescue, and send me the output?


bern66 commented May 4, 2018

Attached is the log of rear -d mkrescue.

Thanks a lot for your assistance!

rear-tstinf01.log

@schabrolles
Member

My mistake... it was rear -D mkrescue, sorry for that... I need the debugscript output.


bern66 commented May 4, 2018

No problem... here it is!

Thanks again for your assistance!

rear-tstinf01.log.gz

@schabrolles
Member

@bern66,
I made a quick change for another problem; maybe it could also help you.
Could you try this version:

git clone https://github.com/schabrolles/rear -b issue_1766


jsmeix commented May 7, 2018

@schabrolles
regarding your #1796 (comment)

Since SLES12 it should be usually /dev/mapper/mpathc-part2
(the /dev/mapper/mpathc_part2 form is usually for SLE11).

I am not a multipath user, so all I know about
SUSE multipath partition names is what I got via mail in
#1765 (comment)
and
#1765 (comment)
which is basically that on SLES12 it is always

/dev/mapper/foo => /dev/mapper/foo-part1

@bern66
accordingly @schabrolles had recently documented
the known SUSE multipath partition names
in usr/share/rear/lib/layout-functions.sh
in the get_part_device_name_format() function
cf. https://github.com/rear/rear/pull/1765/files
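
Those naming schemes can be summarized in a tiny sketch; this only mirrors the idea behind get_part_device_name_format(), it is not ReaR's actual code, and the list of schemes is assembled from this thread:

```shell
# Sketch (not ReaR's get_part_device_name_format): print the candidate
# partition device names for a parent device, one per naming scheme
# mentioned in this issue.
candidate_part_names() {
    local dev=$1 num=$2
    echo "${dev}${num}"        # bare suffix, as seen on this system: mpathc2
    echo "${dev}_part${num}"   # SLES11-style: mpathc_part2
    echo "${dev}-part${num}"   # SLES12-style: mpathc-part2
    echo "${dev}p${num}"       # 'p' separator, e.g. for devices ending in a digit
}
candidate_part_names /dev/mapper/mpathc 2
```

The bug discussed here is precisely that disklayout.conf recorded the first form where the _part/-part forms were expected.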

In your initial comment here you wrote

Relax-and-Recover 2.3 / 2018-04-20

but #1765 was committed
160e326
on April 23 2018
so that your ReaR from 2018-04-20 is a bit too old.

In general, regardless of whether your particular issue (unexpected SUSE multipath partition name)
is already fixed in our current ReaR upstream GitHub master code,
I recommend trying out our current ReaR upstream GitHub master code,
because that is the only place where we at ReaR upstream fix bugs.

To use our current ReaR upstream GitHub master code
do the following:

Basically "git clone" it into a separate directory and then
configure and run ReaR from within that directory like:

# git clone https://github.com/rear/rear.git

# mv rear rear.github.master

# cd rear.github.master

# vi etc/rear/local.conf

# usr/sbin/rear -D mkbackup

Note the relative paths "etc/rear/" and "usr/sbin/".

If the issue also happens with current ReaR upstream GitHub master code
please provide us a complete ReaR debug log file of "rear -D mkrescue/mkbackup"
and the resulting disklayout.conf file from your original system
plus a complete ReaR debug log file of "rear -D recover"
so that we can have a look how it behaves in your particular environment
cf. "Debugging issues with Relax-and-Recover" at
https://en.opensuse.org/SDB:Disaster_Recovery

If it perhaps "just works" with current ReaR upstream GitHub master code
we would really appreciate an explicit positive feedback.


jsmeix commented May 7, 2018

@bern66
if on your particular SLES12 system
your multipath devices are named

 /dev/mapper/mpathc2

and not in the usual SLES12 form which is

/dev/mapper/mpathc-part2

#1765 (comment)
and
#1765 (comment)
seem to indicate that on your particular SLES12 system
there is no udev rule file /usr/lib/udev/rules.d/66-kpartx.rules
(it is provided by the kpartx RPM)
or it is there but the actual rule therein

RUN+="/sbin/kpartx -u -p -part /dev/$name"

does not work as it should on your particular SLES12 system.
But because I am not a multipath user this is only a blind guess.
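
As a sanity check of that guess, note that the quoted rule just expands to a fixed kpartx invocation per map name. A small sketch (it only reconstructs the command string; no kpartx is run):

```shell
# Sketch: rebuild the command line that the quoted 66-kpartx.rules rule
# would run for a given multipath map name. The '-p -part' option is what
# makes kpartx name partition mappings <map>-part<N>; if the rule never
# fires, names like mpathc1 can remain instead.
# (On a real system one would also check:
#   rpm -qf /usr/lib/udev/rules.d/66-kpartx.rules)
kpartx_cmd_for() {
    echo "/sbin/kpartx -u -p -part /dev/$1"
}
kpartx_cmd_for mpathc
```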


schabrolles commented May 7, 2018

@jsmeix

I think the problem here is the presence of the device /dev/mapper/mpathc1, which is unusual.
It seems to be not a link but a real device created with the mknod command... I don't know why... (I don't have such a device on my multipathed SLES12.)

Because of that, multipathed partitions are recorded like /dev/mapper/mpathc1 in disklayout.conf.
The problem could be related to issue #1766, where I propose a new way to discover the multipathed partition name based on /sys and device-mapper.

I think it should also solve the issue here, because it will find the dm-X device as a partition and then find its real name (instead of trying to guess from device name + [1-9]*, or -part[1-9]*...).

schabrolles@c635fd9

If @bern66 or @badarmontassar confirm it solves their issue, I will make a PR.
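
The /sys-based idea can be illustrated against a simulated sysfs tree (this shows the approach only, not the code in the commit above): a device-mapper partition shows up under its parent map's holders/, and its dm/name attribute carries the real device-mapper name, whatever naming convention produced it.

```shell
# Simulated /sys tree in a temp dir; on a real system the paths would be
# /sys/block/dm-*/holders and /sys/block/dm-*/dm/name.
fake_sys=$(mktemp -d)
mkdir -p "$fake_sys/block/dm-0/holders/dm-2" \
         "$fake_sys/block/dm-0/dm" "$fake_sys/block/dm-2/dm"
echo mpathc       > "$fake_sys/block/dm-0/dm/name"
echo mpathc_part2 > "$fake_sys/block/dm-2/dm/name"

# For every holder (partition mapping) of the parent map, read its real name
# from the holder's own dm/name attribute.
partitions_of() {
    local parent=$1 holder
    for holder in "$fake_sys/block/$parent/holders"/*; do
        cat "$fake_sys/block/${holder##*/}/dm/name"
    done
}

result=$(partitions_of dm-0)
rm -rf "$fake_sys"
echo "$result"
```

This is why the approach does not care whether the name is mpathc2, mpathc_part2 or mpathc-part2: the name is read back from device-mapper rather than guessed from a suffix pattern.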


jsmeix commented May 7, 2018

@bern66
how did you install your particular SLES12 system?

On my SLES12-SP3, installed from an original SLES12 installation medium, I get

# rpm -qf /usr/lib/udev/rules.d/66-kpartx.rules
kpartx-0.7.1+7+suse.3edc5f7d-1.26.x86_64

# rpm -e --test kpartx
error: Failed dependencies:
        kpartx is needed by (installed) dmraid-1.0.0.rc16-34.3.x86_64
        kpartx is needed by (installed) multipath-tools-0.7.1+7+suse.3edc5f7d-1.26.x86_64

# rpm -e --test dmraid
error: Failed dependencies:
        dmraid is needed by (installed) os-prober-1.61-29.1.x86_64

# rpm -e --test multipath-tools
error: Failed dependencies:
        multipath-tools is needed by (installed) patterns-sles-base-12-77.8.x86_64

i.e. one cannot uninstall kpartx without breaking
several other RPM dependencies that are usually installed.


jsmeix commented May 7, 2018

@schabrolles

I think the presence of a multipath device /dev/mapper/mpathc1
instead of the usually expected /dev/mapper/mpathc-part1 on SLES12
indicates that this particular SLES12 system is not as it should be.
This might lead to an endless sequence of other problems for the user,
because nobody expects a SLES12 system with such multipath names,
so the user may run into ever more trouble, e.g. when asking
our official SUSE support or someone else about whatever issues
are somehow related to his particular SLES12 multipath system.
Furthermore I fear other SUSE components (e.g. YaST or whatever)
may "do strange things" if the multipath device names are not the expected ones.

Nevertheless if you can make the ReaR multipath code to even "just work"
for any kind of multipath device names it would be of course absolutely great,
in particular when users then could report "all fails - except ReaR" ;-)


bern66 commented May 7, 2018

Hello everyone,

There are so many things in your previous comments, I'll need some time to reply to all your questions and suggestions.

But a first few points:

  • kpartx is installed on our systems;
  • I did not install our SLES12 systems myself. We are in an IBM Power Systems environment; an image was created initially from which all other systems are built. I am fairly new to this environment;
  • I'll give the latest version of ReaR a try.

Also, a SuSE consultant came to help us with many aspects of our environment and didn't mention anything wrong with our multipath settings. But he did mention that we should not use friendly names, specifically to support DR configurations.

Regarding the naming of the multipath, here is what I have when using friendly names:

tstinf01:~ # ls -l /dev/mapper/
total 0
crw------- 1 root root  10, 236 May  7 07:17 control
brw-r----- 1 root disk 254,   0 May  7 07:17 mpathc
brw-r----- 1 root disk 254,   1 May  7 07:17 mpathc1
brw-r----- 1 root disk 254,   2 May  7 07:17 mpathc2
lrwxrwxrwx 1 root root        7 May  7 07:17 mpathc_part1 -> ../dm-1
lrwxrwxrwx 1 root root        7 May  7 07:17 mpathc_part2 -> ../dm-2
lrwxrwxrwx 1 root root        7 May  7 07:17 system-home -> ../dm-5
lrwxrwxrwx 1 root root        7 May  7 07:17 system-root -> ../dm-3
lrwxrwxrwx 1 root root        7 May  7 07:17 system-swap -> ../dm-4

And here is what I have when not using friendly names:

tstinf01:~ # ls -l /dev/mapper/
total 0
brw-r----- 1 root disk 254,   0 May  7 08:17 3600507680c800450b80000000000093e
brw-r----- 1 root disk 254,   1 May  7 08:17 3600507680c800450b80000000000093e1
brw-r----- 1 root disk 254,   2 May  7 08:17 3600507680c800450b80000000000093e2
lrwxrwxrwx 1 root root        7 May  7 08:17 3600507680c800450b80000000000093e_part1 -> ../dm-1
lrwxrwxrwx 1 root root        7 May  7 08:17 3600507680c800450b80000000000093e_part2 -> ../dm-2
crw------- 1 root root  10, 236 May  7 08:17 control
lrwxrwxrwx 1 root root        7 May  7 08:17 system-home -> ../dm-5
lrwxrwxrwx 1 root root        7 May  7 08:17 system-root -> ../dm-3
lrwxrwxrwx 1 root root        7 May  7 08:17 system-swap -> ../dm-4

The version of my system is 12.2:

tstinf01:~ # lsb_release -a
LSB Version:    n/a
Distributor ID: SUSE
Description:    SUSE Linux Enterprise Server for SAP Applications 12 SP2
Release:        12.2
Codename:       n/a

Thanks for your assistance, you are amazing!

@bern66
Author

bern66 commented May 7, 2018

I compiled the latest version of ReaR and ended up with the same problem as initially described, except that I no longer use friendly names:

RESCUE tstinf01:~ # rear -D recover
Relax-and-Recover 2.3. / 2018-05-07
Using log file: /var/log/rear/rear-tstinf01.log
Running workflow recover within the ReaR rescue/recovery system

[snip]

UserInput: No choices - result is 'yes'
User confirmed to proceed with recovery
No code has been generated to recreate pv:/dev/mapper/3600507680c800450b80000000000093e_part2 (lvmdev).
    To recreate it manually add code to /var/lib/rear/layout/diskrestore.sh or abort.
UserInput -I ADD_CODE_TO_RECREATE_MISSING_PVDEVMAPPER3600507680C800450B80000000000093EPART2LVMDEV needed in /usr/share/rear/layout/prepare/default/600_show_unprocessed.sh line 33
Manually add code that recreates pv:/dev/mapper/3600507680c800450b80000000000093e_part2 (lvmdev)
1) View /var/lib/rear/layout/diskrestore.sh
2) Edit /var/lib/rear/layout/diskrestore.sh
3) Go to Relax-and-Recover shell
4) Continue 'rear recover'
5) Abort 'rear recover'
(default '4' timeout 300 seconds)

One thing I do not understand: @schabrolles, you said in a comment that XXXX_part2 was not there, but it is. See below:

tstinf02:~ # grep part2 disklayout.conf
lvmdev /dev/system /dev/mapper/3600507680c800450b80000000000093e_part2 RWHsoG-C5a3-78M5-FmaZ-xMvD-dN5N-jSsKXJ 208973520

I attach the disklayout.conf and the log file of rear -D recover if it can help.

log_disklayout.tar.gz

@schabrolles
Member

schabrolles commented May 7, 2018

@bern66,

The issue is that you have lvmdev /dev/system /dev/mapper/3600507680c800450b80000000000093e_part2, but the partition is named part /dev/mapper/3600507680c800450b80000000000093e2 instead of part /dev/mapper/3600507680c800450b80000000000093e_part2.

As explained in a previous comment, could you try the patch I prepared for issue #1766? I think it could help here.

git clone https://github.com/schabrolles/rear -b issue_1766
mv rear rear.github.issue_1766
cd rear.github.issue_1766
vi etc/rear/local.conf
usr/sbin/rear -D mkbackup
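
Independently of that patch, the mismatch between the _partN name recorded in disklayout.conf and the node that actually exists could in principle be bridged with a small fallback lookup. This is only a sketch under that assumption (resolve_partition is a hypothetical name, not ReaR code):

```shell
#!/bin/sh
# Hypothetical sketch (not ReaR code): given the partition name recorded
# in disklayout.conf (e.g. /dev/mapper/<wwid>_part2), fall back to the
# kpartx-style name (/dev/mapper/<wwid>2) if the _partN node is missing.
resolve_partition() {
    recorded="$1"
    if [ -e "$recorded" ]; then
        echo "$recorded"
        return 0
    fi
    # strip "_part" in front of the trailing partition number
    fallback=$(printf '%s\n' "$recorded" | sed 's/_part\([0-9][0-9]*\)$/\1/')
    if [ -e "$fallback" ]; then
        echo "$fallback"
        return 0
    fi
    return 1
}
```

With the layout shown earlier in this thread, such a lookup would map the recorded /dev/mapper/3600507680c800450b80000000000093e_part2 to the existing /dev/mapper/3600507680c800450b80000000000093e2 node.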

@bern66
Author

bern66 commented May 8, 2018

@schabrolles
I tried the fix for issue 1766 and got the same problem.

RESCUE tstinf01:~ # rear -D recover
Relax-and-Recover 2.3-git_1766. / 2018-05-08
Using log file: /var/log/rear/rear-tstinf01.log
Running workflow recover within the ReaR rescue/recovery system
Testing connection to TSM server
. . .
UserInput -I DISK_LAYOUT_PROCEED_RECOVERY needed in /usr/share/rear/layout/prepare/default/250_compare_disks.sh line 146
Proceed with recovery (yes) otherwise manual disk layout configuration is enforced
(default 'yes' timeout 30 seconds)
UserInput: No real user input (empty or only spaces) - using default input
UserInput: No choices - result is 'yes'
Proceeding with recovery by default
No code has been generated to recreate pv:/dev/mapper/3600507680c800450b80000000000093e_part2 (lvmdev).
    To recreate it manually add code to /var/lib/rear/layout/diskrestore.sh or abort.
UserInput -I ADD_CODE_TO_RECREATE_MISSING_PVDEVMAPPER3600507680C800450B80000000000093EPART2LVMDEV needed in /usr/share/rear/layout/prepare/default/600_show_unprocessed.sh line 33
Manually add code that recreates pv:/dev/mapper/3600507680c800450b80000000000093e_part2 (lvmdev)
1) View /var/lib/rear/layout/diskrestore.sh
2) Edit /var/lib/rear/layout/diskrestore.sh

I attached the disklayout.conf and the log of rear -D recover.

rear-20180508.zip

I'll now take a look at your message regarding 1802.

Thanks for your help

@schabrolles
Member

@bern66,
Send me the log file generated during "rear -D mkbackup".

@schabrolles
Member

@bern66,
just run findmnt /

here is the output I got on my system (LPAR on POWER - SLES12SP2)

TARGET SOURCE                                            FSTYPE OPTIONS
/      /dev/mapper/system-root[/@/.snapshots/1/snapshot] btrfs  rw,relatime,space_cache,subvolid=279,subvol=/@/.snapshot

@jsmeix
Member

jsmeix commented May 17, 2018

I would have been really surprised if the kind of underlying block device
(e.g. a plain disk partition like /dev/sda2 versus a logical volume like /dev/dm-1)
made a difference to which btrfs structure is created on it.

The following shows - as far as I can reproduce it - that the btrfs structure
(and in particular how the btrfs parts are mounted)
does not depend on whether or not the "LVM-based Proposal" is used.

When I install a SLES12-GA/SP0 system
from an original SUSE SLES12-GA/SP0 installation medium
with its original SUSE SLES12-GA/SP0 installer
(i.e. the YaST installer on that SLES12-GA/SP0 installation medium)
on a virtual KVM/QEMU machine with a single 20GiB virtual harddisk
I get when I select the "LVM-based Proposal" in YaST
this result in the installed system:

# cat /etc/issue
Welcome to SUSE Linux Enterprise Server 12  (x86_64) - Kernel \r (\l).

# findmnt -a -o SOURCE,TARGET,FSTYPE -t btrfs
SOURCE                                            TARGET                   FSTYPE
/dev/mapper/system-root[/@]                       /                        btrfs
/dev/mapper/system-root[/@/.snapshots]            |-/.snapshots            btrfs
/dev/mapper/system-root[/@/var/lib/mailman]       |-/var/lib/mailman       btrfs
/dev/mapper/system-root[/@/var/spool]             |-/var/spool             btrfs
/dev/mapper/system-root[/@/tmp]                   |-/tmp                   btrfs
/dev/mapper/system-root[/@/var/tmp]               |-/var/tmp               btrfs
/dev/mapper/system-root[/@/home]                  |-/home                  btrfs
/dev/mapper/system-root[/@/var/opt]               |-/var/opt               btrfs
/dev/mapper/system-root[/@/var/lib/pgsql]         |-/var/lib/pgsql         btrfs
/dev/mapper/system-root[/@/var/lib/named]         |-/var/lib/named         btrfs
/dev/mapper/system-root[/@/var/log]               |-/var/log               btrfs
/dev/mapper/system-root[/@/var/crash]             |-/var/crash             btrfs
/dev/mapper/system-root[/@/usr/local]             |-/usr/local             btrfs
/dev/mapper/system-root[/@/srv]                   |-/srv                   btrfs
/dev/mapper/system-root[/@/opt]                   |-/opt                   btrfs
/dev/mapper/system-root[/@/boot/grub2/x86_64-efi] |-/boot/grub2/x86_64-efi btrfs
/dev/mapper/system-root[/@/boot/grub2/i386-pc]    `-/boot/grub2/i386-pc    btrfs

# file /dev/mapper/system-root
/dev/mapper/system-root: symbolic link to `../dm-1'

# readlink -e /dev/mapper/system-root
/dev/dm-1

# file /dev/dm-1
/dev/dm-1: block special (254/1)

# lsblk -i -p -o NAME,KNAME,PKNAME,MAJ:MIN,TYPE,FSTYPE,SIZE /dev/sda
NAME                        KNAME     PKNAME    MAJ:MIN TYPE FSTYPE       SIZE
/dev/sda                    /dev/sda              8:0   disk               20G
`-/dev/sda1                 /dev/sda1 /dev/sda    8:1   part LVM2_member   20G
  |-/dev/mapper/system-swap /dev/dm-0 /dev/sda1 254:0   lvm  swap         1.5G
  `-/dev/mapper/system-root /dev/dm-1 /dev/sda1 254:1   lvm  btrfs       18.5G

# pvdisplay 
  --- Physical volume ---
  PV Name               /dev/sda1
  VG Name               system
  PV Size               20.00 GiB / not usable 3.00 MiB
  Allocatable           yes 
  PE Size               4.00 MiB
  Total PE              5119
  Free PE               2
  Allocated PE          5117
  PV UUID               hyUS23-Gd2Y-YR70-kLlZ-22BQ-n4ee-AneoHm

# vgdisplay 
  --- Volume group ---
  VG Name               system
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               20.00 GiB
  PE Size               4.00 MiB
  Total PE              5119
  Alloc PE / Size       5117 / 19.99 GiB
  Free  PE / Size       2 / 8.00 MiB
  VG UUID               tejprz-WpZp-jLD1-JSKc-fTcK-16W9-DCyfw4

# lvdisplay 
  --- Logical volume ---
  LV Path                /dev/system/root
  LV Name                root
  VG Name                system
  LV UUID                YPFEHh-VedF-fm2Y-rtwT-dCFO-z2fk-SSfgMv
  LV Write Access        read/write
  LV Creation host, time (none), 2018-05-17 08:39:24 +0200
  LV Status              available
  # open                 1
  LV Size                18.53 GiB
  Current LE             4744
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     1024
  Block device           254:1
   
  --- Logical volume ---
  LV Path                /dev/system/swap
  LV Name                swap
  VG Name                system
  LV UUID                3GWaTs-1IKN-OEen-mh3s-RBEY-PQNx-mO2mMJ
  LV Write Access        read/write
  LV Creation host, time (none), 2018-05-17 08:39:25 +0200
  LV Status              available
  # open                 2
  LV Size                1.46 GiB
  Current LE             373
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     1024
  Block device           254:0

When I install a SLES12-SP3 system
from an original SUSE SLES12-SP3 installation medium
with its original SUSE SLES12-SP3 installer
(i.e. the YaST installer on that SLES12-SP3 installation medium)
on a virtual KVM/QEMU machine with a single 20GiB virtual harddisk
I get when I select the "LVM-based Proposal" in YaST
this result in the installed system:

# cat /etc/issue
Welcome to SUSE Linux Enterprise Server 12 SP3  (x86_64) - Kernel \r (\l).

# findmnt -a -o SOURCE,TARGET,FSTYPE -t btrfs
SOURCE                                             TARGET                    FSTYPE
/dev/mapper/system-root[/@/.snapshots/1/snapshot]  /                         btrfs
/dev/mapper/system-root[/@/usr/local]              |-/usr/local              btrfs
/dev/mapper/system-root[/@/var/lib/named]          |-/var/lib/named          btrfs
/dev/mapper/system-root[/@/srv]                    |-/srv                    btrfs
/dev/mapper/system-root[/@/var/lib/mariadb]        |-/var/lib/mariadb        btrfs
/dev/mapper/system-root[/@/var/lib/pgsql]          |-/var/lib/pgsql          btrfs
/dev/mapper/system-root[/@/var/cache]              |-/var/cache              btrfs
/dev/mapper/system-root[/@/boot/grub2/i386-pc]     |-/boot/grub2/i386-pc     btrfs
/dev/mapper/system-root[/@/home]                   |-/home                   btrfs
/dev/mapper/system-root[/@/var/tmp]                |-/var/tmp                btrfs
/dev/mapper/system-root[/@/var/spool]              |-/var/spool              btrfs
/dev/mapper/system-root[/@/.snapshots]             |-/.snapshots             btrfs
/dev/mapper/system-root[/@/var/lib/mailman]        |-/var/lib/mailman        btrfs
/dev/mapper/system-root[/@/opt]                    |-/opt                    btrfs
/dev/mapper/system-root[/@/var/lib/mysql]          |-/var/lib/mysql          btrfs
/dev/mapper/system-root[/@/var/lib/machines]       |-/var/lib/machines       btrfs
/dev/mapper/system-root[/@/var/log]                |-/var/log                btrfs
/dev/mapper/system-root[/@/var/crash]              |-/var/crash              btrfs
/dev/mapper/system-root[/@/var/lib/libvirt/images] |-/var/lib/libvirt/images btrfs
/dev/mapper/system-root[/@/tmp]                    |-/tmp                    btrfs
/dev/mapper/system-root[/@/boot/grub2/x86_64-efi]  |-/boot/grub2/x86_64-efi  btrfs
/dev/mapper/system-root[/@/var/opt]                `-/var/opt                btrfs

# file /dev/mapper/system-root
/dev/mapper/system-root: symbolic link to `../dm-1'

# readlink -e /dev/mapper/system-root
/dev/dm-1

# file /dev/dm-1
/dev/dm-1: block special (254/1)

# lsblk -i -p -o NAME,KNAME,PKNAME,MAJ:MIN,TYPE,FSTYPE,SIZE /dev/sda
NAME                        KNAME     PKNAME    MAJ:MIN TYPE FSTYPE       SIZE
/dev/sda                    /dev/sda              8:0   disk               20G
`-/dev/sda1                 /dev/sda1 /dev/sda    8:1   part LVM2_member   20G
  |-/dev/mapper/system-swap /dev/dm-0 /dev/sda1 254:0   lvm  swap         1.5G
  `-/dev/mapper/system-root /dev/dm-1 /dev/sda1 254:1   lvm  btrfs       18.6G

# pvdisplay 
  --- Physical volume ---
  PV Name               /dev/sda1
  VG Name               system
  PV Size               20.00 GiB / not usable 3.00 MiB
  Allocatable           yes 
  PE Size               4.00 MiB
  Total PE              5119
  Free PE               2
  Allocated PE          5117
  PV UUID               wKSGQ9-VyTZ-8vCv-t0gL-elPF-5d7u-h0nr5y

# vgdisplay 
  --- Volume group ---
  VG Name               system
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               20.00 GiB
  PE Size               4.00 MiB
  Total PE              5119
  Alloc PE / Size       5117 / 19.99 GiB
  Free  PE / Size       2 / 8.00 MiB
  VG UUID               nw0UCG-WT1G-3cv1-AA73-nMC7-i1cd-nDRZPm

# lvdisplay 
  --- Logical volume ---
  LV Path                /dev/system/root
  LV Name                root
  VG Name                system
  LV UUID                pcAtC2-nFQf-RtLx-wNFs-10r6-W6cm-97LIDR
  LV Write Access        read/write
  LV Creation host, time install, 2018-05-17 09:32:47 +0200
  LV Status              available
  # open                 1
  LV Size                18.54 GiB
  Current LE             4746
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     1024
  Block device           254:1
   
  --- Logical volume ---
  LV Path                /dev/system/swap
  LV Name                swap
  VG Name                system
  LV UUID                01GXgu-AxQv-TfzR-Pitr-OwHL-Pat9-nc5X2M
  LV Write Access        read/write
  LV Creation host, time install, 2018-05-17 09:32:47 +0200
  LV Status              available
  # open                 2
  LV Size                1.45 GiB
  Current LE             371
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     1024
  Block device           254:0

@bern66
Author

bern66 commented May 17, 2018

Should I understand there is a fix for my problem? If there is a fix, is it available so I could test it in my environment?

Thanks,

@jsmeix
Member

jsmeix commented May 17, 2018

No,
it confirms #1796 (comment)
also for the "LVM-based Proposal" in YaST as far as I can reproduce it.

I know of no SLES12 btrfs setup where the root subvolume
is mounted at all.

What is mounted at the root of the filesystem tree
(i.e. what is mounted at the '/' mountpoint directory) is
for SLES12-SP0 the normal /@ subvolume (which causes an issue) and
since SLES12-SP1 a snapper /@/.snapshots/1/snapshot subvolume.
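
As an aside, the subvolume mounted at '/' can be extracted mechanically from findmnt output. A minimal sketch, assuming the raw, tree-character-free output of findmnt -n -r (root_subvol is a hypothetical name):

```shell
#!/bin/sh
# Hypothetical sketch: pull the bracketed subvolume path out of a
# "SOURCE TARGET" line like
#   /dev/mapper/system-root[/@/.snapshots/1/snapshot] /
# keeping only the line whose TARGET is exactly "/".
root_subvol() {
    awk '$2 == "/" && match($1, /\[.*\]/) {
        print substr($1, RSTART + 1, RLENGTH - 2)
    }'
}

printf '%s\n' \
    '/dev/mapper/system-root[/@/.snapshots/1/snapshot] /' \
    '/dev/mapper/system-root[/@/srv] /srv' | root_subvol
# prints: /@/.snapshots/1/snapshot
```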

jsmeix added a commit that referenced this issue May 18, 2018
…k_again_related_to_issue_1796

Make SLES12-GA/SP0 btrfs recovery work again (related to issue 1796):
In layout/save/GNU/Linux/230_filesystem_layout.sh the code that
excludes/disables/comments btrfs subvolumes that belong to snapper
in disklayout.conf is now conditionally run only if there is
a SLES12-SP1 (and later) btrfs subvolumes setup
(i.e. when the default subvolume path contains '@/.snapshots/'), cf.
#1796 (comment)
@jsmeix
Member

jsmeix commented May 18, 2018

With #1813 merged it should work again
to recreate a SLES12-GA/SP0 system with its default btrfs structure
cf. #1796 (comment)

@bern66
Author

bern66 commented May 18, 2018

Thanks @jsmeix I'll give it a try today.

@jsmeix
Member

jsmeix commented May 18, 2018

@bern66
very likely this will not help you unless you get a btrfs structure
that is one of the known SLES12 btrfs structures,
i.e. either the one of a SLES12-GA/SP0 system
where what is mounted at '/' shows up as

# findmnt -a -o SOURCE,TARGET -t btrfs
SOURCE                                            TARGET
/dev/mapper/system-root[/@]                       /
...

or the one of a SLES12-SP1-or-later system
where what is mounted at '/' shows up as

# findmnt -a -o SOURCE,TARGET -t btrfs
SOURCE                                             TARGET
/dev/mapper/system-root[/@/.snapshots/1/snapshot]  /
...

If you can you should try to get a SLES12-SP1-or-later
btrfs structure because the old SLES12-GA/SP0 btrfs setup
has an issue which is a "disk space leak" bug,
see
https://www.suse.com/releasenotes/x86_64/SUSE-SLES/12-SP1/
that reads (excerpt):

2.2.9 Installing into a Snapper-Controlled Btrfs Subvolume

Prior to SUSE Linux Enterprise 12 SP1, after the first rollback
of the system the original root volume was no longer reachable
and would never be removed automatically.
This resulted in a disk space leak.

Starting with SP1, YaST installs the system into a subvolume
controlled by Snapper.

@bern66
Author

bern66 commented May 22, 2018

@jsmeix @schabrolles
I did a fresh install of SLES12-SP2:

tstinf03:/ # lsb_release -a
LSB Version:    n/a
Distributor ID: SUSE
Description:    SUSE Linux Enterprise Server for SAP Applications 12 SP2
Release:        12.2
Codename:       n/a

In a Power environment:

tstinf03:/ # arch
ppc64le

For which I understand I have a standard btrfs structure:

tstinf03:/ # btrfs subvolume  list -a /
ID 257 gen 118 top level 5 path <FS_TREE>/@
ID 258 gen 212 top level 257 path <FS_TREE>/@/.snapshots
ID 260 gen 236 top level 258 path <FS_TREE>/@/.snapshots/1/snapshot
ID 261 gen 209 top level 257 path <FS_TREE>/@/boot/grub2/powerpc-ieee1275
ID 262 gen 231 top level 257 path <FS_TREE>/@/home
ID 263 gen 118 top level 257 path <FS_TREE>/@/opt
ID 264 gen 212 top level 257 path <FS_TREE>/@/srv
ID 265 gen 236 top level 257 path <FS_TREE>/@/tmp
ID 266 gen 237 top level 257 path <FS_TREE>/@/usr/local
ID 267 gen 237 top level 257 path <FS_TREE>/@/var/cache
ID 268 gen 219 top level 257 path <FS_TREE>/@/var/crash
ID 269 gen 219 top level 257 path <FS_TREE>/@/var/lib/libvirt/images
ID 270 gen 219 top level 257 path <FS_TREE>/@/var/lib/machines
ID 271 gen 219 top level 257 path <FS_TREE>/@/var/lib/mailman
ID 272 gen 219 top level 257 path <FS_TREE>/@/var/lib/mariadb
ID 273 gen 219 top level 257 path <FS_TREE>/@/var/lib/mysql
ID 274 gen 219 top level 257 path <FS_TREE>/@/var/lib/named
ID 275 gen 219 top level 257 path <FS_TREE>/@/var/lib/pgsql
ID 276 gen 238 top level 257 path <FS_TREE>/@/var/log
ID 277 gen 219 top level 257 path <FS_TREE>/@/var/opt
ID 278 gen 238 top level 257 path <FS_TREE>/@/var/spool
ID 279 gen 231 top level 257 path <FS_TREE>/@/var/tmp
ID 292 gen 162 top level 258 path <FS_TREE>/@/.snapshots/2/snapshot
ID 293 gen 164 top level 258 path <FS_TREE>/@/.snapshots/3/snapshot
ID 294 gen 165 top level 258 path <FS_TREE>/@/.snapshots/4/snapshot
ID 295 gen 166 top level 258 path <FS_TREE>/@/.snapshots/5/snapshot
ID 296 gen 167 top level 258 path <FS_TREE>/@/.snapshots/6/snapshot
ID 297 gen 168 top level 258 path <FS_TREE>/@/.snapshots/7/snapshot
ID 298 gen 169 top level 258 path <FS_TREE>/@/.snapshots/8/snapshot
ID 299 gen 170 top level 258 path <FS_TREE>/@/.snapshots/9/snapshot
ID 300 gen 171 top level 258 path <FS_TREE>/@/.snapshots/10/snapshot
ID 301 gen 172 top level 258 path <FS_TREE>/@/.snapshots/11/snapshot
ID 302 gen 180 top level 258 path <FS_TREE>/@/.snapshots/12/snapshot
ID 303 gen 181 top level 258 path <FS_TREE>/@/.snapshots/13/snapshot
ID 304 gen 183 top level 258 path <FS_TREE>/@/.snapshots/14/snapshot
ID 305 gen 184 top level 258 path <FS_TREE>/@/.snapshots/15/snapshot
ID 306 gen 190 top level 258 path <FS_TREE>/@/.snapshots/16/snapshot
ID 307 gen 191 top level 258 path <FS_TREE>/@/.snapshots/17/snapshot
ID 308 gen 202 top level 258 path <FS_TREE>/@/.snapshots/18/snapshot
ID 309 gen 204 top level 258 path <FS_TREE>/@/.snapshots/19/snapshot

I then did a rear backup with rear -vD mkbackup and tried a rear recover with rear -vD recover. The recover still ended in error. See below for the output of the recover; the rear recover log is attached. From my understanding so far, the problem seems to be the subvol=@/.snapshots.

RESCUE pgiststinf03:~ # rear -vD recover
Relax-and-Recover 2.3-git. / 2018-05-18
Using log file: /var/log/rear/rear-pgiststinf03.log
Running workflow recover within the ReaR rescue/recovery system
Starting required daemons for NFS: RPC portmapper (portmap or rpcbind) and rpc.statd if available.
Started RPC portmapper 'rpcbind'.
RPC portmapper 'rpcbind' available.
Started rpc.statd.
RPC status rpc.statd available.
Starting rpc.idmapd failed.
Using backup archive '/tmp/rear.qeI4KlUR0ME2oC2/outputfs/pgiststinf03/backup.tar.gz'
Will do driver migration (recreating initramfs/initrd)
Calculating backup archive size
Backup archive size is 1.4G     /tmp/rear.qeI4KlUR0ME2oC2/outputfs/pgiststinf03/backup.tar.gz (compressed)
Setting up multipathing
Activating multipath
multipath activated
Starting multipath daemon
multipathd started
Listing multipath device found
 size=100G
Comparing disks
Device mapper!3600507680c800450b800000000000ba7 does not exist (manual configuration needed)
Switching to manual disk layout configuration
Using /dev/mapper/3600507680c800450b800000000000bb9 (same size) for recreating /dev/mapper/3600507680c800450b800000000000ba7
Current disk mapping table (source -> target):
    /dev/mapper/3600507680c800450b800000000000ba7 /dev/mapper/3600507680c800450b800000000000bb9
UserInput -I LAYOUT_MIGRATION_CONFIRM_MAPPINGS needed in /usr/share/rear/layout/prepare/default/300_map_disks.sh line 211
Confirm or edit the disk mapping
1) Confirm disk mapping and continue 'rear recover'
2) Edit disk mapping (/var/lib/rear/layout/disk_mappings)
3) Use Relax-and-Recover shell and return back to here
4) Abort 'rear recover'
(default '1' timeout 300 seconds)
1
UserInput: Valid choice number result 'Confirm disk mapping and continue 'rear recover''
User confirmed disk mapping
UserInput -I LAYOUT_FILE_CONFIRMATION needed in /usr/share/rear/layout/prepare/default/500_confirm_layout_file.sh line 26
Confirm or edit the disk layout file
1) Confirm disk layout and continue 'rear recover'
2) Edit disk layout (/var/lib/rear/layout/disklayout.conf)
3) View disk layout (/var/lib/rear/layout/disklayout.conf)
4) View original disk space usage (/var/lib/rear/layout/config/df.txt)
5) Use Relax-and-Recover shell and return back to here
6) Abort 'rear recover'
(default '1' timeout 300 seconds)
1
UserInput: Valid choice number result 'Confirm disk layout and continue 'rear recover''
User confirmed disk layout file
Doing SLES12-SP1 (and later) btrfs subvolumes setup because the default subvolume path contains '@/.snapshots/'
UserInput -I LAYOUT_CODE_CONFIRMATION needed in /usr/share/rear/layout/recreate/default/100_confirm_layout_code.sh line 26
Confirm or edit the disk recreation script
1) Confirm disk recreation script and continue 'rear recover'
2) Edit disk recreation script (/var/lib/rear/layout/diskrestore.sh)
3) View disk recreation script (/var/lib/rear/layout/diskrestore.sh)
4) View original disk space usage (/var/lib/rear/layout/config/df.txt)
5) Use Relax-and-Recover shell and return back to here
6) Abort 'rear recover'
(default '1' timeout 300 seconds)
1
UserInput: Valid choice number result 'Confirm disk recreation script and continue 'rear recover''
User confirmed disk recreation script
Start system layout restoration.
Creating partitions for disk /dev/mapper/3600507680c800450b800000000000bb9 (msdos)
Creating LVM PV /dev/mapper/3600507680c800450b800000000000bb9-part2
Creating LVM VG system
Creating LVM volume system/root
Creating LVM volume system/swap
Creating filesystem of type btrfs with mount point / on /dev/mapper/system-root.
Mounting filesystem /
/usr/lib/snapper/installation-helper not executable may indicate an error with btrfs default subvolume setup for @/.snapshots/1/snapshot on /dev/mapper/system-root
UserInput -I LAYOUT_CODE_RUN needed in /usr/share/rear/layout/recreate/default/200_run_layout_code.sh line 127
The disk layout recreation script failed
1) Rerun disk recreation script (/var/lib/rear/layout/diskrestore.sh)
2) View 'rear recover' log file (/var/log/rear/rear-pgiststinf03.log)
3) Edit disk recreation script (/var/lib/rear/layout/diskrestore.sh)
4) View original disk space usage (/var/lib/rear/layout/config/df.txt)
5) Use Relax-and-Recover shell and return back to here
6) Abort 'rear recover'
(default '1' timeout 300 seconds)
6
UserInput: Valid choice number result 'Abort 'rear recover''
ERROR: User chose to abort 'rear recover' in /usr/share/rear/layout/recreate/default/200_run_layout_code.sh
Aborting due to an error, check /var/log/rear/rear-pgiststinf03.log for details
Exiting rear recover (PID 2830) and its descendant processes
Running exit tasks
You should also rm -Rf /tmp/rear.qeI4KlUR0ME2oC2
Terminated
RESCUE pgiststinf03:~ #

rear-pgiststinf03.log-20180522.gz

@schabrolles
Member

schabrolles commented May 22, 2018

@bern66 could you please check the following in your "recovery system":

ls -l /usr/lib/snapper/installation-helper

Did you add the additional SLES12 SP2 configuration lines to your /etc/rear/local.conf file?

## SLES12
BACKUP_OPTIONS="nfsvers=4,nolock"
REQUIRED_PROGS=( "${REQUIRED_PROGS[@]}" snapper chattr lsattr )
COPY_AS_IS=( "${COPY_AS_IS[@]}" /usr/lib/snapper/installation-helper /etc/snapper/config-templates/default )

for subvol in $(findmnt -n -r -t btrfs | cut -d ' ' -f 1 | grep -v '^/$' | egrep -v 'snapshots|crash') ; do
    BACKUP_PROG_INCLUDE=( "${BACKUP_PROG_INCLUDE[@]}" "$subvol" )
done

POST_RECOVERY_SCRIPT=( 'if snapper --no-dbus -r $TARGET_FS_ROOT get-config | grep -q "^QGROUP.*[0-9]/[0-9]" ; then snapper --no-dbus -r $TARGET_FS_ROOT set-config QGROUP= ; snapper --no-dbus -r $TARGET_FS_ROOT setup-quota && echo snapper setup-quota done || echo snapper setup-quota failed ; else echo snapper setup-quota not used ; fi' )

@bern66
Author

bern66 commented May 22, 2018

Damn! I forgot this line and another one. Now it works for a normal btrfs structure under IBM Power Systems. I now have to find out if I can fix the anomaly in our actual btrfs systems.

Thanks!

@jsmeix
Member

jsmeix commented May 23, 2018

@bern66
in your
#1796 (comment)
the btrfs subvolume list -a / output looks o.k.,
but that only means you have the usual SLES12-SP1-or-later btrfs subvolumes;
it does not tell how those subvolumes are mounted, in particular
it does not tell what subvolume is mounted at the root of the filesystem tree
(i.e. what subvolume is mounted at the / directory).
Only findmnt -a -o SOURCE,TARGET -t btrfs will tell that.

In general, regarding btrfs and how to find out what its actual structure is
(not how it looks from within the mounted tree of filesystems and subvolumes), see
#1496 (comment),
in particular what I wrote there about
"In general when you have to deal with btrfs subvolumes",
specifically items (1), (2) and (3). Item (4) should not happen for a pristine
SLES12 default installation, but it could happen when other tools
create additional btrfs subvolumes in a SLES12 system.

Regarding what subvolume is mounted at the / directory in SLES12:
it is the btrfs default subvolume that gets mounted at /,
i.e. what btrfs subvolume get-default / shows.

For some very initial basics about btrfs on SLES12 you may also have a look at
https://en.opensuse.org/images/8/8b/Relax-and-Recover_jsmeix_presentation.pdf,
in particular its two pages about "Relax-and-Recover on SLE12"
(which cover the old SLE12-GA/SP0, where .../@ is the default subvolume).
That PDF is linked as "Fundamentals about Relax-and-Recover presentation PDF"
in the "See also" section of
https://en.opensuse.org/SDB:Disaster_Recovery

@jsmeix
Member

jsmeix commented May 23, 2018

@bern66
in your older #1796 (comment)
(I don't know if that is still valid here),
within your https://github.com/rear/rear/files/1984200/rear-20180508.zip,
the disklayout.conf contains

# Btrfs default subvolume for /dev/mapper/system-root at /
# Format: btrfsdefaultsubvol <device> <mountpoint> <btrfs_subvolume_ID> <btrfs_subvolume_path>
btrfsdefaultsubvol /dev/mapper/system-root / 5 /

which does not match an original SLES12 btrfs structure.

On SLES12-GA/SP0 the btrfs default subvolume is

# btrfs subvolume get-default /
ID 257 gen 601 top level 5 path @

# grep ^btrfsdefaultsubvol var/lib/rear/layout/disklayout.conf
btrfsdefaultsubvol /dev/mapper/system-root / 257 @

On SLES12-SP1-or-later the btrfs default subvolume is

# btrfs subvolume get-default /
ID 259 gen 654 top level 258 path @/.snapshots/1/snapshot

# grep ^btrfsdefaultsubvol var/lib/rear/layout/disklayout.conf
btrfsdefaultsubvol /dev/mapper/system-root / 259 @/.snapshots/1/snapshot
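
The three default-subvolume cases seen in this thread (the unexpected plain '/', the GA/SP0 '@', and the SP1-or-later snapper snapshot path) could be classified along the lines of the '@/.snapshots/' test mentioned above. This is only an illustrative sketch, not ReaR's actual code:

```shell
#!/bin/sh
# Illustrative sketch (not ReaR code): classify a btrfs default
# subvolume path the way the '@/.snapshots/' condition distinguishes
# the known SLES12 setups.
btrfs_setup_style() {
    case "$1" in
        *@/.snapshots/*) echo "SLES12-SP1-or-later (snapper snapshot is default)" ;;
        @)               echo "SLES12-GA/SP0 (plain @ is default)" ;;
        *)               echo "unknown / non-default setup" ;;
    esac
}

btrfs_setup_style '@/.snapshots/1/snapshot'
# prints: SLES12-SP1-or-later (snapper snapshot is default)
btrfs_setup_style '@'
# prints: SLES12-GA/SP0 (plain @ is default)
btrfs_setup_style '/'
# prints: unknown / non-default setup
```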

@bern66
Author

bern66 commented May 24, 2018

As far as I understand it, the btrfs structure is ok.

tstinf03:~> findmnt -a -o SOURCE,TARGET -t btrfs
SOURCE                                                  TARGET
/dev/mapper/system-root[/@/.snapshots/1/snapshot]       /
/dev/mapper/system-root[/@/srv]                         |-/srv
/dev/mapper/system-root[/@/var/crash]                   |-/var/crash
/dev/mapper/system-root[/@/boot/grub2/powerpc-ieee1275] |-/boot/grub2/powerpc-ieee1275
/dev/mapper/system-root[/@/var/lib/machines]            |-/var/lib/machines
/dev/mapper/system-root[/@/var/opt]                     |-/var/opt
/dev/mapper/system-root[/@/opt]                         |-/opt
/dev/mapper/system-root[/@/var/spool]                   |-/var/spool
/dev/mapper/system-root[/@/var/lib/mysql]               |-/var/lib/mysql
/dev/mapper/system-root[/@/var/lib/libvirt/images]      |-/var/lib/libvirt/images
/dev/mapper/system-root[/@/.snapshots]                  |-/.snapshots
/dev/mapper/system-root[/@/var/lib/pgsql]               |-/var/lib/pgsql
/dev/mapper/system-root[/@/var/lib/mailman]             |-/var/lib/mailman
/dev/mapper/system-root[/@/home]                        |-/home
/dev/mapper/system-root[/@/tmp]                         |-/tmp
/dev/mapper/system-root[/@/usr/local]                   |-/usr/local
/dev/mapper/system-root[/@/var/lib/mariadb]             |-/var/lib/mariadb
/dev/mapper/system-root[/@/var/cache]                   |-/var/cache
/dev/mapper/system-root[/@/var/log]                     |-/var/log
/dev/mapper/system-root[/@/var/tmp]                     |-/var/tmp
/dev/mapper/system-root[/@/var/lib/named]               `-/var/lib/named

The / seems to be mounted at the normal place.

tstinf03:~ # btrfs subvolume get-default /
ID 279 gen 2925 top level 277 path @/.snapshots/1/snapshot

Now that I know rear can do the job, I'll have to find a way to fix the systems that have this problem.

Thanks for your assistance.

@jsmeix
Member

jsmeix commented May 25, 2018

@bern66
now the btrfs structure looks good!

An addendum FYI which could also be of interest for @schabrolles:
what I meanwhile found out about how a plain SLES12-SP2 default installation
can differ from a default installation of SLES_SAP12-SP2
"SUSE Linux Enterprise Server for SAP Applications 12 SP2":

I asked a colleague at SUSE
what kind of btrfs setup a default installation of
"SUSE Linux Enterprise Server for SAP Applications 12 SP2"
should result in.

He told me that SLES_SAP-12-SP2 consists of a pristine SLES12-SP2
plus additional stuff (mainly SLES_HA plus some SAP specific stuff)
so that when one installs SLES_SAP-12-SP2 from scratch
one should get a pristine SLES12-SP2 btrfs setup.

But this is not fully true, because in practice there are differences.

It is expected that the available disk space determines
whether one gets a btrfs setup with snapshots
enabled or disabled.

What is unexpected is that even with exactly the same disk space
a default SLES_SAP-12-SP2 installation still differs
from a default SLES12-SP2 installation.

When I install SLES12-SP2
on a single 20 GiB (virtual) harddisk
on a KVM/QEMU virtual machine I get
by default plain partitioning without LVM and
a btrfs setup with enabled snapshots
so that in particular what is mounted at '/'
is a snapper controlled btrfs snapshot subvolume:

# findmnt -a -o SOURCE,TARGET -t btrfs
SOURCE                                             TARGET
/dev/mapper/system-root[/@/.snapshots/1/snapshot]  /
...

I also get that btrfs setup with enabled snapshots
when I select the "LVM-based Proposal" during
SLES12-SP2 installation on the same virtual machine.

When I install SLES_SAP-12-SP2
on a single 20 GiB (virtual) harddisk
on a KVM/QEMU virtual machine I get
by default the "LVM-based Proposal"
(in contrast to what I get with plain SLES12-SP2)
and I get a btrfs setup with disabled snapshots
(in contrast to what I get with SLES12-SP2 on a 20 GiB harddisk)
as follows:

# lsblk -i -p -o NAME,KNAME,TYPE,FSTYPE,SIZE /dev/sda
NAME                        KNAME     TYPE FSTYPE       SIZE
/dev/sda                    /dev/sda  disk               20G
`-/dev/sda1                 /dev/sda1 part LVM2_member   20G
  |-/dev/mapper/system-swap /dev/dm-0 lvm  swap         636M
  `-/dev/mapper/system-root /dev/dm-1 lvm  btrfs       19.4G

# findmnt -a -o SOURCE,TARGET -t btrfs
SOURCE                                             TARGET
/dev/mapper/system-root[/@]                        /
/dev/mapper/system-root[/@/home]                   |-/home
/dev/mapper/system-root[/@/opt]                    |-/opt
/dev/mapper/system-root[/@/var/lib/machines]       |-/var/lib/machines
/dev/mapper/system-root[/@/boot/grub2/i386-pc]     |-/boot/grub2/i386-pc
/dev/mapper/system-root[/@/var/crash]              |-/var/crash
/dev/mapper/system-root[/@/boot/grub2/x86_64-efi]  |-/boot/grub2/x86_64-efi
/dev/mapper/system-root[/@/var/lib/pgsql]          |-/var/lib/pgsql
/dev/mapper/system-root[/@/var/lib/mailman]        |-/var/lib/mailman
/dev/mapper/system-root[/@/var/lib/libvirt/images] |-/var/lib/libvirt/images
/dev/mapper/system-root[/@/var/cache]              |-/var/cache
/dev/mapper/system-root[/@/var/lib/mysql]          |-/var/lib/mysql
/dev/mapper/system-root[/@/var/lib/mariadb]        |-/var/lib/mariadb
/dev/mapper/system-root[/@/var/opt]                |-/var/opt
/dev/mapper/system-root[/@/tmp]                    |-/tmp
/dev/mapper/system-root[/@/var/lib/named]          |-/var/lib/named
/dev/mapper/system-root[/@/srv]                    |-/srv
/dev/mapper/system-root[/@/var/tmp]                |-/var/tmp
/dev/mapper/system-root[/@/var/spool]              |-/var/spool
/dev/mapper/system-root[/@/var/log]                |-/var/log
/dev/mapper/system-root[/@/usr/local]              `-/usr/local

I tested "rear mkbackup" plus "rear recover" on that system
(i.e. when what is mounted at '/' is the btrfs normal subvolume '/@')
which worked for me
(using ReaR with the #1813 fix).

When I install SLES_SAP-12-SP2
on a single 40 GiB (virtual) harddisk
on a KVM/QEMU virtual machine I get
by default the "LVM-based Proposal" and
a btrfs setup with enabled snapshots
as follows:

# lsblk -i -p -o NAME,KNAME,TYPE,FSTYPE,SIZE /dev/sda
NAME                        KNAME     TYPE FSTYPE       SIZE
/dev/sda                    /dev/sda  disk               40G
`-/dev/sda1                 /dev/sda1 part LVM2_member   40G
  |-/dev/mapper/system-swap /dev/dm-0 lvm  swap           2G
  `-/dev/mapper/system-root /dev/dm-1 lvm  btrfs       38.1G

# findmnt -a -o SOURCE,TARGET -t btrfs
SOURCE                                             TARGET
/dev/mapper/system-root[/@/.snapshots/1/snapshot]  /
/dev/mapper/system-root[/@/var/lib/mysql]          |-/var/lib/mysql
/dev/mapper/system-root[/@/boot/grub2/x86_64-efi]  |-/boot/grub2/x86_64-efi
/dev/mapper/system-root[/@/var/lib/machines]       |-/var/lib/machines
/dev/mapper/system-root[/@/opt]                    |-/opt
/dev/mapper/system-root[/@/tmp]                    |-/tmp
/dev/mapper/system-root[/@/var/lib/mariadb]        |-/var/lib/mariadb
/dev/mapper/system-root[/@/usr/local]              |-/usr/local
/dev/mapper/system-root[/@/var/opt]                |-/var/opt
/dev/mapper/system-root[/@/var/crash]              |-/var/crash
/dev/mapper/system-root[/@/var/spool]              |-/var/spool
/dev/mapper/system-root[/@/var/lib/mailman]        |-/var/lib/mailman
/dev/mapper/system-root[/@/var/lib/named]          |-/var/lib/named
/dev/mapper/system-root[/@/var/cache]              |-/var/cache
/dev/mapper/system-root[/@/var/log]                |-/var/log
/dev/mapper/system-root[/@/home]                   |-/home
/dev/mapper/system-root[/@/var/tmp]                |-/var/tmp
/dev/mapper/system-root[/@/srv]                    |-/srv
/dev/mapper/system-root[/@/.snapshots]             |-/.snapshots
/dev/mapper/system-root[/@/var/lib/libvirt/images] |-/var/lib/libvirt/images
/dev/mapper/system-root[/@/var/lib/pgsql]          |-/var/lib/pgsql
/dev/mapper/system-root[/@/boot/grub2/i386-pc]     `-/boot/grub2/i386-pc

which is the same as when I install SLES12-SP2
on a single 20 GiB (virtual) harddisk
on a KVM/QEMU virtual machine
and select the "LVM-based Proposal", cf.
#1796 (comment)
for SLES12-SP3.

@bern66
Author

bern66 commented May 25, 2018

@jsmeix
All your examples show / mounted under /@ or /@/.snapshots, just like my test system. All our systems show a root filesystem mounted directly at /, like below, and I understand this is the source of all my problems.

tstinf01:~/scripts # findmnt -a -o SOURCE,TARGET -t btrfs
SOURCE                                                  TARGET
/dev/mapper/system-root                                 /
/dev/mapper/system-root[/@/opt]                         |-/opt
/dev/mapper/system-root[/@/var/spool]                   |-/var/spool
/dev/mapper/system-root[/@/var/cache]                   |-/var/cache

@jsmeix
Member

jsmeix commented May 25, 2018

@bern66
exactly!

FYI (also @schabrolles ):
Next week I am not in the office, therefore:
Have a nice (and hopefully relaxed) weekend and a successful next week!

@bern66
Author

bern66 commented May 28, 2018

@jsmeix @schabrolles
The btrfs problem is easily fixed.

tstinf04:~ # findmnt  -ao SOURCE,TARGET -t btrfs
SOURCE                                                  TARGET
/dev/mapper/system-root                                 /
/dev/mapper/system-root[/@/var/opt]                     |-/var/opt
/dev/mapper/system-root[/@/var/spool]                   |-/var/spool
/dev/mapper/system-root[/@/tmp]                         |-/tmp
/dev/mapper/system-root[/@/.snapshots]                  |-/.snapshots
tstinf04:~ # snapper create
tstinf04:~ # snapper ls
Type   | #   | Pre # | Date                     | User | Cleanup | Description  | Userdata
-------+-----+-------+--------------------------+------+---------+--------------+--------------
single | 0   |       |                          | root |         | current      |
pre    | 241 |       | Thu Jan 11 10:16:25 2018 | root | number  | zypp(zypper) | important=no
post   | 242 | 241   | Thu Jan 11 10:16:26 2018 | root | number  |              | important=no
pre    | 243 |       | Thu Jan 11 10:16:37 2018 | root | number  | zypp(zypper) | important=no
post   | 244 | 243   | Thu Jan 11 10:16:38 2018 | root | number  |              | important=no
pre    | 245 |       | Mon May  7 09:33:14 2018 | root | number  | zypp(zypper) | important=yes
post   | 246 | 245   | Mon May  7 09:34:36 2018 | root | number  |              | important=yes
single | 247 |       | Mon May 28 07:44:35 2018 | root |         |              |
tstinf04:~ # snapper rollback 247
Creating read-only snapshot of current system. (Snapshot 248.)
Creating read-write snapshot of snapshot 247. (Snapshot 249.)
Setting default subvolume to snapshot 249.

tstinf04:~ # shutdown -r now

pgiststinf04:~ # findmnt  -ao SOURCE,TARGET -t btrfs
SOURCE                                                  TARGET
/dev/mapper/system-root[/@/.snapshots/249/snapshot]     /
/dev/mapper/system-root[/@/usr/local]                   |-/usr/local
/dev/mapper/system-root[/@/var/lib/pgsql]               |-/var/lib/pgsql
/dev/mapper/system-root[/@/var/crash]                   |-/var/crash

It is that easy to fix this problem. Thanks for your time. Now I am on intensive ReaR testing.
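For the record, the manual steps above could be sketched as a script for rolling the fix out to many systems. This is only a hedged sketch, not something tested at scale: the `snapper list` column assumption and the rollback output parsing are guesses based on the listings shown in this thread, and a reboot is still needed afterwards.

```shell
# Illustrative sketch (untested at scale): replay the snapper fix from
# above only when '/' is mounted directly from the toplevel volume.

root_needs_fix() {
    # True when the findmnt SOURCE for '/' carries no [subvolume] suffix.
    case "$1" in
        *'['*']') return 1 ;;
        *)        return 0 ;;
    esac
}

new_default_from_rollback() {
    # Extract N from snapper's "Setting default subvolume to snapshot N."
    printf '%s\n' "$1" | sed -n 's/^Setting default subvolume to snapshot \([0-9]*\)\.$/\1/p'
}

if command -v snapper >/dev/null 2>&1 && root_needs_fix "$(findmnt -n -o SOURCE / 2>/dev/null)"; then
    snapper create
    # Assumption: the newest snapshot number is in the third whitespace-separated
    # column of the last line of 'snapper list' output, as in the listing above.
    num="$(snapper list | awk 'END { print $3 }')"
    out="$(snapper rollback "$num")"
    echo "new default snapshot: $(new_default_from_rollback "$out")"
    echo "please reboot to mount the new default subvolume"
fi
```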

@bern66
Author

bern66 commented May 28, 2018

After a rear recover the system shows snapshot 1.

tstinf04:~ # findmnt -ao SOURCE,TARGET -t btrfs
SOURCE                                                  TARGET
/dev/mapper/system-root[/@/.snapshots/1/snapshot]       /
/dev/mapper/system-root[/@/var/lib/named]               |-/var/lib/named
/dev/mapper/system-root[/@/var/log]                     |-/var/log

@schabrolles
Member

I mark this one as fixed, as the problem described in the title is solved by the different patches referenced in this thread.

@schabrolles
Member

@bern66,

The fact that the snapshot is now 1 looks good to me. ReaR doesn't back up the snapshot layer; it backs up the system as it is when you run the rear mkbackup command and recreates a new system during recovery that can still work with btrfs snapshots and snapper (so it restarts with snapshot 1).

@bern66
Author

bern66 commented May 29, 2018

Ok, that is what I understood. Thanks!

@bern66 bern66 closed this as completed May 29, 2018
@jsmeix
Member

jsmeix commented Jun 4, 2018

FYI regarding
"after rear recover the system shows snapshot 1":
see in
usr/share/rear/conf/examples/SLE12-SP2-btrfs-example.conf
the comment

# Regarding btrfs snapshots:
# Recovery of btrfs snapshot subvolumes is not possible.
# Only recovery of "normal" btrfs subvolumes is possible.
# On SLE12-SP1 and SP2 the only exception is the btrfs snapshot subvolume
# that is mounted at '/' but that one is not recreated but instead
# it is created anew from scratch during the recovery installation with the
# default first btrfs snapper snapshot subvolume path "@/.snapshots/1/snapshot"
# by the SUSE tool "installation-helper --step 1" (cf. below).
# Other snapshots like "@/.snapshots/234/snapshot" are not recreated.
