RAID issue: several RAID1 arrays on partitions on disks with 'unknown' partition tables #3185

Open
madurani opened this issue Mar 27, 2024 · 11 comments
Labels
not ReaR / invalid The root cause is not in the ReaR code or ReaR is misused. support / question

Comments

@madurani

  • ReaR version ("/usr/sbin/rear -V"):
    Relax-and-Recover 2.7 / 2022-07-13

  • If your ReaR version is not the current version, explain why you can't upgrade:
    It is the latest version available in the repo.

  • OS version ("cat /etc/os-release" or "lsb_release -a" or "cat /etc/rear/os.conf"):
    openSUSE Leap 15.5

  • ReaR configuration files ("cat /etc/rear/site.conf" and/or "cat /etc/rear/local.conf"):

OUTPUT=ISO
BACKUP=NETFS
BACKUP_URL=nfs://tms3028/svc/tms3028/data
NETFS_KEEP_OLD_BACKUP_COPY=yes
BACKUP_PROG_COMPRESS_OPTIONS=( --use-compress-program=pigz )
REQUIRED_PROGS=("${REQUIRED_PROGS[@]}" 'pigz' 'ifdown' )
USE_STATIC_NETWORKING=y
USE_RESOLV_CONF=()
ISO_DEFAULT="automatic"
USER_INPUT_TIMEOUT=5
BACKUP_PROG_EXCLUDE=( '/backup' '/data' '/home' '/srv'  )
KERNEL_CMDLINE="ip=192.168.0.1 nm=24 netdev=eth1 gw=192.168.0.1"

  • Hardware vendor/product (PC or PowerNV BareMetal or ARM) or VM (KVM guest or PowerVM LPAR):
    PC

  • System architecture (x86 compatible or PPC64/PPC64LE or what exact ARM device):
    x86_64

  • Firmware (BIOS or UEFI or Open Firmware) and bootloader (GRUB or ELILO or Petitboot):
    BIOS

  • Storage (local disk or SSD) and/or SAN (FC or iSCSI or FCoE) and/or multipath (DM or NVMe):
    local ssd disks

  • Storage layout ("lsblk -ipo NAME,KNAME,PKNAME,TRAN,TYPE,FSTYPE,LABEL,SIZE,MOUNTPOINT"):

gpm:/usr/share/rear/layout/prepare/GNU/Linux # lsblk -ipo NAME,KNAME,PKNAME,TRAN,TYPE,FSTYPE,LABEL,SIZE,MOUNTPOINT
NAME         KNAME     PKNAME    TRAN   TYPE  FSTYPE            LABEL   SIZE MOUNTPOINT
/dev/sda     /dev/sda            sata   disk                          931.5G 
|-/dev/sda1  /dev/sda1 /dev/sda         part  linux_raid_member gpm:0   100G 
| `-/dev/md0 /dev/md0  /dev/sda1        raid1 ext4                     99.9G /
|-/dev/sda2  /dev/sda2 /dev/sda         part  linux_raid_member gpm:1   100G 
| `-/dev/md1 /dev/md1  /dev/sda2        raid1 ext4                     99.9G /data
|-/dev/sda3  /dev/sda3 /dev/sda         part  linux_raid_member gpm:2   500G 
| `-/dev/md2 /dev/md2  /dev/sda3        raid1 ext4                    499.9G /home
|-/dev/sda4  /dev/sda4 /dev/sda         part                              1K 
|-/dev/sda5  /dev/sda5 /dev/sda         part  linux_raid_member gpm:3   100G 
| `-/dev/md3 /dev/md3  /dev/sda5        raid1 ext4                     99.9G /srv
|-/dev/sda6  /dev/sda6 /dev/sda         part  linux_raid_member gpm:4    50G 
| `-/dev/md4 /dev/md4  /dev/sda6        raid1 ext4                       50G /asc
|-/dev/sda7  /dev/sda7 /dev/sda         part  linux_raid_member gpm:5    50G 
| `-/dev/md5 /dev/md5  /dev/sda7        raid1 ext4                       50G /tmp
`-/dev/sda8  /dev/sda8 /dev/sda         part  linux_raid_member gpm:6  31.5G 
  `-/dev/md6 /dev/md6  /dev/sda8        raid1 swap                     31.5G [SWAP]
/dev/sdb     /dev/sdb            sata   disk                          931.5G 
|-/dev/sdb1  /dev/sdb1 /dev/sdb         part  linux_raid_member gpm:0   100G 
| `-/dev/md0 /dev/md0  /dev/sdb1        raid1 ext4                     99.9G /
|-/dev/sdb2  /dev/sdb2 /dev/sdb         part  linux_raid_member gpm:1   100G 
| `-/dev/md1 /dev/md1  /dev/sdb2        raid1 ext4                     99.9G /data
|-/dev/sdb3  /dev/sdb3 /dev/sdb         part  linux_raid_member gpm:2   500G 
| `-/dev/md2 /dev/md2  /dev/sdb3        raid1 ext4                    499.9G /home
|-/dev/sdb4  /dev/sdb4 /dev/sdb         part                              1K 
|-/dev/sdb5  /dev/sdb5 /dev/sdb         part  linux_raid_member gpm:3   100G 
| `-/dev/md3 /dev/md3  /dev/sdb5        raid1 ext4                     99.9G /srv
|-/dev/sdb6  /dev/sdb6 /dev/sdb         part  linux_raid_member gpm:4    50G 
| `-/dev/md4 /dev/md4  /dev/sdb6        raid1 ext4                       50G /asc
|-/dev/sdb7  /dev/sdb7 /dev/sdb         part  linux_raid_member gpm:5    50G 
| `-/dev/md5 /dev/md5  /dev/sdb7        raid1 ext4                       50G /tmp
`-/dev/sdb8  /dev/sdb8 /dev/sdb         part  linux_raid_member gpm:6  31.5G 
  `-/dev/md6 /dev/md6  /dev/sdb8        raid1 swap                     31.5G [SWAP]
/dev/sdc     /dev/sdc            usb    disk                              0B 
/dev/sr0     /dev/sr0            sata   rom                            1024M 

The rear command crashed right after starting with "ERROR: Unsupported partition table 'unknown'":

gpm:/usr/share/rear/layout/prepare/GNU/Linux # rear -v -D mkbackup
Relax-and-Recover 2.7 / 2022-07-13
Running rear mkbackup (PID 22360 date 2024-03-27 13:29:18)
Command line options: /usr/sbin/rear -v -D mkbackup
Using log file: /var/log/rear/rear-gpm.log
Using build area: /var/tmp/rear.wXU8C4Um7Bwu257
Running 'init' stage ======================
Running workflow mkbackup on the normal/original system
Running 'prep' stage ======================
Using backup archive '/var/tmp/rear.wXU8C4Um7Bwu257/outputfs/gpm/backup.tar.gz'
Using '/usr/bin/xorrisofs' to create ISO filesystem images
Using autodetected kernel '/boot/vmlinuz-5.14.21-150500.55.39-default' as kernel in the recovery system
Running 'layout/save' stage ======================
Creating disk layout
Overwriting existing disk layout file /var/lib/rear/layout/disklayout.conf
ERROR: Unsupported partition table 'unknown' (must be one of 'msdos' 'gpt' 'gpt_sync_mbr' 'dasd')
Some latest log messages since the last called script 200_partition_layout.sh:
  2024-03-27 13:29:21.258329016 Entering debugscript mode via 'set -x'.
  2024-03-27 13:29:21.269645756 Saving disks and their partitions
  Error: Can't have a partition outside the disk!
  Error: Can't have a partition outside the disk!
Error exit of rear mkbackup (PID 22360) and its descendant processes
Exiting subshell 1 (where the actual error happened)
Aborting due to an error, check /var/log/rear/rear-gpm.log for details
Exiting rear mkbackup (PID 22360) and its descendant processes ...
Running exit tasks
To remove the build area you may use (with caution): rm -Rf --one-file-system /var/tmp/rear.wXU8C4Um7Bwu257
Terminated

The reason is the 'unknown' partition table on the sda disk, whose partitions are RAID1 members:

gpm:/usr/share/rear/layout/prepare/GNU/Linux # grep -v '^#' /var/lib/rear/layout/disklayout.conf 
disk /dev/sda 1000204886016 unknown
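
For reference, the third field in that disk entry is the partition table type, which ReaR derives from parted in the 200_partition_layout.sh script mentioned in the log above (the two parted "Error:" lines in the log come from that script run). A rough sketch of an equivalent check, not ReaR's exact code:

# parted -s /dev/sda print 2>/dev/null | grep '^Partition Table:'
Partition Table: unknown

'unknown' is not one of the types ReaR supports (msdos, gpt, gpt_sync_mbr, dasd), hence the ERROR above.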

Raid configuration:

gpm:/usr/share/rear/layout/prepare/GNU/Linux # mdadm --detail --scan --config=partitions
ARRAY /dev/md0 metadata=1.2 name=gpm:0 UUID=7ce553bc:c228e1f8:9f571002:80fc9282
ARRAY /dev/md6 metadata=1.2 name=gpm:6 UUID=027c4b60:bc0428f3:459b9d16:9899e2d6
ARRAY /dev/md5 metadata=1.2 name=gpm:5 UUID=3dabeffa:fe73dce2:e80e9c13:ee782d5b
ARRAY /dev/md2 metadata=1.2 name=gpm:2 UUID=562cde3d:ea14ada5:bbe75715:81f659bd
ARRAY /dev/md4 metadata=1.2 name=gpm:4 UUID=3507a2dd:923d63f2:a747f34f:63126f9e
ARRAY /dev/md1 metadata=1.2 name=gpm:1 UUID=1d118b9b:e52e2188:70c75680:580fda55
ARRAY /dev/md3 metadata=1.2 name=gpm:3 UUID=1ca1aba5:29b38b47:567991c9:1a665cac
@jsmeix
Member

jsmeix commented Mar 27, 2024

@madurani
as far as I can see,
you made several RAID1 arrays (/dev/md0 to /dev/md6),
each one from a matching pair of partitions on sda and sdb.
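
For illustration, such a per-partition RAID1 layout is typically assembled with commands like the following (only a sketch, since the tool that originally created these arrays is unknown):

# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
# mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

and so on for /dev/md2 to /dev/md6 on the remaining partition pairs.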

What I had tested was RAID1 made of whole disks, cf.
https://github.com/rear/rear/wiki/Test-Matrix-ReaR-2.7

What do the commands

# parted -s /dev/sda unit GiB print

# parted -s /dev/sdb unit GiB print

show on your system?

@madurani
Author

madurani commented Mar 27, 2024

gpm:/opt/rear # parted -s /dev/sda unit GiB print
Error: Can't have a partition outside the disk!
Model: ATA WDC WD10EZEX-08M (scsi)
Disk /dev/sda: 932GiB
Sector size (logical/physical): 512B/4096B
Partition Table: unknown
Disk Flags: 

gpm:/opt/rear # parted -s /dev/sdb unit GiB print
Error: Can't have a partition outside the disk!
Model: ATA WDC WD10EZEX-60M (scsi)
Disk /dev/sdb: 932GiB
Sector size (logical/physical): 512B/4096B
Partition Table: unknown
Disk Flags: 

@jsmeix
Member

jsmeix commented Mar 27, 2024

Huh!
The 'parted' output seems to indicate that
your disk setup is somehow just broken
(but I am not a RAID expert),
cf. what I had on
https://github.com/rear/rear/wiki/Test-Matrix-ReaR-2.7

@jsmeix
Member

jsmeix commented Mar 27, 2024

@madurani
can you describe what you did to set up your RAID1?
E.g. what tool did you use or what commands did you call?

What I did during my tests on
https://github.com/rear/rear/wiki/Test-Matrix-ReaR-2.7
was to use the partitioning tool in YaST.

With that I made a single RAID1 array of my two
whole disks /dev/sda and /dev/sdb, which results in
the single RAID1 array /dev/md127
(with the size of the smaller disk).

That RAID1 array /dev/md127 behaves like a whole disk
and therein I created partitions, as shown by 'lsblk':

`-/dev/md127     raid1                      9G
  |-/dev/md127p1 part                     102M
  |-/dev/md127p2 part  swap                 1G
  `-/dev/md127p3 part  btrfs              7.5G

Finally, on the /dev/md127p3 partition
a btrfs filesystem is created.

In the end there is
only one btrfs filesystem
on one /dev/md127p3 partition
on one /dev/md127 RAID1 array,
and only that one RAID1 array
is on the two disks /dev/sda and /dev/sdb.
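
For illustration, a command-line sketch that is roughly equivalent (the exact commands and partition boundaries are assumptions derived from the 'lsblk' sizes above, not what YaST literally runs):

# mdadm --create /dev/md127 --level=1 --raid-devices=2 /dev/sda /dev/sdb
# parted -s /dev/md127 mklabel gpt
# parted -s /dev/md127 mkpart primary 1MiB 103MiB
# parted -s /dev/md127 mkpart primary 103MiB 1127MiB
# parted -s /dev/md127 mkpart primary 1127MiB 100%
# mkswap /dev/md127p2
# mkfs.btrfs /dev/md127p3

The three mkpart calls create /dev/md127p1 (102M), /dev/md127p2 (1G, for swap), and /dev/md127p3 (the rest, for btrfs).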

@madurani
Author

I didn't configure the mentioned RAID. It was set up a long time ago.

@jsmeix
Member

jsmeix commented Mar 27, 2024

@madurani
does your system work OK with your current RAID1 setup?

Again: I am not a RAID expert.
So your current RAID1 setup could even be correct,
but at least to me it looks "unexpected".

And given your current 'parted' output,
your RAID1 setup is not supported by ReaR,
because ReaR depends on normal 'parted' output.

@madurani
Author

Yes, the system works properly, without problems. I wanted to use ReaR as a backup before an OS update.

@jsmeix
Member

jsmeix commented Mar 27, 2024

@rear/contributors
could you please have a look here (as time permits)?
Perhaps I misunderstand something, and
I would like to avoid causing a false alarm
about a possibly broken disk setup when
the current system works without problems.

@jsmeix jsmeix changed the title Raid issue RAID issue: several RAID1 arrays on partitions on disks with 'unknown' partition tables Mar 27, 2024
@pcahyna
Member

pcahyna commented Mar 27, 2024

I think we need more information about the partitions. Since parted refuses to show us any detail, please print the partition information using another tool: fdisk -x /dev/sda or gdisk -l /dev/sda.

@madurani
Author

gpm:~ # fdisk -x /dev/sda
Disk /dev/sda: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: WDC WD10EZEX-08M
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x8c20fcea

Device     Boot      Start        End    Sectors Id Type                  Start-C/H/S   End-C/H/S Attrs
/dev/sda1  *          2048  209717247  209715200 fd Linux raid autodetect     0/32/33 1023/254/63    80
/dev/sda2        209717248  419432447  209715200 fd Linux raid autodetect 1023/254/63 1023/254/63 
/dev/sda3        419432448 1468008447 1048576000 fd Linux raid autodetect 1023/254/63 1023/254/63 
/dev/sda4       1468008448 1953525759  485517312  f W95 Ext'd (LBA)       1023/254/63 1023/254/63 
/dev/sda5       1468010496 1677725695  209715200 fd Linux raid autodetect 1023/254/63 1023/254/63 
/dev/sda6       1677727744 1782585343  104857600 fd Linux raid autodetect 1023/254/63 1023/254/63 
/dev/sda7       1782587392 1887444991  104857600 fd Linux raid autodetect 1023/254/63 1023/254/63 
/dev/sda8       1887447040 1953525759   66078720 fd Linux raid autodetect 1023/254/63 1023/254/63 


gpm:~ # fdisk -x /dev/sdb
Disk /dev/sdb: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: WDC WD10EZEX-60M
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x00000000

Device     Boot      Start        End    Sectors Id Type                  Start-C/H/S   End-C/H/S Attrs
/dev/sdb1  *          2048  209717247  209715200 fd Linux raid autodetect     0/32/33 1023/254/63    80
/dev/sdb2        209717248  419432447  209715200 fd Linux raid autodetect 1023/254/63 1023/254/63 
/dev/sdb3        419432448 1468008447 1048576000 fd Linux raid autodetect 1023/254/63 1023/254/63 
/dev/sdb4       1468008448 1953525759  485517312  f W95 Ext'd (LBA)       1023/254/63 1023/254/63 
/dev/sdb5       1468010496 1677725695  209715200 fd Linux raid autodetect 1023/254/63 1023/254/63 
/dev/sdb6       1677727744 1782585343  104857600 fd Linux raid autodetect 1023/254/63 1023/254/63 
/dev/sdb7       1782587392 1887444991  104857600 fd Linux raid autodetect 1023/254/63 1023/254/63 
/dev/sdb8       1887447040 1953525759   66078720 fd Linux raid autodetect 1023/254/63 1023/254/63 

@pcahyna
Member

pcahyna commented Mar 27, 2024

gpm:~ # fdisk -x /dev/sda
Disk /dev/sda: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
...
/dev/sda8 1887447040 1953525759 66078720 fd Linux raid autodetect 1023/254/63 1023/254/63

This is not right. Doesn't the kernel complain about this on boot? What is the size of /dev/sda8 as seen by the kernel (blockdev --getsz /dev/sda8)? And what is the size of /dev/md6 (blockdev --getsz /dev/md6 and mdadm --detail /dev/md6)?
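
To make the mismatch concrete: the fdisk output above shows 1953525168 sectors in total, so the last addressable sector is 1953525167, yet /dev/sda4 and /dev/sda8 both end at sector 1953525759, which is 592 sectors past the end of the disk. That is presumably what parted objects to with "Can't have a partition outside the disk!". A quick way to compare the kernel's view with the fdisk numbers:

# blockdev --getsz /dev/sda
# blockdev --getsz /dev/sda8

The first reports the disk size in 512-byte sectors (expected: 1953525168); the second may be smaller than the 66078720 sectors fdisk reports, if the kernel clamps the partition at the disk end.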

Anyway, this is not a ReaR bug, nor a parted bug, although I would prefer parted to say what exactly it dislikes instead of just complaining that it dislikes something. What does parted /dev/sda unit GiB print (without -s) show?

@jsmeix jsmeix added support / question not ReaR / invalid The root cause is not in the ReaR code or ReaR is misused. labels Apr 16, 2024