
Ensure supported partition tables #2803

Merged
jsmeix merged 2 commits into master from jsmeix-check-supported-partition-table on May 18, 2022

Conversation

@jsmeix (Member) commented May 10, 2022

In layout/save/GNU/Linux/200_partition_layout.sh
ensure $disk_label is one of the supported partition tables, cf.
#2801 (comment)
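
For illustration, a minimal sketch of what such a $disk_label validation could look like (an assumption for illustration, not the actual PR diff; Error is ReaR's fatal-error function, and the exact list of supported partition table types is assumed here):

# Hypothetical sketch of the $disk_label check in 200_partition_layout.sh:
case $disk_label in
    (msdos|gpt|dasd)
        # Supported partition table type - proceed with generating 'disk' and 'part' entries:
        ;;
    (*)
        Error "Unsupported partition table '$disk_label' on $device"
        ;;
esac
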
@jsmeix jsmeix added the enhancement Adaptions and new features label May 10, 2022
@jsmeix jsmeix added this to the ReaR v2.7 milestone May 10, 2022
@jsmeix jsmeix requested a review from a team May 10, 2022 08:48
@jsmeix jsmeix self-assigned this May 10, 2022
@jsmeix jsmeix changed the title Update 200_partition_layout.sh Ensure supported partition tables May 10, 2022
@jsmeix (Member, Author) commented May 11, 2022

This pull request was triggered by
#2801 (comment)
in particular the last part therein, which reads

I will have a look at
usr/share/rear/layout/save/GNU/Linux/200_partition_layout.sh
how to make it behave more reliably and failsafe,
in particular error out directly therein when things failed
instead of error out later in 950_verify_disklayout_file.sh

Therefore the changes in this pull request are
generic enhancements to detect early, i.e. directly
in the code in layout/save/GNU/Linux/200_partition_layout.sh
where the entries are generated,
when invalid entries would be generated
(regardless of what the actual reason is), see
#2801 (comment)

Because I get the same disklayout.conf on my home office laptop,
there should be no regressions from those changes,
so I would like to merge them tomorrow afternoon
unless there are objections from one of you
@rear/contributors
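
A sketch of such a no-regression check (the copy location is illustrative; /var/lib/rear/layout/disklayout.conf is ReaR's standard layout file):

# Keep the disklayout.conf that the unpatched ReaR generates:
rear savelayout
cp /var/lib/rear/layout/disklayout.conf /tmp/disklayout.conf.before
# Install the patched ReaR, regenerate, and compare:
rear savelayout
diff /tmp/disklayout.conf.before /var/lib/rear/layout/disklayout.conf && echo "identical disklayout.conf - no regression"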

@pcahyna (Member) commented May 11, 2022

First thought: shouldn't we check the exit code from parted? I can try to recreate a broken setup (like #2801) and check whether parted returns a nonzero exit code when this error happens.
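
Such a probe could be as simple as this (assuming /dev/sdc is a disk with an unrecognised disk label, as in the log excerpt further below):

# Check parted's exit code on a disk without a recognised disk label:
parted -s /dev/sdc print
echo "parted exit code: $?"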

@jsmeix (Member, Author) commented May 12, 2022

@pcahyna
it would be very helpful if you could test
what the parted exit code is in such cases.

Of course ReaR should care about the exit codes of called programs,
but my problem in this case is that I don't want to introduce
regressions by erroring out on a non-zero parted exit code,
because parted may return some non-zero exit codes that are meant
only as a "warning" and not as a real (fatal) error.
Unfortunately "man parted" says nothing about exit codes, and
https://www.gnu.org/software/parted/manual/parted.html
only mentions "exit with status 1" for the 'align-check' command,
which indicates one should perhaps not rely on parted exit codes,
because in general one should not rely on undocumented behaviour.

Because from my current point of view proper parted exit code handling
may become a rather complicated task, I suggest keeping this pull request
as is and adding parted exit code handling via a separate pull request,
so we have time to do that later as needed (e.g. for ReaR 2.8).
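
A sketch of such non-fatal handling, assuming ReaR's LogPrintError function (an illustration, not code from this pull request):

# Log a warning on a non-zero parted exit code instead of erroring out,
# because parted's exit codes are undocumented:
if ! parted_output="$( parted -s $device print 2>&1 )" ; then
    LogPrintError "parted returned a non-zero exit code for $device: $parted_output"
fi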

@jsmeix (Member, Author) commented May 12, 2022

By the way:
I think sooner or later
layout/save/GNU/Linux/200_partition_layout.sh
should be completely overhauled.
But currently I am not yet sufficiently annoyed with it
(cf. #2802 (comment))
so for now I try to leave it as is as far as possible.

@pcahyna (Member) commented May 12, 2022

The fact that parted does not document its exit codes is an argument against checking the exit code. I think we could probably rely on the usual semantics (a nonzero exit code means an error, if nothing else is documented), but I agree that we should not make a potentially disruptive change shortly before a planned release.

Is it preferable to enumerate all supported partition types? I wonder whether, in the current state, the code breaks or does the right thing when another partition type (one not in the supported list) is encountered. (Of course, the partition type must not be unknown.) Is the parted output sufficiently regular that it has a chance to be saved into the disklayout file and then restored successfully without layout-specific code in ReaR?

@jsmeix (Member, Author) commented May 12, 2022

When, in the current state, parted reports a partition type
that is not in the supported list, ReaR must intentionally error out
(which this pull request implements), because our current code in
layout/save/GNU/Linux/200_partition_layout.sh
does not work with partition types that are not in the supported list.

Added comment why layout/save/GNU/Linux/200_partition_layout.sh
must error out when a partition type is reported by parted that is not in the supported list
cf. #2803 (comment)
@jsmeix (Member, Author) commented May 12, 2022

@pcahyna
it is afternoon here, so if you are not against the changes
I would like to merge them "as is" in about one hour or so,
to take at least a small step in 200_partition_layout.sh
towards "caring about errors".

@jsmeix (Member, Author) commented May 12, 2022

There is no need for me to rush here,
so I will wait until tomorrow afternoon.

@pcahyna (Member) commented May 12, 2022

I believe this change is fine, but waiting until tomorrow will hopefully allow me to test several disk layouts.

@jsmeix (Member, Author) commented May 13, 2022

@pcahyna
I was not aware that you intended to test it with several disk layouts.
This is very much appreciated.
I don't want to push you in any way.
Take your time, and I will wait until you have provided feedback
about your test results.
If you don't find time to do your tests, please tell me,
because then I would like to "just merge it" so it could
get tested by users who run our current master code.

@pcahyna (Member) commented May 13, 2022

I was not aware that you intended to test it with several disk layouts.

@jsmeix oops! I thought I had written that I was going to test some disk layouts, but I can't find that comment - maybe it was lost, or more likely I only intended to write it and my memory betrayed me.

What I intend to test is: gpt partitions, dasd partitions, and filesystems directly on disks (without partitions), same with RAID.
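
For reference, the "filesystem directly on disk" and "RAID directly on disk" cases can be set up roughly like this (device names match the layouts shown further below; the exact mdadm options are illustrative):

# Filesystem directly on a whole disk, without any partition table:
mkfs.xfs /dev/sdb
mount /dev/sdb /home/foo
# RAID10 directly on whole disks, without partition tables:
mdadm --create /dev/md127 --level=raid10 --raid-devices=4 /dev/vdb /dev/vdc /dev/vdd /dev/vde
mkfs.xfs /dev/md127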

@pcahyna (Member) commented May 13, 2022

I had not noticed #2804 until now, sorry. I suppose the best way to test is to merge the changes from both this pull request and #2804 and test the merged version?

@jsmeix (Member, Author) commented May 13, 2022

Yes. Both changes have the same intent.
But they do not depend on each other technically.
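
One straightforward way to build such a merged test version is via GitHub's pull request refs (the local branch names here are arbitrary):

git clone https://github.com/rear/rear.git && cd rear
git fetch origin pull/2803/head:pr-2803
git fetch origin pull/2804/head:pr-2804
git checkout -b test-both-prs pr-2803
git merge pr-2804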

pcahyna pushed a commit to pcahyna/rear that referenced this pull request May 16, 2022
Added comment why layout/save/GNU/Linux/200_partition_layout.sh
must error out when a partition type is reported by parted that is not in the supported list
cf. rear#2803 (comment)
@pcahyna (Member) commented May 17, 2022

Hi @jsmeix, sorry for the delay; I have now completed tests of several partition layouts (except dasd).
What I did: installed your changed version (both PRs together), ran rear savelayout, installed the unchanged version, and ran rear checklayout. It always reported that the layout is identical. The cases I tested were (a condensed command sketch follows the list):

  1. GPT and filesystem on GPT partition
  2. filesystem directly on disk (no partition table)
  3. filesystem on RAID on DOS partitions
  4. filesystem on RAID directly on disk (no partition tables)
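
In condensed form the procedure was (rear savelayout and rear checklayout are ReaR's real workflows; the install/switch steps are schematic):

# With the patched ReaR installed:
rear -d savelayout      # writes /var/lib/rear/layout/disklayout.conf
# After switching back to the unpatched ReaR:
rear -d checklayout     # verifies the saved layout still matches the system
echo "checklayout exit code: $?"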

Here are the layout files resulting from all the cases:

# Disk layout dated 20220517093318 (YYYYmmddHHMMSS)
# NAME                                                  KNAME     PKNAME    TRAN TYPE FSTYPE      LABEL SIZE MOUNTPOINT UUID
# /dev/sda                                              /dev/sda                 disk                    10G            
# |-/dev/sda1                                           /dev/sda1 /dev/sda       part                     4M            
# |-/dev/sda2                                           /dev/sda2 /dev/sda       part xfs                 1G /boot      c845dd14-1386-4fb4-9ba3-b1c1ba8aa6ee
# `-/dev/sda3                                           /dev/sda3 /dev/sda       part LVM2_member         9G            xniJq9-yzu6-nVrj-n5vO-6vqP-IKkN-omDVQI
#   |-/dev/mapper/rhel_ibm--p8--pvm--02--guest--02-root /dev/dm-0 /dev/sda3      lvm  xfs                 8G /          e21a6fcb-fe3a-4df1-a9e5-625b5606cb8d
#   `-/dev/mapper/rhel_ibm--p8--pvm--02--guest--02-swap /dev/dm-1 /dev/sda3      lvm  swap                1G [SWAP]     7fe91aa4-474d-4961-ae15-9b9829e3de4c
# /dev/sdb                                              /dev/sdb                 disk                    10G            
# `-/dev/sdb1                                           /dev/sdb1 /dev/sdb       part xfs                10G /home/foo  0838e307-77b8-4a53-931a-952c523cbe70
# /dev/sdc                                              /dev/sdc                 disk                    10G            
# /dev/sdd                                              /dev/sdd                 disk                    10G            
# Disk /dev/sda
# Format: disk <devname> <size(bytes)> <partition label type>
disk /dev/sda 10737418240 msdos
# Partitions on /dev/sda
# Format: part <device> <partition size(bytes)> <partition start(bytes)> <partition type|name> <flags> /dev/<partition>
part /dev/sda 4194304 1048576 primary boot,prep /dev/sda1
part /dev/sda 1073741824 5242880 primary none /dev/sda2
part /dev/sda 9658433536 1078984704 primary lvm /dev/sda3
# Disk /dev/sdb
# Format: disk <devname> <size(bytes)> <partition label type>
disk /dev/sdb 10737418240 gpt
# Partitions on /dev/sdb
# Format: part <device> <partition size(bytes)> <partition start(bytes)> <partition type|name> <flags> /dev/<partition>
part /dev/sdb 10735321088 1048576 extra none /dev/sdb1
# Disk /dev/sdc
# Format: disk <devname> <size(bytes)> <partition label type>
#disk /dev/sdc 10737418240 unknown
# Partitions on /dev/sdc
# Format: part <device> <partition size(bytes)> <partition start(bytes)> <partition type|name> <flags> /dev/<partition>
# Disk /dev/sdd
# Format: disk <devname> <size(bytes)> <partition label type>
#disk /dev/sdd 10737418240 unknown
# Partitions on /dev/sdd
# Format: part <device> <partition size(bytes)> <partition start(bytes)> <partition type|name> <flags> /dev/<partition>
# Format for LVM PVs
# lvmdev <volume_group> <device> [<uuid>] [<size(bytes)>]
lvmdev /dev/rhel_ibm-p8-pvm-02-guest-02 /dev/sda3 xniJq9-yzu6-nVrj-n5vO-6vqP-IKkN-omDVQI 18864128
# Format for LVM VGs
# lvmgrp <volume_group> <extentsize> [<size(extents)>] [<size(bytes)>]
lvmgrp /dev/rhel_ibm-p8-pvm-02-guest-02 4096 2302 9428992
# Format for LVM LVs
# lvmvol <volume_group> <name> <size(bytes)> <layout> [key:value ...]
lvmvol /dev/rhel_ibm-p8-pvm-02-guest-02 swap 1073741824b linear 
lvmvol /dev/rhel_ibm-p8-pvm-02-guest-02 root 8581545984b linear 
# Filesystems (only ext2,ext3,ext4,vfat,xfs,reiserfs,btrfs are supported).
# Format: fs <device> <mountpoint> <fstype> [uuid=<uuid>] [label=<label>] [<attributes>]
fs /dev/mapper/rhel_ibm--p8--pvm--02--guest--02-root / xfs uuid=e21a6fcb-fe3a-4df1-a9e5-625b5606cb8d label=  options=rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
fs /dev/sda2 /boot xfs uuid=c845dd14-1386-4fb4-9ba3-b1c1ba8aa6ee label=  options=rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
fs /dev/sdb1 /home/foo xfs uuid=0838e307-77b8-4a53-931a-952c523cbe70 label=  options=rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
# Swap partitions or swap files
# Format: swap <filename> uuid=<uuid> label=<label>
swap /dev/mapper/rhel_ibm--p8--pvm--02--guest--02-swap uuid=7fe91aa4-474d-4961-ae15-9b9829e3de4c label=
# Disk layout dated 20220516171240 (YYYYmmddHHMMSS)
# NAME                                         KNAME     PKNAME    TRAN TYPE FSTYPE      LABEL SIZE MOUNTPOINT UUID
# /dev/sda                                     /dev/sda                 disk                    40G            
# |-/dev/sda1                                  /dev/sda1 /dev/sda       part                     4M            
# |-/dev/sda2                                  /dev/sda2 /dev/sda       part xfs                 1G /boot      82e62545-6c93-49a3-bf00-77ef6e9252cc
# `-/dev/sda3                                  /dev/sda3 /dev/sda       part LVM2_member        39G            kZh27S-6mqH-ijVd-xYfl-vYfM-LfaX-bUfide
#   |-/dev/mapper/rhel_ibm--p9z--27--lp10-root /dev/dm-0 /dev/sda3      lvm  xfs                35G /          140256a4-357a-4a77-9875-54c768a09e1b
#   `-/dev/mapper/rhel_ibm--p9z--27--lp10-swap /dev/dm-1 /dev/sda3      lvm  swap                4G [SWAP]     8ea7f30e-03fd-41d4-a1da-2a1c2569deca
# /dev/sdb                                     /dev/sdb                 disk xfs                 5G /home/foo  a011c30b-828f-40d1-a5cc-6e600c683df0
# /dev/sdc                                     /dev/sdc                 disk                     5G            
# /dev/sdd                                     /dev/sdd                 disk                     5G            
# /dev/sde                                     /dev/sde                 disk                     5G            
# Disk /dev/sda
# Format: disk <devname> <size(bytes)> <partition label type>
disk /dev/sda 42949672960 msdos
# Partitions on /dev/sda
# Format: part <device> <partition size(bytes)> <partition start(bytes)> <partition type|name> <flags> /dev/<partition>
part /dev/sda 4194304 1048576 primary boot,prep /dev/sda1
part /dev/sda 1073741824 5242880 primary none /dev/sda2
part /dev/sda 41870688256 1078984704 primary lvm /dev/sda3
# Disk /dev/sdb
# Format: disk <devname> <size(bytes)> <partition label type>
disk /dev/sdb 5368709120 loop
# Partitions on /dev/sdb
# Format: part <device> <partition size(bytes)> <partition start(bytes)> <partition type|name> <flags> /dev/<partition>
# Disk /dev/sdc
# Format: disk <devname> <size(bytes)> <partition label type>
#disk /dev/sdc 5368709120 unknown
# Partitions on /dev/sdc
# Format: part <device> <partition size(bytes)> <partition start(bytes)> <partition type|name> <flags> /dev/<partition>
# Disk /dev/sdd
# Format: disk <devname> <size(bytes)> <partition label type>
#disk /dev/sdd 5368709120 unknown
# Partitions on /dev/sdd
# Format: part <device> <partition size(bytes)> <partition start(bytes)> <partition type|name> <flags> /dev/<partition>
# Disk /dev/sde
# Format: disk <devname> <size(bytes)> <partition label type>
#disk /dev/sde 5368709120 unknown
# Partitions on /dev/sde
# Format: part <device> <partition size(bytes)> <partition start(bytes)> <partition type|name> <flags> /dev/<partition>
# Format for LVM PVs
# lvmdev <volume_group> <device> [<uuid>] [<size(bytes)>]
lvmdev /dev/rhel_ibm-p9z-27-lp10 /dev/sda3 kZh27S-6mqH-ijVd-xYfl-vYfM-LfaX-bUfide 81778688
# Format for LVM VGs
# lvmgrp <volume_group> <extentsize> [<size(extents)>] [<size(bytes)>]
lvmgrp /dev/rhel_ibm-p9z-27-lp10 4096 9982 40886272
# Format for LVM LVs
# lvmvol <volume_group> <name> <size(bytes)> <layout> [key:value ...]
lvmvol /dev/rhel_ibm-p9z-27-lp10 swap 4294967296b linear 
lvmvol /dev/rhel_ibm-p9z-27-lp10 root 37572575232b linear 
# Filesystems (only ext2,ext3,ext4,vfat,xfs,reiserfs,btrfs are supported).
# Format: fs <device> <mountpoint> <fstype> [uuid=<uuid>] [label=<label>] [<attributes>]
fs /dev/mapper/rhel_ibm--p9z--27--lp10-root / xfs uuid=140256a4-357a-4a77-9875-54c768a09e1b label=  options=rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
fs /dev/sda2 /boot xfs uuid=82e62545-6c93-49a3-bf00-77ef6e9252cc label=  options=rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
fs /dev/sdb /home/foo xfs uuid=a011c30b-828f-40d1-a5cc-6e600c683df0 label=  options=rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
# Swap partitions or swap files
# Format: swap <filename> uuid=<uuid> label=<label>
swap /dev/mapper/rhel_ibm--p9z--27--lp10-swap uuid=8ea7f30e-03fd-41d4-a1da-2a1c2569deca label=
# Disk layout dated 20220516171209 (YYYYmmddHHMMSS)
# NAME                      KNAME      PKNAME    TRAN TYPE   FSTYPE            LABEL SIZE MOUNTPOINT UUID
# /dev/vda                  /dev/vda                  disk                            10G            
# |-/dev/vda1               /dev/vda1  /dev/vda       part   xfs                       1G /boot      52bf65ec-c4aa-447a-80ff-93024fdad21f
# `-/dev/vda2               /dev/vda2  /dev/vda       part   LVM2_member               9G            6gUVAS-YKPN-o2fL-ccEt-tj1k-N0TT-wQzPUZ
#   |-/dev/mapper/rhel-root /dev/dm-0  /dev/vda2      lvm    xfs                       5G /          c0aafd61-4935-447f-96ef-3877bd2d463b
#   `-/dev/mapper/rhel-swap /dev/dm-1  /dev/vda2      lvm    swap                      4G [SWAP]     dd196f7d-672d-48c0-acd6-7869fb45516a
# /dev/vdb                  /dev/vdb                  disk                            10G            
# `-/dev/vdb1               /dev/vdb1  /dev/vdb       part   linux_raid_member home   10G            112839b6-5ccd-533b-0619-1d280a697975
#   `-/dev/md127            /dev/md127 /dev/vdb1      raid10 xfs                      20G /home      53d8d1d5-6dec-48d6-b04d-92608ba2607f
# /dev/vdc                  /dev/vdc                  disk                            10G            
# `-/dev/vdc1               /dev/vdc1  /dev/vdc       part   linux_raid_member home   10G            112839b6-5ccd-533b-0619-1d280a697975
#   `-/dev/md127            /dev/md127 /dev/vdc1      raid10 xfs                      20G /home      53d8d1d5-6dec-48d6-b04d-92608ba2607f
# /dev/vdd                  /dev/vdd                  disk                            10G            
# `-/dev/vdd1               /dev/vdd1  /dev/vdd       part   linux_raid_member home   10G            112839b6-5ccd-533b-0619-1d280a697975
#   `-/dev/md127            /dev/md127 /dev/vdd1      raid10 xfs                      20G /home      53d8d1d5-6dec-48d6-b04d-92608ba2607f
# /dev/vde                  /dev/vde                  disk                            10G            
# `-/dev/vde1               /dev/vde1  /dev/vde       part   linux_raid_member home   10G            112839b6-5ccd-533b-0619-1d280a697975
#   `-/dev/md127            /dev/md127 /dev/vde1      raid10 xfs                      20G /home      53d8d1d5-6dec-48d6-b04d-92608ba2607f
# /dev/vdf                  /dev/vdf                  disk                            10G            
# Disk /dev/vda
# Format: disk <devname> <size(bytes)> <partition label type>
disk /dev/vda 10737418240 msdos
# Partitions on /dev/vda
# Format: part <device> <partition size(bytes)> <partition start(bytes)> <partition type|name> <flags> /dev/<partition>
part /dev/vda 1073741824 1048576 primary boot /dev/vda1
part /dev/vda 9662627840 1074790400 primary lvm /dev/vda2
# Disk /dev/vdb
# Format: disk <devname> <size(bytes)> <partition label type>
disk /dev/vdb 10737418240 msdos
# Partitions on /dev/vdb
# Format: part <device> <partition size(bytes)> <partition start(bytes)> <partition type|name> <flags> /dev/<partition>
part /dev/vdb 10736369664 1048576 primary raid /dev/vdb1
# Disk /dev/vdc
# Format: disk <devname> <size(bytes)> <partition label type>
disk /dev/vdc 10737418240 msdos
# Partitions on /dev/vdc
# Format: part <device> <partition size(bytes)> <partition start(bytes)> <partition type|name> <flags> /dev/<partition>
part /dev/vdc 10736369664 1048576 primary raid /dev/vdc1
# Disk /dev/vdd
# Format: disk <devname> <size(bytes)> <partition label type>
disk /dev/vdd 10737418240 msdos
# Partitions on /dev/vdd
# Format: part <device> <partition size(bytes)> <partition start(bytes)> <partition type|name> <flags> /dev/<partition>
part /dev/vdd 10736369664 1048576 primary raid /dev/vdd1
# Disk /dev/vde
# Format: disk <devname> <size(bytes)> <partition label type>
disk /dev/vde 10737418240 msdos
# Partitions on /dev/vde
# Format: part <device> <partition size(bytes)> <partition start(bytes)> <partition type|name> <flags> /dev/<partition>
part /dev/vde 10736369664 1048576 primary raid /dev/vde1
# Disk /dev/vdf
# Format: disk <devname> <size(bytes)> <partition label type>
#disk /dev/vdf 10737418240 msdos
# Partitions on /dev/vdf
# Format: part <device> <partition size(bytes)> <partition start(bytes)> <partition type|name> <flags> /dev/<partition>
# Software RAID devices (mdadm --detail --scan --config=partitions)
# ARRAY /dev/md/home metadata=1.2 name=home UUID=112839b6:5ccd533b:06191d28:0a697975
# Software RAID home device /dev/md127 (mdadm --misc --detail /dev/md127)
# /dev/md127:
#            Version : 1.2
#      Creation Time : Mon May 16 17:02:34 2022
#         Raid Level : raid10
#         Array Size : 20951040 (19.98 GiB 21.45 GB)
#      Used Dev Size : 10475520 (9.99 GiB 10.73 GB)
#       Raid Devices : 4
#      Total Devices : 4
#        Persistence : Superblock is persistent
#      Intent Bitmap : Internal
#        Update Time : Mon May 16 17:12:08 2022
#              State : clean 
#     Active Devices : 4
#    Working Devices : 4
#     Failed Devices : 0
#      Spare Devices : 0
#             Layout : near=2
#         Chunk Size : 512K
# Consistency Policy : bitmap
#               Name : home
#               UUID : 112839b6:5ccd533b:06191d28:0a697975
#             Events : 42
#     Number   Major   Minor   RaidDevice State
#        0     252       33        0      active sync set-A   /dev/vdc1
#        1     252       49        1      active sync set-B   /dev/vdd1
#        2     252       65        2      active sync set-A   /dev/vde1
#        3     252       17        3      active sync set-B   /dev/vdb1
# RAID device /dev/md127
# Format: raid /dev/<kernel RAID device> level=<RAID level> raid-devices=<nr of active devices> devices=<component device1,component device2,...> [name=<array name>] [metadata=<metadata style>] [uuid=<UUID>] [layout=<data layout>] [chunk=<chunk size>] [spare-devices=<nr of spare devices>] [size=<container size>]
raid /dev/md127 level=raid10 raid-devices=4 devices=/dev/vdc1,/dev/vdd1,/dev/vde1,/dev/vdb1 name=home metadata=1.2 uuid=112839b6:5ccd533b:06191d28:0a697975 layout=n2 chunk=512
# RAID disk /dev/md127
# Format: raiddisk <devname> <size(bytes)> <partition label type>
raiddisk /dev/md127 21453864960 loop
# Partitions on /dev/md127
# Format: part <device> <partition size(bytes)> <partition start(bytes)> <partition type|name> <flags> /dev/<partition>
# Format for LVM PVs
# lvmdev <volume_group> <device> [<uuid>] [<size(bytes)>]
lvmdev /dev/rhel /dev/vda2 6gUVAS-YKPN-o2fL-ccEt-tj1k-N0TT-wQzPUZ 18872320
# Format for LVM VGs
# lvmgrp <volume_group> <extentsize> [<size(extents)>] [<size(bytes)>]
lvmgrp /dev/rhel 4096 2303 9433088
# Format for LVM LVs
# lvmvol <volume_group> <name> <size(bytes)> <layout> [key:value ...]
lvmvol /dev/rhel swap 4257218560b linear 
lvmvol /dev/rhel root 5402263552b linear 
# Filesystems (only ext2,ext3,ext4,vfat,xfs,reiserfs,btrfs are supported).
# Format: fs <device> <mountpoint> <fstype> [uuid=<uuid>] [label=<label>] [<attributes>]
fs /dev/mapper/rhel-root / xfs uuid=c0aafd61-4935-447f-96ef-3877bd2d463b label=  options=rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
fs /dev/md127 /home xfs uuid=53d8d1d5-6dec-48d6-b04d-92608ba2607f label=  options=rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,sunit=1024,swidth=2048,noquota
fs /dev/vda1 /boot xfs uuid=52bf65ec-c4aa-447a-80ff-93024fdad21f label=  options=rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
# Swap partitions or swap files
# Format: swap <filename> uuid=<uuid> label=<label>
swap /dev/mapper/rhel-swap uuid=dd196f7d-672d-48c0-acd6-7869fb45516a label=
# Disk layout dated 20220517120245 (YYYYmmddHHMMSS)
# NAME                      KNAME      PKNAME    TRAN TYPE   FSTYPE            LABEL SIZE MOUNTPOINT UUID
# /dev/vda                  /dev/vda                  disk                            10G            
# |-/dev/vda1               /dev/vda1  /dev/vda       part   xfs                       1G /boot      bdc9d7c7-a079-4e97-8b83-f0e1f36efbbb
# `-/dev/vda2               /dev/vda2  /dev/vda       part   LVM2_member               9G            50J8JK-gXkM-fPKH-1jeR-rPmV-FMjg-jKAHME
#   |-/dev/mapper/rhel-root /dev/dm-0  /dev/vda2      lvm    xfs                       8G /          24781770-5193-415b-82eb-84c25b71499a
#   `-/dev/mapper/rhel-swap /dev/dm-1  /dev/vda2      lvm    swap                      1G [SWAP]     d8433399-25f3-48e6-93b4-80a1e671cfa3
# /dev/vdb                  /dev/vdb                  disk   linux_raid_member 127    10G            9ee92f3d-bec6-ac8d-c3d6-7125101cbb38
# `-/dev/md127              /dev/md127 /dev/vdb       raid10 xfs                      20G /home/foo  8846972b-b669-4e8b-b8c2-b78b180be39a
# /dev/vdc                  /dev/vdc                  disk   linux_raid_member 127    10G            9ee92f3d-bec6-ac8d-c3d6-7125101cbb38
# `-/dev/md127              /dev/md127 /dev/vdc       raid10 xfs                      20G /home/foo  8846972b-b669-4e8b-b8c2-b78b180be39a
# /dev/vdd                  /dev/vdd                  disk   linux_raid_member 127    10G            9ee92f3d-bec6-ac8d-c3d6-7125101cbb38
# `-/dev/md127              /dev/md127 /dev/vdd       raid10 xfs                      20G /home/foo  8846972b-b669-4e8b-b8c2-b78b180be39a
# /dev/vde                  /dev/vde                  disk   linux_raid_member 127    10G            9ee92f3d-bec6-ac8d-c3d6-7125101cbb38
# `-/dev/md127              /dev/md127 /dev/vde       raid10 xfs                      20G /home/foo  8846972b-b669-4e8b-b8c2-b78b180be39a
# /dev/vdf                  /dev/vdf                  disk                            10G            
# Disk /dev/vda
# Format: disk <devname> <size(bytes)> <partition label type>
disk /dev/vda 10737418240 msdos
# Partitions on /dev/vda
# Format: part <device> <partition size(bytes)> <partition start(bytes)> <partition type|name> <flags> /dev/<partition>
part /dev/vda 1073741824 1048576 primary boot /dev/vda1
part /dev/vda 9662627840 1074790400 primary lvm /dev/vda2
# Disk /dev/vdb
# Format: disk <devname> <size(bytes)> <partition label type>
disk /dev/vdb 10737418240 unknown
# Partitions on /dev/vdb
# Format: part <device> <partition size(bytes)> <partition start(bytes)> <partition type|name> <flags> /dev/<partition>
# Disk /dev/vdc
# Format: disk <devname> <size(bytes)> <partition label type>
disk /dev/vdc 10737418240 unknown
# Partitions on /dev/vdc
# Format: part <device> <partition size(bytes)> <partition start(bytes)> <partition type|name> <flags> /dev/<partition>
# Disk /dev/vdd
# Format: disk <devname> <size(bytes)> <partition label type>
disk /dev/vdd 10737418240 unknown
# Partitions on /dev/vdd
# Format: part <device> <partition size(bytes)> <partition start(bytes)> <partition type|name> <flags> /dev/<partition>
# Disk /dev/vde
# Format: disk <devname> <size(bytes)> <partition label type>
disk /dev/vde 10737418240 unknown
# Partitions on /dev/vde
# Format: part <device> <partition size(bytes)> <partition start(bytes)> <partition type|name> <flags> /dev/<partition>
# Disk /dev/vdf
# Format: disk <devname> <size(bytes)> <partition label type>
#disk /dev/vdf 10737418240 unknown
# Partitions on /dev/vdf
# Format: part <device> <partition size(bytes)> <partition start(bytes)> <partition type|name> <flags> /dev/<partition>
# Software RAID devices (mdadm --detail --scan --config=partitions)
# ARRAY /dev/md127 metadata=1.2 name=127 UUID=9ee92f3d:bec6ac8d:c3d67125:101cbb38
# Software RAID  device /dev/md127 (mdadm --misc --detail /dev/md127)
# /dev/md127:
#            Version : 1.2
#      Creation Time : Tue May 17 12:02:36 2022
#         Raid Level : raid10
#         Array Size : 20953088 (19.98 GiB 21.46 GB)
#      Used Dev Size : 10476544 (9.99 GiB 10.73 GB)
#       Raid Devices : 4
#      Total Devices : 4
#        Persistence : Superblock is persistent
#        Update Time : Tue May 17 12:02:44 2022
#              State : clean, resyncing 
#     Active Devices : 4
#    Working Devices : 4
#     Failed Devices : 0
#      Spare Devices : 0
#             Layout : offset=2
#         Chunk Size : 512K
# Consistency Policy : resync
#      Resync Status : 9% complete
#               Name : 127
#               UUID : 9ee92f3d:bec6ac8d:c3d67125:101cbb38
#             Events : 3
#     Number   Major   Minor   RaidDevice State
#        0     252       64        0      active sync   /dev/vde
#        1     252       16        1      active sync   /dev/vdb
#        2     252       32        2      active sync   /dev/vdc
#        3     252       48        3      active sync   /dev/vdd
# RAID device /dev/md127
# Format: raid /dev/<kernel RAID device> level=<RAID level> raid-devices=<nr of active devices> devices=<component device1,component device2,...> [name=<array name>] [metadata=<metadata style>] [uuid=<UUID>] [layout=<data layout>] [chunk=<chunk size>] [spare-devices=<nr of spare devices>] [size=<container size>]
raid /dev/md127 level=raid10 raid-devices=4 devices=/dev/vde,/dev/vdb,/dev/vdc,/dev/vdd metadata=1.2 uuid=9ee92f3d:bec6ac8d:c3d67125:101cbb38 layout=o2 chunk=512
# RAID disk /dev/md127
# Format: raiddisk <devname> <size(bytes)> <partition label type>
raiddisk /dev/md127 21455962112 loop
# Partitions on /dev/md127
# Format: part <device> <partition size(bytes)> <partition start(bytes)> <partition type|name> <flags> /dev/<partition>
# Format for LVM PVs
# lvmdev <volume_group> <device> [<uuid>] [<size(bytes)>]
lvmdev /dev/rhel /dev/vda2 50J8JK-gXkM-fPKH-1jeR-rPmV-FMjg-jKAHME 18872320
# Format for LVM VGs
# lvmgrp <volume_group> <extentsize> [<size(extents)>] [<size(bytes)>]
lvmgrp /dev/rhel 4096 2303 9433088
# Format for LVM LVs
# lvmvol <volume_group> <name> <size(bytes)> <layout> [key:value ...]
lvmvol /dev/rhel swap 1073741824b linear 
lvmvol /dev/rhel root 8585740288b linear 
# Filesystems (only ext2,ext3,ext4,vfat,xfs,reiserfs,btrfs are supported).
# Format: fs <device> <mountpoint> <fstype> [uuid=<uuid>] [label=<label>] [<attributes>]
fs /dev/mapper/rhel-root / xfs uuid=24781770-5193-415b-82eb-84c25b71499a label=  options=rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
fs /dev/md127 /home/foo xfs uuid=8846972b-b669-4e8b-b8c2-b78b180be39a label=  options=rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,sunit=1024,swidth=4096,noquota
fs /dev/vda1 /boot xfs uuid=bdc9d7c7-a079-4e97-8b83-f0e1f36efbbb label=  options=rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
# Swap partitions or swap files
# Format: swap <filename> uuid=<uuid> label=<label>
swap /dev/mapper/rhel-swap uuid=d8433399-25f3-48e6-93b4-80a1e671cfa3 label=

Note that disks without partition tables are sometimes reported as loop and sometimes as unknown. Note also that this works even though loop and unknown are not in the list of known partition table types, because if no partitions are found, the function exits before reaching your check. Here is an excerpt from a log of rear -d checklayout:

2022-05-17 12:49:43.350024870 Including layout/save/GNU/Linux/200_partition_layout.sh
2022-05-17 12:49:43.371916156 Saving disks and their partitions
2022-05-17 12:49:43.640249854 No partitions found on /dev/sdb.
Error: /dev/sdc: unrecognised disk label
2022-05-17 12:49:43.686851607 No partitions found on /dev/sdc.
Error: /dev/sdd: unrecognised disk label
2022-05-17 12:49:43.752422283 No partitions found on /dev/sdd.
Error: /dev/sde: unrecognised disk label
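
To make that control flow concrete, here is a simplified hypothetical sketch of the relevant part of 200_partition_layout.sh (condensed from the description above, not the actual code):

# If parted reports no partitions, return before the disk label check,
# so 'loop' and 'unknown' labels never reach the Error call:
if test -z "$partitions" ; then
    Log "No partitions found on $device."
    return 0
fi
case $disk_label in
    (msdos|gpt|dasd) ;;
    (*) Error "Unsupported partition table '$disk_label' on $device" ;;
esac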

@jsmeix jsmeix merged commit 494c92e into master May 18, 2022
@jsmeix jsmeix deleted the jsmeix-check-supported-partition-table branch May 18, 2022 12:17
@jsmeix (Member, Author) commented May 18, 2022

@pcahyna
no need to be sorry for the delay.
I appreciate your thorough tests all the more
because you even tested disks without partition tables,
which cannot show a supported partition table type.
It was basically only by luck that my changes did not cause
regressions in this particular case.
Thank you for your tests!

Labels: enhancement (Adaptions and new features), fixed / solved / done