
disklayout.conf file contains duplicate lines #2187

Closed
gdha opened this issue Jul 17, 2019 · 22 comments

Comments

@gdha
Member

gdha commented Jul 17, 2019

  • ReaR version ("/usr/sbin/rear -V"): 2.4 and higher (also the latest upstream)

  • OS version ("cat /etc/rear/os.conf" or "lsb_release -a" or "cat /etc/os-release"): RHEL 7 - however I do not think it is an OS-related issue

  • ReaR configuration files ("cat /etc/rear/site.conf" and/or "cat /etc/rear/local.conf"): n/a - running savelayout is enough with an empty local.conf

  • System architecture (x86 compatible or PPC64/PPC64LE or what exact ARM device): x86

  • Storage (local disk or SSD) and/or SAN (FC or iSCSI or FCoE) and/or multipath (DM or NVMe): local and SAN

  • Description of the issue (ideally so that others can reproduce it):

$ sudo ./usr/sbin/rear -v savelayout
$ sudo grep -v \# ./var/lib/rear/layout/disklayout.conf | sort | uniq -c | sed -e 's/^[ \t]*//' | grep -v ^1
2 lvmvol /dev/vg00 lv_var 8589934592b linear
3 lvmvol /dev/vg_oradata lv_oradata03 1294932639744b linear
3 lvmvol /dev/vg_recovery lvol1 1609538994176b linear

Previous ReaR versions rear-2.0 and rear-2.1 were tested and did not have this problem.
As a result, the recovery is interrupted with errors like:

+++ create_component /dev/vg_recovery lvmgrp
+++ local device=/dev/vg_recovery
+++ local type=lvmgrp
+++ local touchfile=lvmgrp--dev-vg_recovery
+++ '[' -e /tmp/rear.o4oIsJt1n6w5LBY/tmp/touch/lvmgrp--dev-vg_recovery ']'
+++ return 0
+++ create_volume_group=1
+++ create_logical_volumes=1
+++ create_thin_volumes_only=0
+++ '[' 1 -eq 1 ']'
+++ LogPrint 'Creating LVM VG '\''vg_recovery'\''; Warning: some properties may not be preserved...'
+++ Log 'Creating LVM VG '\''vg_recovery'\''; Warning: some properties may not be preserved...'
++++ date '+%Y-%m-%d %H:%M:%S.%N '
+++ local 'timestamp=2019-07-16 09:59:57.692061143 '
+++ test 1 -gt 0
+++ echo '2019-07-16 09:59:57.692061143 Creating LVM VG '\''vg_recovery'\''; Warning: some properties may not be preserved...'
2019-07-16 09:59:57.692061143 Creating LVM VG 'vg_recovery'; Warning: some properties may not be preserved...
+++ Print 'Creating LVM VG '\''vg_recovery'\''; Warning: some properties may not be preserved...'
+++ test 1
+++ echo -e 'Creating LVM VG '\''vg_recovery'\''; Warning: some properties may not be preserved...'
+++ '[' -e /dev/vg_recovery ']'
+++ lvm vgcreate --physicalextentsize 4096k vg_recovery /dev/sdi /dev/sdj /dev/sdk
  WARNING: Failed to connect to lvmetad. Falling back to device scanning.
  A volume group called vg_recovery already exists.
  • Workaround, if any: manual intervention was required, as afterwards the script was confused about what had and had not been done. The recovery went fine once all VGs and LVols were created, some automatically by the script and some manually by me.

  • Attachments, as applicable ("rear -D mkrescue/mkbackup/recover" debug log files): I have the logs available if interested...

@gdha gdha added the bug label Jul 17, 2019
@gdha gdha added this to the ReaR v2.6 milestone Jul 17, 2019
@gdha
Member Author

gdha commented Jul 17, 2019

The problem comes from this line in the script usr/share/rear/layout/save/GNU/Linux/220_lvm_layout.sh:
lvm 8>&- 7>&- lvs --separator=":" --noheadings --units b --nosuffix -o origin,lv_name,vg_name,lv_size,lv_layout,pool_lv,chunk_size,stripes

Without the options we get:

[root@ITSGBHHLSP00741:/root]#
#-> lvm lvs --separator=":" --noheadings --units b --nosuffix
  lv-dvl:vg-dvl:-wi-ao----:10737418240::::::::
  lv_audit:vg00:-wi-ao----:4294967296::::::::
  lv_home:vg00:-wi-ao----:4294967296::::::::
  lv_log:vg00:-wi-ao----:4294967296::::::::
  lv_openv:vg00:-wi-ao----:3221225472::::::::
  lv_root:vg00:-wi-ao----:8589934592::::::::
  lv_tidal:vg00:-wi-ao----:2147483648::::::::
  lv_tmp:vg00:-wi-ao----:2147483648::::::::
  lv_var:vg00:-wi-ao----:8589934592::::::::
  swap:vg00:-wi-ao----:4294967296::::::::
  lv_oem:vg_oem:-wi-ao----:5364514816::::::::
  lv_oraarch:vg_oraarch:-wi-ao----:107369988096::::::::
  lv_u01:vg_oracle:-wi-ao----:53687091200::::::::
  lv_u02:vg_oracle:-wi-ao----:5364514816::::::::
  lv_oradata01:vg_oradata:-wi-ao----:42949672960::::::::
  lv_oradata02:vg_oradata:-wi-ao----:42949672960::::::::
  lv_oradata03:vg_oradata:-wi-ao----:1294932639744::::::::
  lv_oradata04:vg_oradata:-wi-ao----:767721209856::::::::
  lv_redo01a:vg_oraredo1:-wi-ao----:5368709120::::::::
  lv_redo01b:vg_oraredo1:-wi-ao----:5364514816::::::::
  lv_redo02a:vg_oraredo2:-wi-ao----:5368709120::::::::
  lv_redo02b:vg_oraredo2:-wi-ao----:5364514816::::::::
  lvol1:vg_recovery:-wi-ao----:1609538994176::::::::
  lv_swap:vg_swap:-wi-ao----:13958643712::::::::

With the options we see duplicate lines:

#-> lvm lvs --separator=":" --noheadings --units b --nosuffix -o origin,lv_name,vg_name,lv_size,lv_layout,pool_lv,chunk_size,stripes
  :lv-dvl:vg-dvl:10737418240:linear::0:1
  :lv_audit:vg00:4294967296:linear::0:1
  :lv_home:vg00:4294967296:linear::0:1
  :lv_log:vg00:4294967296:linear::0:1
  :lv_openv:vg00:3221225472:linear::0:1
  :lv_root:vg00:8589934592:linear::0:1
  :lv_tidal:vg00:2147483648:linear::0:1
  :lv_tmp:vg00:2147483648:linear::0:1
  :lv_var:vg00:8589934592:linear::0:1
  :lv_var:vg00:8589934592:linear::0:1
  :swap:vg00:4294967296:linear::0:1
  :lv_oem:vg_oem:5364514816:linear::0:1
  :lv_oraarch:vg_oraarch:107369988096:linear::0:1
  :lv_u01:vg_oracle:53687091200:linear::0:1
  :lv_u02:vg_oracle:5364514816:linear::0:1
  :lv_oradata01:vg_oradata:42949672960:linear::0:1
  :lv_oradata02:vg_oradata:42949672960:linear::0:1
  :lv_oradata03:vg_oradata:1294932639744:linear::0:1
  :lv_oradata03:vg_oradata:1294932639744:linear::0:1
  :lv_oradata03:vg_oradata:1294932639744:linear::0:1
  :lv_oradata04:vg_oradata:767721209856:linear::0:1
  :lv_redo01a:vg_oraredo1:5368709120:linear::0:1
  :lv_redo01b:vg_oraredo1:5364514816:linear::0:1
  :lv_redo02a:vg_oraredo2:5368709120:linear::0:1
  :lv_redo02b:vg_oraredo2:5364514816:linear::0:1
  :lvol1:vg_recovery:1609538994176:linear::0:1
  :lvol1:vg_recovery:1609538994176:linear::0:1
  :lvol1:vg_recovery:1609538994176:linear::0:1
  :lv_swap:vg_swap:13958643712:linear::0:1

As the code in this part was last modified by @rmetrich, I will assign it to you for further inspection.

@gdha
Member Author

gdha commented Jul 17, 2019

@rmetrich Do you want me to open a Red Hat support case so you can spend some time on it?

@gdha
Member Author

gdha commented Jul 18, 2019

A quick fix would be piping through "uniq":

#-> lvm 8>&- 7>&- lvs --separator=":" --noheadings --units b --nosuffix -o origin,lv_name,vg_name,lv_size,lv_layout,pool_lv,chunk_size,stripes,stripe_size
  :lv-dvl:vg-dvl:10737418240:linear::0:1:0
  :lv_audit:vg00:4294967296:linear::0:1:0
  :lv_home:vg00:4294967296:linear::0:1:0
  :lv_log:vg00:4294967296:linear::0:1:0
  :lv_openv:vg00:3221225472:linear::0:1:0
  :lv_root:vg00:8589934592:linear::0:1:0
  :lv_tidal:vg00:2147483648:linear::0:1:0
  :lv_tmp:vg00:2147483648:linear::0:1:0
  :lv_var:vg00:8589934592:linear::0:1:0
  :lv_var:vg00:8589934592:linear::0:1:0
  :swap:vg00:4294967296:linear::0:1:0
  :lv_oem:vg_oem:5364514816:linear::0:1:0
  :lv_oraarch:vg_oraarch:107369988096:linear::0:1:0
  :lv_u01:vg_oracle:53687091200:linear::0:1:0
  :lv_u02:vg_oracle:5364514816:linear::0:1:0
  :lv_oradata01:vg_oradata:42949672960:linear::0:1:0
  :lv_oradata02:vg_oradata:42949672960:linear::0:1:0
  :lv_oradata03:vg_oradata:1294932639744:linear::0:1:0
  :lv_oradata03:vg_oradata:1294932639744:linear::0:1:0
  :lv_oradata03:vg_oradata:1294932639744:linear::0:1:0
  :lv_oradata04:vg_oradata:767721209856:linear::0:1:0
  :lv_redo01a:vg_oraredo1:5368709120:linear::0:1:0
  :lv_redo01b:vg_oraredo1:5364514816:linear::0:1:0
  :lv_redo02a:vg_oraredo2:5368709120:linear::0:1:0
  :lv_redo02b:vg_oraredo2:5364514816:linear::0:1:0
  :lvol1:vg_recovery:1609538994176:linear::0:1:0
  :lvol1:vg_recovery:1609538994176:linear::0:1:0
  :lvol1:vg_recovery:1609538994176:linear::0:1:0
  :lv_swap:vg_swap:13958643712:linear::0:1:0

#-> lvm 8>&- 7>&- lvs --separator=":" --noheadings --units b --nosuffix -o origin,lv_name,vg_name,lv_size,lv_layout,pool_lv,chunk_size,stripes,stripe_size | uniq
  :lv-dvl:vg-dvl:10737418240:linear::0:1:0
  :lv_audit:vg00:4294967296:linear::0:1:0
  :lv_home:vg00:4294967296:linear::0:1:0
  :lv_log:vg00:4294967296:linear::0:1:0
  :lv_openv:vg00:3221225472:linear::0:1:0
  :lv_root:vg00:8589934592:linear::0:1:0
  :lv_tidal:vg00:2147483648:linear::0:1:0
  :lv_tmp:vg00:2147483648:linear::0:1:0
  :lv_var:vg00:8589934592:linear::0:1:0
  :swap:vg00:4294967296:linear::0:1:0
  :lv_oem:vg_oem:5364514816:linear::0:1:0
  :lv_oraarch:vg_oraarch:107369988096:linear::0:1:0
  :lv_u01:vg_oracle:53687091200:linear::0:1:0
  :lv_u02:vg_oracle:5364514816:linear::0:1:0
  :lv_oradata01:vg_oradata:42949672960:linear::0:1:0
  :lv_oradata02:vg_oradata:42949672960:linear::0:1:0
  :lv_oradata03:vg_oradata:1294932639744:linear::0:1:0
  :lv_oradata04:vg_oradata:767721209856:linear::0:1:0
  :lv_redo01a:vg_oraredo1:5368709120:linear::0:1:0
  :lv_redo01b:vg_oraredo1:5364514816:linear::0:1:0
  :lv_redo02a:vg_oraredo2:5368709120:linear::0:1:0
  :lv_redo02b:vg_oraredo2:5364514816:linear::0:1:0
  :lvol1:vg_recovery:1609538994176:linear::0:1:0
  :lv_swap:vg_swap:13958643712:linear::0:1:0

@rmetrich
Contributor

Hi @gdha , probably using sort -u is even safer.
The odd thing is that this looks like a bug in the lvm command; why would some LVs be printed multiple times?
Do you have a reproducer?

@jsmeix
Member

jsmeix commented Jul 19, 2019

@gdha
could you post the output of the command

lsblk -ipo NAME,KNAME,PKNAME,TRAN,TYPE,FSTYPE,SIZE,MOUNTPOINT

because I think it helps in general to have a comprehensible overview
of the block devices structure on your particular system.

@jsmeix
Member

jsmeix commented Jul 19, 2019

For the future:
I think that in general duplicate entries in disklayout.conf should not matter for ReaR,
because the automated dependency resolver in ReaR should mark
subsequent duplicates of an entry in disklayout.conf as done,
cf. the mark_as_done() function in lib/layout-functions.sh,
but I think things currently do not yet work that fail-safe in ReaR.

For now:
As a band-aid for now we may enhance
layout/save/default/950_verify_disklayout_file.sh
to detect duplicate entries in disklayout.conf
and error out during "rear mkrescue/mkbackup".
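Such a duplicate-entry check could be sketched as follows (a hypothetical sketch only, not the actual 950_verify_disklayout_file.sh code; the default file path and the final error handling are assumptions):

```shell
#!/bin/sh
# Hypothetical sketch of a duplicate-entry check for disklayout.conf
# (an assumption, not the actual 950_verify_disklayout_file.sh code).
disklayout="${1:-/var/lib/rear/layout/disklayout.conf}"
# Skip comments and blank lines, then list entries that occur more than once:
duplicates=$(grep -Ev '^[[:space:]]*(#|$)' "$disklayout" 2>/dev/null | sort | uniq -d)
if test -n "$duplicates" ; then
    echo "Duplicate entries in $disklayout:" >&2
    echo "$duplicates" >&2
    exit 1   # in ReaR this would call the Error function instead
fi
```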

@jsmeix
Member

jsmeix commented Jul 19, 2019

@gdha
only a blind offhanded guess because you wrote SAN:
Usually SAN also means multipath, and multipath means
that the same block devices may show up as duplicates via different paths.

@gdha
Member Author

gdha commented Jul 19, 2019

@gdha
could you post the output of the command

lsblk -ipo NAME,KNAME,PKNAME,TRAN,TYPE,FSTYPE,SIZE,MOUNTPOINT

because I think it helps in general to have a comprehensible overview
of the block devices structure on your particular system.

#-> lsblk -ipo NAME,KNAME,PKNAME,TRAN,TYPE,FSTYPE,SIZE,MOUNTPOINT
NAME                                  KNAME      PKNAME    TRAN   TYPE FSTYPE       SIZE MOUNTPOINT
/dev/fd0                              /dev/fd0                    disk                4K
/dev/sda                              /dev/sda                    disk               45G
|-/dev/sda1                           /dev/sda1  /dev/sda         part ext3         512M /boot
`-/dev/sda2                           /dev/sda2  /dev/sda         part LVM2_member 44.5G
  |-/dev/mapper/vg00-lv_root          /dev/dm-0  /dev/sda2        lvm  ext3           8G /
  |-/dev/mapper/vg00-swap             /dev/dm-1  /dev/sda2        lvm  swap           4G [SWAP]
  |-/dev/mapper/vg00-lv_home          /dev/dm-15 /dev/sda2        lvm  ext3           4G /home
  |-/dev/mapper/vg00-lv_audit         /dev/dm-16 /dev/sda2        lvm  ext3           4G /var/log/audit
  |-/dev/mapper/vg00-lv_log           /dev/dm-17 /dev/sda2        lvm  ext3           4G /var/log
  |-/dev/mapper/vg00-lv_var           /dev/dm-18 /dev/sda2        lvm  ext3           8G /var
  |-/dev/mapper/vg00-lv_tmp           /dev/dm-19 /dev/sda2        lvm  ext3           2G /var/tmp
  |-/dev/mapper/vg00-lv_openv         /dev/dm-20 /dev/sda2        lvm  ext3           3G /usr/openv
  `-/dev/mapper/vg00-lv_tidal         /dev/dm-21 /dev/sda2        lvm  ext3           2G /opt/TIDAL
/dev/sdb                              /dev/sdb                    disk LVM2_member   55G
|-/dev/mapper/vg_oracle-lv_u01        /dev/dm-9  /dev/sdb         lvm  ext3          50G /u01
`-/dev/mapper/vg_oracle-lv_u02        /dev/dm-10 /dev/sdb         lvm  ext3           5G /u02
/dev/sdc                              /dev/sdc                    disk LVM2_member  100G
`-/dev/mapper/vg_oraarch-lv_oraarch   /dev/dm-2  /dev/sdc         lvm  ext3         100G /u02/oraarch
/dev/sdd                              /dev/sdd                    disk LVM2_member 1000G
|-/dev/mapper/vg_oradata-lv_oradata01 /dev/dm-5  /dev/sdd         lvm  ext3          40G /u02/oradata01
|-/dev/mapper/vg_oradata-lv_oradata02 /dev/dm-6  /dev/sdd         lvm  ext3          40G /u02/oradata02
|-/dev/mapper/vg_oradata-lv_oradata03 /dev/dm-7  /dev/sdd         lvm  ext3         1.2T /u02/oradata03
`-/dev/mapper/vg_oradata-lv_oradata04 /dev/dm-8  /dev/sdd         lvm  ext3         715G /u02/oradata04
/dev/sde                              /dev/sde                    disk LVM2_member   10G
|-/dev/mapper/vg_oraredo1-lv_redo01a  /dev/dm-11 /dev/sde         lvm  ext3           5G /u02/oraredo01a
`-/dev/mapper/vg_oraredo1-lv_redo01b  /dev/dm-12 /dev/sde         lvm  ext3           5G /u02/oraredo01b
/dev/sdf                              /dev/sdf                    disk LVM2_member   10G
|-/dev/mapper/vg_oraredo2-lv_redo02a  /dev/dm-22 /dev/sdf         lvm  ext3           5G /u02/oraredo02a
`-/dev/mapper/vg_oraredo2-lv_redo02b  /dev/dm-23 /dev/sdf         lvm  ext3           5G /u02/oraredo02b
/dev/sdg                              /dev/sdg                    disk LVM2_member    5G
`-/dev/mapper/vg_oem-lv_oem           /dev/dm-3  /dev/sdg         lvm  ext3           5G /oem
/dev/sdh                              /dev/sdh                    disk LVM2_member   16G
`-/dev/mapper/vg_swap-lv_swap         /dev/dm-14 /dev/sdh         lvm  swap          13G [SWAP]
/dev/sdi                              /dev/sdi                    disk LVM2_member  500G
`-/dev/mapper/vg_recovery-lvol1       /dev/dm-4  /dev/sdi         lvm  ext3         1.5T /u02/recoveryarea01
/dev/sdj                              /dev/sdj                    disk LVM2_member  500G
`-/dev/mapper/vg_recovery-lvol1       /dev/dm-4  /dev/sdj         lvm  ext3         1.5T /u02/recoveryarea01
/dev/sdk                              /dev/sdk                    disk LVM2_member  500G
`-/dev/mapper/vg_recovery-lvol1       /dev/dm-4  /dev/sdk         lvm  ext3         1.5T /u02/recoveryarea01
/dev/sdl                              /dev/sdl                    disk LVM2_member  500G
`-/dev/mapper/vg_oradata-lv_oradata03 /dev/dm-7  /dev/sdl         lvm  ext3         1.2T /u02/oradata03
/dev/sdm                              /dev/sdm                    disk LVM2_member  550G
`-/dev/mapper/vg_oradata-lv_oradata03 /dev/dm-7  /dev/sdm         lvm  ext3         1.2T /u02/oradata03
/dev/sdn                              /dev/sdn                    disk LVM2_member   15G
`-/dev/mapper/vg--dvl-lv--dvl         /dev/dm-13 /dev/sdn         lvm  ext3          10G /app/scm-dvl
/dev/sr0                              /dev/sr0             sata   rom              1024M

@gdha
Member Author

gdha commented Jul 19, 2019

@rmetrich @jsmeix if you would like to see the rear log file and the console output, check out this Gist: https://gist.github.com/gdha/6fe7c90c952a3119ea731d5c7f791a2c

@rmetrich I agree that it is more an lvm bug than a rear bug

@gdha
Member Author

gdha commented Jul 19, 2019

@rmetrich

Hi @gdha , probably using sort -u is even safer.
The odd thing is that this looks like a bug in the lvm command; why would some LVs be printed multiple times?
Do you have a reproducer?

sort -u is faster, but it has the side effect that the output gets sorted; it would be better to just remove the duplicate lines with uniq
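An order-preserving alternative to both approaches would be the classic awk idiom, which drops every repeated line without re-sorting (just a sketch, not what ReaR ended up using):

```shell
# Print each line only the first time it is seen: unlike sort -u the
# original order is kept, and unlike uniq non-adjacent duplicates are
# removed as well.
printf 'b\na\nb\na\n' | awk '!seen[$0]++'
# prints:
# b
# a
```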

To get this fixed, I will open an HPE case, which will be forwarded to Red Hat.

@jsmeix
Member

jsmeix commented Jul 19, 2019

When I compare the excerpt from the initial comment

2 lvmvol /dev/vg00 lv_var 8589934592b linear
3 lvmvol /dev/vg_oradata lv_oradata03 1294932639744b linear
3 lvmvol /dev/vg_recovery lvol1 1609538994176b linear

with the lsblk output, I see that
/dev/dm-4 and /dev/dm-7 each appear three times,
which might explain why there are
triple entries for vg_oradata and vg_recovery,
but I fail to find a duplicate for vg00 or lv_var.

I also think that to be on the safe side the ordering
of entries in disklayout.conf should better not be changed.

@rmetrich
Contributor

@gdha Can you provide lvs output?
I'm wondering whether there is a snapshot on the system.

@gdha
Member Author

gdha commented Jul 19, 2019

@rmetrich

@gdha Can you provide lvs output?
I'm wondering whether there is a snapshot on the system.

#-> lvs
  LV           VG          Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv-dvl       vg-dvl      -wi-ao----   10.00g
  lv_audit     vg00        -wi-ao----    4.00g
  lv_home      vg00        -wi-ao----    4.00g
  lv_log       vg00        -wi-ao----    4.00g
  lv_openv     vg00        -wi-ao----    3.00g
  lv_root      vg00        -wi-ao----    8.00g
  lv_tidal     vg00        -wi-ao----    2.00g
  lv_tmp       vg00        -wi-ao----    2.00g
  lv_var       vg00        -wi-ao----    8.00g
  swap         vg00        -wi-ao----    4.00g
  lv_oem       vg_oem      -wi-ao----   <5.00g
  lv_oraarch   vg_oraarch  -wi-ao---- <100.00g
  lv_u01       vg_oracle   -wi-ao----   50.00g
  lv_u02       vg_oracle   -wi-ao----   <5.00g
  lv_oradata01 vg_oradata  -wi-ao----   40.00g
  lv_oradata02 vg_oradata  -wi-ao----   40.00g
  lv_oradata03 vg_oradata  -wi-ao----   <1.18t
  lv_oradata04 vg_oradata  -wi-ao---- <715.00g
  lv_redo01a   vg_oraredo1 -wi-ao----    5.00g
  lv_redo01b   vg_oraredo1 -wi-ao----   <5.00g
  lv_redo02a   vg_oraredo2 -wi-ao----    5.00g
  lv_redo02b   vg_oraredo2 -wi-ao----   <5.00g
  lvol1        vg_recovery -wi-ao----    1.46t
  lv_swap      vg_swap     -wi-ao----   13.00g

@rmetrich
Contributor

OK, I understand the root cause.
This duplication happens when the Logical Volume is split into multiple chunks.
This can be discovered using pvdisplay --maps or lvs --segments.

Piping through uniq will indeed fix this.

@gdha
Member Author

gdha commented Jul 22, 2019

@rmetrich You are right:

[root@ITSGBHHLSP00741:/tmp/rear.ic55au7x9m87O8b/rootfs/dev]#
#-> lvs --segments
  LV           VG          Attr       #Str Type   SSize
  lv-dvl       vg-dvl      -wi-ao----    1 linear   10.00g
  lv_audit     vg00        -wi-ao----    1 linear    4.00g
  lv_home      vg00        -wi-ao----    1 linear    4.00g
  lv_log       vg00        -wi-ao----    1 linear    4.00g
  lv_openv     vg00        -wi-ao----    1 linear    3.00g
  lv_root      vg00        -wi-ao----    1 linear    8.00g
  lv_tidal     vg00        -wi-ao----    1 linear    2.00g
  lv_tmp       vg00        -wi-ao----    1 linear    2.00g
  lv_var       vg00        -wi-ao----    1 linear    4.00g
  lv_var       vg00        -wi-ao----    1 linear    4.00g
  swap         vg00        -wi-ao----    1 linear    4.00g
  lv_oem       vg_oem      -wi-ao----    1 linear   <5.00g
  lv_oraarch   vg_oraarch  -wi-ao----    1 linear <100.00g
  lv_u01       vg_oracle   -wi-ao----    1 linear   50.00g
  lv_u02       vg_oracle   -wi-ao----    1 linear   <5.00g
  lv_oradata01 vg_oradata  -wi-ao----    1 linear   40.00g
  lv_oradata02 vg_oradata  -wi-ao----    1 linear   40.00g
  lv_oradata03 vg_oradata  -wi-ao----    1 linear  205.00g
  lv_oradata03 vg_oradata  -wi-ao----    1 linear <500.00g
  lv_oradata03 vg_oradata  -wi-ao----    1 linear  501.00g
  lv_oradata04 vg_oradata  -wi-ao----    1 linear <715.00g
  lv_redo01a   vg_oraredo1 -wi-ao----    1 linear    5.00g
  lv_redo01b   vg_oraredo1 -wi-ao----    1 linear   <5.00g
  lv_redo02a   vg_oraredo2 -wi-ao----    1 linear    5.00g
  lv_redo02b   vg_oraredo2 -wi-ao----    1 linear   <5.00g
  lvol1        vg_recovery -wi-ao----    1 linear <500.00g
  lvol1        vg_recovery -wi-ao----    1 linear <500.00g
  lvol1        vg_recovery -wi-ao----    1 linear <499.01g
  lv_swap      vg_swap     -wi-ao----    1 linear   13.00g

However, is this output of lvm a bug or a feature?

@rmetrich
Contributor

I would say it's a feature.
Depending on the columns to display, lvm may print an LV multiple times to show properties specific to a chunk (such as the size of the chunk, its position on disk, etc.).
What is missing is the filtering when the displayed properties are identical.

@rmetrich
Contributor

Should say "segment" instead of "chunk"

@rmetrich
Contributor

The printing of each segment happens as soon as the "stripes" property is displayed.
I will work on enhancing the actual code.
Basically, if using "uniq" is not enough to remove the duplication, then a warning should be printed in disklayout.conf stating that we are not capable of recreating the LV exactly.
This is already the case anyway, since when recreating the LV during migration, the start of the LV is not kept the same as in the original LV.

@gdha
Member Author

gdha commented Jul 22, 2019

@rmetrich Sure, good idea. In Migration Mode an exact copy at chunk/segment level is not possible anyhow. I'm not sure a warning should be given, as it would only confuse people, no?
I was too quick: in our case "uniq" was able to remove all duplicate entries.
@jsmeix what do you think?

rmetrich added a commit to rmetrich/rear that referenced this issue Jul 22, 2019
Create the 'lvmvol' lines commented out when multiple segments exist for a given LV.
This is not an issue unless Migration Mode is used. In such case, using 'lvcreate' commands already does best effort and loses LV information.

Signed-off-by: Renaud Métrich <rmetrich@redhat.com>
rmetrich added a commit that referenced this issue Jul 23, 2019
Issue #2187 - disklayout.conf file contains duplicate lines
@rmetrich
Contributor

@gdha I've requested a backport to RHEL7

@jsmeix
Member

jsmeix commented Aug 7, 2019

@rmetrich
isn't this fixed here at ReaR upstream since #2194 and #2196 are merged?

@rmetrich
Contributor

rmetrich commented Aug 7, 2019

@jsmeix yes it is.

@rmetrich rmetrich closed this as completed Aug 7, 2019