
300_map_disks.sh insufficient to automatically find existing unique disk size mapping #2690

Closed
jsmeix opened this issue Oct 4, 2021 · 7 comments

jsmeix commented Oct 4, 2021

  • ReaR version ("/usr/sbin/rear -V"):
    current GitHub master code

  • Description of the issue (ideally so that others can reproduce it):

Assume that on the original system the disks and their sizes are

sda 1000
sdb 2000
sdc 3000
sdd 4000

Assume that on the replacement hardware the disks and their sizes are

sda 4000
sdb 1000
sdc 2000
sdd 3000

In this case a unique disk mapping based on disk size exists
(source target):

sda sdb
sdb sdc
sdc sdd
sdd sda

But the current layout/prepare/default/300_map_disks.sh
is overcautious in this particular case
because it skips a mapping when a possibly found target system disk
is already listed as source or target in the mapping file.

First it reads disk sda 1000 from disklayout.conf
and finds that sdb is the unique size match on the replacement hardware,
so it autogenerates the following line in the mapping file:

sda sdb

Next it reads disk sdb 2000 from disklayout.conf and
tries to find a current disk with the same name and the same size as the original,
but because sdb is already listed as a target disk in the generated mapping file,
it skips that mapping.

Then it reads disk sdc 3000 from disklayout.conf,
finds that sdd is the unique size match on the replacement hardware,
and autogenerates sdc sdd and adds it to the mapping file:

sda sdb
sdc sdd

Finally it reads disk sdd 4000 from disklayout.conf and
tries to find a current disk with the same name and the same size as the original,
but because sdd is already listed as a target disk in the generated mapping file,
it skips that mapping.
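
For illustration, the described behaviour boils down to roughly the following standalone sketch
(a simplified model using the example disk data from above, not the literal code in 300_map_disks.sh):

```bash
#!/bin/bash
# Simplified model of the current one-pass automapping (NOT the literal ReaR code):
# example disk sizes taken from the issue description above.
declare -A original=( [sda]=1000 [sdb]=2000 [sdc]=3000 [sdd]=4000 )
declare -A current=(  [sda]=4000 [sdb]=1000 [sdc]=2000 [sdd]=3000 )
mapping_file=$(mktemp)

for source_disk in sda sdb sdc sdd ; do
    # Overcautious check: when the same-named current disk already appears
    # (as source or target) in the mapping file, this original disk is skipped
    # entirely and the size-based fallback below is never reached:
    grep -qw "$source_disk" "$mapping_file" && continue
    # Size-based fallback: map to the current disk with the matching size:
    size=${original[$source_disk]}
    for candidate in sda sdb sdc sdd ; do
        if test "${current[$candidate]}" = "$size" ; then
            echo "$source_disk $candidate" >> "$mapping_file"
            break
        fi
    done
done

cat "$mapping_file"   # prints only 'sda sdb' and 'sdc sdd' because sdb and sdd get skipped
rm -f "$mapping_file"
```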

The main reason for what looks overcautious in this particular case
is the situation when the user has provided a mapping file.
Then the automatism must not conflict with disks
from the user-provided mapping,
i.e. the user-provided mapping must be sacrosanct.
Currently a user-provided mapping is not distinguished from the
autogenerated mapping: all of it is in one single mapping file.

Another reason is that the automatism must not conflict with
what it has already specified in its autogenerated mapping file
(e.g. no disk must be used twice as source or target).

So the automated mapping code in 300_map_disks.sh
needs some major rework.
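
One conceivable direction for such a rework is sketched below (a hypothetical illustration only,
not the change that was later made): automap an original disk only when exactly one
not-yet-claimed current disk with that size exists, and track claimed targets so that
neither a user-provided nor an already autogenerated mapping can be violated:

```bash
#!/bin/bash
# Hypothetical sketch of a size-uniqueness based automapping (NOT the actual fix):
declare -A original=( [sda]=1000 [sdb]=2000 [sdc]=3000 [sdd]=4000 )
declare -A current=(  [sda]=4000 [sdb]=1000 [sdc]=2000 [sdd]=3000 )
declare -A claimed    # current disks already used as target (user-provided or autogenerated)
mapping_file=$(mktemp)

for source_disk in sda sdb sdc sdd ; do
    size=${original[$source_disk]}
    # Collect all not-yet-claimed current disks with a matching size:
    candidates=()
    for disk in sda sdb sdc sdd ; do
        if test "${current[$disk]}" = "$size" && test -z "${claimed[$disk]}" ; then
            candidates+=( "$disk" )
        fi
    done
    # Automap only when the size match is unique,
    # otherwise leave the decision to the user dialog:
    if test "${#candidates[@]}" -eq 1 ; then
        echo "$source_disk ${candidates[0]}" >> "$mapping_file"
        claimed[${candidates[0]}]=1
    fi
done

cat "$mapping_file"   # now prints all four mappings: sda sdb, sdb sdc, sdc sdd, sdd sda
rm -f "$mapping_file"
```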

  • Workaround, if any:

As a user, manually add the missing mappings via the dialogs that ReaR shows.

@jsmeix jsmeix added the enhancement Adaptions and new features label Oct 4, 2021
@jsmeix jsmeix added this to the ReaR v2.7 milestone Oct 4, 2021
@jsmeix jsmeix self-assigned this Oct 4, 2021
jsmeix added a commit that referenced this issue Oct 5, 2021
In layout/prepare/default/300_map_disks.sh overhauled the
automapping of original 'disk' devices and 'multipath' devices
to current block devices in the currently running recovery system
see #2690

jsmeix commented Oct 5, 2021

#2693 should fix this issue here
but it needs some more testing with different cases
to avoid possible regressions in other cases.

@jsmeix jsmeix added the minor bug An alternative or workaround exists label Oct 6, 2021

pcahyna commented Oct 13, 2021

@jsmeix without trying the code, I have the impression that your reproducer is needlessly complicated. Wouldn't original disks

sda 1000
sdb 2000

and replacement disks

sda 2000
sdb 1000

be enough to reproduce the problem? First, sda would be mapped to sdb and then sdb would be skipped.
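
That two-disk data, substituted into the simplified sketch in the issue description above,
would show the same effect (hypothetical illustration):

```bash
# Two-disk variant of the example data (hypothetical):
declare -A original=( [sda]=1000 [sdb]=2000 )
declare -A current=(  [sda]=2000 [sdb]=1000 )
# First sda gets mapped to sdb, then sdb is skipped because it is already
# listed as a target in the mapping file, so 'sdb sda' is never written.
```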


jsmeix commented Oct 14, 2021

@pcahyna my initial description of the issue is based
on a SUSE customer issue where the customer had 4 disks.
It is not the smallest possible reproducer.


pcahyna commented Oct 14, 2021

@jsmeix I see, and is my reasoning above correct, or am I missing something?


jsmeix commented Oct 14, 2021

@pcahyna
I would have to test with two disks to know for sure,
but in #2693
I did not test with two disks.

jsmeix added a commit that referenced this issue Oct 15, 2021
In layout/prepare/default/300_map_disks.sh overhauled the
automapping of original 'disk' devices and 'multipath' devices
to current block devices in the currently running recovery system
so that now it automatically finds an existing unique disk size mapping
also when there is a unique mapping between more than two disks,
see #2690

jsmeix commented Oct 15, 2021

I tested with two disks, see
#2693 (comment)


jsmeix commented Oct 15, 2021

With #2693 merged
this issue should be fixed.

@jsmeix jsmeix closed this as completed Oct 15, 2021