
Enable creation of non consecutive partitions #2081

Merged

merged 6 commits into rear:master on May 24, 2019

Conversation

rmetrich
Contributor

@rmetrich rmetrich commented Mar 12, 2019

Relax-and-Recover (ReaR) Pull Request Template

Please fill in the following items before submitting a new pull request:

Pull Request Details:

Tested with 4 disks:

  • Disk 1: GPT with consecutive partitions
  • Disk 2: MSDOS with consecutive partitions
  • Disk 3: GPT with non-consecutive partitions (1, 3, 4, 6)
  • Disk 4: MSDOS with non-consecutive partitions (1, 3 (extended), 5 (logical), 6 (logical))

Brief description of the changes in this pull request:

parted is not capable of creating non-consecutive partitions. To still be able to do so, the trick is to create dummy partitions that fill the gaps between partition numbers.
These dummy partitions are allocated from the end of the target partition, because parted cannot resize a partition from its beginning.

Example (see also in /usr/share/rear/lib/layout-functions.sh, create_disk_partition() function):

The GPT disk contains only one partition, numbered 3 and named PART:

# parted -m -s /dev/sda unit B print
/dev/sda:21474836480B:scsi:512:512:gpt:QEMU QEMU HARDDISK:;
3:1048576B:2097151B:1048576B::PART:;

To recreate the exact same layout, we first create these partitions:

# parted -m -s /dev/sda unit B print
/dev/sda:21474836480B:scsi:512:512:gpt:QEMU QEMU HARDDISK:;
3:1048576B:2096127B:1047552B::PART:;
2:2096128B:2096639B:512B::dummy2:;
1:2096640B:2097151B:512B::dummy1:;

Then delete dummy partitions 1 and 2 and resize partition 3 to its original end.
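The trick above can be sketched as a sequence of parted calls. This is a dry-run illustration using the example byte offsets: a stub replaces parted so the commands are only printed, and ReaR's actual create_disk_partition() logic differs in detail.

```shell
# Stub: print the parted command line instead of touching a disk.
parted() { echo "parted $*" ; }

DISK=/dev/sda
parted -s "$DISK" mklabel gpt
# parted assigns the lowest free partition number, so the dummies are
# created first (from the end of the target region) to consume numbers 1 and 2:
parted -s "$DISK" mkpart "'dummy1'" 2096640B 2097151B   # becomes partition 1
parted -s "$DISK" mkpart "'dummy2'" 2096128B 2096639B   # becomes partition 2
# The real partition now receives the intended number 3,
# ending just before the dummies:
parted -s "$DISK" mkpart "'PART'" 1048576B 2096127B
# Finally remove the dummies and grow partition 3 to its original end:
parted -s "$DISK" rm 1
parted -s "$DISK" rm 2
parted -s "$DISK" resizepart 3 2097151B
```

Replacing the stub with the real parted binary would perform the actual partitioning (on newer parted; older versions lack resizepart, see below).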

@jsmeix jsmeix added the enhancement Adaptions and new features label Mar 12, 2019
@jsmeix
Member

jsmeix commented Mar 12, 2019

@rmetrich
Wow!
I will have a look tomorrow...

Member

@jsmeix jsmeix left a comment

I am not against using associative arrays for newer functionality in ReaR,
but then there must be a test whether associative arrays are supported,
which is not the case e.g. on my SLES11 system:

# declare -A arr=() && echo OK || echo FAIL with exit code $?
-bash: declare: -A: invalid option
declare: usage: declare [-afFirtx] [-p] [name[=value] ...]
FAIL with exit code 2

I think the parted version does not really matter for this pull request;
what matters is whether or not associative arrays work. A test for this
should be used to decide whether ReaR can handle
non-consecutive partitions or whether "rear mkrescue" should error out
in case of non-consecutive partitions.
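Such a capability test could look like the following minimal probe (a sketch, not ReaR's actual code; the function name is illustrative):

```shell
# Probe whether the running bash supports associative arrays.
has_associative_arrays() {
    # 'declare -A' fails with exit code 2 on bash 3.x (e.g. SLES11);
    # inside a function the probe variable stays local and harmless.
    declare -A __probe 2>/dev/null
}

if has_associative_arrays ; then
    echo "associative arrays supported"
else
    echo "associative arrays NOT supported - error out on sparse partition schemes here"
fi
```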

@jsmeix
Member

jsmeix commented Mar 14, 2019

@rmetrich
when I was thinking in the past about how to implement that,
because of #1681
and #1771,
I had the idea to create all the dummy partitions at the end of the disk.

I assume the minimum size of a partition is 4096 bytes = 8 * 512 bytes
(I hope this is safe for https://en.wikipedia.org/wiki/Advanced_Format );
if we use 1 MiB at the end of the disk for dummy partitions,
we should be able to create up to 256 = 1024 * 1024 / 4096
dummy partitions in that last 1 MiB at the end of the disk.

This way we could create all "real" partitions normally
without the need to resize them - except the last "real" partition
that must be created at most up to the end of the disk minus 1 MiB.

After all "real" partitions were created normally we can remove
the dummy partitions at the end of the disk.

I think we could even leave the last "real" partition as is
which would make it at most 1 MiB smaller than it was before.
I think that 1 MiB less cannot be of real importance.

Or could that somehow mess up things when LVM or things like that
is used on the last "real" partition that got recreated 1 MiB smaller?

I think things will mess up when the last "real" partition
is recreated 1 MiB smaller and that partition is an extended partition
and the last logical partition goes up to the end of the extended partition
which means if the extended partition is recreated 1 MiB smaller then
the last logical partition must be also recreated 1 MiB smaller.

@rmetrich
Contributor Author

@jsmeix I did some testing and it indeed messed up LVM. I believe resizing the partition is safer. Also, it is hard to tell which partition is the last one to resize, because partitions must be listed in the right order, but their allocated blocks need not be (partition 1 can extend to the end of the disk, for example).

@jsmeix
Copy link
Member

jsmeix commented Mar 14, 2019

@rmetrich
thank you for your testing - that makes things clear.

FYI:
With "last partition" I meant what is located "last" on the disk
(not what has the highest partition number), i.e. same as in
layout/prepare/default/420_autoresize_last_partitions.sh
https://github.com/rear/rear/blob/master/usr/share/rear/layout/prepare/default/420_autoresize_last_partitions.sh#L161

# The last partition is the /dev/<partition> with biggest <partition start(bytes)> value.
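
Finding that partition can be sketched with a small parser of `parted -m` output (the helper name is hypothetical; ReaR's 420_autoresize_last_partitions.sh works on its own layout file instead):

```shell
# Pick the partition located last on the disk, i.e. the one with the
# largest start byte, from 'parted -m -s DEVICE unit B print' on stdin.
last_partition_number() {
    awk -F: '$1 ~ /^[0-9]+$/ {          # partition lines start with the number
                 start = $2 + 0          # awk converts "2096128B" to 2096128
                 if (start > max) { max = start ; num = $1 }
             }
             END { print num }'
}

# With the sample GPT layout from the pull request description,
# partition 1 sits last on the disk even though its number is lowest:
last_partition_number <<'EOF'
/dev/sda:21474836480B:scsi:512:512:gpt:QEMU QEMU HARDDISK:;
3:1048576B:2096127B:1047552B::PART:;
2:2096128B:2096639B:512B::dummy2:;
1:2096640B:2097151B:512B::dummy1:;
EOF
# prints: 1
```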

jsmeix referenced this pull request Mar 26, 2019
with 'set -eu' and - by the way - adapted the coding style
according to https://github.com/rear/rear/wiki/Coding-Style
@jsmeix
Member

jsmeix commented Mar 28, 2019

@rmetrich
do you think you can make this working for ReaR 2.5
or should it better be postponed?

I think changes in the layout code that recreates partitions
are now somewhat late for ReaR 2.5 and because currently
we do not have real user issues with sparse partition schemes
I think it should be postponed after ReaR 2.5 to avoid regressions.

@rmetrich
Contributor Author

Better postpone it to later ... even if it seems to work.

@rmetrich rmetrich requested a review from jsmeix March 28, 2019 12:46
@jsmeix jsmeix added this to the ReaR v2.6 milestone Mar 28, 2019
Member

@jsmeix jsmeix left a comment

parted ... resizepart ... does not work on older systems, see
https://github.com/rear/rear/pull/2081/files#r270031759

@jsmeix
Member

jsmeix commented Mar 28, 2019

A side note FYI:

When you need "associative" arrays that are indexed by a number
(in your case the partition number for partitions_to_resize()),
you can use the normal bash 3.x non-associative arrays,
because one can set arbitrary index numbers like:

# arr[2]="two"
# arr[4]="four"
# arr[7]="seven"

# declare -p arr
declare -a arr='([2]="two" [4]="four" [7]="seven")'

# for index in $( seq 256 ) ; do test "${arr[$index]}" && echo "element with index $index is '${arr[$index]}'"  ; done
element with index 2 is 'two'
element with index 4 is 'four'
element with index 7 is 'seven'

Alternatively, when you need pairs of numbers in an array,
you may concatenate the numbers of one pair
with a non-digit separator character like
partition_number:end_byte
as in

arr=( 2:12345 7:23456 123:3456789 )

and split them later with something like

for e in "${arr[@]}" ; do
    partition_number=${e%%:*}
    end_byte=${e##*:}
    echo partition_number $partition_number end_byte $end_byte
done

which results in this output:

partition_number 2 end_byte 12345
partition_number 7 end_byte 23456
partition_number 123 end_byte 3456789

Alternatively, when you need tuples of words in an array,
you can store them as strings with usual $IFS separators,
like in this code example:

# Array elements are "partition_number start_byte end_byte"
arr=( "2 1234 2345" "7 3456 4567" "123 5678 6789" )

declare -p arr

for e in "${arr[@]}" ; do
    read partition_number start_byte end_byte junk <<<"$e"
    echo partition_number:$partition_number start_byte:$start_byte end_byte:$end_byte
done

which results in this output:

declare -a arr='([0]="2 1234 2345" [1]="7 3456 4567" [2]="123 5678 6789")'
partition_number:2 start_byte:1234 end_byte:2345
partition_number:7 start_byte:3456 end_byte:4567
partition_number:123 start_byte:5678 end_byte:6789

@jsmeix
Member

jsmeix commented Mar 29, 2019

@rmetrich
many thanks for all your efforts to deal with
all those various different parted versions!

@rmetrich
Contributor Author

Sorry, it's still WIP, testing ...

@rmetrich
Contributor Author

Code is now stable, I hope :-)

Member

@jsmeix jsmeix left a comment

With the latest changes it looks o.k. to me
from plain looking at the code
so I approve it.

@rmetrich
Contributor Author

@jsmeix Could you test on SLES12-SP4, which has "parted resize" but not "parted resizepart"? Its "parted resize" takes only NUM and END arguments.

You just need to add some disk, partition it with MSDOS or GPT with gaps in the partition numbers, and try restoring :-)
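The version split between "resize" and "resizepart" could be detected with a helper along these lines (a sketch; the function name is hypothetical, and it assumes the subcommand list appears in the parted help text, which is passed in as an argument so the probe is easy to test):

```shell
# Decide which parted resize subcommand to use.
# $1: help/command-list text from the local parted binary.
choose_resize_command() {
    if echo "$1" | grep -qw resizepart ; then
        echo "resizepart"   # newer parted: resizepart NUM END
    else
        echo "resize"       # older parted, e.g. SLES12-SP4: resize NUM END
    fi
}

# Example usage against two hypothetical command lists:
choose_resize_command "mklabel mkpart rm resizepart print"   # prints: resizepart
choose_resize_command "mklabel mkpart rm resize print"       # prints: resize
```

Note that on old parted, plain "resize" also tries to resize the underlying file system (see the commit message below), so extra care is needed with that branch.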

@jsmeix
Member

jsmeix commented Mar 29, 2019

I will test it on SLES and provide feedback
( but not before next Monday ;-)

@rmetrich
many thanks for your efforts and have a nice weekend!

@jsmeix
Member

jsmeix commented Apr 2, 2019

I did my first test:
Things seem to work well for me on SLES12-SP4
with missing primary MSDOS partitions
(I get byte-by-byte identical partitions recreated).

@jsmeix
Member

jsmeix commented Apr 2, 2019

I did my second test:
Things seem to work well for me on SLES15 with missing GPT partitions
(I get byte-by-byte identical partitions recreated).

@rmetrich
Contributor Author

rmetrich commented Apr 2, 2019

Thanks @jsmeix for testing on SUSE.

@jsmeix
Member

jsmeix commented Apr 2, 2019

@gdha @schabrolles @gozora
could you also have a look here?

@jsmeix
Member

jsmeix commented Apr 2, 2019

@rmetrich
I would like to also test SLES11 to see how things behave
when non-consecutive partitions are not supported...

@jsmeix
Member

jsmeix commented May 23, 2019

@rmetrich
feel free to merge it as soon as you like.

I will not find time to also test SLES11 to see how things behave
when non-consecutive partitions are not supported, cf.
#2081 (comment)

Personally I would appreciate it if you merged it sooner rather than later
because I need to do enhancements for partition creation
on IBM Z architecture, cf.
#2142 (comment)
and I would prefer to do my enhancements
on top of your enhancements in this pull request here.

Signed-off-by: Renaud Métrich <rmetrich@redhat.com>
- no associative arrays
- no stderr redirection

Signed-off-by: Renaud Métrich <rmetrich@redhat.com>
Signed-off-by: Renaud Métrich <rmetrich@redhat.com>
Signed-off-by: Renaud Métrich <rmetrich@redhat.com>
…ecause it tries to resize the underlying file system also, and this is usually not implemented into parted

Signed-off-by: Renaud Métrich <rmetrich@redhat.com>
Signed-off-by: Renaud Métrich <rmetrich@redhat.com>
@rmetrich
Contributor Author

@jsmeix Done :-)

@jsmeix
Member

jsmeix commented May 24, 2019

@rmetrich
you did something but this pull request is
not yet merged into the current master code,
cf. https://github.com/rear/rear/commits

@rmetrich rmetrich merged commit e670ee3 into rear:master May 24, 2019
@rmetrich
Contributor Author

Odd, I did click!
Anyway, this time it should be OK.

@jsmeix
Member

jsmeix commented May 27, 2019

Never mind!
That's just how computers behave nowadays:
Internally deterministic but unpredictable from the outside.

I guess the 'click' event handler callback function manager got a
temporary system communication bus conflict with the user interface
database agent when launching a request handler for the time slice
watchdog processor on the remote client control daemon instance
container cloud blockchain as a service buzz bleep fubar argh!

@jsmeix
Member

jsmeix commented May 27, 2019

There is a possibly severe issue/regression now that this is merged:

Before, the explicit parted calls were in
the diskrestore.sh script, so in MIGRATION_MODE
one could edit them directly, for example

parted -s /dev/dasda mklabel dasd >&2
parted -s /dev/dasda mkpart "'dasda1'" 98304B 314621951B >&2
parted -s /dev/dasda mkpart "'dasda2'" 314621952B 838926335B >&2
parted -s /dev/dasda mkpart "'dasda3'" 838926336B 7385198591B >&2

cf. #2142 (comment)

Since this was merged, the explicit parted calls in diskrestore.sh
are replaced by indirect function calls, like in my case now

create_disk_label /dev/dasda dasd
create_disk_partition "/dev/dasda" "ext2" 1 98304 314621951
create_disk_partition "/dev/dasda" "ext2" 2 314621952 838926335
create_disk_partition "/dev/dasda" "ext2" 3 838926336 7385333759

which can no longer be "just edited" by an admin
unless he knows how those functions evaluate.

I think the main purpose of the generated diskrestore.sh script
is that when things do not "just work", an admin can relatively easily
understand what actually goes on and change it as he needs
to fix disk layout recreation issues during "rear recover";
see the section about "Restore to different hardware",
"The Ad-Hoc Way", at
https://github.com/rear/rear/blob/master/doc/user-guide/06-layout-configuration.adoc
Therein, in particular, the line

It’s simple bash code. Change it to use better values.

is no longer true: it is not simple bash code anymore.

See also in
https://en.opensuse.org/SDB:Disaster_Recovery

Disaster recovery with Relax-and-Recover (ReaR)
...
if the worst comes to the worst - even temporary quick
and dirty workarounds are relatively easily possible. 

@rmetrich
Contributor Author

rmetrich commented Jun 4, 2019

> There is a possibly severe issue/regression since this is merged: [...] (full comment quoted verbatim)

The old parted directives can still be used by the user to recreate his partitions, so there is no real regression here. Also, the create_disk_partition function has been documented so that the user can understand how it works.

@jsmeix
Member

jsmeix commented Jun 4, 2019

@rmetrich
on my test system I did some enhancements that output comments
with the actual parted calls in the diskrestore.sh script, so the user
can see the real commands, but I did not yet make a pull request for it...
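Such an enhancement could be sketched as a wrapper that echoes the underlying parted call as a comment before delegating to the real function. The wrapper name and the mkpart mapping below are illustrative, not ReaR's actual code:

```shell
# Print the equivalent parted call as a comment, so diskrestore.sh
# stays readable and editable for an admin, then delegate to ReaR's
# real create_disk_partition (delegation elided in this sketch).
create_disk_partition_commented() {
    local disk="$1" fstype="$2" number="$3" start="$4" end="$5"
    echo "# partition $number ($fstype): parted -s $disk mkpart primary ${start}B ${end}B"
    # create_disk_partition "$@"   # the real work would happen here
}

create_disk_partition_commented "/dev/dasda" "ext2" 1 98304 314621951
# prints: # partition 1 (ext2): parted -s /dev/dasda mkpart primary 98304B 314621951B
```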

Labels: enhancement (Adaptions and new features), fixed / solved / done