Cannot backup/restore very small disks (typically ~40MB or less) #55

Closed
shasheene opened this issue Apr 25, 2020 · 1 comment
@shasheene
Member

Rescuezilla v1.0.5 and v1.0.5.1 incorrectly compare a memory address to the length of the drive, causing the backup process to not proceed (without even a dialogue box; see #29).

Background: during the development of Rescuezilla v1.0.5, all known source code of Redo Backup and Recovery was found and unified into a git repository. I applied several changes from another developer's Redo Backup v1.0.3 patchset into the Rescuezilla git repository (specifically commit 292e1d6), but couldn’t run the application due to an aspect of Perl’s values() function having been deprecated and removed in newer versions of Perl. To resolve this, I created commit c00448e to work around that removed Perl behaviour, then continued on. It was this second commit that introduced this bug.

In testing using Rescuezilla v1.0.5.1, the value of the memory address was typically something like 0x285a234 (of course, every time an application is launched, the exact memory addresses may change). This value gets interpreted by Rescuezilla v1.0.5/v1.0.5.1 as ~42.3 megabytes and is compared against the length of the disk. If the memory address is larger than the disk size, this triggers a “stuck at 0%” issue with the status bar saying something like “Preparing to create backup of Drive 1, Part 1...” (which happens when an unexpected situation occurs early in the backup/restore process; #29 will improve the handling of this).

Please note: given the memory address varies between launches, it’s possible that larger drives could trigger this issue, but this has not yet been observed.

The incorrect comparison operation became clear during early testing of a 64-bit build (#3), as the much larger 64-bit memory addresses (e.g., 0x55a0bae63808) meant the bug was triggered for any drive smaller than 94.1 terabytes, which is of course all drives as of this writing, especially given Rescuezilla’s currently limited support of RAID.
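To illustrate the failure mode, here is a minimal Perl sketch with hypothetical variable names and values (not the actual Rescuezilla code): when a hash reference is used in numeric context, Perl yields the reference's memory address, so the size check silently becomes an address-versus-disk-length comparison.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical sketch: $ptab_bytes accidentally holds a hash reference
# instead of the largest partition size in bytes.
my %partitions = ( '/dev/sda1' => 10_000_000, '/dev/sda2' => 30_000_000 );
my $ptab_bytes = \%partitions;    # bug: a reference, not a byte count

my $disk_length = 40_000_000;     # e.g. a ~40MB disk, as reported by blockdev

# In numeric context the reference evaluates to its memory address
# (something like 0x285a234, i.e. ~42MB), so this check can fire on small
# disks even though the partitions would actually fit.
if ( $ptab_bytes > $disk_length ) {
    die "Partition table appears larger than the target disk; aborting.\n";
}
```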

@shasheene shasheene added the bug Something isn't working label Apr 25, 2020
@shasheene shasheene added this to the v1.0.6 milestone Apr 25, 2020
@shasheene shasheene self-assigned this Apr 25, 2020
@shasheene shasheene added this to To do in Rescuezilla v1.0.6 Milestone Kanban Board via automation Apr 25, 2020
@shasheene shasheene moved this from To do to In progress in Rescuezilla v1.0.6 Milestone Kanban Board Apr 25, 2020
shasheene added a commit to shasheene/rescuezilla-dev that referenced this issue Apr 25, 2020
Ensures the byte offset of the final partition being backed up is correctly
calculated instead of the current situation where the variable contains a
memory address. The calculation fix prevents a sanity check from being
incorrectly triggered during a comparison operation when the disk length is
approximately 32MB or less (the exact threshold depends on the memory
addresses).

Also fixes the sfdisk fields parsing (the 'Id' field has been renamed 'type')
and ensures the byte offset to the end of the final partition is correctly
used, so that the .size file contains that offset. This ensures the ability
to restore a subset of partitions to a smaller drive, as was intended by [1]
but never realized in Rescuezilla until now.

Further background: The largest partition byte size calculation feature was
added during the development of v1.0.5 (by cherry-picking a git commit
authored in 2012 in a separate Redo repository [1]). During testing at the
time, an 'Experimental values on scalar is now forbidden' error would pop up,
but without sufficient Perl experience at the time, it was unclear that the
root cause was the values() function no longer being able to operate on hash
references [3], so this issue was "fixed" by removing the values() function
altogether [2], not realizing this would mean the $ptab_bytes variable ends
up with a hash reference instead of the largest partition size.

This incorrect $ptab_bytes value gets compared to the length of the disk
(from blockdev), and the application has a sanity check which exits if the
disk length is smaller than this value.

There have been no reports of the application exiting due to this issue
(though it may have happened to some people). Because Rescuezilla
v1.0.5/v1.0.5.1 use 32-bit memory addresses, the typical disk length required
to trigger this issue is ~32MB or less, but it could potentially trigger on
larger disks if the memory address happened to be larger.

The proper fix is to dereference the hashref before running the values()
function as intended. Task rescuezilla#55 contains more information.

[1] 292e1d6

[2] c00448e

[3] https://perldoc.perl.org/functions/values.html

Fixes rescuezilla#55
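For reference, a minimal sketch of the kind of fix the commit message describes, assuming hypothetical partition data and simplified logic (the real Rescuezilla code differs):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use List::Util qw(max);

# Hypothetical partition table: device => size in bytes.
my $ptab = { '/dev/sda1' => 10_000_000, '/dev/sda2' => 30_000_000 };

# Broken on Perl >= 5.24 ('Experimental values on scalar is now forbidden'):
#   my $ptab_bytes = max( values $ptab );
# Dropping values() entirely instead leaves $ptab_bytes holding the hashref.

# The proper fix: dereference the hashref before calling values().
my $ptab_bytes = max( values %{$ptab} );

print "Largest partition: $ptab_bytes bytes\n";    # prints 30000000
```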
shasheene added a commit to shasheene/rescuezilla-dev that referenced this issue Apr 25, 2020
Rescuezilla v1.0.6 Milestone Kanban Board automation moved this from In progress to Done Apr 25, 2020
@shasheene
Member Author

Fixed in Rescuezilla v1.0.6.
