Clean up disks before recreating partitions/volumes/filesystems/... #799
@gdha @schlomo @thefrenchone @tbsky I am asking you all for feedback: what do you think about it?
What `wipefs` results in within the ReaR recovery system:

```
Welcome to Relax and Recover. Run "rear recover" to restore your system !
RESCUE f197:~ # ls -l /dev/sd*
brw-rw---- 1 root disk 8, 0 Mar 16 11:02 /dev/sda
brw-rw---- 1 root disk 8, 1 Mar 16 11:02 /dev/sda1
brw-rw---- 1 root disk 8, 2 Mar 16 11:02 /dev/sda2
RESCUE f197:~ # parted /dev/sda print
Model: ATA QEMU HARDDISK (scsi)
Disk /dev/sda: 21.5GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number  Start   End     Size    Type     File system     Flags
 1      1049kB  1571MB  1570MB  primary  linux-swap(v1)  type=83
 2      1571MB  21.5GB  19.9GB  primary  ext4            boot, type=83
RESCUE f197:~ # wipefs -a -f /dev/sda2
/dev/sda2: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
RESCUE f197:~ # ls -l /dev/sd*
brw-rw---- 1 root disk 8, 0 Mar 16 11:42 /dev/sda
brw-rw---- 1 root disk 8, 1 Mar 16 11:02 /dev/sda1
brw-rw---- 1 root disk 8, 2 Mar 16 11:42 /dev/sda2
RESCUE f197:~ # parted /dev/sda print
Model: ATA QEMU HARDDISK (scsi)
Disk /dev/sda: 21.5GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number  Start   End     Size    Type     File system     Flags
 1      1049kB  1571MB  1570MB  primary  linux-swap(v1)  type=83
 2      1571MB  21.5GB  19.9GB  primary                  boot, type=83
RESCUE f197:~ # wipefs -a -f /dev/sda1
/dev/sda1: 10 bytes were erased at offset 0x00000ff6 (swap): 53 57 41 50 53 50 41 43 45 32
RESCUE f197:~ # parted /dev/sda print
Model: ATA QEMU HARDDISK (scsi)
Disk /dev/sda: 21.5GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number  Start   End     Size    Type     File system  Flags
 1      1049kB  1571MB  1570MB  primary               type=83
 2      1571MB  21.5GB  19.9GB  primary               boot, type=83
RESCUE f197:~ # ls -l /dev/sd*
brw-rw---- 1 root disk 8, 0 Mar 16 11:42 /dev/sda
brw-rw---- 1 root disk 8, 1 Mar 16 11:42 /dev/sda1
brw-rw---- 1 root disk 8, 2 Mar 16 11:42 /dev/sda2
RESCUE f197:~ # wipefs -a -f /dev/sda
/dev/sda: 2 bytes were erased at offset 0x000001fe (dos): 55 aa
RESCUE f197:~ # parted /dev/sda print
Error: /dev/sda: unrecognised disk label
Model: ATA QEMU HARDDISK (scsi)
Disk /dev/sda: 21.5GB
Sector size (logical/physical): 512B/512B
Partition Table: unknown
Disk Flags:
RESCUE f197:~ # ls -l /dev/sd*
brw-rw---- 1 root disk 8, 0 Mar 16 11:42 /dev/sda
brw-rw---- 1 root disk 8, 1 Mar 16 11:42 /dev/sda1
brw-rw---- 1 root disk 8, 2 Mar 16 11:42 /dev/sda2
RESCUE f197:~ # partprobe -s /dev/sda
RESCUE f197:~ # ls -l /dev/sd*
brw-rw---- 1 root disk 8, 0 Mar 16 11:47 /dev/sda
brw-rw---- 1 root disk 8, 1 Mar 16 11:42 /dev/sda1
brw-rw---- 1 root disk 8, 2 Mar 16 11:42 /dev/sda2
RESCUE f197:~ # parted /dev/sda mklabel msdos
Information: You may need to update /etc/fstab.
RESCUE f197:~ # ls -l /dev/sd*
brw-rw---- 1 root disk 8, 0 Mar 16 11:51 /dev/sda
RESCUE f197:~ # partprobe -s /dev/sda
/dev/sda: msdos partitions
RESCUE f197:~ # parted /dev/sda print
Model: ATA QEMU HARDDISK (scsi)
Disk /dev/sda: 21.5GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number  Start  End  Size  Type  File system  Flags
RESCUE f197:~ # parted /dev/sda mklabel gpt
Warning: The existing disk label on /dev/sda will be destroyed and all data on this disk will be lost. Do you want to continue?
Yes/No? y
Information: You may need to update /etc/fstab.
RESCUE f197:~ # parted /dev/sda print
Model: ATA QEMU HARDDISK (scsi)
Disk /dev/sda: 21.5GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number  Start  End  Size  File system  Name  Flags
RESCUE f197:~ # ls -l /dev/sd*
brw-rw---- 1 root disk 8, 0 Mar 16 11:55 /dev/sda
```

Summary:
After `wipefs` the harddisk /dev/sda looks empty for `parted`.
To make the old/outdated partition device nodes go away,
it seems to be fail-safe to set a hardcoded "msdos" dummy label via

```
parted -s $device mklabel $label >&2
```
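As an aside, what the `wipefs -a -f` calls in the session above actually do is erase a few magic bytes in place (for ext4: `0x53 0xef` at offset 0x438, i.e. the superblock at 0x400 plus `s_magic` at 0x38, exactly as the `wipefs` output reports). The following is an illustration only, simulated with `dd` on a plain image file instead of a real partition:

```sh
# Illustration, not ReaR code: simulate what wipefs does to an ext4
# signature, using a plain image file instead of /dev/sda2.
img=/tmp/fake-sda2.img
dd if=/dev/zero of="$img" bs=1024 count=4 2>/dev/null
# plant the ext4 magic bytes 0x53 0xef where mkfs.ext4 would put them
# (offset 0x438 = 1080; octal escapes \123 \357 = 0x53 0xef)
printf '\123\357' | dd of="$img" bs=1 seek=1080 conv=notrunc 2>/dev/null
# "wipe" it the way wipefs does: overwrite just those two bytes
printf '\000\000' | dd of="$img" bs=1 seek=1080 conv=notrunc 2>/dev/null
```

This also explains why the partition table and the data blocks stay intact after `wipefs`: only the signatures that make tools recognize the filesystem are gone.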
It would be great to have wipefs run earlier. For me it was the code generated by 10_include_partition_code.sh that would occasionally fail or require extra time. Just removing the LVM information was usually enough. At the moment I can't test this, but hopefully soon I'll be able to. I'll try adding the wipefs command to clear the disk earlier.
@mattihautameki
For the fun of it:
For the current sources see the
But it seems that currently fails in the same way.
It seems currently "everybody" has issues with "udev vs. parted".
Because the more I learn about it |
An addendum regarding a higher stack of storage objects: one can try to remove mdadm superblocks from hard drives via mdadm --zero-superblock /dev/sd{a,b,c,d} to avoid the HDDs being detected as mdadm devices. Ideally only calling the generic "wipefs" tool
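A portable sketch of that mdadm superblock cleanup (an assumption here is that mdadm is available in the rescue system; the brace expansion above needs bash, so a plain loop is used instead, and a DRY_RUN guard keeps the sketch non-destructive):

```sh
# Sketch, not ReaR code: zero mdadm superblocks on the given disks.
# DRY_RUN defaults to true so the destructive command is only printed.
zero_md_superblocks() {
    for disk in "$@" ; do
        if "${DRY_RUN:-true}" ; then
            echo "WOULD RUN: mdadm --zero-superblock $disk"
        else
            mdadm --zero-superblock "$disk"
        fi
    done
}
zero_md_superblocks /dev/sda /dev/sdb /dev/sdc /dev/sdd
```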
FYI regarding
One may have a look at
Key steps:
|
@jsmeix perhaps the shred utility could also be useful (http://www.computerhope.com/unix/shred.htm). I noticed that RH engineers use this command to wipe a disk (before doing a "rear recover" test) - e.g.
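The exact command is not quoted in the comment above, so the following is only a plausible shred invocation, demonstrated on a scratch image file rather than a real /dev/sdX:

```sh
# Hypothetical shred invocation (assumption, not the quoted RH command):
# -n 1 does one pass of random data, -z adds a final pass of zeros.
# Demonstrated on a scratch image file instead of a real disk device.
target=/tmp/scratch-disk.img
dd if=/dev/zero of="$target" bs=1024 count=64 2>/dev/null
printf 'OLD-DATA' | dd of="$target" conv=notrunc 2>/dev/null
shred -n 1 -z "$target"
```

Unlike `wipefs`, which only erases signatures, `shred` overwrites every block, so it is much slower but removes the payload data too.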
Let's just make sure that we don't touch the hard disks before
|
I think to have such a "cleanupdisk" script behave
Such an early "cleanupdisk" script would have to run ...
"wipefs" must probably be run before anything is done with the harddisk,
in particular before a "parted" command is run ...
@jsmeix very good point. Yes, of course the cleanup stuff should be added to the beginning of the |
New layout/recreate/default/150_wipe_disks.sh to wipe disks.

The disks that will be completely wiped are those disks for which the create_disk_label function is called in diskrestore.sh (the create_disk_label function calls "parted -s $disk mklabel $label"), i.e. the disks that will be completely overwritten by diskrestore.sh.

This implements #799 "Clean up disks before recreating partitions/volumes/filesystems/...".

The intent is that it is also used later as a precondition for the future new 'storage' stage/code as a future replacement of the 'layout' stage/code, cf. #2510.
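The disk selection described above can be sketched as follows. This is a simplified illustration, not the actual 150_wipe_disks.sh code; the usual diskrestore.sh location and its "create_disk_label DEVICE LABEL" line format are assumptions:

```sh
# Sketch, not the real 150_wipe_disks.sh: collect the disks for which
# diskrestore.sh calls create_disk_label, i.e. the disks that get
# completely recreated and are therefore candidates for wiping.
disks_to_be_wiped() {
    # $1 = path to a generated diskrestore.sh
    grep '^ *create_disk_label ' "$1" | awk '{ print $2 }' | sort -u
}
```

Called e.g. as `disks_to_be_wiped /var/lib/rear/layout/diskrestore.sh` (path assumed), it would print one disk device node per line.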
As the PR #2514 is in progress, we had better re-open this issue.
I would like to merge #2514.
Wipe disks before recreating partitions/volumes/filesystems/..., see #799.

See the new DISKS_TO_BE_WIPED in default.conf, and for details see usr/share/rear/layout/recreate/default/README.wipe_disks.

This is currently new and experimental functionality, so for now by default no disk is wiped (DISKS_TO_BE_WIPED='false') to avoid possible regressions until this new feature has been tested more by interested users via an explicit DISKS_TO_BE_WIPED='' in local.conf, see #2514.
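For illustration, enabling the experimental wiping would then be a one-line snippet in local.conf (the variable name and empty-string convention are as described above; treat the exact file contents as a sketch):

```sh
# /etc/rear/local.conf (sketch): opt in to the experimental disk wiping;
# the empty string means "determine the disks to be wiped automatically"
DISKS_TO_BE_WIPED=''
```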
With #2514 merged there is now the new DISKS_TO_BE_WIPED in default.conf. This is currently new and experimental functionality, so by default no disk is wiped (DISKS_TO_BE_WIPED='false').
Phew!
By default let "rear recover" wipe disks that get completely recreated, via DISKS_TO_BE_WIPED="" in default.conf. In ReaR 2.7 default.conf has DISKS_TO_BE_WIPED='false', but now, after the ReaR 2.7 release, this feature should be used by default by users who run our GitHub master code, so that we can enable it by default in ReaR 2.8. See #2514 and #799.
Hereby I propose to let a "cleanupdisk" script run early
(i.e. before anything is done with the harddisk,
in particular before a "parted" command is run).
The purpose of the "cleanupdisk" script is to wipe any
possible remainders of various kinds of metadata
from the harddisk that could belong to various higher layers
of storage objects.
Currently (cf. #540) "wipefs" is run in
130_include_filesystem_code.sh for each partition device node
before a filesystem is created on that partition device node.
But after I wrote #791 (comment) I noticed that running "wipefs" before filesystems are created is probably too late.
I had already recognized this "too late" problem in #540 (comment) (there "it failed for RHEL6 at the partitioning level because of old data of the MD level so that before partitioning the MD tool would have to be run to clean up old MD data") but unfortunately that had slipped my lossy mind :-(
See #791 (comment) for the reason why "wipefs" must probably be run before anything is done with the harddisk, in particular before a "parted" command is run (excerpt):
Here is what I get in the ReaR recovery system
directly after login as root
on pristine new hardware
(where "pristine new hardware" is a new
from scratch created QEMU/KVM virtual
machine with full hardware virtualization):
And now ( tada - surprise! - not really ;-)
what I get in the ReaR recovery system
directly after login as root
on the same kind of machine where I already
had done a "rear recover" some time before
(i.e. where a subsequent "rear recover" would run
on a system where the harddisk was already in use):
Accordingly I think ReaR should run something like
to fully clean up the used harddisk before doing anything with it.
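Based on the session quoted earlier in this thread, that "something like" could be sketched as the following sequence: wipe each partition first, then the whole disk, then let the kernel re-read the (now empty) partition table. This is a dry-run sketch, not final ReaR code; enumerating partitions via shell globbing is an assumption:

```sh
# Dry-run sketch of the proposed full cleanup (not actual ReaR code).
# DRY_RUN defaults to true so the destructive commands are only printed.
run() { if "${DRY_RUN:-true}" ; then echo "WOULD RUN: $*" ; else "$@" ; fi ; }
cleanup_disk() {
    device=$1
    # wipe signatures on each partition device node first
    for partition in "$device"?* ; do
        [ -b "$partition" ] && run wipefs -a -f "$partition"
    done
    # then wipe the partition table signature on the whole disk
    run wipefs -a -f "$device"
    # and tell the kernel to re-read the partition table
    run partprobe -s "$device"
}
```

Called e.g. as `cleanup_disk /dev/sda` with DRY_RUN=false inside the recovery system, this mirrors the manual wipefs/partprobe steps shown above.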
Regarding the '-f' option see #540 (comment)