
[BUG] VMware disks not wiped by installer, fails to install #2066

Open
aplace-lab opened this issue Mar 23, 2022 · 4 comments
Labels
kind/bug Issues that are defects reported by users or that we know have reached a real release

Comments

@aplace-lab

Describe the bug

A fresh install of Harvester 1.0.0 fails on disks previously used as VMware datastores: the installer does not clean them thoroughly enough for the installation to proceed.

To Reproduce
Steps to reproduce the behavior:

  1. Start the installation using an old VMware datastore disk as the target device.

Expected behavior

Install completes.

Actual behavior

The installer fails with the following error:
grub2-install: error: failed to get canonical path of '/run/cos/target/boot/efi'

SSHing into the installer shows the following blkid output (note sdf1 with TYPE="VMFS_volume_member": the stale VMFS signature survived the installer's partitioning):

/dev/sdf1: UUID_SUB="619c5e4d-b771b87c-a7a4-ac162d7a1ec8" TYPE="VMFS_volume_member" PARTLABEL="oem" PARTUUID="3e2a8cbb-c572-481c-b4e8-f123f274878d"
/dev/sdf2: LABEL="COS_STATE" UUID="eaf0b196-1eff-4a2f-841d-bdd6ff26e71d" BLOCK_SIZE="4096" TYPE="ext4" PARTLABEL="state" PARTUUID="a872ef24-3b82-4ded-9db0-f55e838df440"
/dev/sdf3: LABEL="COS_RECOVERY" UUID="89bdfe06-15cd-4ec0-a15f-87a7e87d1952" BLOCK_SIZE="4096" TYPE="ext4" PARTLABEL="recovery" PARTUUID="3e9c067c-22bf-47d5-9289-0568a6d296df"
/dev/sdf4: LABEL="COS_PERSISTENT" UUID="cc5cf0f1-3e18-4162-a679-7a0e1eb2ad46" BLOCK_SIZE="4096" TYPE="ext4" PARTLABEL="persistent" PARTUUID="59947708-f6cb-404c-bfb4-0048ef0c5031"
/dev/sdf5: LABEL="HARV_LH_DEFAULT" UUID="d9c92945-fa0a-4063-b7b1-674730da61d8" BLOCK_SIZE="4096" TYPE="ext4" PARTLABEL="longhorn" PARTUUID="b8ef2c7b-d516-4243-a06f-48deb7639cd4"

Support bundle
Unavailable; the install doesn't complete.

Resolution

  1. SSH into the installer after configuring the network.
  2. Run sudo dd if=/dev/zero of=/dev/sd... bs=64M count=100 to remove the VMFS tag (a fuller wipe sequence is sketched after the blkid output below).
  3. Continue installing as normal.
  4. blkid now shows:
/dev/sdf1: LABEL="COS_OEM" UUID="49ec3b78-41e7-4173-9ed2-a5c084948841" BLOCK_SIZE="1024" TYPE="ext4" PARTLABEL="oem" PARTUUID="a0afbd10-a363-4da2-9adf-33d0bc8bf782"
/dev/sdf2: LABEL="COS_STATE" UUID="4827b80b-3566-49cd-9847-4cade58746df" BLOCK_SIZE="4096" TYPE="ext4" PARTLABEL="state" PARTUUID="d458c915-6a14-4f7c-9707-10e55f76ce6b"
/dev/sdf3: LABEL="COS_RECOVERY" UUID="13c9fddc-5084-4300-b1be-376d23d3443b" BLOCK_SIZE="4096" TYPE="ext4" PARTLABEL="recovery" PARTUUID="d9505c6d-6135-40d6-b26c-f4c2c9234528"
/dev/sdf4: LABEL="COS_PERSISTENT" UUID="290a128a-7b15-4a0d-bd70-2b1abbb4ee8c" BLOCK_SIZE="4096" TYPE="ext4" PARTLABEL="persistent" PARTUUID="61adb1eb-56cc-489d-a4d9-507de4b4e01c"
/dev/sdf5: LABEL="HARV_LH_DEFAULT" UUID="c945d294-3768-4a1e-8cc3-373b9665efde" BLOCK_SIZE="4096" TYPE="ext4" PARTLABEL="longhorn" PARTUUID="4774f009-b3d9-4fb7-b5c5-fc0d1d294c27"
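
For reference, a minimal sketch of the manual wipe, assuming the target disk is /dev/sdf as in the blkid output above (verify with lsblk first; this irreversibly destroys all data on the device):

# Confirm which disk is the old VMware datastore before wiping anything.
lsblk -o NAME,SIZE,MODEL

# Clear any signatures libblkid recognizes (reportedly not sufficient on its own here).
sudo wipefs -a /dev/sdf

# Zero the first ~6 GiB (100 x 64 MiB) to clear the VMFS metadata region.
sudo dd if=/dev/zero of=/dev/sdf bs=64M count=100 status=progress

# Ask the kernel to re-read the (now empty) partition table.
sudo partprobe /dev/sdf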

Environment:

  • Harvester ISO version: v1.0.0
  • Underlying Infrastructure (e.g. Baremetal with Dell PowerEdge R630): Dell PowerEdge R720xd
@aplace-lab aplace-lab added the kind/bug Issues that are defects reported by users or that we know have reached a real release label Mar 23, 2022
@aplace-lab
Author

Similar situation here: #1813

@samuelattwood

I will third this issue. I had already tried wiping signatures with wipefs -a, which did not resolve it, but zeroing the beginning sectors of the disk corrected the problem.
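
A possible explanation (an assumption, not verified against this report): wipefs -a on the whole-disk device only erases signatures found on the disk itself, such as the partition table magic, and does not descend into partitions, so a VMFS signature inside a partition survives unless each partition node is wiped too. A sketch, assuming the disk is /dev/sdf:

# Wipe signatures from each partition node, then from the disk itself;
# the /dev/sdf?* glob is illustrative and matches sdf1, sdf2, and so on.
for dev in /dev/sdf?* /dev/sdf; do
  sudo wipefs -a "$dev"
done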

@averyfreeman

ermigard I just wasted 5 hours of my life on this issue :P

@scog

scog commented Jul 13, 2023

Heavens to Betsy; ran into this same issue on Harvester v1.1.2 using a PXE boot installation process. Had to press CTRL+ALT+F2 and run dd if=/dev/zero of=/dev/sda count=10000 ..., then re-run the installation process.

Sure would be nice if there was a Harvester config.yaml option to overwrite the existing drive during the installation process. 😄
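
For scale: dd defaults to a 512-byte block size, so count=10000 zeroes only 10000 × 512 B ≈ 5 MiB at the head of the disk; evidently that was enough to clobber the VMFS signature, compared with the ~6 GiB wiped in the original resolution above.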
