dm-verity: How to fix incorrect hash? #2603
Container Linux Version
Unfortunately, there are no instructions on how to extract that from the CoreOS rescue system if USR-A/B cannot be mounted.
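For reference, the version string normally lives in /usr/lib/os-release. A rough sketch of getting at it from a rescue system when USR-A/B cannot be mounted (the by-partlabel path is an assumption):

# Normal case, with /usr mounted:
cat /usr/lib/os-release

# If USR-A/B cannot be mounted, try scraping the string straight off the
# raw partition (slow; treats the block device as binary text):
grep -a -m1 'VERSION=' /dev/disk/by-partlabel/USR-A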
Environment
Bare metal server from Hetzner.
Expected Behavior
The system should boot.
Actual Behavior
The system does not boot.
Reproduction Steps
A faulty disk was replaced, and the following command was used to copy the first partitions to the new disk. The layout looks like the one below, so the values for the copy command could be derived from it.
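Such a layout listing can be produced with sgdisk (the device name is an assumption):

# Print the GPT: partition numbers, names (USR-A, USR-B, ...), and
# start/end sectors, from which a sector count for dd can be derived.
sgdisk -p /dev/nvme0n1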
After restarting the system, the boot fails because the verity hash mismatch causes read errors.
Other Information
The system should give more useful feedback, even if it's just that CoreOS needs to be installed from scratch, as there is no other way to fix it. (That is what has to be done now, so further debugging or log gathering will prove difficult.)
CL doesn't support having multiple USR-A / USR-B / EFI-SYSTEM / OEM partitions. I suspect that on update the kernel in one EFI-SYSTEM gets updated while the USR-A/B on the other disk gets updated. The kernel has the verity hash but is checking against the wrong USR-A/B. You might be able to salvage it by deleting everything but raid.1.1 on one of the disks and seeing whether the auto-rollback will work. If it doesn't, you'll probably need to reinstall. Basically, make sure you have only one of each of the NAMED partitions.
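A minimal sketch of that salvage attempt, assuming the duplicates to remove live on /dev/nvme1n1 (verify the partition numbers first; this destroys those partitions):

# Inspect the second disk and note which partition is raid.1.1:
sgdisk -p /dev/nvme1n1

# Delete everything except the raid.1.1 member
# (the numbers below are placeholders):
sgdisk -d 1 -d 2 -d 3 -d 4 -d 6 -d 7 /dev/nvme1n1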
Just to clarify (I work with @paulmenzel): a reinstallation only worked after we wiped both disks with
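A full wipe of this kind would typically look like the following (an assumption; the exact command used is not recorded here):

# Remove filesystem/RAID signatures and the GPT structures from both
# disks; this destroys all data on them:
wipefs --all /dev/nvme0n1 /dev/nvme1n1
sgdisk --zap-all /dev/nvme0n1
sgdisk --zap-all /dev/nvme1n1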
We first tried to just reinstall like normal and copied the partition table and contents with
sgdisk /dev/nvme0n1 -R /dev/nvme1n1 && \
sgdisk -G /dev/nvme1n1
dd if=/dev/nvme0n1 of=/dev/nvme1n1 bs=512 count=4427776 # (first 9 partitions)
but whilst the server then booted up, restarting it seemed to trigger the same verity error.