Corrupted data using resumable raw zfs send/recv with encryption #8852
I have two machines. The dataset on the sending machine looks like this:
```
$ zfs get all tank/drive | grep -v default
NAME        PROPERTY              VALUE                  SOURCE
tank/drive  type                  filesystem             -
tank/drive  creation              Sat Jun  1 16:51 2019  -
tank/drive  used                  77.4G                  -
tank/drive  available             3.44T                  -
tank/drive  referenced            77.4G                  -
tank/drive  compressratio         1.94x                  -
tank/drive  mounted               yes                    -
tank/drive  mountpoint            /mnt/drive             local
tank/drive  compression           gzip                   local
tank/drive  atime                 off                    local
tank/drive  createtxg             235                    -
tank/drive  version               5                      -
tank/drive  utf8only              off                    -
tank/drive  normalization         none                   -
tank/drive  casesensitivity       sensitive              -
tank/drive  guid                  12695194897195142574   -
tank/drive  usedbysnapshots       264K                   -
tank/drive  usedbydataset         77.4G                  -
tank/drive  usedbychildren        0B                     -
tank/drive  usedbyrefreservation  0B                     -
tank/drive  objsetid              80                     -
tank/drive  refcompressratio      1.94x                  -
tank/drive  written               264K                   -
tank/drive  logicalused           150G                   -
tank/drive  logicalreferenced     150G                   -
tank/drive  encryption            aes-256-ccm            -
tank/drive  keylocation           prompt                 local
tank/drive  keyformat             hex                    -
tank/drive  encryptionroot        tank/drive             -
tank/drive  keystatus             available              -
```
I send that dataset from one machine to the other as a raw (`zfs send -w`) stream. In the middle of the send/recv I had a network failure and had to use the resumable send/recv feature to resume the send. At the very end, the operation completed successfully.
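For clarity, the overall flow was along these lines (host and snapshot names here are placeholders, not my real ones; as I understand it, the resume token records that the original stream was raw):

```
# Initial raw send into a resumable receive (-s keeps partial state on failure):
$ zfs send -w tank/drive@snap | ssh otherhost zfs recv -s tank/drive

# After the network failure, read the resume token on the receiving side:
$ ssh otherhost zfs get -H -o value receive_resume_token tank/drive

# Resume the send from that token:
$ zfs send -t <token> | ssh otherhost zfs recv -s tank/drive
```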
To test that the data really is the same between the two machines, I compared the file contents on both sides and found mismatches.
This suggests a problem with either the send or the recv.
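A comparison along these lines is enough to surface the mismatch (a sketch; `/mnt/drive` is the mountpoint from the properties above, and the file names are made up):

```
# On each machine, hash every file under the mounted dataset:
$ cd /mnt/drive && find . -type f -print0 | sort -z | xargs -0 sha256sum > /tmp/drive.sha256

# Copy one list to the other machine and diff; differing lines are affected files:
$ diff /tmp/drive.sha256 /tmp/drive-remote.sha256
```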
This may not be the most helpful, but I'm happy to run more commands to help diagnose the problem.
After accessing the problematic directory, errors show up on the receiving pool.
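Those errors can be inspected with the usual tooling, for example:

```
# Files with permanent errors are listed under "errors:":
$ sudo zpool status -v

# The kernel log shows the underlying failures when the files are read:
$ dmesg | tail
```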
Based on the errors you're seeing, it sounds as if a dnode block on the receive side isn't exactly matching its counterpart from the source. This would result in authentication errors and the symptoms you're seeing. Can you run `zpool events -v` and post the output?
Any hints you can provide which would help us reproduce this locally would be very helpful.
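In case it helps, one shape a local reproducer could take is an interrupted raw send between two file-backed pools. This is only a sketch (pool names, data size, and the interruption method are made up), not a confirmed reproducer:

```
# Two throwaway file-backed pools:
$ truncate -s 1G /var/tmp/src.img /var/tmp/dst.img
$ sudo zpool create srcpool /var/tmp/src.img
$ sudo zpool create dstpool /var/tmp/dst.img

# An encrypted dataset (passphrase keyformat for simplicity; mine uses hex):
$ echo 'testpassphrase' | sudo zfs create -o encryption=aes-256-ccm \
      -o keyformat=passphrase srcpool/drive

# Give it enough data that the send takes a while, then snapshot:
$ sudo dd if=/dev/urandom of=/srcpool/drive/blob bs=1M count=512
$ sudo zfs snapshot srcpool/drive@snap

# Raw send into a resumable receive, interrupted partway through:
$ sudo sh -c 'zfs send -w srcpool/drive@snap | zfs recv -s dstpool/drive' &
$ sleep 2; sudo pkill -f 'zfs send -w'

# Resume from the saved token, then load the key and compare contents:
$ token=$(sudo zfs get -H -o value receive_resume_token dstpool/drive)
$ sudo sh -c "zfs send -t $token | zfs recv -s dstpool/drive"
$ sudo zfs load-key dstpool/drive && sudo zfs mount dstpool/drive
$ sudo cmp /srcpool/drive/blob /dstpool/drive/blob
```

If the corruption is timing-dependent, the point at which the send is interrupted may matter, so varying the sleep could be worth trying.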
Not sure what to look for, but here's the full printout:
```
$ sudo zpool events -v
TIME                           CLASS
Jun  3 2019 15:10:40.835224357 ereport.fs.zfs.zpool
        class = "ereport.fs.zfs.zpool"
        ena = 0x3992fbba301c01
        detector = (embedded nvlist)
                version = 0x0
                scheme = "zfs"
                pool = 0x79cb7bbee62fca18
        (end detector)
        pool = "zfs-crypt"
        pool_guid = 0x79cb7bbee62fca18
        pool_state = 0x0
        pool_context = 0x2
        pool_failmode = "wait"
        time = 0x5cf59ae0 0x31c88325
        eid = 0x1
```
I didn't bother mangling my dataset names like I did before. The pool with issues is called `zfs-crypt`.