zfs receive crashed with "internal error: Invalid argument" #7755
Comments
My immediate suggestion would be to cut a new set of ZFS userland binaries with the same version but with debug symbols, so you can get a reasonable backtrace. (That crash is probably something inane like trying to string-format and dereferencing into a null struct, though, if I had to guess.) What's the state of the receive on the destination, e.g. is there a receive_resume_token property on the dataset, and does it crash again in the same fashion if you try resuming with it? This reminds me of #7576.
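A sketch of the kind of debug rebuild and resume check being suggested here. The configure flags, stream file name, and dataset name are assumptions, not commands from the thread:

```sh
# Rebuild the userland tools with debug info and no optimization so the
# crash yields a usable backtrace (flags may differ by distro/packaging):
./configure --enable-debug CFLAGS="-g -O0"
make -j"$(nproc)"

# Reproduce under gdb to capture the backtrace at the abort:
gdb -batch -ex 'run < stream.zfs' -ex 'bt full' \
    --args ./cmd/zfs/zfs recv -Fdu -o checksum=sha512 tank

# Check whether the aborted receive left resumable state behind:
zfs get -H -o value receive_resume_token tank/dataset

# If it did, try resuming with the token and see if it crashes the same way:
zfs send -t "$(zfs get -H -o value receive_resume_token tank/dataset)" \
    | zfs recv -s tank/dataset
```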
Thanks @rincebrain! I found a nice way to reproduce this in just a few steps. The culprit is passing the "-o checksum=sha512" flag to "zfs recv": if I remove that flag, it works. Interestingly, the filesystem does have the correct checksum set when I verify it afterwards via "zfs get checksum". (However, I don't know of a way to verify that the contents are actually hashed with that checksum.) The repro steps are sketched below.

Attaching the backtrace that I got from gdb after rebuilding zfs-utils with full debug symbols and -O0: gdb.txt

Regarding the state of the receive: there's no receive_resume_token property set on the destination filesystem, even when I pass the "-s" flag to "zfs recv". All files show up in the destination filesystem and the contents seem to be fine. I agree that this looks to be the same issue as your #7576.
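The steps themselves were attached to the issue rather than quoted here; a minimal single-host sketch mirroring the reported commands (pool names and backing files are invented) would be:

```sh
truncate -s 128M /var/tmp/src.img /var/tmp/dst.img
zpool create srcpool /var/tmp/src.img
zpool create dstpool /var/tmp/dst.img
zfs create srcpool/fs
zfs snapshot -r srcpool@now
# Crashed with "internal error: Invalid argument" when the checksum
# override is present; succeeds with the override removed:
zfs send -R srcpool@now | zfs recv -Fdu -o checksum=sha512 dstpool
```

As for verifying which checksum the received blocks actually carry: not an answer from this thread, but one approach that may work is dumping block pointers with zdb, since each blkptr line names the algorithm in use. The verbosity level and output format vary across versions:

```sh
# Dump object metadata at high verbosity and look for blkptr lines that
# name the sha512 algorithm (eyeball the raw output first; the format
# here is an assumption):
zdb -dddddd dstpool/fs | grep -w sha512 | head
```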
#7478 fixes how we identify a top-level dataset in an incremental replication stream with intermediary snapshots. This new issue seems to affect only some properties, and the reproducer here doesn't use intermediary snapshots.
Well, that will teach me to try and debug problems too late at night, but TY for finding and fixing the actual problem anyway! :)
This change modifies how 'checksum' and 'dedup' properties are verified in zfs_check_settable(), handling the case where they are explicitly inherited in the dataset hierarchy when receiving a recursive send stream.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Tom Caputi <tcaputi@datto.com>
Signed-off-by: loli10K <ezomori.nozomu@gmail.com>
Closes #7755
Closes #7576
Closes #7757
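A hedged sketch of the shape the fix describes: a property inherited down the sent hierarchy, then overridden at receive time. Dataset names are illustrative and this is a guess at the triggering pattern, not the commit's actual test case:

```sh
# Parent sets 'checksum' locally; the child inherits it, so the
# recursive stream carries both the local setting and the inheritance:
zfs create srcpool/fs
zfs set checksum=on srcpool/fs
zfs create srcpool/fs/child
zfs snapshot -r srcpool@s1

# Before the fix, the receive-time override could trip
# zfs_check_settable() and abort with "internal error: Invalid argument";
# with the fix the receive completes and the override is applied:
zfs send -R srcpool@s1 | zfs recv -Fdu -o checksum=sha512 dstpool

# Inspect which datasets got the override locally vs. by inheritance:
zfs get -r -o name,property,value,source checksum dstpool
```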
System information
Describe the problem you're observing
"zfs receive" crashed just when it was done receiving 875G of data. Although this worries me a bit, the destination seems to be fine and all data is there.
Describe how to reproduce the problem
I can reproduce it with just the last two steps listed below. Here's what I did in full. (I'm playing around with ZFS on a test server and am migrating some data from a two-disk mirror to a four-disk RAIDZ1.)
1. zpool detach pool ata-WDC_WD40EZRX-00SPEB0_WD-WCC4E5KU5YRA
2. Add two more disks to the server. Now I have three free disks and can create a degraded four-disk RAIDZ1 for the migration (the creation step is sketched after this list).
3. truncate -s 4000787030016 /fake-WDC_WD40EZRX-00SPEB0_WD-WCC4E4UVASH2
4. zpool offline tank /fake-WDC_WD40EZRX-00SPEB0_WD-WCC4E4UVASH2
5. zfs snapshot -r pool@now
6. zfs send -R pool@now | zfs recv -Fdu -o checksum=sha512 tank
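The zpool create for the degraded RAIDZ1 isn't shown in the issue; a hypothetical reconstruction using the usual sparse-placeholder pattern (the three real disk IDs are stand-ins) would be:

```sh
# -f may be required because the raidz mixes whole disks with a file vdev;
# the by-id names are placeholders, not the reporter's actual disks.
zpool create -f tank raidz1 \
    /dev/disk/by-id/ata-DISK_A \
    /dev/disk/by-id/ata-DISK_B \
    /dev/disk/by-id/ata-DISK_C \
    /fake-WDC_WD40EZRX-00SPEB0_WD-WCC4E4UVASH2
```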
Include any warning/errors/backtraces from the system logs

There's nothing else of relevance in the syslog (e.g. no messages from the kernel).
Please let me know if I can provide more information that would help.