
zfs send -R -i does not fail if source snapshot does not exist #3894

Closed
jvsalo opened this issue Oct 7, 2015 · 5 comments
Labels
Status: Inactive Not being actively updated Status: Stale No recent activity for issue

Comments

@jvsalo

jvsalo commented Oct 7, 2015

In ZFS 0.6.5 (and also older versions) the following command will succeed:

# zfs send -R -i dataset@snapshot_that_never_existed dataset@snapshot_that_exists

Also happens with -I. It appears ZFS performs a full "zfs send -R ..." instead. Without -R, it fails as expected:

incremental source (dataset@snapshot_that_never_existed) does not exist

This can result in confusing error messages during replication when the source snapshot no longer exists. The receiving party then gets: "cannot receive new filesystem stream: destination 'pool/dataset' exists". This is because the stream produced by "zfs send -R -i" is not, in fact, an incremental stream.
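Until this is fixed, a defensive workaround is to verify that the incremental source snapshot actually exists before invoking zfs send -R -i. A minimal sketch in Python (the pre-check idea and the helper name snapshot_exists are assumptions for illustration, not part of ZFS), operating on the output of `zfs list -H -t snapshot -o name`:

```python
def snapshot_exists(zfs_list_output: str, snapshot: str) -> bool:
    """Return True if `snapshot` (e.g. 'pool/dataset@snap1') appears in
    the captured output of `zfs list -H -t snapshot -o name`."""
    return snapshot in zfs_list_output.splitlines()

# Example with canned output; in a real script the listing would be
# captured via subprocess.run(["zfs", "list", "-H", "-t", "snapshot",
# "-o", "name"], capture_output=True, text=True).stdout
listing = "pool/dataset@snap1\npool/dataset@snap2\n"
print(snapshot_exists(listing, "pool/dataset@snap1"))     # True
print(snapshot_exists(listing, "pool/dataset@nonexist"))  # False
```

Checking first (and aborting on failure) avoids silently falling back to a full replication stream.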

@rageagainstthebugs

This appears to be a duplicate of #3010 (comment)

@loli10K
Contributor

loli10K commented Feb 16, 2019

Still an issue here, reproducer needs both -i and -R:

root@linux:~# uname -a
Linux linux 3.16.0-4-amd64 #1 SMP Debian 3.16.51-3 (2017-12-13) x86_64 GNU/Linux
root@linux:~# cat /proc/sys/kernel/spl/gitrev
zfs-0.8.0-rc3-50-g2d76ab9
root@linux:~# 
root@linux:~# 
root@linux:~# POOLNAME='testpool'
root@linux:~# TMPDIR='/var/tmp'
root@linux:~# mountpoint -q $TMPDIR || mount -t tmpfs tmpfs $TMPDIR
root@linux:~# zpool destroy -f $POOLNAME
root@linux:~# rm -f $TMPDIR/zpool.dat
root@linux:~# truncate -s 128m $TMPDIR/zpool.dat
root@linux:~# zpool create -f -O mountpoint=none $POOLNAME $TMPDIR/zpool.dat
root@linux:~# 
root@linux:~# zfs create $POOLNAME/fs
root@linux:~# zfs snap $POOLNAME/fs@snap1
root@linux:~# zfs snap $POOLNAME/fs@snap2
root@linux:~# zfs send -v -R -i $POOLNAME/fs@snap1 $POOLNAME/fs@snap2 > /dev/null
send from @snap1 to testpool/fs@snap2 estimated size is 624B
total estimated size is 624B
TIME        SENT   SNAPSHOT
root@linux:~# zfs send -v -R -i $POOLNAME/fs@does-not-exist $POOLNAME/fs@snap2 > /dev/null
full send of testpool/fs@snap2 estimated size is 12.6K
total estimated size is 12.6K
TIME        SENT   SNAPSHOT
root@linux:~# 

@loli10K loli10K reopened this Feb 16, 2019
@bunder2015
Contributor

Interesting... in theory this should be zero, right?

gazelle ~ # zfs create g-pool/temp/testing
gazelle ~ # zfs snap g-pool/temp/testing@snap1
gazelle ~ # zfs snap g-pool/temp/testing@snap2
gazelle ~ # zfs send -v -R -i g-pool/temp/testing@nonexist g-pool/temp/testing@snap2 | zstreamdump -v
BEGIN record
        hdrtype = 2
        features = 4
        magic = 2f5bacbac
        creation_time = 0
        type = 0
        flags = 0x0
        toguid = 0
        fromguid = 0
        toname = g-pool/temp/testing@snap2

nvlist version: 0
        fromsnap = nonexist
        tosnap = snap2
        fss = (embedded nvlist)
        nvlist version: 0
                0xc522601872dc7df0 = (embedded nvlist)
                nvlist version: 0
                        name = g-pool/temp/testing
                        parentfromsnap = 0x0
                        props = (embedded nvlist)
                        nvlist version: 0
                        (end props)

                        snaps = (embedded nvlist)
                        nvlist version: 0
                                snap1 = 0x4dfa86725aba5224
                                snap2 = 0x35dc3d0c098c6004
                        (end snaps)

                        snapprops = (embedded nvlist)
                        nvlist version: 0
                                snap1 = (embedded nvlist)
                                nvlist version: 0
                                (end snap1)

                                snap2 = (embedded nvlist)
                                nvlist version: 0
                                (end snap2)

                        (end snapprops)

                (end 0xc522601872dc7df0)

        (end fss)

END checksum = 15cc51958c/8dbec27e26c/258a07fe0f729/7d8be31f807151
full send of g-pool/temp/testing@snap2 estimated size is 45.1K
total estimated size is 45.1K
TIME        SENT   SNAPSHOT
BEGIN record
        hdrtype = 1
        features = 4
        magic = 2f5bacbac
        creation_time = 5c67c22e
        type = 2
        flags = 0x4
        toguid = 35dc3d0c098c6004
        fromguid = 0
        toname = g-pool/temp/testing@snap2

FREEOBJECTS firstobj = 0 numobjs = 1
OBJECT object = 1 type = 21 bonustype = 0 blksz = 512 bonuslen = 0 dn_slots = 1 raw_bonuslen = 0 flags = 0 maxblkid = 0 indblkshift = 0 nlevels = 0 nblkptr = 0
FREE object = 1 offset = 512 length = -1
FREEOBJECTS firstobj = 2 numobjs = 30
WRITE object = 1 type = 21 checksum type = 7 compression type = 0
    flags = 0 offset = 0 logical_size = 512 compressed_size = 0 payload_size = 512 props = 200000000 salt = 0000000000000000 iv = 000000000000000000000000 mac = 00000000000000000000000000000000
FREE object = 1 offset = 512 length = 1024
OBJECT object = 32 type = 45 bonustype = 0 blksz = 512 bonuslen = 0 dn_slots = 1 raw_bonuslen = 0 flags = 0 maxblkid = 0 indblkshift = 0 nlevels = 0 nblkptr = 0
FREE object = 32 offset = 512 length = -1
OBJECT object = 33 type = 22 bonustype = 0 blksz = 512 bonuslen = 0 dn_slots = 1 raw_bonuslen = 0 flags = 0 maxblkid = 0 indblkshift = 0 nlevels = 0 nblkptr = 0
FREE object = 33 offset = 512 length = -1
OBJECT object = 34 type = 20 bonustype = 44 blksz = 512 bonuslen = 168 dn_slots = 1 raw_bonuslen = 0 flags = 0 maxblkid = 0 indblkshift = 0 nlevels = 0 nblkptr = 0
FREE object = 34 offset = 512 length = -1
OBJECT object = 35 type = 46 bonustype = 0 blksz = 1536 bonuslen = 0 dn_slots = 1 raw_bonuslen = 0 flags = 0 maxblkid = 0 indblkshift = 0 nlevels = 0 nblkptr = 0
FREE object = 35 offset = 1536 length = -1
OBJECT object = 36 type = 47 bonustype = 0 blksz = 16384 bonuslen = 0 dn_slots = 1 raw_bonuslen = 0 flags = 0 maxblkid = 0 indblkshift = 0 nlevels = 0 nblkptr = 0
FREE object = 36 offset = 32768 length = -1
FREEOBJECTS firstobj = 37 numobjs = 27
WRITE object = 32 type = 45 checksum type = 7 compression type = 0
    flags = 0 offset = 0 logical_size = 512 compressed_size = 0 payload_size = 512 props = 200000000 salt = 0000000000000000 iv = 000000000000000000000000 mac = 00000000000000000000000000000000
FREE object = 32 offset = 512 length = 1024
WRITE object = 33 type = 22 checksum type = 7 compression type = 0
    flags = 0 offset = 0 logical_size = 512 compressed_size = 0 payload_size = 512 props = 200000000 salt = 0000000000000000 iv = 000000000000000000000000 mac = 00000000000000000000000000000000
FREE object = 33 offset = 512 length = 1024
WRITE object = 34 type = 20 checksum type = 7 compression type = 0
    flags = 0 offset = 0 logical_size = 512 compressed_size = 0 payload_size = 512 props = 200000000 salt = 0000000000000000 iv = 000000000000000000000000 mac = 00000000000000000000000000000000
WRITE object = 35 type = 46 checksum type = 7 compression type = 0
    flags = 0 offset = 0 logical_size = 1536 compressed_size = 0 payload_size = 1536 props = 200020002 salt = 0000000000000000 iv = 000000000000000000000000 mac = 00000000000000000000000000000000
FREE object = 35 offset = 1536 length = 3072
WRITE object = 36 type = 47 checksum type = 7 compression type = 0
    flags = 0 offset = 0 logical_size = 16384 compressed_size = 0 payload_size = 16384 props = f0007001f salt = 0000000000000000 iv = 000000000000000000000000 mac = 00000000000000000000000000000000
WRITE object = 36 type = 47 checksum type = 7 compression type = 0
    flags = 0 offset = 16384 logical_size = 16384 compressed_size = 0 payload_size = 16384 props = f0007001f salt = 0000000000000000 iv = 000000000000000000000000 mac = 00000000000000000000000000000000
FREE object = 36 offset = 32768 length = 16384
FREEOBJECTS firstobj = 64 numobjs = 0
FREEOBJECTS firstobj = 0 numobjs = 0
END checksum = 198d51e36e9/247a17ad6e271d/781f18ca837018e/21680d7954ed96e8
END checksum = 0/0/0/0
SUMMARY:
        Total DRR_BEGIN records = 2
        Total DRR_END records = 3
        Total DRR_OBJECT records = 6
        Total DRR_FREEOBJECTS records = 5
        Total DRR_WRITE records = 7
        Total DRR_WRITE_BYREF records = 0
        Total DRR_WRITE_EMBEDDED records = 0
        Total DRR_FREE records = 11
        Total DRR_SPILL records = 0
        Total records = 34
        Total write size = 36352 (0x8e00)
        Total stream length = 47712 (0xba60)

@loli10K
Contributor

loli10K commented Feb 16, 2019

That is indeed the bug: zfs send is producing a full send stream (containing dataset-wide metadata) instead of a no-changes (almost 0 bytes) incremental.

root@linux:~# zfs send -v -R -i $POOLNAME/fs@does-not-exist $POOLNAME/fs@snap2 | zstreamdump -v | grep WRITE
full send of testpool/fs@snap2 estimated size is 12.6K
total estimated size is 12.6K
TIME        SENT   SNAPSHOT
WRITE object = 1 type = 21 checksum type = 7 compression type = 0
WRITE object = 32 type = 45 checksum type = 2 compression type = 0
WRITE object = 33 type = 22 checksum type = 2 compression type = 0
WRITE object = 34 type = 20 checksum type = 2 compression type = 0
WRITE object = 35 type = 46 checksum type = 7 compression type = 0
WRITE object = 36 type = 47 checksum type = 7 compression type = 0
WRITE object = 36 type = 47 checksum type = 7 compression type = 0
	Total DRR_WRITE records = 7
	Total DRR_WRITE_BYREF records = 0
	Total DRR_WRITE_EMBEDDED records = 0
root@linux:~# zdb -dd $POOLNAME/fs
Dataset testpool/fs [ZPL], ID 259, cr_txg 6, 24K, 6 objects

    Object  lvl   iblk   dblk  dsize  dnsize  lsize   %full  type
         0    6   128K    16K    13K     512    32K    9.38  DMU dnode
        -1    1   128K    512      0     512    512  100.00  ZFS user/group/project used
        -2    1   128K    512      0     512    512  100.00  ZFS user/group/project used
        -3    1   128K    512      0     512    512  100.00  ZFS user/group/project used
         1    1   128K    512     1K     512    512  100.00  ZFS master node
        32    1   128K    512      0     512    512  100.00  SA master node
        33    1   128K    512      0     512    512  100.00  ZFS delete queue
        34    1   128K    512      0     512    512  100.00  ZFS directory
        35    1   128K  1.50K     1K     512  1.50K  100.00  SA attr registration
        36    1   128K    16K     8K     512    32K  100.00  SA attr layouts

    Dnode slots:
	Total used:             6
	Max used:              36
	Percent empty:  83.333333

root@linux:~# 
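The difference is visible in the stream's per-dataset BEGIN record (hdrtype = 1): an incremental stream carries a non-zero fromguid, while the bogus stream above has fromguid = 0, the marker of a full send. A small sketch that classifies a stream from `zstreamdump -v` output (the parsing is a rough assumption based on the field layout shown above, not a robust zstreamdump parser):

```python
def classify_stream(zstreamdump_output: str) -> str:
    """Classify a send stream as 'full' or 'incremental' by inspecting
    the fromguid field of the first hdrtype = 1 BEGIN record in
    `zstreamdump -v` output. fromguid == 0 means there is no
    incremental source, i.e. a full send."""
    hdrtype = None
    for line in zstreamdump_output.splitlines():
        line = line.strip()
        if line.startswith("hdrtype ="):
            hdrtype = int(line.split("=", 1)[1].strip(), 16)
        elif line.startswith("fromguid =") and hdrtype == 1:
            fromguid = int(line.split("=", 1)[1].strip(), 16)
            return "incremental" if fromguid else "full"
    raise ValueError("no per-dataset BEGIN record found")

# Canned excerpts mimicking the dumps above: the hdrtype = 2 compound
# header always has fromguid = 0, so it must be skipped.
full_dump = "BEGIN record\nhdrtype = 2\nfromguid = 0\n" \
            "BEGIN record\nhdrtype = 1\nfromguid = 0\n"
incr_dump = "BEGIN record\nhdrtype = 2\nfromguid = 0\n" \
            "BEGIN record\nhdrtype = 1\nfromguid = 4dfa86725aba5224\n"
print(classify_stream(full_dump))  # full
print(classify_stream(incr_dump))  # incremental
```

A receiver-side script could use a check like this to refuse a stream that was supposed to be incremental but arrived as a full send.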

@stale

stale bot commented Aug 25, 2020

This issue has been automatically marked as "stale" because it has not had any activity for a while. It will be closed in 90 days if no further activity occurs. Thank you for your contributions.

@stale stale bot added the Status: Stale No recent activity for issue label Aug 25, 2020
@stale stale bot closed this as completed Nov 25, 2020

6 participants
@behlendorf @jvsalo @loli10K @bunder2015 @rageagainstthebugs and others