cannot send/receive self-contained zfs dataset created with LXD with a copy of a deleted container #10935

Open
melato opened this issue Sep 16, 2020 · 4 comments
Labels: Bot: Not Stale, Component: Send/Recv, Type: Defect

melato commented Sep 16, 2020

System information

Type                 | Version/Name
Distribution Name    | Debian
Distribution Version | 10.5
Linux Kernel         | 4.19.0-10-amd64 #1 SMP Debian 4.19.132-1 (2020-07-24) x86_64 GNU/Linux
Architecture         | x86_64
ZFS Version          | 0.8.4-2~bpo10+1
SPL Version          | 0.8.4-2~bpo10+1

I also reproduced it on Ubuntu 20.04 and Ubuntu 18.04.

Describe the problem you're observing

I ran a few LXD commands that create a container, copy it, and delete the original (which LXD renames in zfs rather than destroying, because the copy still depends on it).
I then tried to replicate the zfs dataset to another machine with zfs send/receive and got an error:
cannot receive: local origin for clone y/containers/a2@copy does not exist

Describe how to reproduce the problem

Start with a new Debian system with backports enabled and ZFS installed, as described in:
https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/index.html

create a new zpool "z"
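
For reference, the pool can be created on any spare disk; the device path below is only a placeholder:

zpool create z /dev/sdb
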
install LXD:

apt-get install snapd
snap install core lxd

logout/login to get the lxd commands in your path
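
Alternatively, instead of logging out and back in, you can extend PATH for the current shell (assuming the default snap layout, which installs commands under /snap/bin):

export PATH=$PATH:/snap/bin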

initialize LXD:
run "lxd init" and accept all the defaults, except in the storage pool step, where you specify the existing zpool "z".
To avoid answering questions manually, you can run:

lxd init --preseed < preseed.txt

where preseed.txt contains the following:

config: {}
networks:
- config:
    ipv4.address: auto
    ipv6.address: auto
  description: ""
  name: lxdbr0
  type: ""
storage_pools:
- config:
    source: z
  description: ""
  name: default
  driver: zfs
profiles:
- config: {}
  description: ""
  devices:
    eth0:
      name: eth0
      network: lxdbr0
      type: nic
    root:
      path: /
      pool: default
      type: disk
  name: default
cluster: null

Also attached here: preseed.txt

run the following commands:

lxc launch images:alpine/3.12 a1
lxc snapshot a1 s
lxc copy a1/s a2     # copying from the snapshot creates a2 as a zfs clone of a1's snapshot
lxc stop a1
lxc delete a1        # a1's dataset is not destroyed but renamed under deleted/, since a2 still depends on it
zfs snapshot -r z@copy

Now try to replicate z@copy using zfs send/receive.

From a remote system with root ssh access to the first system (here called "pin"), run something like this:

ssh root@pin zfs send -R z@copy | zfs receive -F y

"y" is an empty zpool, but you can also use any test dataset"

Result:
cannot receive: local origin for clone y/containers/a2@copy does not exist

Expected Result:
The dataset tree is self-contained, so it should replicate without errors.

I attach the output of "zfs list -r -t all -o name,origin z":
list.txt
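
As a complementary check (not part of the original report), the snapshot-side "clones" property shows the same relationships from the other direction:

zfs list -r -t snapshot -o name,clones z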

See also:
#10135
I originally filed this issue against LXD, but it was closed there:
https://github.com/lxc/lxd/issues/7854

@melato melato added the Status: Triage Needed and Type: Defect labels Sep 16, 2020

melato commented Sep 16, 2020

You don't need a separate system to reproduce. Modified instructions for a single system:

zfs send -R z@copy > z.img
snap remove lxd # just to make sure LXD does not do anything to the dataset.
zfs destroy -r z
zfs receive -F z < z.img

Result:

cannot receive: local origin for clone z/containers/a2@copy does not exist

@johanehnberg

I can confirm this issue. In our case, it happens when sending clones to another host.


stale bot commented Jan 10, 2022

This issue has been automatically marked as "stale" because it has not had any activity for a while. It will be closed in 90 days if no further activity occurs. Thank you for your contributions.

@stale stale bot added the Status: Stale label Jan 10, 2022
@stale stale bot closed this as completed Apr 10, 2022
@behlendorf behlendorf added the Component: Send/Recv and Bot: Not Stale labels and removed the Status: Stale and Status: Triage Needed labels Apr 11, 2022
@behlendorf behlendorf reopened this Apr 11, 2022

hron84 commented Sep 13, 2022

@behlendorf any update on this? It happened to me too, same symptoms as above.
