
Cannot delete LXD zfs backed containers: dataset is busy #4656

Closed
Kramerican opened this issue Jun 14, 2018 · 129 comments

Labels
Bug (Confirmed to be a bug), Incomplete (Waiting on more information from reporter)

Comments

@Kramerican

Kramerican commented Jun 14, 2018

Minty fresh Ubuntu 18.04 system
LXD v3.0.0 (latest from apt, how to get v3.0.1?)

This started cropping up arbitrarily across my infrastructure last week. Out of ~10 delete operations, I have seen it happen to 3 containers on 2 different systems.

~# lxc delete test1
Error: Failed to destroy ZFS filesystem: cannot destroy 'lxd/containers/test1': dataset is busy
~# lxc ls
+-------+---------+---------------------+-------------------------------+------------+-----------+
| NAME  |  STATE  |        IPV4         |             IPV6              |    TYPE    | SNAPSHOTS |
+-------+---------+---------------------+-------------------------------+------------+-----------+
| doxpl | RUNNING | 46.4.158.225 (eth0) | 2a01:4f8:221:1809::601 (eth0) | PERSISTENT | 0         |
+-------+---------+---------------------+-------------------------------+------------+-----------+
| test1 | STOPPED |                     |                               | PERSISTENT | 0         |
+-------+---------+---------------------+-------------------------------+------------+-----------+

I googled around a bit and tried the most common tips for figuring out what might be keeping the dataset busy: there are no snapshots or dependencies, and the dataset is unmounted, i.e. zfs list reports

NAME                                                                          USED  AVAIL  REFER  MOUNTPOINT
lxd                                                                          3.51G   458G    24K  none
lxd/containers                                                               2.24G   458G    24K  none
lxd/containers/doxpl                                                         1.04G   766M  2.25G  /var/lib/lxd/storage-pools/lxd/containers/doxpl
lxd/containers/test1                                                         1.20G  6.80G  1.20G  none
lxd/custom                                                                     24K   458G    24K  none
lxd/deleted                                                                    24K   458G    24K  none
lxd/images                                                                   1.27G   458G    24K  none
lxd/images/7d4aa78fb18775e6c3aa2c8e5ffa6c88692791adda3e8735a835e0ba779204ec  1.27G   458G  1.27G  none
lxd/snapshots                                                                  24K   458G    24K  none

Could LXD still be holding the dataset? I see there are a number of zfs-related fixes in v3.0.1, but I cannot get to that version with an apt upgrade?

Edit: issuing systemctl restart lxd does not resolve the issue, so maybe not lxd after all. Strange...
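
(Side note, not from this thread: a standard way to see which lxd and liblxc1 versions apt can currently offer is:

apt-cache policy lxd liblxc1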

@stgraber
Contributor

It's most likely some other process that's forked the mount table and is now preventing LXD from unmounting the container...

You can run grep containers/test1 /proc/*/mountinfo to find out what process that is.
You can then run nsenter -t <PID> -m -- umount /var/lib/lxd/storage-pools/lxd/containers/test1 to get rid of that mount, at which point lxc delete should work again...
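
For convenience, the two steps can be combined into a small loop (a sketch only; it assumes the deb install path under /var/lib/lxd and the dataset name from this report):

# Find every process that still sees the dataset in its mount namespace,
# then unmount it from inside that namespace.
for pid in $(grep -l containers/test1 /proc/*/mountinfo 2>/dev/null | cut -d/ -f3); do
    echo "PID $pid still holds the mount"
    nsenter -t "$pid" -m -- umount /var/lib/lxd/storage-pools/lxd/containers/test1
done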

@Kramerican
Author

Kramerican commented Jun 14, 2018

You mean
cat /proc/*/mountinfo | grep containers/test1 ?
No hits ...

lxc delete still reports dataset is busy :(

Edit: grepping for the other, running, container results in lots of hits, so I think I have formatted that grep correctly. It seems there is nothing referencing test1 in proc/*/mountinfo. Any further ideas? :)

@stgraber
Contributor

Hmm, then it's not mounted anywhere visible, which would likely make it a kernel bug...
You can wait a while and hope the kernel untangles whatever's going on, or you can reboot the system, which will fix it for sure...

Sorry I don't have a better answer for this.

@Kramerican
Author

@stgraber Oh dear Cthulhu, that's bad. That also explains why I'm seeing it across several systems, as I keep my servers in sync with regard to kernel/OS/package versions.

I just checked on one of my other systems, and there I have a dataset which I still cannot destroy even after 48+ hours. So it does not seem this will go away on its own. There it is also "invisible".

If you want access to the server to poke around a bit, Stéphane, let me know. Otherwise I guess I'll just have to mitigate this manually (sigh), update my kernels when I get the chance, and hope that resolves the issue.

PS: I am not used to seeing grep issued that way; your command was of course correctly formatted, I just assumed it wasn't since I didn't get any hits #n00b

@Kramerican
Author

Should I report this as a bug somewhere, you think?

@Kramerican
Author

If others stumble on this issue: there is a workaround, in that it is possible to rename the dataset. So if your container is stopped, you can do:

zfs rename lxd/containers/test1 lxd/containers/test1failed

After which you can issue

lxc delete test1

However, you then still have this dataset hanging around, which you will need to clean up at a later date, i.e. after a reboot I suppose. This pretty much sucks! :D
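
Putting the workaround together as one sequence (a sketch; dataset names as in this report, and the final destroy only works once whatever held the dataset has let go, e.g. after a reboot):

zfs rename lxd/containers/test1 lxd/containers/test1failed   # move the stuck dataset aside
lxc delete test1                                             # now succeeds, LXD no longer sees the dataset
# ...later, once the dataset is no longer busy:
zfs destroy lxd/containers/test1failed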

@stgraber
Contributor

Yeah, that's pretty odd, I wonder what's keeping that active... You don't have any running zfs command for that dataset (zfs send, zfs snapshot, ...)?

Just run ps aux | grep test1 to be sure.

If not, then I'd be happy to take a look, see if anything stands out. There's a pretty good chance that it's a kernel bug, but we haven't seen reports of this before so it's intriguing.

@stgraber
Contributor

(Note that I'm on vacation in Europe this week so not quite as around as usual :))

@Kramerican
Author

Nope no zfs commands running. I have sent you access by mail :) - enjoy your vacation..!!

@stgraber
Contributor

Very weird. I poked around for a bit, eventually restarting the lxd process, which was apparently enough to get zfs to unstick, and I could then delete the dataset just fine.

Now that we know that kicking lxd apparently unsticks zfs, can you let me know if you have another machine with the same issue (or can cause the one I already have access to to run into it again)?

I'd like to see what LXD shows as open prior to being killed, then whether just killing it is enough to make zfs happy, and if not, why starting lxd again somehow unsticks zfs.

FWIW, what I tried before restarting lxd was:

  • Looked through all open file descriptors
  • Looked at all mounts on the system
  • Checked for any obvious internal zfs holds

None of which showed anything relevant...
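
For reference, the kind of checks described above look roughly like this (a sketch; dataset and paths as in this report):

# Any open file descriptors referencing the container path?
ls -l /proc/*/fd 2>/dev/null | grep containers/test1
# Mounted anywhere, in any mount namespace?
grep containers/test1 /proc/*/mountinfo
# Any snapshots or clones left that could keep the dataset referenced?
zfs list -t all -r lxd/containers/test1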

@stgraber
Contributor

It could be an fd leak from a file that was read or written from the container by LXD and wasn't closed, but what's odd is that if that were the case, we should have seen an fd with the container path, and there were none... Hopefully I can look at another instance of this problem and figure that part out.

@stgraber added the Incomplete (Waiting on more information from reporter) label Jun 18, 2018
@stgraber
Contributor

Marking incomplete for now, @Kramerican let me know when you have another affected system.

@Kramerican
Author

I sure will, @stgraber. I'm also on vacation and haven't had a chance to check whether I can provoke this behaviour. I'll let you know.

@stgraber
Contributor

@Kramerican still on vacation? :)

@Kramerican
Author

@stgraber Yes until next week - but I will set some time aside then to try and force this behavior.

@Kramerican
Author

@stgraber I have had little luck in forcing this behavior, but this has been cropping up all over the shop these last few days.

I had written some fallback code in our tools which simply renames the dataset so that lxc delete can be run. These datasets are still "stuck" and zfs refuses to delete them. I have not restarted lxd in order to delete them - is it enough for you to get access to one of these systems to diagnose further? If so, let me know and I'll give you access. Thanks..!

@stgraber
Contributor

stgraber commented Jul 4, 2018

@Kramerican yep, having access to one of the systems with such a stuck dataset should be enough to try to track down what LXD's got open that would explain the busy error.

@Kramerican
Author

@stgraber Excellent. Mail with details sent.

@stgraber
Contributor

stgraber commented Jul 4, 2018

@Kramerican so the only potential issue I'm seeing is a very large number of mapped /config files, which is a leak that I believe has already been fixed by a combination of lxd and liblxc fixes. Any chance you can upgrade your systems to 3.0.1 of both liblxc1 and lxd? Both have been available for about a week now.

@stgraber
Contributor

stgraber commented Jul 4, 2018

If it's an option at all, at least on your Ubuntu 18.04 systems, I'd recommend considering moving to the lxd snap (--channel=3.0/stable in your case) as that would get you a much faster turnaround for fixes than we can do with the deb package.
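
For reference, moving from the deb to the snap typically looks like this (a sketch, not steps from this thread; lxd.migrate is the snap's deb-to-snap migration tool and asks before touching anything):

snap install lxd --channel=3.0/stable   # install the snap from the 3.0 track
lxd.migrate                             # interactively move containers and config from the deb to the snap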

@Kramerican
Author

@stgraber Excellent. However, on the system where you have access, apt upgrade is hanging at 95% at Setting up lxd (3.0.1-0ubuntu1~18.04.1) ..

/var/log/lxd/lxd.log shows an entry which I think is responsible:
lvl=eror msg="Failed to cleanly shutdown daemon: raft did not shutdown within 10s" t=2018-07-04T21:31:27+0200

Is raft a process I can kill? Suggestions on how to unstick the upgrade?

@Kramerican
Author

@stgraber Nevermind - it got unstuck after a while. Everything seems fine.

I will upgrade all systems and report back if the issue persists. Thanks..!

@stgraber
Contributor

stgraber commented Jul 4, 2018

@Kramerican pretty sure it got unstuck because I logged in and ran systemctl stop lxd lxd.socket to unblock things. Looks like the RAFT database is hitting a timeout at startup.

It's actually a bug that 3.0.1 fixes, but if your database has too many transactions prior to the upgrade, it still fails to start. The trick to unstick it is to temporarily move it to a very fast tmpfs, which is what I'm doing on that system now.

@Kramerican
Author

@stgraber ah yes I saw that lxc ls and other commands were not working. I won't mess around on that system anymore until you report back.

A series of commands to unstick lxd would be nice to have here, in case I see this happen on one of the other ~10 hosts I need to upgrade.

@stgraber
Contributor

stgraber commented Jul 4, 2018

@Kramerican all done, that system is good to go.

If you hit the same problem, you'll need to:

  • systemctl stop lxd lxd.socket

This will unstick the update. Once the update is all done, run again (for good measure):

  • systemctl stop lxd lxd.socket

That should ensure that LXD is fully stopped (containers are still running fine though).
Once that's done, do:

  • mv /var/lib/lxd/database /var/lib/lxd/database.good
  • mkdir /var/lib/lxd/database
  • mount -t tmpfs tmpfs /var/lib/lxd/database
  • cp -r /var/lib/lxd/database.good/* /var/lib/lxd/database
  • lxd --debug --group lxd

You'll see the daemon start, let it run until it hits "Done updating instance types" which is when it'll be ready for normal operation, then hit ctrl+c to stop it. Once done, do:

  • mkdir /var/lib/lxd/database.new
  • mv /var/lib/lxd/database/* /var/lib/lxd/database.new/
  • umount /var/lib/lxd/database
  • rmdir /var/lib/lxd/database
  • mv /var/lib/lxd/database.new /var/lib/lxd/database
  • chmod 700 /var/lib/lxd/database
  • systemctl start lxd

And you'll be back online with the newly compacted and much faster database.
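
The same sequence with comments, for copy-paste (run it step by step rather than as a single script, since the foreground lxd --debug step is stopped manually with ctrl+c once it prints "Done updating instance types"):

systemctl stop lxd lxd.socket                      # make sure LXD is fully stopped (containers keep running)

mv /var/lib/lxd/database /var/lib/lxd/database.good
mkdir /var/lib/lxd/database
mount -t tmpfs tmpfs /var/lib/lxd/database         # put the database on a fast tmpfs
cp -r /var/lib/lxd/database.good/* /var/lib/lxd/database

lxd --debug --group lxd                            # wait for "Done updating instance types", then ctrl+c

mkdir /var/lib/lxd/database.new                    # move the compacted database back onto disk
mv /var/lib/lxd/database/* /var/lib/lxd/database.new/
umount /var/lib/lxd/database
rmdir /var/lib/lxd/database
mv /var/lib/lxd/database.new /var/lib/lxd/database
chmod 700 /var/lib/lxd/database
systemctl start lxd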

@stgraber
Contributor

stgraber commented Jul 4, 2018

This is only needed on systems where LXD isn't able to load its database within the 10s timeout so hopefully a majority of your systems will not need this trick. Once LXD successfully starts once on 3.0.1, the database gets compacted automatically in the background as well as on exit to prevent this problem from ever occurring again.

@Kramerican
Author

@stgraber This is pure epic. Thanks so much, I'll get started *cracks knuckles* :)

@stgraber
Contributor

stgraber commented Jul 4, 2018

@Kramerican I've also deleted the two failed datasets on sisko, so restarting LXD did the trick to unstick zfs. Now the question is whether we'll see the issue re-appear with 3.0.1.

@Kramerican
Author

@stgraber ok so I completed the upgrade on all my hosts; I only had to follow your steps here on one other system 👍

In the process, however, it turns out that one of my systems was already fully up to date with 3.0.1, and the failure with a stuck dataset happened there today.

I have just sent a mail with details and access to the system.

@ColinIanKing

I'm going to see where the work tasks are being added that kick off the umount; that will give some insight.

@ColinIanKing

The exit of the lxc process via do_exit calls exit_task_namespaces -> switch_task_namespaces -> free_nsproxy -> put_mnt_ns -> drop_collected_mounts -> namespace_unlock -> mntput_no_expire.

mntput_no_expire adds __cleanup_mnt() to the process' task work list, which in turn gets called asynchronously at the death of the lxc process. This is what unmounts the zfs file system.

@stgraber
Contributor

We've now rolled out a fix for some, maybe even all of those issues.

The issue that was clearly identified and now resolved in the current stable snap (latest or 3.18 track) was related to lxc delete -f being run as a non-root user while using the snap package.

In this case, the way snapd manages mount namespaces was causing the zfs mount to be held by a forked copy of the mount namespace, preventing it from fully going away until all lxc commands run by that user had exited, after which another delete attempt would succeed.

The fix is effectively a whole bunch more mount table magic now happening in the LXD snap.
While I've put some effort into trying to sort existing setups on snap refresh, this may not be perfect for everyone, so if refreshing your lxd snap doesn't fully resolve the issue, reboot your system and see if it's behaving properly then.

As part of this, I've also significantly improved logging for the mount logic in the snap package, so failures to reshuffle the mount table on snap startup will now result in errors being logged in journalctl -b0 -u snap.lxd.daemon.
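
A quick way to check whether you're hitting this particular variant (a sketch; the container name is arbitrary): run the launch/delete as a non-root user through the snap client, then look at the snap daemon log.

lxc launch ubuntu:bionic busytest
lxc delete busytest --force                          # fails with "dataset is busy" when hitting this bug
journalctl -b0 -u snap.lxd.daemon | grep -i mount    # mount-table errors logged by the snap wrapper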

@paride

paride commented Oct 16, 2019

Thanks @stgraber. I updated the LXD snap to version 3.18 (12181) and rebooted a couple of machines. For the first couple of days the problem seemed fixed; however, here it is again:

$ lxc launch ubuntu:bionic b ; sleep 2 ; lxc delete b --force
Creating b
Starting b
Error: Failed to destroy ZFS dataset: Failed to run: zfs destroy -r mypool/lxd-dataset/containers/b: cannot destroy 'mypool/lxd-dataset/containers/b': dataset is busy

This is on my up-to-date Eoan laptop, but happens also on Bionic (GA kernel):

$ lxc launch ubuntu:bionic b ; sleep 2 ; lxc delete b --force
Creating b
Starting b
Error: Failed to destroy ZFS dataset: Failed to run: zfs destroy -r lxd/containers/b: cannot destroy 'lxd/containers/b': dataset is busy

Both the systems were rebooted after refreshing the LXD snap:

$ snap info lxd
[...]
refresh-date: 2 days ago, at 08:49 UTC
channels:
  stable:         3.18        2019-10-12 (12181) 57MB -

$ uptime
 10:08:26 up 2 days,  1:08,  3 users,  load average: 2,28, 2,71, 3,56

This is now happening every time, as before. As I wrote in the Launchpad bug [1], the behavior I observe is the system switching from a "good state" (containers always get deleted successfully) to a "bad state" (the "dataset is busy" error is always hit). I can't tell what triggers the switch.

[1] https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1779156/comments/9

@paride

paride commented Oct 16, 2019

I collected some logs, as you mentioned that logging should now be more informative. So after doing

sudo snap set lxd daemon.debug=true
sudo systemctl reload snap.lxd.daemon

Here is the output of journalctl -b0 -u snap.lxd.daemon after running lxc launch ubuntu:bionic b on my Eoan laptop:

Oct 16 11:19:26 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:26+0100 lvl=dbug msg=Handling ip=@ method=GET url=/1.0 user=
Oct 16 11:19:26 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:26+0100 lvl=dbug msg="Disconnected event listener: 1d724172-5006-495a-bd9c-61f16a20d374"
Oct 16 11:19:26 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:26+0100 lvl=dbug msg="\n\t{\n\t\t\"type\": \"sync\",\n\t\t\"status\": \"Success\",\n\t\t\"status_code\": 200,\n\t\t\"operation\": \"\",\n\t\t\"error_code\": 0,\n\t\t\"error\": \"\",\n\t\t\"metadata\": {\n\t\t\t\"config\": {\n\t\t\t\t\"core.https_address\": \"[::]\"\n\t\t\t},\n\t\t\t\"api_extensions\": [\n\t\t\t\t\"storage_zfs_remove_snapshots\",\n\t\t\t\t\"container_host_shutdown_timeout\",\n\t\t\t\t\"container_stop_priority\",\n\t\t\t\t\"container_syscall_filtering\",\n\t\t\t\t\"auth_pki\",\n\t\t\t\t\"container_last_used_at\",\n\t\t\t\t\"etag\",\n\t\t\t\t\"patch\",\n\t\t\t\t\"usb_devices\",\n\t\t\t\t\"https_allowed_credentials\",\n\t\t\t\t\"image_compression_algorithm\",\n\t\t\t\t\"directory_manipulation\",\n\t\t\t\t\"container_cpu_time\",\n\t\t\t\t\"storage_zfs_use_refquota\",\n\t\t\t\t\"storage_lvm_mount_options\",\n\t\t\t\t\"network\",\n\t\t\t\t\"profile_usedby\",\n\t\t\t\t\"container_push\",\n\t\t\t\t\"container_exec_recording\",\n\t\t\t\t\"certificate_update\",\n\t\t\t\t\"container_exec_signal_handling\",\n\t\t\t\t\"gpu_devices\",\n\t\t\t\t\"container_image_properties\",\n\t\t\t\t\"migration_progress\",\n\t\t\t\t\"id_map\",\n\t\t\t\t\"network_firewall_filtering\",\n\t\t\t\t\"network_routes\",\n\t\t\t\t\"storage\",\n\t\t\t\t\"file_delete\",\n\t\t\t\t\"file_append\",\n\t\t\t\t\"network_dhcp_expiry\",\n\t\t\t\t\"storage_lvm_vg_rename\",\n\t\t\t\t\"storage_lvm_thinpool_rename\",\n\t\t\t\t\"network_vlan\",\n\t\t\t\t\"image_create_aliases\",\n\t\t\t\t\"container_stateless_copy\",\n\t\t\t\t\"container_only_migration\",\n\t\t\t\t\"storage_zfs_clone_copy\",\n\t\t\t\t\"unix_device_rename\",\n\t\t\t\t\"storage_lvm_use_thinpool\",\n\t\t\t\t\"storage_rsync_bwlimit\",\n\t\t\t\t\"network_vxlan_interface\",\n\t\t\t\t\"storage_btrfs_mount_options\",\n\t\t\t\t\"entity_description\",\n\t\t\t\t\"image_force_refresh\",\n\t\t\t\t\"storage_lvm_lv_resizing\",\n\t\t\t\t\"id_map_base\",\n\t\t\t\t\"file_symlinks\",\n\t\t\t\t\"container_push_target\",\n\t\t\t\t\"network_vlan_physical\",\n\t\t\t\t\"storage_images_delete\",\n\t\t\t\t\"container_edit_metadata\",\n\t\t\t\t\"container_snapshot_stateful_migration\",\n\t\t\t\t\"storage_driver_ceph\",\n\t\t\t\t\"storage_ceph_user_name\",\n\t\t\t\t\"resource_limits\",\n\t\t\t\t\"storage_volatile_initial_source\",\n\t\t\t\t\"storage_ceph_force_osd_reuse\",\n\t\t\t\t\"storage_block_filesystem_btrfs\",\n\t\t\t\t\"resources\",\n\t\t\t\t\"kernel_limits\",\n\t\t\t\t\"storage_api_volume_rename\",\n\t\t\t\t\"macaroon_authentication\",\n\t\t\t\t\"network_sriov\",\n\t\t\t\t\"console\",\n\t\t\t\t\"restrict_devlxd\",\n\t\t\t\t\"migration_pre_copy\",\n\t\t\t\t\"infiniband\",\n\t\t\t\t\"maas_network\",\n\t\t\t\t\"devlxd_events\",\n\t\t\t\t\"proxy\",\n\t\t\t\t\"network_dhcp_gateway\",\n\t\t\t\t\"file_get_symlink\",\n\t\t\t\t\"network_leases\",\n\t\t\t\t\"unix_device_hotplug\",\n\t\t\t\t\"storage_api_local_volume_handling\",\n\t\t\t\t\"operation_description\",\n\t\t\t\t\"clustering\",\n\t\t\t\t\"event_lifecycle\",\n\t\t\t\t\"storage_api_remote_volume_handling\",\n\t\t\t\t\"nvidia_runtime\",\n\t\t\t\t\"container_mount_propagation\",\n\t\t\t\t\"container_backup\",\n\t\t\t\t\"devlxd_images\",\n\t\t\t\t\"container_local_cross_pool_handling\",\n\t\t\t\t\"proxy_unix\",\n\t\t\t\t\"proxy_udp\",\n\t\t\t\t\"clustering_join\",\n\t\t\t\t\"proxy_tcp_udp_multi_port_handling\",\n\t\t\t\t\"network_state\",\n\t\t\t\t\"proxy_unix_dac_properties\",\n\t\t\t\t\"container_protection_delete\",\n\t\t\t\t\"unix_priv_drop\",\n\t\t\t\t\"pprof_http\",
\n\t\t\t\t\"proxy_haproxy_protocol\",\n\t\t\t\t\"network_hwaddr\",\n\t\t\t\t\"proxy_nat\",\n\t\t\t\t\"network_nat_order\",\n\t\t\t\t\"container_full\",\n\t\t\t\t\"candid_authentication\",\n\t\t\t\t\"backup_compression\",\n\t\t\t\t\"candid_config\",\n\t\t\t\t\"nvidia_runtime_config\",\n\t\t\t\t\"storage_api_volume_snapshots\",\n\t\t\t\t\"storage_unmapped\",\n\t\t\t\t\"projects\",\n\t\t\t\t\"candid_config_key\",\n\t\t\t\t\"network_vxlan_ttl\",\n\t\t\t\t\"container_incremental_copy\",\n\t\t\t\t\"usb_optional_vendorid\",\n\t\t\t\t\"snapshot_scheduling\",\n\t\t\t\t\"container_copy_project\",\n\t\t\t\t\"clustering_server_address\",\n\t\t\t\t\"clustering_image_replication\",\n\t\t\t\t\"container_protection_shift\",\n\t\t\t\t\"snapshot_expiry\",\n\t\t\t\t\"container_backup_override_pool\",\n\t\t\t\t\"snapshot_expiry_creation\",\n\t\t\t\t\"network_leases_location\",\n\t\t\t\t\"resources_cpu_socket\",\n\t\t\t\t\"resources_gpu\",\n\t\t\t\t\"resources_numa\",\n\t\t\t\t\"kernel_features\",\n\t\t\t\t\"id_map_current\",\n\t\t\t\t\"event_location\",\n\t\t\t\t\"storage_api_remote_volume_snapshots\",\n\t\t\t\t\"network_nat_address\",\n\t\t\t\t\"container_nic_routes\",\n\t\t\t\t\"rbac\",\n\t\t\t\t\"cluster_internal_copy\",\n\t\t\t\t\"seccomp_notify\",\n\t\t\t\t\"lxc_features\",\n\t\t\t\t\"container_nic_ipvlan\",\n\t\t\t\t\"network_vlan_sriov\",\n\t\t\t\t\"storage_cephfs\",\n\t\t\t\t\"container_nic_ipfilter\",\n\t\t\t\t\"resources_v2\",\n\t\t\t\t\"container_exec_user_group_cwd\",\n\t\t\t\t\"container_syscall_intercept\",\n\t\t\t\t\"container_disk_shift\",\n\t\t\t\t\"storage_shifted\",\n\t\t\t\t\"resources_infiniband\",\n\t\t\t\t\"daemon_storage\",\n\t\t\t\t\"instances\",\n\t\t\t\t\"image_types\",\n\t\t\t\t\"resources_disk_sata\",\n\t\t\t\t\"clustering_roles\",\n\t\t\t\t\"images_expiry\"\n\t\t\t],\n\t\t\t\"api_status\": \"stable\",\n\t\t\t\"api_version\": \"1.0\",\n\t\t\t\"auth\": \"trusted\",\n\t\t\t\"public\": false,\n\t\t\t\"auth_methods\": [\n\t\t\t\t\"tls\"\n\t\t\t],\n\t\t\t\"environment\": {\n\t\t\t\t\"addresses\": [\n\t\t\t\t\t\"10.20.66.63:8443\",\n\t\t\t\t\t\"[2001:67c:1560:a003:c7d:f968:7a02:d7c5]:8443\",\n\t\t\t\t\t\"192.168.122.1:8443\",\n\t\t\t\t\t\"10.123.80.1:8443\",\n\t\t\t\t\t\"10.143.111.1:8443\",\n\t\t\t\t\t\"[fd42:e078:9b4f:e259::1]:8443\",\n\t\t\t\t\t\"10.172.194.133:8443\",\n\t\t\t\t\t\"[2001:67c:1560:8007::aac:c285]:8443\"\n\t\t\t\t],\n\t\t\t\t\"architectures\": [\n\t\t\t\t\t\"x86_64\",\n\t\t\t\t\t\"i686\"\n\t\t\t\t],\n\t\t\t\t\"certificate\": \"-----BEGIN CERTIFICATE-----\\nMIIB+zCCAYGgAwIBAgIRAL+3DExX8gusgiUaquMaplswCgYIKoZIzj0EAwMwNzEc\\nMBoGA1UEChMTbGludXhjb250YWluZXJzLm9yZzEXMBUGA1UEAwwOcm9vdEBzdHJh\\nbW9uaW8wHhcNMTkwMjI3MjI1MzU3WhcNMjkwMjI0MjI1MzU3WjA3MRwwGgYDVQQK\\nExNsaW51eGNvbnRhaW5lcnMub3JnMRcwFQYDVQQDDA5yb290QHN0cmFtb25pbzB2\\nMBAGByqGSM49AgEGBSuBBAAiA2IABKZigkKf0QJX/b9jUsixh7lZiZ9ZYY9qwvU/\\nOyb3PIPx2RQFKxFBuIobRm+Z/BbCBXIw0pUSknAFcp3HfuqlfcwpAvAEMR9/jURF\\nXJ+C2B4ktIsEYfGe3SrAQArskkoJ76NRME8wDgYDVR0PAQH/BAQDAgWgMBMGA1Ud\\nJQQMMAoGCCsGAQUFBwMBMAwGA1UdEwEB/wQCMAAwGgYDVR0RBBMwEYIJc3RyYW1v\\nbmlvhwTAqAJcMAoGCCqGSM49BAMDA2gAMGUCMDo6essz4MgOCz4Nj/RtvDzD1WTn\\nfkybdl47rwWm4merS9DP8suQ9Z0gsHiRAdhTDQIxAJGYALm21Vn1xTI+gwmejdv6\\nFfEf/5W/kgXD85Se2uy3KeslY5NhpipBK1T5m8qyJA==\\n-----END CERTIFICATE-----\\n\",\n\t\t\t\t\"certificate_fingerprint\": \"e4f9e1c5d74db2140be59d0c69a0146596ea0f37c616208f26fd6033701fc2d0\",\n\t\t\t\t\"driver\": \"lxc\",\n\t\t\t\t\"driver_version\": \"3.2.1\",\n\t\t\t\t\"kernel\": \"Linux\",\n\t\t\t\t\"kernel_architecture\": 
\"x86_64\",\n\t\t\t\t\"kernel_features\": {\n\t\t\t\t\t\"netnsid_getifaddrs\": \"true\",\n\t\t\t\t\t\"seccomp_listener\": \"true\",\n\t\t\t\t\t\"shiftfs\": \"false\",\n\t\t\t\t\t\"uevent_injection\": \"true\",\n\t\t\t\t\t\"unpriv_fscaps\": \"true\"\n\t\t\t\t},\n\t\t\t\t\"kernel_version\": \"5.3.0-18-generic\",\n\t\t\t\t\"lxc_features\": {\n\t\t\t\t\t\"mount_injection_file\": \"true\",\n\t\t\t\t\t\"network_gateway_device_route\": \"true\",\n\t\t\t\t\t\"network_ipvlan\": \"true\",\n\t\t\t\t\t\"network_l2proxy\": \"true\",\n\t\t\t\t\t\"network_phys_macvlan_mtu\": \"true\",\n\t\t\t\t\t\"seccomp_notify\": \"true\"\n\t\t\t\t},\n\t\t\t\t\"project\": \"default\",\n\t\t\t\t\"server\": \"lxd\",\n\t\t\t\t\"server_clustered\": false,\n\t\t\t\t\"server_name\": \"stramonio\",\n\t\t\t\t\"server_pid\": 30791,\n\t\t\t\t\"server_version\": \"3.18\",\n\t\t\t\t\"storage\": \"zfs\",\n\t\t\t\t\"storage_version\": \"0.8.1-1ubuntu12\"\n\t\t\t}\n\t\t}\n\t}"
Oct 16 11:19:26 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:26+0100 lvl=dbug msg=Handling ip=@ method=GET url=/1.0/events user=
Oct 16 11:19:26 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:26+0100 lvl=dbug msg="New event listener: 69da7640-9c93-4052-9fc0-13345d4a09cd"
Oct 16 11:19:26 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:26+0100 lvl=dbug msg=Handling ip=@ method=POST url=/1.0/instances user=
Oct 16 11:19:26 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:26+0100 lvl=dbug msg="\n\t{\n\t\t\"architecture\": \"\",\n\t\t\"config\": {},\n\t\t\"devices\": {},\n\t\t\"ephemeral\": false,\n\t\t\"profiles\": null,\n\t\t\"stateful\": false,\n\t\t\"description\": \"\",\n\t\t\"name\": \"b\",\n\t\t\"source\": {\n\t\t\t\"type\": \"image\",\n\t\t\t\"certificate\": \"\",\n\t\t\t\"alias\": \"bionic\",\n\t\t\t\"server\": \"https://cloud-images.ubuntu.com/releases\",\n\t\t\t\"protocol\": \"simplestreams\",\n\t\t\t\"mode\": \"pull\"\n\t\t},\n\t\t\"instance_type\": \"\",\n\t\t\"type\": \"\"\n\t}"
Oct 16 11:19:26 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:26+0100 lvl=dbug msg="Responding to container create"
Oct 16 11:19:26 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:26+0100 lvl=dbug msg="New task Operation: 8d40f78f-e6ed-45d8-a3b2-e13a3d5c3a36"
Oct 16 11:19:26 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:26+0100 lvl=dbug msg="Started task operation: 8d40f78f-e6ed-45d8-a3b2-e13a3d5c3a36"
Oct 16 11:19:26 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:26+0100 lvl=dbug msg="\n\t{\n\t\t\"type\": \"async\",\n\t\t\"status\": \"Operation created\",\n\t\t\"status_code\": 100,\n\t\t\"operation\": \"/1.0/operations/8d40f78f-e6ed-45d8-a3b2-e13a3d5c3a36\",\n\t\t\"error_code\": 0,\n\t\t\"error\": \"\",\n\t\t\"metadata\": {\n\t\t\t\"id\": \"8d40f78f-e6ed-45d8-a3b2-e13a3d5c3a36\",\n\t\t\t\"class\": \"task\",\n\t\t\t\"description\": \"Creating container\",\n\t\t\t\"created_at\": \"2019-10-16T11:19:26.85536851+01:00\",\n\t\t\t\"updated_at\": \"2019-10-16T11:19:26.85536851+01:00\",\n\t\t\t\"status\": \"Running\",\n\t\t\t\"status_code\": 103,\n\t\t\t\"resources\": {\n\t\t\t\t\"containers\": [\n\t\t\t\t\t\"/1.0/containers/b\"\n\t\t\t\t]\n\t\t\t},\n\t\t\t\"metadata\": null,\n\t\t\t\"may_cancel\": false,\n\t\t\t\"err\": \"\",\n\t\t\t\"location\": \"none\"\n\t\t}\n\t}"
Oct 16 11:19:26 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:26+0100 lvl=dbug msg=Handling ip=@ method=GET url=/1.0/operations/8d40f78f-e6ed-45d8-a3b2-e13a3d5c3a36 user=
Oct 16 11:19:26 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:26+0100 lvl=dbug msg="\n\t{\n\t\t\"type\": \"sync\",\n\t\t\"status\": \"Success\",\n\t\t\"status_code\": 200,\n\t\t\"operation\": \"\",\n\t\t\"error_code\": 0,\n\t\t\"error\": \"\",\n\t\t\"metadata\": {\n\t\t\t\"id\": \"8d40f78f-e6ed-45d8-a3b2-e13a3d5c3a36\",\n\t\t\t\"class\": \"task\",\n\t\t\t\"description\": \"Creating container\",\n\t\t\t\"created_at\": \"2019-10-16T11:19:26.85536851+01:00\",\n\t\t\t\"updated_at\": \"2019-10-16T11:19:26.85536851+01:00\",\n\t\t\t\"status\": \"Running\",\n\t\t\t\"status_code\": 103,\n\t\t\t\"resources\": {\n\t\t\t\t\"containers\": [\n\t\t\t\t\t\"/1.0/containers/b\"\n\t\t\t\t]\n\t\t\t},\n\t\t\t\"metadata\": null,\n\t\t\t\"may_cancel\": false,\n\t\t\t\"err\": \"\",\n\t\t\t\"location\": \"none\"\n\t\t}\n\t}"
Oct 16 11:19:26 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:26+0100 lvl=dbug msg="Using SimpleStreams cache entry" expiry=2019-10-16T13:00:47+0200 server=https://cloud-images.ubuntu.com/releases
Oct 16 11:19:26 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:26+0100 lvl=dbug msg="Connecting to a remote simplestreams server"
Oct 16 11:19:26 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:26+0100 lvl=dbug msg="Image already exists in the db" image=4a89a29d4db20988ee3f0bcec5e054d1cde3d4255d2129208ef87d35cff8b464
Oct 16 11:19:26 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:26+0100 lvl=info msg="Creating container" ephemeral=false name=b project=default
Oct 16 11:19:26 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:26+0100 lvl=info msg="Created container" ephemeral=false name=b project=default
Oct 16 11:19:26 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:26+0100 lvl=dbug msg="Creating ZFS storage volume for container \"b\" on storage pool \"default\""
Oct 16 11:19:27 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:27+0100 lvl=dbug msg="Mounting ZFS storage volume for container \"b\" on storage pool \"default\""
Oct 16 11:19:27 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:27+0100 lvl=dbug msg="Mounted ZFS storage volume for container \"b\" on storage pool \"default\""
Oct 16 11:19:27 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:27+0100 lvl=dbug msg="Created ZFS storage volume for container \"b\" on storage pool \"default\""
Oct 16 11:19:27 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:27+0100 lvl=dbug msg="Unmounting ZFS storage volume for container \"b\" on storage pool \"default\""
Oct 16 11:19:27 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:27+0100 lvl=dbug msg="Unmounted ZFS storage volume for container \"b\" on storage pool \"default\""
Oct 16 11:19:27 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:27+0100 lvl=dbug msg="Mounting ZFS storage volume for container \"b\" on storage pool \"default\""
Oct 16 11:19:27 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:27+0100 lvl=dbug msg="Mounted ZFS storage volume for container \"b\" on storage pool \"default\""
Oct 16 11:19:27 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:27+0100 lvl=dbug msg="Unmounting ZFS storage volume for container \"b\" on storage pool \"default\""
Oct 16 11:19:27 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:27+0100 lvl=dbug msg="Unmounted ZFS storage volume for container \"b\" on storage pool \"default\""
Oct 16 11:19:27 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:27+0100 lvl=dbug msg="Success for task operation: 8d40f78f-e6ed-45d8-a3b2-e13a3d5c3a36"
Oct 16 11:19:27 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:27+0100 lvl=dbug msg=Handling ip=@ method=GET url=/1.0/instances/b user=
Oct 16 11:19:27 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:27+0100 lvl=dbug msg="\n\t{\n\t\t\"type\": \"sync\",\n\t\t\"status\": \"Success\",\n\t\t\"status_code\": 200,\n\t\t\"operation\": \"\",\n\t\t\"error_code\": 0,\n\t\t\"error\": \"\",\n\t\t\"metadata\": {\n\t\t\t\"architecture\": \"x86_64\",\n\t\t\t\"config\": {\n\t\t\t\t\"image.architecture\": \"amd64\",\n\t\t\t\t\"image.description\": \"ubuntu 18.04 LTS amd64 (release) (20191008)\",\n\t\t\t\t\"image.label\": \"release\",\n\t\t\t\t\"image.os\": \"ubuntu\",\n\t\t\t\t\"image.release\": \"bionic\",\n\t\t\t\t\"image.serial\": \"20191008\",\n\t\t\t\t\"image.type\": \"squashfs\",\n\t\t\t\t\"image.version\": \"18.04\",\n\t\t\t\t\"volatile.apply_template\": \"create\",\n\t\t\t\t\"volatile.base_image\": \"4a89a29d4db20988ee3f0bcec5e054d1cde3d4255d2129208ef87d35cff8b464\",\n\t\t\t\t\"volatile.eth0.hwaddr\": \"00:16:3e:fe:89:10\",\n\t\t\t\t\"volatile.idmap.base\": \"0\",\n\t\t\t\t\"volatile.idmap.next\": \"[{\\\"Isuid\\\":true,\\\"Isgid\\\":false,\\\"Hostid\\\":1000000,\\\"Nsid\\\":0,\\\"Maprange\\\":1000000000},{\\\"Isuid\\\":false,\\\"Isgid\\\":true,\\\"Hostid\\\":1000000,\\\"Nsid\\\":0,\\\"Maprange\\\":1000000000}]\",\n\t\t\t\t\"volatile.last_state.idmap\": \"[]\"\n\t\t\t},\n\t\t\t\"devices\": {},\n\t\t\t\"ephemeral\": false,\n\t\t\t\"profiles\": [\n\t\t\t\t\"default\"\n\t\t\t],\n\t\t\t\"stateful\": false,\n\t\t\t\"description\": \"\",\n\t\t\t\"created_at\": \"2019-10-16T11:19:26.935987885+01:00\",\n\t\t\t\"expanded_config\": {\n\t\t\t\t\"image.architecture\": \"amd64\",\n\t\t\t\t\"image.description\": \"ubuntu 18.04 LTS amd64 (release) (20191008)\",\n\t\t\t\t\"image.label\": \"release\",\n\t\t\t\t\"image.os\": \"ubuntu\",\n\t\t\t\t\"image.release\": \"bionic\",\n\t\t\t\t\"image.serial\": \"20191008\",\n\t\t\t\t\"image.type\": \"squashfs\",\n\t\t\t\t\"image.version\": \"18.04\",\n\t\t\t\t\"volatile.apply_template\": \"create\",\n\t\t\t\t\"volatile.base_image\": \"4a89a29d4db20988ee3f0bcec5e054d1cde3d4255d2129208ef87d35cff8b464\",\n\t\t\t\t\"volatile.eth0.hwaddr\": \"00:16:3e:fe:89:10\",\n\t\t\t\t\"volatile.idmap.base\": \"0\",\n\t\t\t\t\"volatile.idmap.next\": \"[{\\\"Isuid\\\":true,\\\"Isgid\\\":false,\\\"Hostid\\\":1000000,\\\"Nsid\\\":0,\\\"Maprange\\\":1000000000},{\\\"Isuid\\\":false,\\\"Isgid\\\":true,\\\"Hostid\\\":1000000,\\\"Nsid\\\":0,\\\"Maprange\\\":1000000000}]\",\n\t\t\t\t\"volatile.last_state.idmap\": \"[]\"\n\t\t\t},\n\t\t\t\"expanded_devices\": {\n\t\t\t\t\"eth0\": {\n\t\t\t\t\t\"name\": \"eth0\",\n\t\t\t\t\t\"nictype\": \"bridged\",\n\t\t\t\t\t\"parent\": \"lxdbr0\",\n\t\t\t\t\t\"type\": \"nic\"\n\t\t\t\t},\n\t\t\t\t\"root\": {\n\t\t\t\t\t\"path\": \"/\",\n\t\t\t\t\t\"pool\": \"default\",\n\t\t\t\t\t\"type\": \"disk\"\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"name\": \"b\",\n\t\t\t\"status\": \"Stopped\",\n\t\t\t\"status_code\": 102,\n\t\t\t\"last_used_at\": \"1970-01-01T01:00:00+01:00\",\n\t\t\t\"location\": \"none\",\n\t\t\t\"type\": \"container\"\n\t\t}\n\t}"
Oct 16 11:19:27 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:27+0100 lvl=dbug msg=Handling ip=@ method=PUT url=/1.0/instances/b/state user=
Oct 16 11:19:27 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:27+0100 lvl=dbug msg="\n\t{\n\t\t\"action\": \"start\",\n\t\t\"timeout\": -1,\n\t\t\"force\": false,\n\t\t\"stateful\": false\n\t}"
Oct 16 11:19:27 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:27+0100 lvl=dbug msg="New task Operation: 9cf66aeb-05de-491e-9eaa-348a21eb8ec1"
Oct 16 11:19:27 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:27+0100 lvl=dbug msg="Started task operation: 9cf66aeb-05de-491e-9eaa-348a21eb8ec1"
Oct 16 11:19:27 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:27+0100 lvl=dbug msg="\n\t{\n\t\t\"type\": \"async\",\n\t\t\"status\": \"Operation created\",\n\t\t\"status_code\": 100,\n\t\t\"operation\": \"/1.0/operations/9cf66aeb-05de-491e-9eaa-348a21eb8ec1\",\n\t\t\"error_code\": 0,\n\t\t\"error\": \"\",\n\t\t\"metadata\": {\n\t\t\t\"id\": \"9cf66aeb-05de-491e-9eaa-348a21eb8ec1\",\n\t\t\t\"class\": \"task\",\n\t\t\t\"description\": \"Starting container\",\n\t\t\t\"created_at\": \"2019-10-16T11:19:27.190104849+01:00\",\n\t\t\t\"updated_at\": \"2019-10-16T11:19:27.190104849+01:00\",\n\t\t\t\"status\": \"Running\",\n\t\t\t\"status_code\": 103,\n\t\t\t\"resources\": {\n\t\t\t\t\"containers\": [\n\t\t\t\t\t\"/1.0/containers/b\"\n\t\t\t\t]\n\t\t\t},\n\t\t\t\"metadata\": null,\n\t\t\t\"may_cancel\": false,\n\t\t\t\"err\": \"\",\n\t\t\t\"location\": \"none\"\n\t\t}\n\t}"
Oct 16 11:19:27 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:27+0100 lvl=dbug msg="Container idmap changed, remapping"
Oct 16 11:19:27 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:27+0100 lvl=dbug msg="Updated metadata for task Operation: 9cf66aeb-05de-491e-9eaa-348a21eb8ec1"
Oct 16 11:19:27 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:27+0100 lvl=dbug msg=Handling ip=@ method=GET url=/1.0/operations/9cf66aeb-05de-491e-9eaa-348a21eb8ec1 user=
Oct 16 11:19:27 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:27+0100 lvl=dbug msg="\n\t{\n\t\t\"type\": \"sync\",\n\t\t\"status\": \"Success\",\n\t\t\"status_code\": 200,\n\t\t\"operation\": \"\",\n\t\t\"error_code\": 0,\n\t\t\"error\": \"\",\n\t\t\"metadata\": {\n\t\t\t\"id\": \"9cf66aeb-05de-491e-9eaa-348a21eb8ec1\",\n\t\t\t\"class\": \"task\",\n\t\t\t\"description\": \"Starting container\",\n\t\t\t\"created_at\": \"2019-10-16T11:19:27.190104849+01:00\",\n\t\t\t\"updated_at\": \"2019-10-16T11:19:27.196621049+01:00\",\n\t\t\t\"status\": \"Running\",\n\t\t\t\"status_code\": 103,\n\t\t\t\"resources\": {\n\t\t\t\t\"containers\": [\n\t\t\t\t\t\"/1.0/containers/b\"\n\t\t\t\t]\n\t\t\t},\n\t\t\t\"metadata\": {\n\t\t\t\t\"container_progress\": \"Remapping container filesystem\"\n\t\t\t},\n\t\t\t\"may_cancel\": false,\n\t\t\t\"err\": \"\",\n\t\t\t\"location\": \"none\"\n\t\t}\n\t}"
Oct 16 11:19:27 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:27+0100 lvl=dbug msg="Mounting ZFS storage volume for container \"b\" on storage pool \"default\""
Oct 16 11:19:27 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:27+0100 lvl=dbug msg="Mounted ZFS storage volume for container \"b\" on storage pool \"default\""
Oct 16 11:19:29 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:29+0100 lvl=dbug msg="Updated metadata for task Operation: 9cf66aeb-05de-491e-9eaa-348a21eb8ec1"
Oct 16 11:19:29 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:29+0100 lvl=dbug msg="Scheduler: network: vethfeb37c17 has been added: updating network priorities"
Oct 16 11:19:29 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:29+0100 lvl=dbug msg="Scheduler: network: veth97e1e9d2 has been added: updating network priorities"
Oct 16 11:19:29 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:29+0100 lvl=dbug msg="Mounting ZFS storage volume for container \"b\" on storage pool \"default\""
Oct 16 11:19:29 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:29+0100 lvl=dbug msg="Mounted ZFS storage volume for container \"b\" on storage pool \"default\""
Oct 16 11:19:29 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:29+0100 lvl=dbug msg="Mounting ZFS storage volume for container \"b\" on storage pool \"default\""
Oct 16 11:19:29 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:29+0100 lvl=dbug msg="Mounted ZFS storage volume for container \"b\" on storage pool \"default\""
Oct 16 11:19:29 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:29+0100 lvl=info msg="Starting container" action=start created=2019-10-16T11:19:26+0100 ephemeral=false name=b project=default stateful=false used=1970-01-01T01:00:00+0100
Oct 16 11:19:29 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:29+0100 lvl=dbug msg=Handling ip=@ method=GET url=/internal/containers/219/onstart user=
Oct 16 11:19:29 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:29+0100 lvl=dbug msg="Mounting ZFS storage volume for container \"b\" on storage pool \"default\""
Oct 16 11:19:29 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:29+0100 lvl=dbug msg="Mounted ZFS storage volume for container \"b\" on storage pool \"default\""
Oct 16 11:19:29 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:29+0100 lvl=dbug msg="Scheduler: container b started: re-balancing"
Oct 16 11:19:29 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:29+0100 lvl=dbug msg="\n\t{\n\t\t\"type\": \"sync\",\n\t\t\"status\": \"Success\",\n\t\t\"status_code\": 200,\n\t\t\"operation\": \"\",\n\t\t\"error_code\": 0,\n\t\t\"error\": \"\",\n\t\t\"metadata\": {}\n\t}"
Oct 16 11:19:29 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:29+0100 lvl=info msg="Started container" action=start created=2019-10-16T11:19:26+0100 ephemeral=false name=b project=default stateful=false used=1970-01-01T01:00:00+0100
Oct 16 11:19:29 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:29+0100 lvl=dbug msg="Success for task operation: 9cf66aeb-05de-491e-9eaa-348a21eb8ec1"

And this after running lxc delete b --force:

Oct 16 11:19:43 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:43+0100 lvl=dbug msg=Handling ip=@ method=GET url=/1.0 user=
Oct 16 11:19:43 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:43+0100 lvl=dbug msg="Disconnected event listener: 69da7640-9c93-4052-9fc0-13345d4a09cd"
Oct 16 11:19:44 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:43+0100 lvl=dbug msg="\n\t{\n\t\t\"type\": \"sync\",\n\t\t\"status\": \"Success\",\n\t\t\"status_code\": 200,\n\t\t\"operation\": \"\",\n\t\t\"error_code\": 0,\n\t\t\"error\": \"\",\n\t\t\"metadata\": {\n\t\t\t\"config\": {\n\t\t\t\t\"core.https_address\": \"[::]\"\n\t\t\t},\n\t\t\t\"api_extensions\": [\n\t\t\t\t\"storage_zfs_remove_snapshots\",\n\t\t\t\t\"container_host_shutdown_timeout\",\n\t\t\t\t\"container_stop_priority\",\n\t\t\t\t\"container_syscall_filtering\",\n\t\t\t\t\"auth_pki\",\n\t\t\t\t\"container_last_used_at\",\n\t\t\t\t\"etag\",\n\t\t\t\t\"patch\",\n\t\t\t\t\"usb_devices\",\n\t\t\t\t\"https_allowed_credentials\",\n\t\t\t\t\"image_compression_algorithm\",\n\t\t\t\t\"directory_manipulation\",\n\t\t\t\t\"container_cpu_time\",\n\t\t\t\t\"storage_zfs_use_refquota\",\n\t\t\t\t\"storage_lvm_mount_options\",\n\t\t\t\t\"network\",\n\t\t\t\t\"profile_usedby\",\n\t\t\t\t\"container_push\",\n\t\t\t\t\"container_exec_recording\",\n\t\t\t\t\"certificate_update\",\n\t\t\t\t\"container_exec_signal_handling\",\n\t\t\t\t\"gpu_devices\",\n\t\t\t\t\"container_image_properties\",\n\t\t\t\t\"migration_progress\",\n\t\t\t\t\"id_map\",\n\t\t\t\t\"network_firewall_filtering\",\n\t\t\t\t\"network_routes\",\n\t\t\t\t\"storage\",\n\t\t\t\t\"file_delete\",\n\t\t\t\t\"file_append\",\n\t\t\t\t\"network_dhcp_expiry\",\n\t\t\t\t\"storage_lvm_vg_rename\",\n\t\t\t\t\"storage_lvm_thinpool_rename\",\n\t\t\t\t\"network_vlan\",\n\t\t\t\t\"image_create_aliases\",\n\t\t\t\t\"container_stateless_copy\",\n\t\t\t\t\"container_only_migration\",\n\t\t\t\t\"storage_zfs_clone_copy\",\n\t\t\t\t\"unix_device_rename\",\n\t\t\t\t\"storage_lvm_use_thinpool\",\n\t\t\t\t\"storage_rsync_bwlimit\",\n\t\t\t\t\"network_vxlan_interface\",\n\t\t\t\t\"storage_btrfs_mount_options\",\n\t\t\t\t\"entity_description\",\n\t\t\t\t\"image_force_refresh\",\n\t\t\t\t\"storage_lvm_lv_resizing\",\n\t\t\t\t\"id_map_base\",\n\t\t\t\t\"file_symlinks\",\n\t\t\t\t\"container_push_target\",\n\t\t\t\t\"network_vlan_physical\",\n\t\t\t\t\"storage_images_delete\",\n\t\t\t\t\"container_edit_metadata\",\n\t\t\t\t\"container_snapshot_stateful_migration\",\n\t\t\t\t\"storage_driver_ceph\",\n\t\t\t\t\"storage_ceph_user_name\",\n\t\t\t\t\"resource_limits\",\n\t\t\t\t\"storage_volatile_initial_source\",\n\t\t\t\t\"storage_ceph_force_osd_reuse\",\n\t\t\t\t\"storage_block_filesystem_btrfs\",\n\t\t\t\t\"resources\",\n\t\t\t\t\"kernel_limits\",\n\t\t\t\t\"storage_api_volume_rename\",\n\t\t\t\t\"macaroon_authentication\",\n\t\t\t\t\"network_sriov\",\n\t\t\t\t\"console\",\n\t\t\t\t\"restrict_devlxd\",\n\t\t\t\t\"migration_pre_copy\",\n\t\t\t\t\"infiniband\",\n\t\t\t\t\"maas_network\",\n\t\t\t\t\"devlxd_events\",\n\t\t\t\t\"proxy\",\n\t\t\t\t\"network_dhcp_gateway\",\n\t\t\t\t\"file_get_symlink\",\n\t\t\t\t\"network_leases\",\n\t\t\t\t\"unix_device_hotplug\",\n\t\t\t\t\"storage_api_local_volume_handling\",\n\t\t\t\t\"operation_description\",\n\t\t\t\t\"clustering\",\n\t\t\t\t\"event_lifecycle\",\n\t\t\t\t\"storage_api_remote_volume_handling\",\n\t\t\t\t\"nvidia_runtime\",\n\t\t\t\t\"container_mount_propagation\",\n\t\t\t\t\"container_backup\",\n\t\t\t\t\"devlxd_images\",\n\t\t\t\t\"container_local_cross_pool_handling\",\n\t\t\t\t\"proxy_unix\",\n\t\t\t\t\"proxy_udp\",\n\t\t\t\t\"clustering_join\",\n\t\t\t\t\"proxy_tcp_udp_multi_port_handling\",\n\t\t\t\t\"network_state\",\n\t\t\t\t\"proxy_unix_dac_properties\",\n\t\t\t\t\"container_protection_delete\",\n\t\t\t\t\"unix_priv_drop\",\n\t\t\t\t\"pprof_http\",
\n\t\t\t\t\"proxy_haproxy_protocol\",\n\t\t\t\t\"network_hwaddr\",\n\t\t\t\t\"proxy_nat\",\n\t\t\t\t\"network_nat_order\",\n\t\t\t\t\"container_full\",\n\t\t\t\t\"candid_authentication\",\n\t\t\t\t\"backup_compression\",\n\t\t\t\t\"candid_config\",\n\t\t\t\t\"nvidia_runtime_config\",\n\t\t\t\t\"storage_api_volume_snapshots\",\n\t\t\t\t\"storage_unmapped\",\n\t\t\t\t\"projects\",\n\t\t\t\t\"candid_config_key\",\n\t\t\t\t\"network_vxlan_ttl\",\n\t\t\t\t\"container_incremental_copy\",\n\t\t\t\t\"usb_optional_vendorid\",\n\t\t\t\t\"snapshot_scheduling\",\n\t\t\t\t\"container_copy_project\",\n\t\t\t\t\"clustering_server_address\",\n\t\t\t\t\"clustering_image_replication\",\n\t\t\t\t\"container_protection_shift\",\n\t\t\t\t\"snapshot_expiry\",\n\t\t\t\t\"container_backup_override_pool\",\n\t\t\t\t\"snapshot_expiry_creation\",\n\t\t\t\t\"network_leases_location\",\n\t\t\t\t\"resources_cpu_socket\",\n\t\t\t\t\"resources_gpu\",\n\t\t\t\t\"resources_numa\",\n\t\t\t\t\"kernel_features\",\n\t\t\t\t\"id_map_current\",\n\t\t\t\t\"event_location\",\n\t\t\t\t\"storage_api_remote_volume_snapshots\",\n\t\t\t\t\"network_nat_address\",\n\t\t\t\t\"container_nic_routes\",\n\t\t\t\t\"rbac\",\n\t\t\t\t\"cluster_internal_copy\",\n\t\t\t\t\"seccomp_notify\",\n\t\t\t\t\"lxc_features\",\n\t\t\t\t\"container_nic_ipvlan\",\n\t\t\t\t\"network_vlan_sriov\",\n\t\t\t\t\"storage_cephfs\",\n\t\t\t\t\"container_nic_ipfilter\",\n\t\t\t\t\"resources_v2\",\n\t\t\t\t\"container_exec_user_group_cwd\",\n\t\t\t\t\"container_syscall_intercept\",\n\t\t\t\t\"container_disk_shift\",\n\t\t\t\t\"storage_shifted\",\n\t\t\t\t\"resources_infiniband\",\n\t\t\t\t\"daemon_storage\",\n\t\t\t\t\"instances\",\n\t\t\t\t\"image_types\",\n\t\t\t\t\"resources_disk_sata\",\n\t\t\t\t\"clustering_roles\",\n\t\t\t\t\"images_expiry\"\n\t\t\t],\n\t\t\t\"api_status\": \"stable\",\n\t\t\t\"api_version\": \"1.0\",\n\t\t\t\"auth\": \"trusted\",\n\t\t\t\"public\": false,\n\t\t\t\"auth_methods\": [\n\t\t\t\t\"tls\"\n\t\t\t],\n\t\t\t\"environment\": {\n\t\t\t\t\"addresses\": [\n\t\t\t\t\t\"10.20.66.63:8443\",\n\t\t\t\t\t\"[2001:67c:1560:a003:c7d:f968:7a02:d7c5]:8443\",\n\t\t\t\t\t\"192.168.122.1:8443\",\n\t\t\t\t\t\"10.123.80.1:8443\",\n\t\t\t\t\t\"10.143.111.1:8443\",\n\t\t\t\t\t\"[fd42:e078:9b4f:e259::1]:8443\",\n\t\t\t\t\t\"10.172.194.133:8443\",\n\t\t\t\t\t\"[2001:67c:1560:8007::aac:c285]:8443\"\n\t\t\t\t],\n\t\t\t\t\"architectures\": [\n\t\t\t\t\t\"x86_64\",\n\t\t\t\t\t\"i686\"\n\t\t\t\t],\n\t\t\t\t\"certificate\": \"-----BEGIN CERTIFICATE-----\\nMIIB+zCCAYGgAwIBAgIRAL+3DExX8gusgiUaquMaplswCgYIKoZIzj0EAwMwNzEc\\nMBoGA1UEChMTbGludXhjb250YWluZXJzLm9yZzEXMBUGA1UEAwwOcm9vdEBzdHJh\\nbW9uaW8wHhcNMTkwMjI3MjI1MzU3WhcNMjkwMjI0MjI1MzU3WjA3MRwwGgYDVQQK\\nExNsaW51eGNvbnRhaW5lcnMub3JnMRcwFQYDVQQDDA5yb290QHN0cmFtb25pbzB2\\nMBAGByqGSM49AgEGBSuBBAAiA2IABKZigkKf0QJX/b9jUsixh7lZiZ9ZYY9qwvU/\\nOyb3PIPx2RQFKxFBuIobRm+Z/BbCBXIw0pUSknAFcp3HfuqlfcwpAvAEMR9/jURF\\nXJ+C2B4ktIsEYfGe3SrAQArskkoJ76NRME8wDgYDVR0PAQH/BAQDAgWgMBMGA1Ud\\nJQQMMAoGCCsGAQUFBwMBMAwGA1UdEwEB/wQCMAAwGgYDVR0RBBMwEYIJc3RyYW1v\\nbmlvhwTAqAJcMAoGCCqGSM49BAMDA2gAMGUCMDo6essz4MgOCz4Nj/RtvDzD1WTn\\nfkybdl47rwWm4merS9DP8suQ9Z0gsHiRAdhTDQIxAJGYALm21Vn1xTI+gwmejdv6\\nFfEf/5W/kgXD85Se2uy3KeslY5NhpipBK1T5m8qyJA==\\n-----END CERTIFICATE-----\\n\",\n\t\t\t\t\"certificate_fingerprint\": \"e4f9e1c5d74db2140be59d0c69a0146596ea0f37c616208f26fd6033701fc2d0\",\n\t\t\t\t\"driver\": \"lxc\",\n\t\t\t\t\"driver_version\": \"3.2.1\",\n\t\t\t\t\"kernel\": \"Linux\",\n\t\t\t\t\"kernel_architecture\": 
\"x86_64\",\n\t\t\t\t\"kernel_features\": {\n\t\t\t\t\t\"netnsid_getifaddrs\": \"true\",\n\t\t\t\t\t\"seccomp_listener\": \"true\",\n\t\t\t\t\t\"shiftfs\": \"false\",\n\t\t\t\t\t\"uevent_injection\": \"true\",\n\t\t\t\t\t\"unpriv_fscaps\": \"true\"\n\t\t\t\t},\n\t\t\t\t\"kernel_version\": \"5.3.0-18-generic\",\n\t\t\t\t\"lxc_features\": {\n\t\t\t\t\t\"mount_injection_file\": \"true\",\n\t\t\t\t\t\"network_gateway_device_route\": \"true\",\n\t\t\t\t\t\"network_ipvlan\": \"true\",\n\t\t\t\t\t\"network_l2proxy\": \"true\",\n\t\t\t\t\t\"network_phys_macvlan_mtu\": \"true\",\n\t\t\t\t\t\"seccomp_notify\": \"true\"\n\t\t\t\t},\n\t\t\t\t\"project\": \"default\",\n\t\t\t\t\"server\": \"lxd\",\n\t\t\t\t\"server_clustered\": false,\n\t\t\t\t\"server_name\": \"stramonio\",\n\t\t\t\t\"server_pid\": 30791,\n\t\t\t\t\"server_version\": \"3.18\",\n\t\t\t\t\"storage\": \"zfs\",\n\t\t\t\t\"storage_version\": \"0.8.1-1ubuntu12\"\n\t\t\t}\n\t\t}\n\t}"
Oct 16 11:19:44 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:44+0100 lvl=dbug msg=Handling ip=@ method=GET url=/1.0/instances/b user=
Oct 16 11:19:44 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:44+0100 lvl=dbug msg="\n\t{\n\t\t\"type\": \"sync\",\n\t\t\"status\": \"Success\",\n\t\t\"status_code\": 200,\n\t\t\"operation\": \"\",\n\t\t\"error_code\": 0,\n\t\t\"error\": \"\",\n\t\t\"metadata\": {\n\t\t\t\"architecture\": \"x86_64\",\n\t\t\t\"config\": {\n\t\t\t\t\"image.architecture\": \"amd64\",\n\t\t\t\t\"image.description\": \"ubuntu 18.04 LTS amd64 (release) (20191008)\",\n\t\t\t\t\"image.label\": \"release\",\n\t\t\t\t\"image.os\": \"ubuntu\",\n\t\t\t\t\"image.release\": \"bionic\",\n\t\t\t\t\"image.serial\": \"20191008\",\n\t\t\t\t\"image.type\": \"squashfs\",\n\t\t\t\t\"image.version\": \"18.04\",\n\t\t\t\t\"volatile.base_image\": \"4a89a29d4db20988ee3f0bcec5e054d1cde3d4255d2129208ef87d35cff8b464\",\n\t\t\t\t\"volatile.eth0.host_name\": \"veth97e1e9d2\",\n\t\t\t\t\"volatile.eth0.hwaddr\": \"00:16:3e:fe:89:10\",\n\t\t\t\t\"volatile.idmap.base\": \"0\",\n\t\t\t\t\"volatile.idmap.current\": \"[{\\\"Isuid\\\":true,\\\"Isgid\\\":false,\\\"Hostid\\\":1000000,\\\"Nsid\\\":0,\\\"Maprange\\\":1000000000},{\\\"Isuid\\\":false,\\\"Isgid\\\":true,\\\"Hostid\\\":1000000,\\\"Nsid\\\":0,\\\"Maprange\\\":1000000000}]\",\n\t\t\t\t\"volatile.idmap.next\": \"[{\\\"Isuid\\\":true,\\\"Isgid\\\":false,\\\"Hostid\\\":1000000,\\\"Nsid\\\":0,\\\"Maprange\\\":1000000000},{\\\"Isuid\\\":false,\\\"Isgid\\\":true,\\\"Hostid\\\":1000000,\\\"Nsid\\\":0,\\\"Maprange\\\":1000000000}]\",\n\t\t\t\t\"volatile.last_state.idmap\": \"[{\\\"Isuid\\\":true,\\\"Isgid\\\":false,\\\"Hostid\\\":1000000,\\\"Nsid\\\":0,\\\"Maprange\\\":1000000000},{\\\"Isuid\\\":false,\\\"Isgid\\\":true,\\\"Hostid\\\":1000000,\\\"Nsid\\\":0,\\\"Maprange\\\":1000000000}]\",\n\t\t\t\t\"volatile.last_state.power\": \"RUNNING\"\n\t\t\t},\n\t\t\t\"devices\": {},\n\t\t\t\"ephemeral\": false,\n\t\t\t\"profiles\": [\n\t\t\t\t\"default\"\n\t\t\t],\n\t\t\t\"stateful\": false,\n\t\t\t\"description\": \"\",\n\t\t\t\"created_at\": \"2019-10-16T11:19:26.935987885+01:00\",\n\t\t\t\"expanded_config\": {\n\t\t\t\t\"image.architecture\": \"amd64\",\n\t\t\t\t\"image.description\": \"ubuntu 18.04 LTS amd64 (release) (20191008)\",\n\t\t\t\t\"image.label\": \"release\",\n\t\t\t\t\"image.os\": \"ubuntu\",\n\t\t\t\t\"image.release\": \"bionic\",\n\t\t\t\t\"image.serial\": \"20191008\",\n\t\t\t\t\"image.type\": \"squashfs\",\n\t\t\t\t\"image.version\": \"18.04\",\n\t\t\t\t\"volatile.base_image\": \"4a89a29d4db20988ee3f0bcec5e054d1cde3d4255d2129208ef87d35cff8b464\",\n\t\t\t\t\"volatile.eth0.host_name\": \"veth97e1e9d2\",\n\t\t\t\t\"volatile.eth0.hwaddr\": \"00:16:3e:fe:89:10\",\n\t\t\t\t\"volatile.idmap.base\": \"0\",\n\t\t\t\t\"volatile.idmap.current\": \"[{\\\"Isuid\\\":true,\\\"Isgid\\\":false,\\\"Hostid\\\":1000000,\\\"Nsid\\\":0,\\\"Maprange\\\":1000000000},{\\\"Isuid\\\":false,\\\"Isgid\\\":true,\\\"Hostid\\\":1000000,\\\"Nsid\\\":0,\\\"Maprange\\\":1000000000}]\",\n\t\t\t\t\"volatile.idmap.next\": \"[{\\\"Isuid\\\":true,\\\"Isgid\\\":false,\\\"Hostid\\\":1000000,\\\"Nsid\\\":0,\\\"Maprange\\\":1000000000},{\\\"Isuid\\\":false,\\\"Isgid\\\":true,\\\"Hostid\\\":1000000,\\\"Nsid\\\":0,\\\"Maprange\\\":1000000000}]\",\n\t\t\t\t\"volatile.last_state.idmap\": \"[{\\\"Isuid\\\":true,\\\"Isgid\\\":false,\\\"Hostid\\\":1000000,\\\"Nsid\\\":0,\\\"Maprange\\\":1000000000},{\\\"Isuid\\\":false,\\\"Isgid\\\":true,\\\"Hostid\\\":1000000,\\\"Nsid\\\":0,\\\"Maprange\\\":1000000000}]\",\n\t\t\t\t\"volatile.last_state.power\": \"RUNNING\"\n\t\t\t},\n\t\t\t\"expanded_devices\": {\n\t\t\t\t\"eth0\": 
{\n\t\t\t\t\t\"name\": \"eth0\",\n\t\t\t\t\t\"nictype\": \"bridged\",\n\t\t\t\t\t\"parent\": \"lxdbr0\",\n\t\t\t\t\t\"type\": \"nic\"\n\t\t\t\t},\n\t\t\t\t\"root\": {\n\t\t\t\t\t\"path\": \"/\",\n\t\t\t\t\t\"pool\": \"default\",\n\t\t\t\t\t\"type\": \"disk\"\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"name\": \"b\",\n\t\t\t\"status\": \"Running\",\n\t\t\t\"status_code\": 103,\n\t\t\t\"last_used_at\": \"2019-10-16T11:19:29.738924951+01:00\",\n\t\t\t\"location\": \"none\",\n\t\t\t\"type\": \"container\"\n\t\t}\n\t}"
Oct 16 11:19:44 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:44+0100 lvl=dbug msg=Handling ip=@ method=GET url=/1.0/events user=
Oct 16 11:19:44 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:44+0100 lvl=dbug msg="New event listener: 91a6dd2a-59f1-44f8-9e5b-72833facf84f"
Oct 16 11:19:44 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:44+0100 lvl=dbug msg=Handling ip=@ method=PUT url=/1.0/instances/b/state user=
Oct 16 11:19:44 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:44+0100 lvl=dbug msg="\n\t{\n\t\t\"action\": \"stop\",\n\t\t\"timeout\": -1,\n\t\t\"force\": true,\n\t\t\"stateful\": false\n\t}"
Oct 16 11:19:44 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:44+0100 lvl=dbug msg="New task Operation: 8a2655f8-ce6b-438a-a240-eef5be47f50a"
Oct 16 11:19:44 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:44+0100 lvl=dbug msg="Started task operation: 8a2655f8-ce6b-438a-a240-eef5be47f50a"
Oct 16 11:19:44 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:44+0100 lvl=info msg="Stopping container" action=stop created=2019-10-16T11:19:26+0100 ephemeral=false name=b project=default stateful=false used=2019-10-16T11:19:29+0100
Oct 16 11:19:44 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:44+0100 lvl=dbug msg="\n\t{\n\t\t\"type\": \"async\",\n\t\t\"status\": \"Operation created\",\n\t\t\"status_code\": 100,\n\t\t\"operation\": \"/1.0/operations/8a2655f8-ce6b-438a-a240-eef5be47f50a\",\n\t\t\"error_code\": 0,\n\t\t\"error\": \"\",\n\t\t\"metadata\": {\n\t\t\t\"id\": \"8a2655f8-ce6b-438a-a240-eef5be47f50a\",\n\t\t\t\"class\": \"task\",\n\t\t\t\"description\": \"Stopping container\",\n\t\t\t\"created_at\": \"2019-10-16T11:19:44.006637764+01:00\",\n\t\t\t\"updated_at\": \"2019-10-16T11:19:44.006637764+01:00\",\n\t\t\t\"status\": \"Running\",\n\t\t\t\"status_code\": 103,\n\t\t\t\"resources\": {\n\t\t\t\t\"containers\": [\n\t\t\t\t\t\"/1.0/containers/b\"\n\t\t\t\t]\n\t\t\t},\n\t\t\t\"metadata\": null,\n\t\t\t\"may_cancel\": false,\n\t\t\t\"err\": \"\",\n\t\t\t\"location\": \"none\"\n\t\t}\n\t}"
Oct 16 11:19:44 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:44+0100 lvl=dbug msg=Handling ip=@ method=GET url=/1.0/operations/8a2655f8-ce6b-438a-a240-eef5be47f50a user=
Oct 16 11:19:44 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:44+0100 lvl=dbug msg="\n\t{\n\t\t\"type\": \"sync\",\n\t\t\"status\": \"Success\",\n\t\t\"status_code\": 200,\n\t\t\"operation\": \"\",\n\t\t\"error_code\": 0,\n\t\t\"error\": \"\",\n\t\t\"metadata\": {\n\t\t\t\"id\": \"8a2655f8-ce6b-438a-a240-eef5be47f50a\",\n\t\t\t\"class\": \"task\",\n\t\t\t\"description\": \"Stopping container\",\n\t\t\t\"created_at\": \"2019-10-16T11:19:44.006637764+01:00\",\n\t\t\t\"updated_at\": \"2019-10-16T11:19:44.006637764+01:00\",\n\t\t\t\"status\": \"Running\",\n\t\t\t\"status_code\": 103,\n\t\t\t\"resources\": {\n\t\t\t\t\"containers\": [\n\t\t\t\t\t\"/1.0/containers/b\"\n\t\t\t\t]\n\t\t\t},\n\t\t\t\"metadata\": null,\n\t\t\t\"may_cancel\": false,\n\t\t\t\"err\": \"\",\n\t\t\t\"location\": \"none\"\n\t\t}\n\t}"
Oct 16 11:19:44 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:44+0100 lvl=dbug msg="Scheduler: network: eth0 has been added: updating network priorities"
Oct 16 11:19:44 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:44+0100 lvl=dbug msg=Handling ip=@ method=GET url="/internal/containers/219/onstopns?target=stop&netns=/proc/4493/fd/11" user=
Oct 16 11:19:44 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:44+0100 lvl=dbug msg="\n\t{\n\t\t\"type\": \"sync\",\n\t\t\"status\": \"Success\",\n\t\t\"status_code\": 200,\n\t\t\"operation\": \"\",\n\t\t\"error_code\": 0,\n\t\t\"error\": \"\",\n\t\t\"metadata\": {}\n\t}"
Oct 16 11:19:44 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:44+0100 lvl=dbug msg=Handling ip=@ method=GET url="/internal/containers/219/onstop?target=stop" user=
Oct 16 11:19:44 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:44+0100 lvl=dbug msg="Unmounting ZFS storage volume for container \"b\" on storage pool \"default\""
Oct 16 11:19:44 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:44+0100 lvl=dbug msg="Unmounted ZFS storage volume for container \"b\" on storage pool \"default\""
Oct 16 11:19:44 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:44+0100 lvl=dbug msg="\n\t{\n\t\t\"type\": \"sync\",\n\t\t\"status\": \"Success\",\n\t\t\"status_code\": 200,\n\t\t\"operation\": \"\",\n\t\t\"error_code\": 0,\n\t\t\"error\": \"\",\n\t\t\"metadata\": {}\n\t}"
Oct 16 11:19:45 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:45+0100 lvl=info msg="Stopped container" action=stop created=2019-10-16T11:19:26+0100 ephemeral=false name=b project=default stateful=false used=2019-10-16T11:19:29+0100
Oct 16 11:19:45 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:45+0100 lvl=dbug msg="Scheduler: container b stopped: re-balancing"
Oct 16 11:19:45 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:45+0100 lvl=dbug msg="Success for task operation: 8a2655f8-ce6b-438a-a240-eef5be47f50a"
Oct 16 11:19:45 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:45+0100 lvl=dbug msg=Handling ip=@ method=DELETE url=/1.0/instances/b user=
Oct 16 11:19:45 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:45+0100 lvl=dbug msg="New task Operation: 4199050d-3b80-48f7-b81f-53caa3403ffc"
Oct 16 11:19:45 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:45+0100 lvl=dbug msg="Started task operation: 4199050d-3b80-48f7-b81f-53caa3403ffc"
Oct 16 11:19:45 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:45+0100 lvl=info msg="Deleting container" created=2019-10-16T11:19:26+0100 ephemeral=false name=b project=default used=2019-10-16T11:19:29+0100
Oct 16 11:19:45 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:45+0100 lvl=dbug msg="\n\t{\n\t\t\"type\": \"async\",\n\t\t\"status\": \"Operation created\",\n\t\t\"status_code\": 100,\n\t\t\"operation\": \"/1.0/operations/4199050d-3b80-48f7-b81f-53caa3403ffc\",\n\t\t\"error_code\": 0,\n\t\t\"error\": \"\",\n\t\t\"metadata\": {\n\t\t\t\"id\": \"4199050d-3b80-48f7-b81f-53caa3403ffc\",\n\t\t\t\"class\": \"task\",\n\t\t\t\"description\": \"Deleting container\",\n\t\t\t\"created_at\": \"2019-10-16T11:19:45.01379536+01:00\",\n\t\t\t\"updated_at\": \"2019-10-16T11:19:45.01379536+01:00\",\n\t\t\t\"status\": \"Running\",\n\t\t\t\"status_code\": 103,\n\t\t\t\"resources\": {\n\t\t\t\t\"containers\": [\n\t\t\t\t\t\"/1.0/containers/b\"\n\t\t\t\t]\n\t\t\t},\n\t\t\t\"metadata\": null,\n\t\t\t\"may_cancel\": false,\n\t\t\t\"err\": \"\",\n\t\t\t\"location\": \"none\"\n\t\t}\n\t}"
Oct 16 11:19:45 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:45+0100 lvl=dbug msg=Handling ip=@ method=GET url=/1.0/operations/4199050d-3b80-48f7-b81f-53caa3403ffc user=
Oct 16 11:19:45 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:45+0100 lvl=dbug msg="\n\t{\n\t\t\"type\": \"sync\",\n\t\t\"status\": \"Success\",\n\t\t\"status_code\": 200,\n\t\t\"operation\": \"\",\n\t\t\"error_code\": 0,\n\t\t\"error\": \"\",\n\t\t\"metadata\": {\n\t\t\t\"id\": \"4199050d-3b80-48f7-b81f-53caa3403ffc\",\n\t\t\t\"class\": \"task\",\n\t\t\t\"description\": \"Deleting container\",\n\t\t\t\"created_at\": \"2019-10-16T11:19:45.01379536+01:00\",\n\t\t\t\"updated_at\": \"2019-10-16T11:19:45.01379536+01:00\",\n\t\t\t\"status\": \"Running\",\n\t\t\t\"status_code\": 103,\n\t\t\t\"resources\": {\n\t\t\t\t\"containers\": [\n\t\t\t\t\t\"/1.0/containers/b\"\n\t\t\t\t]\n\t\t\t},\n\t\t\t\"metadata\": null,\n\t\t\t\"may_cancel\": false,\n\t\t\t\"err\": \"\",\n\t\t\t\"location\": \"none\"\n\t\t}\n\t}"
Oct 16 11:19:45 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:45+0100 lvl=dbug msg="Deleting ZFS storage volume for container \"b\" on storage pool \"default\""

[ pause of a few seconds with no output ]

Oct 16 11:19:55 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:55+0100 lvl=eror msg="Failed deleting container storage" err="Failed to destroy ZFS dataset: Failed to run: zfs destroy -r mypool/lxd-dataset/containers/b: cannot destroy 'mypool/lxd-dataset/containers/b': dataset is busy" name=b
Oct 16 11:19:55 stramonio lxd.daemon[30691]: t=2019-10-16T11:19:55+0100 lvl=dbug msg="Failure for task operation: 4199050d-3b80-48f7-b81f-53caa3403ffc: Failed to destroy ZFS dataset: Failed to run: zfs destroy -r mypool/lxd-dataset/containers/b: cannot destroy 'mypool/lxd-dataset/containers/b': dataset is busy"

@stgraber
Contributor

@paride Can you get me:

  • Pastebin of the full output of journalctl -b0 -u snap.lxd.daemon
  • cat /proc/self/mountinfo
  • cat /proc/$(pgrep daemon.start)/mountinfo

I want to make sure that the mount table entries all match what we'd expect.
Make sure that you have at least one freshly created container before capturing those mountinfo ones.
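
(Purely as a convenience, and assuming the snap-packaged LXD, the three captures could be collected into files roughly like this — the output filenames are just placeholders:

journalctl -b0 -u snap.lxd.daemon > lxd-journal.txt
cat /proc/self/mountinfo > mountinfo-host.txt
cat /proc/$(pgrep daemon.start)/mountinfo > mountinfo-lxd.txt
)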

@paride

paride commented Oct 16, 2019

Sure. Pastebin: https://paste.ubuntu.com/p/kRcbMR6XP6/ (but note that when I booted I still had to set daemon.debug=true)

In what follows, the container named b was created right before capturing the mountinfo, and it did show the "dataset is busy" error when I later tried to delete it.

/proc/self/mountinfo:

23 30 0:22 / /sys rw,nosuid,nodev,noexec,relatime shared:7 - sysfs sysfs rw
24 30 0:5 / /proc rw,nosuid,nodev,noexec,relatime shared:15 - proc proc rw
25 30 0:6 / /dev rw,nosuid,relatime shared:2 - devtmpfs udev rw,size=12206772k,nr_inodes=3051693,mode=755
26 25 0:23 / /dev/pts rw,nosuid,noexec,relatime shared:3 - devpts devpts rw,gid=5,mode=620,ptmxmode=000
27 30 0:24 / /run rw,nosuid,noexec,relatime shared:5 - tmpfs tmpfs rw,size=2450252k,mode=755
30 1 259:2 / / rw,relatime shared:1 - ext4 /dev/nvme0n1p2 rw,errors=remount-ro
31 23 0:7 / /sys/kernel/security rw,nosuid,nodev,noexec,relatime shared:8 - securityfs securityfs rw
32 25 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw
33 27 0:28 / /run/lock rw,nosuid,nodev,noexec,relatime shared:6 - tmpfs tmpfs rw,size=5120k
34 23 0:29 / /sys/fs/cgroup ro,nosuid,nodev,noexec shared:9 - tmpfs tmpfs ro,mode=755
35 34 0:30 / /sys/fs/cgroup/unified rw,nosuid,nodev,noexec,relatime shared:10 - cgroup2 cgroup2 rw
36 34 0:31 / /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime shared:11 - cgroup cgroup rw,xattr,name=systemd
37 23 0:32 / /sys/fs/pstore rw,nosuid,nodev,noexec,relatime shared:12 - pstore pstore rw
38 23 0:33 / /sys/firmware/efi/efivars rw,nosuid,nodev,noexec,relatime shared:13 - efivarfs efivarfs rw
39 23 0:34 / /sys/fs/bpf rw,nosuid,nodev,noexec,relatime shared:14 - bpf bpf rw,mode=700
40 34 0:35 / /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime shared:16 - cgroup cgroup rw,freezer
41 34 0:36 / /sys/fs/cgroup/net_cls,net_prio rw,nosuid,nodev,noexec,relatime shared:17 - cgroup cgroup rw,net_cls,net_prio
42 34 0:37 / /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime shared:18 - cgroup cgroup rw,pids
43 34 0:38 / /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime shared:19 - cgroup cgroup rw,devices
44 34 0:39 / /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime shared:20 - cgroup cgroup rw,blkio
45 34 0:40 / /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime shared:21 - cgroup cgroup rw,perf_event
46 34 0:41 / /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime shared:22 - cgroup cgroup rw,memory
47 34 0:42 / /sys/fs/cgroup/hugetlb rw,nosuid,nodev,noexec,relatime shared:23 - cgroup cgroup rw,hugetlb
48 34 0:43 / /sys/fs/cgroup/rdma rw,nosuid,nodev,noexec,relatime shared:24 - cgroup cgroup rw,rdma
49 34 0:44 / /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime shared:25 - cgroup cgroup rw,cpuset,clone_children
50 34 0:45 / /sys/fs/cgroup/cpu,cpuacct rw,nosuid,nodev,noexec,relatime shared:26 - cgroup cgroup rw,cpu,cpuacct
51 24 0:46 / /proc/sys/fs/binfmt_misc rw,relatime shared:27 - autofs systemd-1 rw,fd=25,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=17995
52 23 0:8 / /sys/kernel/debug rw,nosuid,nodev,noexec,relatime shared:28 - debugfs debugfs rw
53 25 0:47 / /dev/hugepages rw,relatime shared:29 - hugetlbfs hugetlbfs rw,pagesize=2M
54 25 0:20 / /dev/mqueue rw,nosuid,nodev,noexec,relatime shared:30 - mqueue mqueue rw
55 23 0:48 / /sys/fs/fuse/connections rw,nosuid,nodev,noexec,relatime shared:31 - fusectl fusectl rw
56 23 0:21 / /sys/kernel/config rw,nosuid,nodev,noexec,relatime shared:32 - configfs configfs rw
124 30 7:2 / /snap/code/17 ro,nodev,relatime shared:67 - squashfs /dev/loop2 ro
127 30 7:1 / /snap/snapcraft/3440 ro,nodev,relatime shared:69 - squashfs /dev/loop1 ro
201 30 7:4 / /snap/multipass/1125 ro,nodev,relatime shared:71 - squashfs /dev/loop4 ro
205 30 7:5 / /snap/testflinger-cli/54 ro,nodev,relatime shared:73 - squashfs /dev/loop5 ro
209 30 7:3 / /snap/git-ubuntu/457 ro,nodev,relatime shared:75 - squashfs /dev/loop3 ro
130 30 7:6 / /snap/gnome-3-28-1804/71 ro,nodev,relatime shared:77 - squashfs /dev/loop6 ro
136 30 7:7 / /snap/git-ubuntu/458 ro,nodev,relatime shared:81 - squashfs /dev/loop7 ro
139 30 7:0 / /snap/ubuntu-bug-triage/100 ro,nodev,relatime shared:83 - squashfs /dev/loop0 ro
142 30 7:9 / /snap/ustriage/82 ro,nodev,relatime shared:85 - squashfs /dev/loop9 ro
145 30 7:10 / /snap/gtk-common-themes/1313 ro,nodev,relatime shared:87 - squashfs /dev/loop10 ro
148 30 7:11 / /snap/ubuntu-bug-triage/106 ro,nodev,relatime shared:89 - squashfs /dev/loop11 ro
154 30 7:13 / /snap/gtk-common-themes/1353 ro,nodev,relatime shared:93 - squashfs /dev/loop13 ro
157 30 7:14 / /snap/gnome-3-28-1804/67 ro,nodev,relatime shared:95 - squashfs /dev/loop14 ro
160 30 7:15 / /snap/testflinger-cli/48 ro,nodev,relatime shared:97 - squashfs /dev/loop15 ro
163 30 7:16 / /snap/chromium/861 ro,nodev,relatime shared:99 - squashfs /dev/loop16 ro
166 30 7:18 / /snap/core18/1192 ro,nodev,relatime shared:101 - squashfs /dev/loop18 ro
169 30 7:17 / /snap/chromium/881 ro,nodev,relatime shared:103 - squashfs /dev/loop17 ro
172 30 7:19 / /snap/snapcraft/3308 ro,nodev,relatime shared:105 - squashfs /dev/loop19 ro
175 30 7:20 / /snap/ustriage/88 ro,nodev,relatime shared:107 - squashfs /dev/loop20 ro
178 30 7:21 / /snap/lxd/12178 ro,nodev,relatime shared:109 - squashfs /dev/loop21 ro
181 30 259:1 / /boot/efi rw,relatime shared:111 - vfat /dev/nvme0n1p1 rw,fmask=0077,dmask=0077,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro
184 30 7:22 / /snap/core/7917 ro,nodev,relatime shared:113 - squashfs /dev/loop22 ro
187 30 7:23 / /snap/lxd/12181 ro,nodev,relatime shared:115 - squashfs /dev/loop23 ro
190 30 7:24 / /snap/core/7713 ro,nodev,relatime shared:117 - squashfs /dev/loop24 ro
196 30 0:49 / /srv/images rw,noatime shared:121 - zfs mypool/curtin-vmtest-images rw,xattr,noacl
199 30 0:50 / /home rw,noatime shared:123 - zfs mypool/home rw,xattr,noacl
207 30 0:51 / /var/cache/apt-cacher-ng rw,noatime shared:125 - zfs mypool/apt-cacher-ng rw,xattr,noacl
214 199 0:52 / /home/paride rw,noatime shared:127 - zfs mypool/home/paride rw,xattr,noacl
217 199 0:53 / /home/test rw,noatime shared:129 - zfs mypool/home/test rw,xattr,noacl
220 51 0:54 / /proc/sys/fs/binfmt_misc rw,nosuid,nodev,noexec,relatime shared:131 - binfmt_misc binfmt_misc rw
1754 27 0:24 /snapd/ns /run/snapd/ns rw,nosuid,noexec,relatime - tmpfs tmpfs rw,size=2450252k,mode=755
1790 1754 0:4 mnt:[4026532597] /run/snapd/ns/lxd.mnt rw shared:120 - nsfs nsfs rw
2063 30 0:62 / /var/snap/lxd/common/ns rw,relatime shared:360 - tmpfs tmpfs rw,size=1024k,mode=700
2092 2063 0:4 mnt:[4026532737] /var/snap/lxd/common/ns/shmounts rw shared:431 - nsfs nsfs rw
2035 2063 0:4 mnt:[4026532597] /var/snap/lxd/common/ns/mntns rw shared:497 - nsfs nsfs rw
1506 27 0:57 / /run/user/1000 rw,nosuid,nodev,relatime shared:805 - tmpfs tmpfs rw,size=2450248k,mode=700,uid=1000,gid=1000
810 30 7:26 / /snap/signal-desktop/189 ro,nodev,relatime shared:720 - squashfs /dev/loop26 ro
1495 1754 0:4 mnt:[4026533320] /run/snapd/ns/signal-desktop.mnt rw shared:203 - nsfs nsfs rw
805 30 7:27 / /snap/core18/1223 ro,nodev,relatime shared:735 - squashfs /dev/loop27 ro
3262 1754 0:4 mnt:[4026532761] /run/snapd/ns/chromium.mnt rw shared:286 - nsfs nsfs rw
3591 1506 0:68 / /run/user/1000/gvfs rw,nosuid,nodev,relatime shared:1489 - fuse.gvfsd-fuse gvfsd-fuse rw,user_id=1000,group_id=1000
1545 1506 0:93 / /run/user/1000/doc rw,nosuid,nodev,relatime shared:753 - fuse /dev/fuse rw,user_id=1000,group_id=1000
925 30 7:25 / /snap/multipass/1230 ro,nodev,relatime shared:119 - squashfs /dev/loop25 ro
106 30 7:12 / /snap/code/18 ro,nodev,relatime shared:91 - squashfs /dev/loop12 ro

/proc/$(pgrep daemon.start)/mountinfo:

1786 1957 259:2 / /var/lib/snapd/hostfs rw,relatime master:1 - ext4 /dev/nvme0n1p2 rw,errors=remount-ro
1792 1786 0:24 / /var/lib/snapd/hostfs/run rw,nosuid,noexec,relatime master:5 - tmpfs tmpfs rw,size=2450252k,mode=755
1793 1792 0:28 / /var/lib/snapd/hostfs/run/lock rw,nosuid,nodev,noexec,relatime master:6 - tmpfs tmpfs rw,size=5120k
1794 1792 0:24 /snapd/ns /var/lib/snapd/hostfs/run/snapd/ns rw,nosuid,noexec,relatime - tmpfs tmpfs rw,size=2450252k,mode=755
1820 1786 7:2 / /var/lib/snapd/hostfs/snap/code/17 ro,nodev,relatime master:67 - squashfs /dev/loop2 ro
1821 1786 7:1 / /var/lib/snapd/hostfs/snap/snapcraft/3440 ro,nodev,relatime master:69 - squashfs /dev/loop1 ro
1822 1786 7:4 / /var/lib/snapd/hostfs/snap/multipass/1125 ro,nodev,relatime master:71 - squashfs /dev/loop4 ro
1823 1786 7:5 / /var/lib/snapd/hostfs/snap/testflinger-cli/54 ro,nodev,relatime master:73 - squashfs /dev/loop5 ro
1824 1786 7:3 / /var/lib/snapd/hostfs/snap/git-ubuntu/457 ro,nodev,relatime master:75 - squashfs /dev/loop3 ro
1825 1786 7:6 / /var/lib/snapd/hostfs/snap/gnome-3-28-1804/71 ro,nodev,relatime master:77 - squashfs /dev/loop6 ro
1827 1786 7:7 / /var/lib/snapd/hostfs/snap/git-ubuntu/458 ro,nodev,relatime master:81 - squashfs /dev/loop7 ro
1828 1786 7:0 / /var/lib/snapd/hostfs/snap/ubuntu-bug-triage/100 ro,nodev,relatime master:83 - squashfs /dev/loop0 ro
1829 1786 7:9 / /var/lib/snapd/hostfs/snap/ustriage/82 ro,nodev,relatime master:85 - squashfs /dev/loop9 ro
1830 1786 7:10 / /var/lib/snapd/hostfs/snap/gtk-common-themes/1313 ro,nodev,relatime master:87 - squashfs /dev/loop10 ro
1831 1786 7:11 / /var/lib/snapd/hostfs/snap/ubuntu-bug-triage/106 ro,nodev,relatime master:89 - squashfs /dev/loop11 ro
1833 1786 7:13 / /var/lib/snapd/hostfs/snap/gtk-common-themes/1353 ro,nodev,relatime master:93 - squashfs /dev/loop13 ro
1834 1786 7:14 / /var/lib/snapd/hostfs/snap/gnome-3-28-1804/67 ro,nodev,relatime master:95 - squashfs /dev/loop14 ro
1835 1786 7:15 / /var/lib/snapd/hostfs/snap/testflinger-cli/48 ro,nodev,relatime master:97 - squashfs /dev/loop15 ro
1836 1786 7:16 / /var/lib/snapd/hostfs/snap/chromium/861 ro,nodev,relatime master:99 - squashfs /dev/loop16 ro
1837 1786 7:18 / /var/lib/snapd/hostfs/snap/core18/1192 ro,nodev,relatime master:101 - squashfs /dev/loop18 ro
1838 1786 7:17 / /var/lib/snapd/hostfs/snap/chromium/881 ro,nodev,relatime master:103 - squashfs /dev/loop17 ro
1839 1786 7:19 / /var/lib/snapd/hostfs/snap/snapcraft/3308 ro,nodev,relatime master:105 - squashfs /dev/loop19 ro
1840 1786 7:20 / /var/lib/snapd/hostfs/snap/ustriage/88 ro,nodev,relatime master:107 - squashfs /dev/loop20 ro
1841 1786 7:21 / /var/lib/snapd/hostfs/snap/lxd/12178 ro,nodev,relatime master:109 - squashfs /dev/loop21 ro
1842 1786 259:1 / /var/lib/snapd/hostfs/boot/efi rw,relatime master:111 - vfat /dev/nvme0n1p1 rw,fmask=0077,dmask=0077,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro
1843 1786 7:22 / /var/lib/snapd/hostfs/snap/core/7917 ro,nodev,relatime master:113 - squashfs /dev/loop22 ro
1844 1786 7:23 / /var/lib/snapd/hostfs/snap/lxd/12181 ro,nodev,relatime master:115 - squashfs /dev/loop23 ro
1845 1786 7:24 / /var/lib/snapd/hostfs/snap/core/7713 ro,nodev,relatime master:117 - squashfs /dev/loop24 ro
1847 1786 0:49 / /var/lib/snapd/hostfs/srv/images rw,noatime master:121 - zfs mypool/curtin-vmtest-images rw,xattr,noacl
1848 1786 0:50 / /var/lib/snapd/hostfs/home rw,noatime master:123 - zfs mypool/home rw,xattr,noacl
1849 1848 0:52 / /var/lib/snapd/hostfs/home/paride rw,noatime master:127 - zfs mypool/home/paride rw,xattr,noacl
1850 1848 0:53 / /var/lib/snapd/hostfs/home/test rw,noatime master:129 - zfs mypool/home/test rw,xattr,noacl
1851 1786 0:51 / /var/lib/snapd/hostfs/var/cache/apt-cacher-ng rw,noatime master:125 - zfs mypool/apt-cacher-ng rw,xattr,noacl
1878 1785 7:22 / / ro,nodev,relatime master:113 - squashfs /dev/loop22 ro
1879 1878 0:6 / /dev rw,nosuid,relatime master:2 - devtmpfs udev rw,size=12206772k,nr_inodes=3051693,mode=755
1880 1879 0:23 / /dev/pts rw,nosuid,noexec,relatime master:3 - devpts devpts rw,gid=5,mode=620,ptmxmode=000
1881 1879 0:27 / /dev/shm rw,nosuid,nodev master:4 - tmpfs tmpfs rw
1882 1879 0:47 / /dev/hugepages rw,relatime master:29 - hugetlbfs hugetlbfs rw,pagesize=2M
1883 1879 0:20 / /dev/mqueue rw,nosuid,nodev,noexec,relatime master:30 - mqueue mqueue rw
1885 1878 0:50 / /home rw,noatime master:123 - zfs mypool/home rw,xattr,noacl
1886 1885 0:52 / /home/paride rw,noatime master:127 - zfs mypool/home/paride rw,xattr,noacl
1887 1885 0:53 / /home/test rw,noatime master:129 - zfs mypool/home/test rw,xattr,noacl
1888 1878 259:2 /root /root rw,relatime master:1 - ext4 /dev/nvme0n1p2 rw,errors=remount-ro
1889 1878 0:5 / /proc rw,nosuid,nodev,noexec,relatime master:15 - proc proc rw
1890 1889 0:46 / /proc/sys/fs/binfmt_misc rw,relatime master:27 - autofs systemd-1 rw,fd=25,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=17995
1891 1890 0:54 / /proc/sys/fs/binfmt_misc rw,nosuid,nodev,noexec,relatime master:131 - binfmt_misc binfmt_misc rw
1892 1878 0:22 / /sys rw,nosuid,nodev,noexec,relatime master:7 - sysfs sysfs rw
1893 1892 0:7 / /sys/kernel/security rw,nosuid,nodev,noexec,relatime master:8 - securityfs securityfs rw
1894 1892 0:29 / /sys/fs/cgroup ro,nosuid,nodev,noexec master:9 - tmpfs tmpfs ro,mode=755
1895 1894 0:30 / /sys/fs/cgroup/unified rw,nosuid,nodev,noexec,relatime master:10 - cgroup2 cgroup2 rw
1896 1894 0:31 / /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime master:11 - cgroup cgroup rw,xattr,name=systemd
1897 1894 0:35 / /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime master:16 - cgroup cgroup rw,freezer
1898 1894 0:36 / /sys/fs/cgroup/net_cls,net_prio rw,nosuid,nodev,noexec,relatime master:17 - cgroup cgroup rw,net_cls,net_prio
1899 1894 0:37 / /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime master:18 - cgroup cgroup rw,pids
1900 1894 0:38 / /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime master:19 - cgroup cgroup rw,devices
1901 1894 0:39 / /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime master:20 - cgroup cgroup rw,blkio
1902 1894 0:40 / /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime master:21 - cgroup cgroup rw,perf_event
1903 1894 0:41 / /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime master:22 - cgroup cgroup rw,memory
1904 1894 0:42 / /sys/fs/cgroup/hugetlb rw,nosuid,nodev,noexec,relatime master:23 - cgroup cgroup rw,hugetlb
1905 1894 0:43 / /sys/fs/cgroup/rdma rw,nosuid,nodev,noexec,relatime master:24 - cgroup cgroup rw,rdma
1906 1894 0:44 / /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime master:25 - cgroup cgroup rw,cpuset,clone_children
1907 1894 0:45 / /sys/fs/cgroup/cpu,cpuacct rw,nosuid,nodev,noexec,relatime master:26 - cgroup cgroup rw,cpu,cpuacct
1908 1892 0:32 / /sys/fs/pstore rw,nosuid,nodev,noexec,relatime master:12 - pstore pstore rw
1909 1892 0:33 / /sys/firmware/efi/efivars rw,nosuid,nodev,noexec,relatime master:13 - efivarfs efivarfs rw
1910 1892 0:34 / /sys/fs/bpf rw,nosuid,nodev,noexec,relatime master:14 - bpf bpf rw,mode=700
1911 1892 0:8 / /sys/kernel/debug rw,nosuid,nodev,noexec,relatime master:28 - debugfs debugfs rw
1912 1892 0:48 / /sys/fs/fuse/connections rw,nosuid,nodev,noexec,relatime master:31 - fusectl fusectl rw
1913 1892 0:21 / /sys/kernel/config rw,nosuid,nodev,noexec,relatime master:32 - configfs configfs rw
1914 1878 259:2 /tmp /tmp rw,relatime master:1 - ext4 /dev/nvme0n1p2 rw,errors=remount-ro
1915 1878 259:2 /var/snap /var/snap rw,relatime master:1 - ext4 /dev/nvme0n1p2 rw,errors=remount-ro
1916 1878 259:2 /var/lib/snapd /var/lib/snapd rw,relatime master:1 - ext4 /dev/nvme0n1p2 rw,errors=remount-ro
1917 1878 259:2 /var/tmp /var/tmp rw,relatime master:1 - ext4 /dev/nvme0n1p2 rw,errors=remount-ro
1921 1878 259:2 /usr/lib/modules /lib/modules rw,relatime master:1 - ext4 /dev/nvme0n1p2 rw,errors=remount-ro
1922 1878 259:2 /usr/lib/firmware /lib/firmware rw,relatime master:1 - ext4 /dev/nvme0n1p2 rw,errors=remount-ro
1923 1878 259:2 /usr/src /usr/src rw,relatime master:1 - ext4 /dev/nvme0n1p2 rw,errors=remount-ro
1924 1878 259:2 /var/log /var/log rw,relatime master:1 - ext4 /dev/nvme0n1p2 rw,errors=remount-ro
1925 1878 259:2 /media /media rw,relatime shared:1 - ext4 /dev/nvme0n1p2 rw,errors=remount-ro
1927 1878 259:2 /mnt /mnt rw,relatime master:1 - ext4 /dev/nvme0n1p2 rw,errors=remount-ro
1930 1878 259:2 /snap /snap rw,relatime master:1 - ext4 /dev/nvme0n1p2 rw,errors=remount-ro
1931 1930 7:2 / /snap/code/17 ro,nodev,relatime master:67 - squashfs /dev/loop2 ro
1932 1930 7:1 / /snap/snapcraft/3440 ro,nodev,relatime master:69 - squashfs /dev/loop1 ro
1933 1930 7:4 / /snap/multipass/1125 ro,nodev,relatime master:71 - squashfs /dev/loop4 ro
1934 1930 7:5 / /snap/testflinger-cli/54 ro,nodev,relatime master:73 - squashfs /dev/loop5 ro
1935 1930 7:3 / /snap/git-ubuntu/457 ro,nodev,relatime master:75 - squashfs /dev/loop3 ro
1936 1930 7:6 / /snap/gnome-3-28-1804/71 ro,nodev,relatime master:77 - squashfs /dev/loop6 ro
1938 1930 7:7 / /snap/git-ubuntu/458 ro,nodev,relatime master:81 - squashfs /dev/loop7 ro
1939 1930 7:0 / /snap/ubuntu-bug-triage/100 ro,nodev,relatime master:83 - squashfs /dev/loop0 ro
1940 1930 7:9 / /snap/ustriage/82 ro,nodev,relatime master:85 - squashfs /dev/loop9 ro
1941 1930 7:10 / /snap/gtk-common-themes/1313 ro,nodev,relatime master:87 - squashfs /dev/loop10 ro
1942 1930 7:11 / /snap/ubuntu-bug-triage/106 ro,nodev,relatime master:89 - squashfs /dev/loop11 ro
1944 1930 7:13 / /snap/gtk-common-themes/1353 ro,nodev,relatime master:93 - squashfs /dev/loop13 ro
1945 1930 7:14 / /snap/gnome-3-28-1804/67 ro,nodev,relatime master:95 - squashfs /dev/loop14 ro
1946 1930 7:15 / /snap/testflinger-cli/48 ro,nodev,relatime master:97 - squashfs /dev/loop15 ro
1947 1930 7:16 / /snap/chromium/861 ro,nodev,relatime master:99 - squashfs /dev/loop16 ro
1948 1930 7:18 / /snap/core18/1192 ro,nodev,relatime master:101 - squashfs /dev/loop18 ro
1949 1930 7:17 / /snap/chromium/881 ro,nodev,relatime master:103 - squashfs /dev/loop17 ro
1950 1930 7:19 / /snap/snapcraft/3308 ro,nodev,relatime master:105 - squashfs /dev/loop19 ro
1951 1930 7:20 / /snap/ustriage/88 ro,nodev,relatime master:107 - squashfs /dev/loop20 ro
1952 1930 7:21 / /snap/lxd/12178 ro,nodev,relatime master:109 - squashfs /dev/loop21 ro
1953 1930 7:22 / /snap/core/7917 ro,nodev,relatime master:113 - squashfs /dev/loop22 ro
1954 1930 7:23 / /snap/lxd/12181 ro,nodev,relatime master:115 - squashfs /dev/loop23 ro
1955 1930 7:24 / /snap/core/7713 ro,nodev,relatime master:117 - squashfs /dev/loop24 ro
1957 1916 259:2 /var/lib/snapd/hostfs /var/lib/snapd/hostfs rw,relatime - ext4 /dev/nvme0n1p2 rw,errors=remount-ro
1787 1914 259:2 /tmp/snap.lxd/tmp /tmp rw,relatime - ext4 /dev/nvme0n1p2 rw,errors=remount-ro
1788 1880 0:59 / /dev/pts rw,relatime - devpts devpts rw,gid=5,mode=620,ptmxmode=666
1789 1879 0:59 /ptmx /dev/ptmx rw,relatime - devpts devpts rw,gid=5,mode=620,ptmxmode=666
1964 1915 259:2 /var/snap/lxd/common/lxd/storage-pools /var/snap/lxd/common/lxd/storage-pools rw,relatime - ext4 /dev/nvme0n1p2 rw,errors=remount-ro
1965 1915 259:2 /var/snap/lxd/common/lxd/devices /var/snap/lxd/common/lxd/devices rw,relatime shared:986 master:1 - ext4 /dev/nvme0n1p2 rw,errors=remount-ro
2091 1915 0:62 / /var/snap/lxd/common/ns rw,relatime master:999 - tmpfs tmpfs rw,size=1024k,mode=700
2065 1786 0:62 / /var/lib/snapd/hostfs/var/snap/lxd/common/ns rw,relatime master:999 - tmpfs tmpfs rw,size=1024k,mode=700
2094 1915 0:63 / /var/snap/lxd/common/shmounts rw,relatime shared:1012 - tmpfs tmpfs rw,size=1024k,mode=711
2036 1878 7:23 /wrappers/kmod /bin/kmod ro,nodev,relatime master:115 - squashfs /dev/loop23 ro
791 1878 259:2 /usr/share/ca-certificates /usr/share/ca-certificates rw,relatime master:1 - ext4 /dev/nvme0n1p2 rw,errors=remount-ro
792 1786 0:6 / /var/lib/snapd/hostfs/dev rw,nosuid,relatime master:2 - devtmpfs udev rw,size=12206772k,nr_inodes=3051693,mode=755
793 1786 0:5 / /var/lib/snapd/hostfs/proc rw,nosuid,nodev,noexec,relatime master:15 - proc proc rw
794 1786 0:22 / /var/lib/snapd/hostfs/sys rw,nosuid,nodev,noexec,relatime master:7 - sysfs sysfs rw
796 2094 0:67 / /var/snap/lxd/common/shmounts/lxcfs rw,nosuid,nodev,relatime shared:719 - fuse.lxcfs lxcfs rw,user_id=0,group_id=0,allow_other
2078 2094 0:69 / /var/snap/lxd/common/shmounts/containers rw,relatime shared:995 - tmpfs tmpfs rw,size=100k,mode=711
2093 1915 0:70 / /var/snap/lxd/common/lxd/devlxd rw,relatime - tmpfs tmpfs rw,size=100k,mode=755
2545 1964 0:71 / /var/snap/lxd/common/lxd/storage-pools/default/containers/bionic rw,relatime - zfs mypool/lxd-dataset/containers/bionic rw,xattr,posixacl
1508 1792 0:57 / /var/lib/snapd/hostfs/run/user/1000 rw,nosuid,nodev,relatime master:805 - tmpfs tmpfs rw,size=2450248k,mode=700,uid=1000,gid=1000
841 1930 7:26 / /snap/signal-desktop/189 ro,nodev,relatime master:720 - squashfs /dev/loop26 ro
812 1786 7:26 / /var/lib/snapd/hostfs/snap/signal-desktop/189 ro,nodev,relatime master:720 - squashfs /dev/loop26 ro
2962 1930 7:27 / /snap/core18/1223 ro,nodev,relatime master:735 - squashfs /dev/loop27 ro
2929 1786 7:27 / /var/lib/snapd/hostfs/snap/core18/1223 ro,nodev,relatime master:735 - squashfs /dev/loop27 ro
3639 1508 0:68 / /var/lib/snapd/hostfs/run/user/1000/gvfs rw,nosuid,nodev,relatime master:1489 - fuse.gvfsd-fuse gvfsd-fuse rw,user_id=1000,group_id=1000
1576 1508 0:93 / /var/lib/snapd/hostfs/run/user/1000/doc rw,nosuid,nodev,relatime master:753 - fuse /dev/fuse rw,user_id=1000,group_id=1000
838 2036 7:23 /wrappers/kmod /bin/kmod ro,nodev,relatime master:115 - squashfs /dev/loop23 ro
854 1878 259:2 /boot /boot rw,relatime master:1 - ext4 /dev/nvme0n1p2 rw,errors=remount-ro
1532 1878 0:86 / /run rw,nosuid,nodev,relatime - tmpfs tmpfs rw,mode=755
1533 1878 0:65 / /etc rw,relatime - tmpfs tmpfs rw,mode=755
1784 1930 7:25 / /snap/multipass/1230 ro,nodev,relatime master:119 - squashfs /dev/loop25 ro
927 1786 7:25 / /var/lib/snapd/hostfs/snap/multipass/1230 ro,nodev,relatime master:119 - squashfs /dev/loop25 ro
1918 1930 7:12 / /snap/code/18 ro,nodev,relatime master:91 - squashfs /dev/loop12 ro
206 1786 7:12 / /var/lib/snapd/hostfs/snap/code/18 ro,nodev,relatime master:91 - squashfs /dev/loop12 ro
2330 1964 0:87 / /var/snap/lxd/common/lxd/storage-pools/default/containers/b rw,relatime - zfs mypool/lxd-dataset/containers/b rw,xattr,posixacl

@stgraber
Contributor

Now that's weird: there are no errors around failing to set up the mount propagation bits, and the intermediate mounts are present, but the propagation settings aren't there...

Can you show:

  • snap version
  • snap list
  • uname -a
  • lsb_release -a

That way I can hopefully get a test VM which mostly matches this setup and see why the kernel would tell us that propagation was applied without it actually being set.
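
(If it's easier, all four can be collected in one go, e.g.:

{ snap version; snap list; uname -a; lsb_release -a; } > host-info.txt

where host-info.txt is just a placeholder filename.)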

@paride

paride commented Oct 16, 2019

On the Eoan machine:

snap version
snap    2.42
snapd   2.42
series  16
ubuntu  19.10
kernel  5.3.0-18-generic
$ snap list
Name               Version                     Rev    Tracking  Publisher     Notes
chromium           77.0.3865.90                881    stable    canonical✓    -
code               6ab59852                    18     stable    vscode✓       classic
core               16-2.42                     7917   stable    canonical✓    core
core18             20191010                    1223   stable    canonical✓    base
git-ubuntu         0.7.4+git185.06e9485        458    edge      canonical✓    classic
gnome-3-28-1804    3.28.0-10-gaa70833.aa70833  71     stable    canonical✓    -
gtk-common-themes  0.1-25-gcc83164             1353   stable    canonical✓    -
lxd                3.18                        12181  stable    canonical✓    -
multipass          0.8.1                       1230   beta      canonical✓    classic
signal-desktop     1.27.4                      189    stable    snapcrafters  -
snapcraft          3.8                         3440   stable    canonical✓    classic
testflinger-cli    0.1                         54     stable    pwlars        -
ubuntu-bug-triage  341c4f7                     106    stable    powersj       -
ustriage           ba803f2                     88     stable    powers
$ uname -a
Linux stramonio 5.3.0-18-generic #19-Ubuntu SMP Tue Oct 8 20:14:06 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
$ lsb_release -a
No LSB modules are available.
Distributor ID:	Ubuntu
Description:	Ubuntu 19.10
Release:	19.10
Codename:	eoan

On the Bionic machine:

$ snap version
snap    2.42
snapd   2.42
series  16
ubuntu  18.04
kernel  4.15.0-65-generic
$ snap list
Name               Version  Rev    Tracking  Publisher   Notes
core               16-2.42  7917   stable    canonical✓  core
lxd                3.18     12181  stable    canonical✓  -
testflinger-cli    0.1      54     stable    pwlars      -
ubuntu-bug-triage  341c4f7  106    edge      powersj     -
$ uname -a
Linux torkoal 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
$ lsb_release -a
No LSB modules are available.
Distributor ID:	Ubuntu
Description:	Ubuntu 18.04.3 LTS
Release:	18.04
Codename:	bionic

@stgraber
Copy link
Contributor

@paride ok, the current stable snap should have a fix for that one.

@paride

paride commented Oct 17, 2019

I'm testing it, so far so good, but let's give it a few days before declaring the issue fully solved.

@stgraber
Contributor

@paride still good on your side?

@smoser
Contributor

smoser commented Oct 22, 2019

@stgraber,
Well, this is still failing for me. I'm on the latest stable lxd snap (3.18/12211).
I have attempted to grab all the information you had requested from paride and attached lxc-info.txt. I'm not sure the debug settings actually stuck though, as journalctl doesn't look like paride's did.

let me know if you need anything else.

@stgraber
Contributor

@smoser the mount propagation intermediate mount appears to be missing in your case, which would explain the behavior. Did you reboot your system at some point after Oct 16th?

If not, could you reboot, start a few containers and get me the output of cat /proc/$(pgrep daemon.start)/mountinfo?

In theory, stopping all containers and restarting LXD may also achieve most of the same effect, though having a clear starting point with the new logic in place would certainly be better.
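
(A rough sketch of that alternative, assuming the snap-packaged LXD and that all instances can be stopped — a reboot remains the cleaner option:

lxc stop --all                          # stop every running container first
sudo systemctl restart snap.lxd.daemon  # then restart the LXD daemon itself
)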

@smoser
Contributor

smoser commented Oct 23, 2019

I have not rebooted since the 16th, so I'll try rebooting tomorrow and try to remember to report back here. Thanks for taking a look.

@paride

paride commented Oct 23, 2019

@stgraber after a week, still good. From my point of view the issue looks fixed. Thanks a lot!

@stgraber
Contributor

@paride good to hear, will wait to confirm that @smoser's setup looks good after reboot and then we can hopefully close this.

@smoser
Contributor

smoser commented Oct 23, 2019

Well... I rebooted, and survived a

lxc launch ubuntu-daily:eoan e1 && lxc delete --force e1

which was failing yesterday. It wasn't a 100% reproducible failure previously, but when it got into a bad state it would stay there. I'll keep my fingers crossed.

@stgraber
Contributor

@smoser can you show that mountinfo data? (cat /proc/$(pgrep daemon.start)/mountinfo)

@smoser
Contributor

smoser commented Oct 23, 2019

$ cat /proc/$(pgrep daemon.start)/mountinfo
1198 1382 253:0 / /var/lib/snapd/hostfs rw,relatime master:1 - ext4 /dev/mapper/nvme0n1p6_crypt rw,errors=remount-ro,data=ordered
1204 1198 0:23 / /var/lib/snapd/hostfs/run rw,nosuid,noexec,relatime master:5 - tmpfs tmpfs rw,size=1630164k,mode=755
1205 1204 0:27 / /var/lib/snapd/hostfs/run/lock rw,nosuid,nodev,noexec,relatime master:6 - tmpfs tmpfs rw,size=5120k
1206 1204 0:54 / /var/lib/snapd/hostfs/run/schroot/mount/bionic-amd64-0be49444-dd8d-4a70-9739-73d0ad97aba6 rw,relatime master:488 - overlay bionic-amd64 rw,lowerdir=/var/lib/schroot/union/underlay/bionic-amd64-0be49444-dd8d-4a70-9739-73d0ad97aba6,upperdir=/var/lib/schroot/union/overlay/bionic-amd64-0be49444-dd8d-4a70-9739-73d0ad97aba6/upper,workdir=/var/lib/schroot/union/overlay/bionic-amd64-0be49444-dd8d-4a70-9739-73d0ad97aba6/work
1207 1206 0:4 / /var/lib/snapd/hostfs/run/schroot/mount/bionic-amd64-0be49444-dd8d-4a70-9739-73d0ad97aba6/proc rw,nosuid,nodev,noexec,relatime - proc proc rw
1208 1206 0:21 / /var/lib/snapd/hostfs/run/schroot/mount/bionic-amd64-0be49444-dd8d-4a70-9739-73d0ad97aba6/sys rw,nosuid,nodev,noexec,relatime - sysfs sysfs rw
1209 1206 0:22 / /var/lib/snapd/hostfs/run/schroot/mount/bionic-amd64-0be49444-dd8d-4a70-9739-73d0ad97aba6/dev/pts rw,nosuid,noexec,relatime - devpts devpts rw,gid=5,mode=620,ptmxmode=000
1210 1206 0:56 / /var/lib/snapd/hostfs/run/schroot/mount/bionic-amd64-0be49444-dd8d-4a70-9739-73d0ad97aba6/dev/shm rw,relatime master:517 - tmpfs tmpfs rw
1211 1206 253:0 /var/lib/sbuild/build /var/lib/snapd/hostfs/run/schroot/mount/bionic-amd64-0be49444-dd8d-4a70-9739-73d0ad97aba6/build rw,relatime - ext4 /dev/mapper/nvme0n1p6_crypt rw,errors=remount-ro,data=ordered
1212 1204 0:57 / /var/lib/snapd/hostfs/run/schroot/mount/eoan-amd64-63fe38b4-5c66-4151-90b4-f7c8810d8669 rw,relatime master:614 - overlay eoan-amd64 rw,lowerdir=/var/lib/schroot/union/underlay/eoan-amd64-63fe38b4-5c66-4151-90b4-f7c8810d8669,upperdir=/var/lib/schroot/union/overlay/eoan-amd64-63fe38b4-5c66-4151-90b4-f7c8810d8669/upper,workdir=/var/lib/schroot/union/overlay/eoan-amd64-63fe38b4-5c66-4151-90b4-f7c8810d8669/work
1213 1212 0:4 / /var/lib/snapd/hostfs/run/schroot/mount/eoan-amd64-63fe38b4-5c66-4151-90b4-f7c8810d8669/proc rw,nosuid,nodev,noexec,relatime - proc proc rw
1214 1212 0:21 / /var/lib/snapd/hostfs/run/schroot/mount/eoan-amd64-63fe38b4-5c66-4151-90b4-f7c8810d8669/sys rw,nosuid,nodev,noexec,relatime - sysfs sysfs rw
1215 1212 0:22 / /var/lib/snapd/hostfs/run/schroot/mount/eoan-amd64-63fe38b4-5c66-4151-90b4-f7c8810d8669/dev/pts rw,nosuid,noexec,relatime - devpts devpts rw,gid=5,mode=620,ptmxmode=000
1216 1212 0:59 / /var/lib/snapd/hostfs/run/schroot/mount/eoan-amd64-63fe38b4-5c66-4151-90b4-f7c8810d8669/dev/shm rw,relatime master:647 - tmpfs tmpfs rw
1217 1212 253:0 /var/lib/sbuild/build /var/lib/snapd/hostfs/run/schroot/mount/eoan-amd64-63fe38b4-5c66-4151-90b4-f7c8810d8669/build rw,relatime - ext4 /dev/mapper/nvme0n1p6_crypt rw,errors=remount-ro,data=ordered
1218 1204 0:60 / /var/lib/snapd/hostfs/run/schroot/mount/eoan-amd64-84e4a6ec-58c6-45c1-9baa-9f4f27dd7c11 rw,relatime master:672 - overlay eoan-amd64 rw,lowerdir=/var/lib/schroot/union/underlay/eoan-amd64-84e4a6ec-58c6-45c1-9baa-9f4f27dd7c11,upperdir=/var/lib/schroot/union/overlay/eoan-amd64-84e4a6ec-58c6-45c1-9baa-9f4f27dd7c11/upper,workdir=/var/lib/schroot/union/overlay/eoan-amd64-84e4a6ec-58c6-45c1-9baa-9f4f27dd7c11/work
1219 1218 0:4 / /var/lib/snapd/hostfs/run/schroot/mount/eoan-amd64-84e4a6ec-58c6-45c1-9baa-9f4f27dd7c11/proc rw,nosuid,nodev,noexec,relatime - proc proc rw
1220 1218 0:21 / /var/lib/snapd/hostfs/run/schroot/mount/eoan-amd64-84e4a6ec-58c6-45c1-9baa-9f4f27dd7c11/sys rw,nosuid,nodev,noexec,relatime - sysfs sysfs rw
1221 1218 0:22 / /var/lib/snapd/hostfs/run/schroot/mount/eoan-amd64-84e4a6ec-58c6-45c1-9baa-9f4f27dd7c11/dev/pts rw,nosuid,noexec,relatime - devpts devpts rw,gid=5,mode=620,ptmxmode=000
1222 1218 0:62 / /var/lib/snapd/hostfs/run/schroot/mount/eoan-amd64-84e4a6ec-58c6-45c1-9baa-9f4f27dd7c11/dev/shm rw,relatime master:705 - tmpfs tmpfs rw
1223 1218 253:0 /var/lib/sbuild/build /var/lib/snapd/hostfs/run/schroot/mount/eoan-amd64-84e4a6ec-58c6-45c1-9baa-9f4f27dd7c11/build rw,relatime - ext4 /dev/mapper/nvme0n1p6_crypt rw,errors=remount-ro,data=ordered
1224 1204 0:63 / /var/lib/snapd/hostfs/run/user/121 rw,nosuid,nodev,relatime master:722 - tmpfs tmpfs rw,size=1630160k,mode=700,uid=121,gid=125
1225 1204 0:64 / /var/lib/snapd/hostfs/run/schroot/mount/eoan-amd64-b1b3a809-49bd-4da0-ac12-3abbfdc20cea rw,relatime master:740 - overlay eoan-amd64 rw,lowerdir=/var/lib/schroot/union/underlay/eoan-amd64-b1b3a809-49bd-4da0-ac12-3abbfdc20cea,upperdir=/var/lib/schroot/union/overlay/eoan-amd64-b1b3a809-49bd-4da0-ac12-3abbfdc20cea/upper,workdir=/var/lib/schroot/union/overlay/eoan-amd64-b1b3a809-49bd-4da0-ac12-3abbfdc20cea/work
1226 1225 0:4 / /var/lib/snapd/hostfs/run/schroot/mount/eoan-amd64-b1b3a809-49bd-4da0-ac12-3abbfdc20cea/proc rw,nosuid,nodev,noexec,relatime - proc proc rw
1227 1225 0:21 / /var/lib/snapd/hostfs/run/schroot/mount/eoan-amd64-b1b3a809-49bd-4da0-ac12-3abbfdc20cea/sys rw,nosuid,nodev,noexec,relatime - sysfs sysfs rw
1228 1225 0:22 / /var/lib/snapd/hostfs/run/schroot/mount/eoan-amd64-b1b3a809-49bd-4da0-ac12-3abbfdc20cea/dev/pts rw,nosuid,noexec,relatime - devpts devpts rw,gid=5,mode=620,ptmxmode=000
1229 1225 0:66 / /var/lib/snapd/hostfs/run/schroot/mount/eoan-amd64-b1b3a809-49bd-4da0-ac12-3abbfdc20cea/dev/shm rw,relatime master:773 - tmpfs tmpfs rw
1230 1225 253:0 /var/lib/sbuild/build /var/lib/snapd/hostfs/run/schroot/mount/eoan-amd64-b1b3a809-49bd-4da0-ac12-3abbfdc20cea/build rw,relatime - ext4 /dev/mapper/nvme0n1p6_crypt rw,errors=remount-ro,data=ordered
1231 1204 0:23 /snapd/ns /var/lib/snapd/hostfs/run/snapd/ns rw,nosuid,noexec,relatime - tmpfs tmpfs rw,size=1630164k,mode=755
1257 1198 7:0 / /var/lib/snapd/hostfs/snap/core/7917 ro,nodev,relatime master:33 - squashfs /dev/loop0 ro
1258 1198 7:1 / /var/lib/snapd/hostfs/snap/mumble/665 ro,nodev,relatime master:34 - squashfs /dev/loop1 ro
1259 1198 7:2 / /var/lib/snapd/hostfs/snap/lxd/12211 ro,nodev,relatime master:35 - squashfs /dev/loop2 ro
1260 1198 259:5 / /var/lib/snapd/hostfs/boot rw,relatime master:36 - ext4 /dev/nvme0n1p5 rw,data=ordered
1261 1260 259:2 / /var/lib/snapd/hostfs/boot/efi rw,relatime master:38 - vfat /dev/nvme0n1p2 rw,fmask=0077,dmask=0077,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro
1262 1198 7:3 / /var/lib/snapd/hostfs/snap/pdftk/9 ro,nodev,relatime master:37 - squashfs /dev/loop3 ro
1263 1198 7:4 / /var/lib/snapd/hostfs/snap/go/4520 ro,nodev,relatime master:39 - squashfs /dev/loop4 ro
1264 1198 7:5 / /var/lib/snapd/hostfs/snap/git-ubuntu/457 ro,nodev,relatime master:40 - squashfs /dev/loop5 ro
1265 1198 7:6 / /var/lib/snapd/hostfs/snap/go/4668 ro,nodev,relatime master:41 - squashfs /dev/loop6 ro
1266 1198 7:7 / /var/lib/snapd/hostfs/snap/core18/1192 ro,nodev,relatime master:42 - squashfs /dev/loop7 ro
1267 1198 7:8 / /var/lib/snapd/hostfs/snap/mumble/635 ro,nodev,relatime master:43 - squashfs /dev/loop8 ro
1268 1198 7:9 / /var/lib/snapd/hostfs/snap/git-ubuntu/458 ro,nodev,relatime master:44 - squashfs /dev/loop9 ro
1269 1198 7:10 / /var/lib/snapd/hostfs/snap/emoj/53 ro,nodev,relatime master:45 - squashfs /dev/loop10 ro
1270 1198 7:11 / /var/lib/snapd/hostfs/snap/lxd/12181 ro,nodev,relatime master:46 - squashfs /dev/loop11 ro
1271 1198 7:12 / /var/lib/snapd/hostfs/snap/core18/1223 ro,nodev,relatime master:47 - squashfs /dev/loop12 ro
1272 1198 7:13 / /var/lib/snapd/hostfs/snap/core/7713 ro,nodev,relatime master:48 - squashfs /dev/loop13 ro
1273 1198 0:52 / /var/lib/snapd/hostfs/var/lib/lxcfs rw,nosuid,nodev,relatime master:422 - fuse.lxcfs lxcfs rw,user_id=0,group_id=0,allow_other
1274 1198 253:0 /var/lib/schroot/chroot/bionic-amd64 /var/lib/snapd/hostfs/var/lib/schroot/union/underlay/bionic-amd64-0be49444-dd8d-4a70-9739-73d0ad97aba6 rw,relatime - ext4 /dev/mapper/nvme0n1p6_crypt rw,errors=remount-ro,data=ordered
1275 1198 253:0 /var/lib/schroot/chroot/eoan-amd64 /var/lib/snapd/hostfs/var/lib/schroot/union/underlay/eoan-amd64-63fe38b4-5c66-4151-90b4-f7c8810d8669 rw,relatime - ext4 /dev/mapper/nvme0n1p6_crypt rw,errors=remount-ro,data=ordered
1276 1198 253:0 /var/lib/schroot/chroot/eoan-amd64 /var/lib/snapd/hostfs/var/lib/schroot/union/underlay/eoan-amd64-84e4a6ec-58c6-45c1-9baa-9f4f27dd7c11 rw,relatime - ext4 /dev/mapper/nvme0n1p6_crypt rw,errors=remount-ro,data=ordered
1277 1198 253:0 /var/lib/schroot/chroot/eoan-amd64 /var/lib/snapd/hostfs/var/lib/schroot/union/underlay/eoan-amd64-b1b3a809-49bd-4da0-ac12-3abbfdc20cea rw,relatime - ext4 /dev/mapper/nvme0n1p6_crypt rw,errors=remount-ro,data=ordered
1292 1197 7:0 / / ro,nodev,relatime master:33 - squashfs /dev/loop0 ro
1293 1292 0:6 / /dev rw,nosuid,relatime master:2 - devtmpfs udev rw,size=8118128k,nr_inodes=2029532,mode=755
1294 1293 0:22 / /dev/pts rw,nosuid,noexec,relatime master:3 - devpts devpts rw,gid=5,mode=620,ptmxmode=000
1295 1293 0:26 / /dev/shm rw,nosuid,nodev master:4 - tmpfs tmpfs rw
1296 1293 0:45 / /dev/hugepages rw,relatime master:27 - hugetlbfs hugetlbfs rw,pagesize=2M
1297 1293 0:19 / /dev/mqueue rw,relatime master:28 - mqueue mqueue rw
1299 1292 253:0 /home /home rw,relatime master:1 - ext4 /dev/mapper/nvme0n1p6_crypt rw,errors=remount-ro,data=ordered
1300 1292 253:0 /root /root rw,relatime master:1 - ext4 /dev/mapper/nvme0n1p6_crypt rw,errors=remount-ro,data=ordered
1301 1292 0:4 / /proc rw,nosuid,nodev,noexec,relatime master:14 - proc proc rw
1302 1301 0:44 / /proc/sys/fs/binfmt_misc rw,relatime master:26 - autofs systemd-1 rw,fd=27,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=18637
1303 1302 0:47 / /proc/sys/fs/binfmt_misc rw,relatime master:49 - binfmt_misc binfmt_misc rw
1304 1292 0:21 / /sys rw,nosuid,nodev,noexec,relatime master:7 - sysfs sysfs rw
1305 1304 0:7 / /sys/kernel/security rw,nosuid,nodev,noexec,relatime master:8 - securityfs securityfs rw
1306 1304 0:28 / /sys/fs/cgroup ro,nosuid,nodev,noexec master:9 - tmpfs tmpfs ro,mode=755
1307 1306 0:29 / /sys/fs/cgroup/unified rw,nosuid,nodev,noexec,relatime master:10 - cgroup2 cgroup rw
1308 1306 0:30 / /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime master:11 - cgroup cgroup rw,xattr,name=systemd
1309 1306 0:33 / /sys/fs/cgroup/net_cls,net_prio rw,nosuid,nodev,noexec,relatime master:15 - cgroup cgroup rw,net_cls,net_prio
1310 1306 0:34 / /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime master:16 - cgroup cgroup rw,cpuset,clone_children
1311 1306 0:35 / /sys/fs/cgroup/rdma rw,nosuid,nodev,noexec,relatime master:17 - cgroup cgroup rw,rdma
1312 1306 0:36 / /sys/fs/cgroup/cpu,cpuacct rw,nosuid,nodev,noexec,relatime master:18 - cgroup cgroup rw,cpu,cpuacct
1313 1306 0:37 / /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime master:19 - cgroup cgroup rw,blkio
1314 1306 0:38 / /sys/fs/cgroup/hugetlb rw,nosuid,nodev,noexec,relatime master:20 - cgroup cgroup rw,hugetlb
1315 1306 0:39 / /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime master:21 - cgroup cgroup rw,devices
1316 1306 0:40 / /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime master:22 - cgroup cgroup rw,pids
1317 1306 0:41 / /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime master:23 - cgroup cgroup rw,perf_event
1318 1306 0:42 / /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime master:24 - cgroup cgroup rw,freezer
1319 1306 0:43 / /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime master:25 - cgroup cgroup rw,memory
1320 1304 0:31 / /sys/fs/pstore rw,nosuid,nodev,noexec,relatime master:12 - pstore pstore rw
1321 1304 0:32 / /sys/firmware/efi/efivars rw,nosuid,nodev,noexec,relatime master:13 - efivarfs efivarfs rw
1322 1304 0:8 / /sys/kernel/debug rw,relatime master:29 - debugfs debugfs rw
1323 1322 0:11 / /sys/kernel/debug/tracing rw,relatime master:30 - tracefs tracefs rw
1324 1304 0:20 / /sys/kernel/config rw,relatime master:31 - configfs configfs rw
1325 1304 0:46 / /sys/fs/fuse/connections rw,relatime master:32 - fusectl fusectl rw
1326 1292 253:0 /tmp /tmp rw,relatime master:1 - ext4 /dev/mapper/nvme0n1p6_crypt rw,errors=remount-ro,data=ordered
1327 1292 253:0 /var/snap /var/snap rw,relatime master:1 - ext4 /dev/mapper/nvme0n1p6_crypt rw,errors=remount-ro,data=ordered
1328 1292 253:0 /var/lib/snapd /var/lib/snapd rw,relatime master:1 - ext4 /dev/mapper/nvme0n1p6_crypt rw,errors=remount-ro,data=ordered
1329 1292 253:0 /var/tmp /var/tmp rw,relatime master:1 - ext4 /dev/mapper/nvme0n1p6_crypt rw,errors=remount-ro,data=ordered
1358 1292 253:0 /lib/modules /lib/modules rw,relatime master:1 - ext4 /dev/mapper/nvme0n1p6_crypt rw,errors=remount-ro,data=ordered
1359 1292 253:0 /lib/firmware /lib/firmware rw,relatime master:1 - ext4 /dev/mapper/nvme0n1p6_crypt rw,errors=remount-ro,data=ordered
1360 1292 253:0 /usr/src /usr/src rw,relatime master:1 - ext4 /dev/mapper/nvme0n1p6_crypt rw,errors=remount-ro,data=ordered
1361 1292 253:0 /var/log /var/log rw,relatime master:1 - ext4 /dev/mapper/nvme0n1p6_crypt rw,errors=remount-ro,data=ordered
1362 1292 253:0 /media /media rw,relatime shared:1 - ext4 /dev/mapper/nvme0n1p6_crypt rw,errors=remount-ro,data=ordered
1364 1292 253:0 /mnt /mnt rw,relatime master:1 - ext4 /dev/mapper/nvme0n1p6_crypt rw,errors=remount-ro,data=ordered
1367 1292 253:0 /snap /snap rw,relatime master:1 - ext4 /dev/mapper/nvme0n1p6_crypt rw,errors=remount-ro,data=ordered
1368 1367 7:0 / /snap/core/7917 ro,nodev,relatime master:33 - squashfs /dev/loop0 ro
1369 1367 7:1 / /snap/mumble/665 ro,nodev,relatime master:34 - squashfs /dev/loop1 ro
1370 1367 7:2 / /snap/lxd/12211 ro,nodev,relatime master:35 - squashfs /dev/loop2 ro
1371 1367 7:3 / /snap/pdftk/9 ro,nodev,relatime master:37 - squashfs /dev/loop3 ro
1372 1367 7:4 / /snap/go/4520 ro,nodev,relatime master:39 - squashfs /dev/loop4 ro
1373 1367 7:5 / /snap/git-ubuntu/457 ro,nodev,relatime master:40 - squashfs /dev/loop5 ro
1374 1367 7:6 / /snap/go/4668 ro,nodev,relatime master:41 - squashfs /dev/loop6 ro
1375 1367 7:7 / /snap/core18/1192 ro,nodev,relatime master:42 - squashfs /dev/loop7 ro
1376 1367 7:8 / /snap/mumble/635 ro,nodev,relatime master:43 - squashfs /dev/loop8 ro
1377 1367 7:9 / /snap/git-ubuntu/458 ro,nodev,relatime master:44 - squashfs /dev/loop9 ro
1378 1367 7:10 / /snap/emoj/53 ro,nodev,relatime master:45 - squashfs /dev/loop10 ro
1379 1367 7:11 / /snap/lxd/12181 ro,nodev,relatime master:46 - squashfs /dev/loop11 ro
1380 1367 7:12 / /snap/core18/1223 ro,nodev,relatime master:47 - squashfs /dev/loop12 ro
1381 1367 7:13 / /snap/core/7713 ro,nodev,relatime master:48 - squashfs /dev/loop13 ro
1382 1328 253:0 /var/lib/snapd/hostfs /var/lib/snapd/hostfs rw,relatime - ext4 /dev/mapper/nvme0n1p6_crypt rw,errors=remount-ro,data=ordered
1199 1326 253:0 /tmp/snap.lxd/tmp /tmp rw,relatime - ext4 /dev/mapper/nvme0n1p6_crypt rw,errors=remount-ro,data=ordered
1200 1294 0:67 / /dev/pts rw,relatime - devpts devpts rw,gid=5,mode=620,ptmxmode=666
1201 1293 0:67 /ptmx /dev/ptmx rw,relatime - devpts devpts rw,gid=5,mode=620,ptmxmode=666
1232 1198 253:0 /var/lib/schroot/chroot/eoan-amd64 /var/lib/snapd/hostfs/var/lib/schroot/union/underlay/eoan-amd64-d0fdda70-f078-4931-a699-9c8142c8b1a4 rw,relatime master:1 - ext4 /dev/mapper/nvme0n1p6_crypt rw,errors=remount-ro,data=ordered
1248 1204 0:68 / /var/lib/snapd/hostfs/run/schroot/mount/eoan-amd64-d0fdda70-f078-4931-a699-9c8142c8b1a4 rw,relatime master:806 - overlay eoan-amd64 rw,lowerdir=/var/lib/schroot/union/underlay/eoan-amd64-d0fdda70-f078-4931-a699-9c8142c8b1a4,upperdir=/var/lib/schroot/union/overlay/eoan-amd64-d0fdda70-f078-4931-a699-9c8142c8b1a4/upper,workdir=/var/lib/schroot/union/overlay/eoan-amd64-d0fdda70-f078-4931-a699-9c8142c8b1a4/work
1388 1248 0:4 / /var/lib/snapd/hostfs/run/schroot/mount/eoan-amd64-d0fdda70-f078-4931-a699-9c8142c8b1a4/proc rw,nosuid,nodev,noexec,relatime master:14 - proc proc rw
1403 1248 0:21 / /var/lib/snapd/hostfs/run/schroot/mount/eoan-amd64-d0fdda70-f078-4931-a699-9c8142c8b1a4/sys rw,nosuid,nodev,noexec,relatime master:7 - sysfs sysfs rw
1418 1248 0:22 / /var/lib/snapd/hostfs/run/schroot/mount/eoan-amd64-d0fdda70-f078-4931-a699-9c8142c8b1a4/dev/pts rw,nosuid,noexec,relatime master:3 - devpts devpts rw,gid=5,mode=620,ptmxmode=000
1433 1248 0:70 / /var/lib/snapd/hostfs/run/schroot/mount/eoan-amd64-d0fdda70-f078-4931-a699-9c8142c8b1a4/dev/shm rw,relatime master:839 - tmpfs tmpfs rw
1448 1248 253:0 /var/lib/sbuild/build /var/lib/snapd/hostfs/run/schroot/mount/eoan-amd64-d0fdda70-f078-4931-a699-9c8142c8b1a4/build rw,relatime master:1 - ext4 /dev/mapper/nvme0n1p6_crypt rw,errors=remount-ro,data=ordered
1449 1327 253:0 /var/snap/lxd/common/lxd/storage-pools /var/snap/lxd/common/lxd/storage-pools rw,relatime shared:856 - ext4 /dev/mapper/nvme0n1p6_crypt rw,errors=remount-ro,data=ordered
1450 1327 253:0 /var/snap/lxd/common/lxd/devices /var/snap/lxd/common/lxd/devices rw,relatime shared:857 master:1 - ext4 /dev/mapper/nvme0n1p6_crypt rw,errors=remount-ro,data=ordered
1571 1327 0:71 / /var/snap/lxd/common/ns rw,relatime master:866 - tmpfs tmpfs rw,size=1024k,mode=700
1558 1198 0:71 / /var/lib/snapd/hostfs/var/snap/lxd/common/ns rw,relatime master:866 - tmpfs tmpfs rw,size=1024k,mode=700
1574 1327 0:72 / /var/snap/lxd/common/shmounts rw,relatime shared:875 - tmpfs tmpfs rw,size=1024k,mode=711
1540 1292 7:2 /wrappers/kmod /bin/kmod ro,nodev,relatime master:35 - squashfs /dev/loop2 ro
1541 1292 259:5 / /boot rw,relatime master:36 - ext4 /dev/nvme0n1p5 rw,data=ordered
1249 1292 0:73 / /run rw,nosuid,nodev,relatime - tmpfs tmpfs rw,mode=755
1298 1292 0:74 / /etc rw,relatime - tmpfs tmpfs rw,mode=755
1616 1292 253:0 /usr/share/ca-certificates /usr/share/ca-certificates rw,relatime master:1 - ext4 /dev/mapper/nvme0n1p6_crypt rw,errors=remount-ro,data=ordered
1617 1198 0:6 / /var/lib/snapd/hostfs/dev rw,nosuid,relatime master:2 - devtmpfs udev rw,size=8118128k,nr_inodes=2029532,mode=755
1618 1198 0:4 / /var/lib/snapd/hostfs/proc rw,nosuid,nodev,noexec,relatime master:14 - proc proc rw
1619 1198 0:21 / /var/lib/snapd/hostfs/sys rw,nosuid,nodev,noexec,relatime master:7 - sysfs sysfs rw
1621 1574 0:76 / /var/snap/lxd/common/shmounts/lxcfs rw,nosuid,nodev,relatime shared:958 - fuse.lxcfs lxcfs rw,user_id=0,group_id=0,allow_other
1623 1574 0:77 / /var/snap/lxd/common/shmounts/containers rw,relatime shared:959 - tmpfs tmpfs rw,size=100k,mode=711
1625 1327 0:78 / /var/snap/lxd/common/lxd/devlxd rw,relatime - tmpfs tmpfs rw,size=100k,mode=755
1731 1449 0:80 / /var/snap/lxd/common/lxd/storage-pools/default/containers/actest rw,relatime shared:1064 - zfs default/containers/actest rw,xattr,posixacl
1736 1449 0:86 / /var/snap/lxd/common/lxd/storage-pools/default/containers/btest rw,relatime shared:1065 - zfs default/containers/btest rw,xattr,posixacl
1774 1449 0:96 / /var/snap/lxd/common/lxd/storage-pools/default/containers/build1 rw,relatime shared:1066 - zfs default/containers/build1 rw,xattr,posixacl
2300 1449 0:116 / /var/snap/lxd/common/lxd/storage-pools/default/containers/nm-test rw,relatime shared:1185 - zfs default/containers/nm-test rw,xattr,posixacl
2305 1449 0:125 / /var/snap/lxd/common/lxd/storage-pools/default/containers/oc-test rw,relatime shared:1186 - zfs default/containers/oc-test rw,xattr,posixacl
3277 1204 0:146 / /var/lib/snapd/hostfs/run/user/1000 rw,nosuid,nodev,relatime master:1413 - tmpfs tmpfs rw,size=1630160k,mode=700,uid=1000,gid=1000
3310 3277 0:147 / /var/lib/snapd/hostfs/run/user/1000/gvfs rw,nosuid,nodev,relatime master:1425 - fuse.gvfsd-fuse gvfsd-fuse rw,user_id=1000,group_id=1000

@stgraber
Contributor

@smoser good, so you do have the intermediate mount in there now and your containers are all properly underneath it, so that should work as expected.

Tentatively closing this issue.

@ColinIanKing

This is excellent news. Thanks for solving this @stgraber.

paride referenced this issue in paride/pycloudlib on Sep 2, 2020:
The two-step (stop, delete --force) LXD container deletion was
introduced in 3ec4cc2 to work around the
"ZFS dataset is busy" LXD bug [1], which was triggered by force-deleting
running containers.

We now drop the workaround because:

1. It's not needed anymore (the bug got fixed).
2. It was itself buggy: ephemeral containers are deleted when stopped,
   so the "lxc delete" step failed because the container was already
   gone.

[1] https://github.com/lxc/lxd/issues/4656
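
For context, the dropped workaround amounted to roughly the following (a sketch reconstructed from the commit message above, not the actual pycloudlib code; the container name is a placeholder):

lxc stop mycontainer            # stop first rather than force-deleting a running container
lxc delete --force mycontainer  # fails for ephemeral containers, which vanish on stop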
@matanox

matanox commented Jul 2, 2021

Getting the same error with LXC 4.15.

Error: Error deleting storage volume: Failed to run: zfs destroy -r lxd-storage-pool/virtual-machines/uv-vm-1.block: cannot destroy 'lxd-storage-pool/virtual-machines/uv-vm-1.block': dataset is busy

The VM is stopped before I try the delete command.
Is 4.15 an old version? How should I troubleshoot?

@YosuCadilla

Having a similar issue with: lxd 5.12-c63881f 24643

@justinclift

For anyone else experiencing this with LXD 5.x, this might be helpful. It shows where the problem likely stems from (useful for an LXD dev, maybe?) and how to fix it when it occurs: #11168 (comment)

At least, that's what worked for me. 😄
