Multi-architecture clustering #6380

Open
stgraber opened this issue Nov 1, 2019 · 26 comments

@stgraber (Member) commented Nov 1, 2019

LXD clustering can be used to turn multiple LXD servers into one large instance.
Right now, this assumes that all servers in the cluster are of the same architecture.

While that's certainly the common case, there are times when it would be useful to have a single LXD cluster which supports multiple architectures, usually a mix of Intel and Arm hardware.

To make this possible, we'd need to:

  • Register the native architecture of each server in the nodes database table
  • Update the generated /1.0 output to advertise all architectures supported by the cluster
  • Update the default placement rule such that if asked to deploy an image specific to a given architecture, we find the least busy cluster member that supports that architecture (see the usage sketch after this list)
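
To make the intended behaviour concrete, here's a hypothetical session against a mixed Intel/Arm cluster once the above is in place (the image alias and instance name are made up for illustration; the commands themselves are standard lxc CLI):

# Copy an arm64-only image into the cluster, then launch it; with the
# new placement rule, LXD would pick the least busy cluster member
# that supports aarch64.
lxc image copy ubuntu:18.04/arm64 local: --alias bionic-arm64
lxc launch bionic-arm64 c1
lxc list c1   # the LOCATION column shows which member it landed on
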
@DBaum1 commented Nov 5, 2019

Hi, another UT Austin student checking in. I'd like to claim this issue.

@stgraber (Member, Author) commented Nov 5, 2019

@DBaum1 assigned to you now, thanks!

@stgraber added this to the soon milestone Nov 9, 2019
@DBaum1 commented Nov 13, 2019

I have been having trouble running the tests for lxd.

My information:
OS: Ubuntu 18.04
Kernel: 4.15.0-69-generic
LXD version: 3.18
LXC version: 3.18
I have successfully generated the lxc and lxd binaries.

Whenever I try to run the integration tests with sudo -E ./main.sh from the test directory, I receive the message Missing dependencies: lxd lxc.
Running sudo -E make check from the repository root fails with:

--- PASS: TestVersionTestSuite/TestString (0.00s)
PASS
ok  	github.com/lxc/lxd/shared/version	(cached)
?   	github.com/lxc/lxd/test/deps	[no test files]
?   	github.com/lxc/lxd/test/macaroon-identity	[no test files]
Makefile:142: recipe for target 'check' failed
make: *** [check] Error 1

I have not made any modifications to the lxd source; this is a clean copy.

@freeekanayaka (Member) commented Nov 13, 2019

You might need to add ~/go/bin to your $PATH. At least that's what I have.
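
For reference, assuming the default Go workspace under ~/go, that's:

# Make binaries installed by `go get` (godeps, deadcode, golint, ...)
# visible to the test tooling.
export PATH="$HOME/go/bin:$PATH"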

@stgraber (Member, Author) commented Nov 13, 2019

sudo can be a weird beast; sudo -E abc doesn't behave the same as running sudo -E -s and then running abc, at least it doesn't for me.

I usually run sudo -E -s and then run make check from there. That also makes it easier to check whether the environment variables are indeed properly applied.
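
A minimal sketch of that workflow (run from the repository root, as above):

# Start a root shell that preserves the current environment...
sudo -E -s
# ...verify the Go-related variables actually made it through,
env | grep -E 'GOPATH|^PATH'
# and run the tests from that shell.
make check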

@DBaum1 commented Nov 13, 2019

Thank you for your suggestions; I am now able to run the tests. However, I am still experiencing problems with sudo -E ./main.sh, since the result is:

==> TEST DONE: static analysis
==> Test result: failure

I also get these warnings:

WARN[11-13|22:23:32] Couldn't find the CGroup blkio.weight, I/O weight limits will be ignored. 
WARN[11-13|22:23:32] CGroup memory swap accounting is disabled, swap limits will be ignored. 

And compile errors reporting that various things are undefined:

# _/home/dinah/lxd/lxd/db_test
lxd/db/containers_test.go:453:14: tx.Tx undefined (type *db.ClusterTx has no field or method Tx, but does have db.tx)
lxd/db/migration_test.go:124:38: too many errors
...
lxd/cluster/gateway_test.go:130:17: undefined: cluster.TLSClientConfig
lxd/cluster/gateway_test.go:160:23: gateway.RaftNodes undefined (type *cluster.Gateway has no field or method RaftNodes)
lxd/cluster/heartbeat_test.go:150:43: target.Cert undefined (type *cluster.Gateway has no field or method Cert, but does have cluster.cert)
lxd/cluster/heartbeat_test.go:164:28: gateway.IsLeader undefined (type *cluster.Gateway has no field or method IsLeader, but does have cluster.isLeader)
lxd/cluster/raft_test.go:20:19: too many errors
# _/home/dinah/lxd/lxd/db/query_test
lxd/db/query/dump_test.go:54:12: undefined: query.DumpParseSchema
lxd/db/query/dump_test.go:56:15: undefined: query.DumpTable
lxd/db/query/dump_test.go:169:19: undefined: query.DumpSchemaTable
# _/home/dinah/lxd/lxd/endpoints_test
lxd/endpoints/cluster_test.go:18:30: endpoints.Up undefined (type *endpoints.Endpoints has no field or method Up, but does have endpoints.up)
lxd/endpoints/cluster_test.go:20:40: not enough arguments in call to httpGetOverTLSSocket
lxd/endpoints/cluster_test.go:20:50: endpoints.NetworkAddressAndCert undefined (type *endpoints.Endpoints has no field or method NetworkAddressAndCert)
@stgraber (Member, Author) commented Nov 13, 2019

Ah yeah, this tends to happen if you're not storing your code in ~/go/src/github.com/lxc/lxd; Go is weirdly picky about that.

@stgraber (Member, Author) commented Nov 13, 2019

In most cases, it's much easier to just give up on storing things where you want and instead keep your working copy of LXD at the expected spot in the Go path (~/go/src/github.com/lxc/lxd), running the tests from there too.
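
For example, with a checkout currently sitting at ~/lxd (as in the error paths above), moving it into place looks like this (adjust the source path to wherever your checkout actually lives):

# Create the expected location in the Go workspace and move the
# working copy there, then work from the new location.
mkdir -p ~/go/src/github.com/lxc
mv ~/lxd ~/go/src/github.com/lxc/lxd
cd ~/go/src/github.com/lxc/lxd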

@DBaum1 commented Nov 15, 2019

Thank you, I am now running it from ~/go/src/github.com/lxc/lxd and my GOPATH is /home/dinah/go.
I have also added ~/go/bin to my $PATH, which solved the problems with undefined symbols. However, I am still getting an error from:

ok  	github.com/lxc/lxd/shared	(cached)
?   	github.com/lxc/lxd/shared/api	[no test files]
?   	github.com/lxc/lxd/shared/cancel	[no test files]
?   	github.com/lxc/lxd/shared/cmd	[no test files]
?   	github.com/lxc/lxd/shared/containerwriter	[no test files]
?   	github.com/lxc/lxd/shared/dnsutil	[no test files]
?   	github.com/lxc/lxd/shared/eagain	[no test files]
?   	github.com/lxc/lxd/shared/generate	[no test files]
...
?   	github.com/lxc/lxd/test/deps	[no test files]
?   	github.com/lxc/lxd/test/macaroon-identity	[no test files]
make: *** [Makefile:152: check] Error 1

It seems the script is not able to locate some test files, even though I have confirmed that there are files (though I guess not test files?) at those paths.
It also fails TestConvertNetworkConfig, where it receives an "unexpected error: creating the container failed."
github.com/lxc/lxd/lxc-to-lxd also fails.
Is there a certain lxd configuration I should be using before running the tests?

@stgraber (Member, Author) commented Nov 15, 2019

@DBaum1 can you show the full output for that? What you listed above looks correct, other than the Makefile failing.

@DBaum1 commented Nov 15, 2019

Full output:
(base) dinah@liopleurodon:~/go/src/github.com/lxc/lxd$ make check
go get -t -v -d ./...
CC=cc go install -v -tags "libsqlite3"  ./...
CGO_ENABLED=0 go install -v -tags netgo ./lxd-p2c
go install -v -tags agent ./lxd-agent
LXD built successfully
go get -v -x github.com/rogpeppe/godeps
WORK=/tmp/go-build664948046
rm -r $WORK/b001/
go get -v -x github.com/tsenart/deadcode
WORK=/tmp/go-build135340509
rm -r $WORK/b001/
go get -v -x github.com/golang/lint/golint
WORK=/tmp/go-build928140067
rm -r $WORK/b001/
go test -v -tags "libsqlite3"  ./...
?   	github.com/lxc/lxd/client	[no test files]
?   	github.com/lxc/lxd/fuidshift	[no test files]
=== RUN   TestDotPrefixMatch
--- PASS: TestDotPrefixMatch (0.00s)
=== RUN   TestShouldShow
--- PASS: TestShouldShow (0.00s)
=== RUN   TestColumns
--- PASS: TestColumns (0.00s)
=== RUN   TestInvalidColumns
--- PASS: TestInvalidColumns (0.00s)
=== RUN   TestExpandAliases
--- PASS: TestExpandAliases (0.00s)
=== RUN   TestUtilsTestSuite
=== RUN   TestUtilsTestSuite/TestGetExistingAliases
=== RUN   TestUtilsTestSuite/TestGetExistingAliasesEmpty
=== RUN   TestUtilsTestSuite/Test_stringList
=== RUN   TestUtilsTestSuite/Test_stringList_empty_strings
=== RUN   TestUtilsTestSuite/Test_stringList_sort_by_column
--- PASS: TestUtilsTestSuite (0.04s)
    --- PASS: TestUtilsTestSuite/TestGetExistingAliases (0.00s)
    --- PASS: TestUtilsTestSuite/TestGetExistingAliasesEmpty (0.00s)
    --- PASS: TestUtilsTestSuite/Test_stringList (0.00s)
    --- PASS: TestUtilsTestSuite/Test_stringList_empty_strings (0.00s)
    --- PASS: TestUtilsTestSuite/Test_stringList_sort_by_column (0.00s)
PASS
ok  	github.com/lxc/lxd/lxc	(cached)
?   	github.com/lxc/lxd/lxc/config	[no test files]
?   	github.com/lxc/lxd/lxc/utils	[no test files]
=== RUN   TestValidateConfig
2019/11/15 13:28:19 Running test #0: container migrated
Checking whether container has already been migrated
2019/11/15 13:28:19 Running test #1: container name missmatch (1)
Checking whether container has already been migrated
2019/11/15 13:28:19 Running test #2: container name missmatch (2)
Checking whether container has already been migrated
2019/11/15 13:28:19 Running test #3: incomplete AppArmor support (1)
Checking whether container has already been migrated
Validating whether incomplete AppArmor support is enabled
2019/11/15 13:28:19 Running test #4: incomplete AppArmor support (2)
Checking whether container has already been migrated
Validating whether incomplete AppArmor support is enabled
2019/11/15 13:28:19 Running test #5: missing minimal /dev filesystem
Checking whether container has already been migrated
Validating whether incomplete AppArmor support is enabled
Validating whether mounting a minimal /dev is enabled
2019/11/15 13:28:19 Running test #6: missing lxc.rootfs key
Checking whether container has already been migrated
Validating whether incomplete AppArmor support is enabled
Validating whether mounting a minimal /dev is enabled
Validating container rootfs
2019/11/15 13:28:19 Running test #7: non-existent rootfs path
Checking whether container has already been migrated
Validating whether incomplete AppArmor support is enabled
Validating whether mounting a minimal /dev is enabled
Validating container rootfs
--- PASS: TestValidateConfig (0.00s)
=== RUN   TestConvertNetworkConfig
2019/11/15 13:28:19 Running test #0: loopback only
--- FAIL: TestConvertNetworkConfig (0.00s)
	main_migrate_test.go:216: 
			Error Trace:	main_migrate_test.go:216
			Error:      	Received unexpected error:
			            	creating the container failed
			Test:       	TestConvertNetworkConfig
=== RUN   TestConvertStorageConfig
2019/11/15 13:28:19 Running test #0: invalid path
Processing storage configuration
2019/11/15 13:28:19 Running test #1: ignored default mounts
Processing storage configuration
2019/11/15 13:28:19 Running test #2: ignored mounts
Processing storage configuration
2019/11/15 13:28:19 Running test #3: valid mount configuration
Processing storage configuration
--- PASS: TestConvertStorageConfig (0.00s)
=== RUN   TestGetRootfs
2019/11/15 13:28:19 Running test #0: missing lxc.rootfs key
2019/11/15 13:28:19 Running test #1: valid lxc.rootfs key (1)
2019/11/15 13:28:19 Running test #2: valid lxc.rootfs key (2)
--- PASS: TestGetRootfs (0.00s)
FAIL
FAIL	github.com/lxc/lxd/lxc-to-lxd	0.013s
=== RUN   TestCluster_Bootstrap
--- SKIP: TestCluster_Bootstrap (0.00s)
	api_cluster_test.go:23: issue #6122
=== RUN   TestCluster_Get
--- SKIP: TestCluster_Get (0.00s)
	api_cluster_test.go:49: issue #6122
=== RUN   TestCluster_GetMemberConfig
--- SKIP: TestCluster_GetMemberConfig (0.00s)
	api_cluster_test.go:63: issue #6122
=== RUN   TestCluster_Join
--- SKIP: TestCluster_Join (0.00s)
	api_cluster_test.go:97: issue #6122
=== RUN   TestCluster_JoinServerAddress
--- SKIP: TestCluster_JoinServerAddress (0.00s)
	api_cluster_test.go:200: issue #6122
=== RUN   TestCluster_JoinDifferentServerAddress
--- SKIP: TestCluster_JoinDifferentServerAddress (0.00s)
	api_cluster_test.go:298: issue #6122
=== RUN   TestCluster_JoinSameServerAddress
--- SKIP: TestCluster_JoinSameServerAddress (0.00s)
	api_cluster_test.go:352: issue #6122
=== RUN   TestCluster_JoinUnauthorized
--- SKIP: TestCluster_JoinUnauthorized (0.00s)
	api_cluster_test.go:388: issue #6122
=== RUN   TestCluster_Leave
--- SKIP: TestCluster_Leave (0.00s)
	api_cluster_test.go:449: issue #6122
=== RUN   TestCluster_LeaveWithImages
--- SKIP: TestCluster_LeaveWithImages (0.00s)
	api_cluster_test.go:476: issue #6122
=== RUN   TestCluster_LeaveForce
--- SKIP: TestCluster_LeaveForce (0.00s)
	api_cluster_test.go:507: issue #6122
=== RUN   TestCluster_NodeRename
--- SKIP: TestCluster_NodeRename (0.00s)
	api_cluster_test.go:572: issue #6122
=== RUN   TestNetworksCreate_TargetNode
--- SKIP: TestNetworksCreate_TargetNode (0.00s)
	api_networks_test.go:13: issue #6122
=== RUN   TestNetworksCreate_NotDefined
--- SKIP: TestNetworksCreate_NotDefined (0.00s)
	api_networks_test.go:51: issue #6122
=== RUN   TestNetworksCreate_MissingNodes
--- SKIP: TestNetworksCreate_MissingNodes (0.00s)
	api_networks_test.go:71: issue #6122
=== RUN   TestStoragePoolsCreate_TargetNode
--- SKIP: TestStoragePoolsCreate_TargetNode (0.00s)
	api_storage_pools_test.go:13: issue #6122
=== RUN   TestStoragePoolsCreate_NotDefined
--- SKIP: TestStoragePoolsCreate_NotDefined (0.00s)
	api_storage_pools_test.go:51: issue #6122
=== RUN   TestStoragePoolsCreate_MissingNodes
--- SKIP: TestStoragePoolsCreate_MissingNodes (0.00s)
	api_storage_pools_test.go:72: issue #6122
=== RUN   TestContainerTestSuite
=== RUN   TestContainerTestSuite/TestContainer_IsPrivileged_Privileged
=== RUN   TestContainerTestSuite/TestContainer_IsPrivileged_Unprivileged
=== RUN   TestContainerTestSuite/TestContainer_LoadFromDB
=== RUN   TestContainerTestSuite/TestContainer_LogPath
=== RUN   TestContainerTestSuite/TestContainer_Path_Regular
=== RUN   TestContainerTestSuite/TestContainer_ProfilesDefault
=== RUN   TestContainerTestSuite/TestContainer_ProfilesMulti
=== RUN   TestContainerTestSuite/TestContainer_ProfilesOverwriteDefaultNic
=== RUN   TestContainerTestSuite/TestContainer_Rename
=== RUN   TestContainerTestSuite/TestContainer_findIdmap_isolated
=== RUN   TestContainerTestSuite/TestContainer_findIdmap_maxed
=== RUN   TestContainerTestSuite/TestContainer_findIdmap_mixed
=== RUN   TestContainerTestSuite/TestContainer_findIdmap_raw
--- PASS: TestContainerTestSuite (83.17s)
    --- PASS: TestContainerTestSuite/TestContainer_IsPrivileged_Privileged (6.57s)
    --- PASS: TestContainerTestSuite/TestContainer_IsPrivileged_Unprivileged (6.30s)
    --- PASS: TestContainerTestSuite/TestContainer_LoadFromDB (5.62s)
    --- PASS: TestContainerTestSuite/TestContainer_LogPath (5.21s)
    --- PASS: TestContainerTestSuite/TestContainer_Path_Regular (6.48s)
    --- PASS: TestContainerTestSuite/TestContainer_ProfilesDefault (7.30s)
    --- PASS: TestContainerTestSuite/TestContainer_ProfilesMulti (6.64s)
    --- PASS: TestContainerTestSuite/TestContainer_ProfilesOverwriteDefaultNic (6.70s)
    --- PASS: TestContainerTestSuite/TestContainer_Rename (7.09s)
    --- PASS: TestContainerTestSuite/TestContainer_findIdmap_isolated (6.41s)
    --- PASS: TestContainerTestSuite/TestContainer_findIdmap_maxed (9.49s)
    --- PASS: TestContainerTestSuite/TestContainer_findIdmap_mixed (5.72s)
    --- PASS: TestContainerTestSuite/TestContainer_findIdmap_raw (3.62s)
=== RUN   TestIntegration_UnixSocket
--- PASS: TestIntegration_UnixSocket (3.31s)
	testing.go:36: 10:39:05.970 info Kernel uid/gid map:
	testing.go:36: 10:39:05.970 info  - u 0 0 4294967295
	testing.go:36: 10:39:05.970 info  - g 0 0 4294967295
	testing.go:36: 10:39:05.970 info Configured LXD uid/gid map:
	testing.go:36: 10:39:05.971 info  - u 0 100000 65536
	testing.go:36: 10:39:05.971 info  - g 0 100000 65536
	testing.go:36: 10:39:05.972 warn Per-container AppArmor profiles are disabled because the mac_admin capability is missing
	testing.go:36: 10:39:05.972 warn CGroup memory swap accounting is disabled, swap limits will be ignored.
	testing.go:36: 10:39:05.972 info LXD 3.18 is starting in mock mode path=/tmp/lxd_testrun_595725832
	testing.go:36: 10:39:05.973 info Kernel uid/gid map:
	testing.go:36: 10:39:05.973 info  - u 0 0 4294967295
	testing.go:36: 10:39:05.973 info  - g 0 0 4294967295
	testing.go:36: 10:39:05.973 info Configured LXD uid/gid map:
	testing.go:36: 10:39:05.973 info  - u 0 100000 65536
	testing.go:36: 10:39:05.973 info  - g 0 100000 65536
	testing.go:36: 10:39:05.974 warn Per-container AppArmor profiles are disabled because the mac_admin capability is missing
	testing.go:36: 10:39:05.974 warn CGroup memory swap accounting is disabled, swap limits will be ignored.
	testing.go:36: 10:39:05.974 info Kernel features:
	testing.go:36: 10:39:05.974 dbug Failed to attach to host network namespace
	testing.go:36: 10:39:05.974 info  - netnsid-based network retrieval: no
	testing.go:36: 10:39:05.974 info  - uevent injection: no
	testing.go:36: 10:39:05.975 info  - seccomp listener: no
	testing.go:36: 10:39:05.975 info  - seccomp listener continue syscalls: no
	testing.go:36: 10:39:05.975 info  - unprivileged file capabilities: no
	testing.go:36: 10:39:05.979 info  - shiftfs support: no
	testing.go:36: 10:39:05.979 info Initializing local database
	testing.go:36: 10:39:06.475 dbug Database error: failed to begin transaction: sql: database is closed
	testing.go:36: 10:39:06.475 warn Failed to delete operation 15d92261-48e5-4db9-8960-4311d95851cc: failed to begin transaction: sql: database is closed
	testing.go:36: 10:39:08.352 dbug Initializing database gateway
	testing.go:36: 10:39:08.352 dbug Start database node id=1 address=
	testing.go:36: 10:39:08.701 info Starting /dev/lxd handler:
	testing.go:36: 10:39:08.701 info  - binding devlxd socket socket=/tmp/lxd-sys-os-test-159120506/devlxd/sock
	testing.go:36: 10:39:08.701 info REST API daemon:
	testing.go:36: 10:39:08.701 info  - binding Unix socket socket=/tmp/lxd-sys-os-test-159120506/unix.socket
	testing.go:36: 10:39:08.701 info Initializing global database
	testing.go:36: 10:39:08.703 dbug Dqlite: connected address=1 id=1 attempt=0
	testing.go:36: 10:39:09.089 info Initializing storage pools
	testing.go:36: 10:39:09.089 dbug No existing storage pools detected
	testing.go:36: 10:39:09.090 info Initializing networks
	testing.go:36: 10:39:09.136 dbug New task Operation: 8fde0039-1fb4-4579-bab2-06e2bcd09aa9
	testing.go:36: 10:39:09.137 info Pruning leftover image files
	testing.go:36: 10:39:09.137 dbug Started task operation: 8fde0039-1fb4-4579-bab2-06e2bcd09aa9
	testing.go:36: 10:39:09.137 info Done pruning leftover image files
	testing.go:36: 10:39:09.138 info Loading daemon configuration
	testing.go:36: 10:39:09.139 dbug Failure for task operation: 8fde0039-1fb4-4579-bab2-06e2bcd09aa9: Unable to list the images directory: open /tmp/lxd_testrun_595725832/images: no such file or directory
	testing.go:36: 10:39:09.141 dbug Connecting to a local LXD over a Unix socket
	testing.go:36: 10:39:09.142 dbug Sending request to LXD method=GET url=http://unix.socket/1.0 etag=
	testing.go:36: 10:39:09.142 dbug Handling method=GET url=/1.0 ip=@ user=
	testing.go:36: 10:39:09.146 dbug Got response struct from LXD
	testing.go:36: 10:39:09.146 dbug 
			{
				"config": {},
				"api_extensions": [
					"storage_zfs_remove_snapshots",
					"container_host_shutdown_timeout",
					"container_stop_priority",
					"container_syscall_filtering",
					"auth_pki",
					"container_last_used_at",
					"etag",
					"patch",
					"usb_devices",
					"https_allowed_credentials",
					"image_compression_algorithm",
					"directory_manipulation",
					"container_cpu_time",
					"storage_zfs_use_refquota",
					"storage_lvm_mount_options",
					"network",
					"profile_usedby",
					"container_push",
					"container_exec_recording",
					"certificate_update",
					"container_exec_signal_handling",
					"gpu_devices",
					"container_image_properties",
					"migration_progress",
					"id_map",
					"network_firewall_filtering",
					"network_routes",
					"storage",
					"file_delete",
					"file_append",
					"network_dhcp_expiry",
					"storage_lvm_vg_rename",
					"storage_lvm_thinpool_rename",
					"network_vlan",
					"image_create_aliases",
					"container_stateless_copy",
					"container_only_migration",
					"storage_zfs_clone_copy",
					"unix_device_rename",
					"storage_lvm_use_thinpool",
					"storage_rsync_bwlimit",
					"network_vxlan_interface",
					"storage_btrfs_mount_options",
					"entity_description",
					"image_force_refresh",
					"storage_lvm_lv_resizing",
					"id_map_base",
					"file_symlinks",
					"container_push_target",
					"network_vlan_physical",
					"storage_images_delete",
					"container_edit_metadata",
					"container_snapshot_stateful_migration",
					"storage_driver_ceph",
					"storage_ceph_user_name",
					"resource_limits",
					"storage_volatile_initial_source",
					"storage_ceph_force_osd_reuse",
					"storage_block_filesystem_btrfs",
					"resources",
					"kernel_limits",
					"storage_api_volume_rename",
					"macaroon_authentication",
					"network_sriov",
					"console",
					"restrict_devlxd",
					"migration_pre_copy",
					"infiniband",
					"maas_network",
					"devlxd_events",
					"proxy",
					"network_dhcp_gateway",
					"file_get_symlink",
					"network_leases",
					"unix_device_hotplug",
					"storage_api_local_volume_handling",
					"operation_description",
					"clustering",
					"event_lifecycle",
					"storage_api_remote_volume_handling",
					"nvidia_runtime",
					"container_mount_propagation",
					"container_backup",
					"devlxd_images",
					"container_local_cross_pool_handling",
					"proxy_unix",
					"proxy_udp",
					"clustering_join",
					"proxy_tcp_udp_multi_port_handling",
					"network_state",
					"proxy_unix_dac_properties",
					"container_protection_delete",
					"unix_priv_drop",
					"pprof_http",
					"proxy_haproxy_protocol",
					"network_hwaddr",
					"proxy_nat",
					"network_nat_order",
					"container_full",
					"candid_authentication",
					"backup_compression",
					"candid_config",
					"nvidia_runtime_config",
					"storage_api_volume_snapshots",
					"storage_unmapped",
					"projects",
					"candid_config_key",
					"network_vxlan_ttl",
					"container_incremental_copy",
					"usb_optional_vendorid",
					"snapshot_scheduling",
					"container_copy_project",
					"clustering_server_address",
					"clustering_image_replication",
					"container_protection_shift",
					"snapshot_expiry",
					"container_backup_override_pool",
					"snapshot_expiry_creation",
					"network_leases_location",
					"resources_cpu_socket",
					"resources_gpu",
					"resources_numa",
					"kernel_features",
					"id_map_current",
					"event_location",
					"storage_api_remote_volume_snapshots",
					"network_nat_address",
					"container_nic_routes",
					"rbac",
					"cluster_internal_copy",
					"seccomp_notify",
					"lxc_features",
					"container_nic_ipvlan",
					"network_vlan_sriov",
					"storage_cephfs",
					"container_nic_ipfilter",
					"resources_v2",
					"container_exec_user_group_cwd",
					"container_syscall_intercept",
					"container_disk_shift",
					"storage_shifted",
					"resources_infiniband",
					"daemon_storage",
					"instances",
					"image_types",
					"resources_disk_sata",
					"clustering_roles",
					"images_expiry",
					"resources_network_firmware",
					"backup_compression_algorithm",
					"ceph_data_pool_name",
					"container_syscall_intercept_mount",
					"compression_squashfs",
					"container_raw_mount",
					"container_nic_routed",
					"container_syscall_intercept_mount_fuse"
				],
				"api_status": "stable",
				"api_version": "1.0",
				"auth": "trusted",
				"public": false,
				"auth_methods": [
					"tls"
				],
				"environment": {
					"addresses": [],
					"architectures": [
						"x86_64",
						"i686"
					],
					"certificate": "-----BEGIN CERTIFICATE-----\nMIIFzjCCA7igAwIBAgIRAKnCQRdpkZ86oXYOd9hGrPgwCwYJKoZIhvcNAQELMB4x\nHDAaBgNVBAoTE2xpbnV4Y29udGFpbmVycy5vcmcwHhcNMTUwNzE1MDQ1NjQ0WhcN\nMjUwNzEyMDQ1NjQ0WjAeMRwwGgYDVQQKExNsaW51eGNvbnRhaW5lcnMub3JnMIIC\nIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAyViJkCzoxa1NYilXqGJog6xz\nlSm4xt8KIzayc0JdB9VxEdIVdJqUzBAUtyCS4KZ9MbPmMEOX9NbBASL0tRK58/7K\nScq99Kj4XbVMLU1P/y5aW0ymnF0OpKbG6unmgAI2k/duRlbYHvGRdhlswpKl0Yst\nl8i2kXOK0Rxcz90FewcEXGSnIYW21sz8YpBLfIZqOx6XEV36mOdi3MLrhUSAhXDw\nPay33Y7NonCQUBtiO7BT938cqI14FJrWdKon1UnODtzONcVBLTWtoe7D41+mx7EE\nTaq5OPxBSe0DD6KQcPOZ7ZSJEhIqVKMvzLyiOJpyShmhm4OuGNoAG6jAuSij/9Kc\naLU4IitcrvFOuAo8M9OpiY9ZCR7Gb/qaPAXPAxE7Ci3f9DDNKXtPXDjhj3YG01+h\nfNXMW3kCkMImn0A/+mZUMdCL87GWN2AN3Do5qaIc5XVEt1gp+LVqJeMoZ/lAeZWT\nIbzcnkneOzE25m+bjw3r3WlR26amhyrWNwjGzRkgfEpw336kniX/GmwaCNgdNk+g\n5aIbVxIHO0DbgkDBtdljR3VOic4djW/LtUIYIQ2egnPPyRR3fcFI+x5EQdVQYUXf\njpGIwovUDyG0Lkam2tpdeEXvLMZr8+Lhzu+H6vUFSj3cz6gcw/Xepw40FOkYdAI9\nLYB6nwpZLTVaOqZCJ2ECAwEAAaOCAQkwggEFMA4GA1UdDwEB/wQEAwIAoDATBgNV\nHSUEDDAKBggrBgEFBQcDATAMBgNVHRMBAf8EAjAAMIHPBgNVHREEgccwgcSCCVVi\ndW50dVByb4IRMTAuMTY3LjE2MC4xODMvMjSCHzIwMDE6MTVjMDo2NzM1OmVlMDA6\nOmU6ZTMxMy8xMjiCKWZkNTc6Yzg3ZDpmMWVlOmVlMDA6MjFkOjdkZmY6ZmUwOToz\nNzUzLzY0gikyMDAxOjE1YzA6NjczNTplZTAwOjIxZDo3ZGZmOmZlMDk6Mzc1My82\nNIIbZmU4MDo6MjFkOjdkZmY6ZmUwOTozNzUzLzY0ghAxOTIuMTY4LjEyMi4xLzI0\nMAsGCSqGSIb3DQEBCwOCAgEAmcJUSBH7cLw3auEEV1KewtdqY1ARVB/pafAtbe9F\n7ZKBbxUcS7cP3P1hRs5FH1bH44bIJKHxckctNUPqvC+MpXSryKinQ5KvGPNjGdlW\n6EPlQr23btizC6hRdQ6RjEkCnQxhyTLmQ9n78nt47hjA96rFAhCUyfPdv9dI4Zux\nbBTJekhCx5taamQKoxr7tql4Y2TchVlwASZvOfar8I0GxBRFT8w9IjckOSLoT9/s\nOhlvXpeoxxFT7OHwqXEXdRUvw/8MGBo6JDnw+J/NGDBw3Z0goebG4FMT//xGSHia\nczl3A0M0flk4/45L7N6vctwSqi+NxVaJRKeiYPZyzOO9K/d+No+WVBPwKmyP8icQ\nb7FGTelPJOUolC6kmoyM+vyaNUoU4nz6lgOSHAtuqGNDWZWuX/gqzZw77hzDIgkN\nqisOHZWPVlG/iUh1JBkbglBaPeaa3zf0XwSdgwwf4v8Z+YtEiRqkuFgQY70eQKI/\nCIkj1p0iW5IBEsEAGUGklz4ZwqJwH3lQIqDBzIgHe3EP4cXaYsx6oYhPSDdHLPv4\nHMZhl05DP75CEkEWRD0AIaL7SHdyuYUmCZ2zdrMI7TEDrAqcUuPbYpHcdJ2wnYmi\n2G8XHJibfu4PCpIm1J8kPL8rqpdgW3moKR8Mp0HJQOH4tSBr1Ep7xNLP1wg6PIe+\np7U=\n-----END CERTIFICATE-----\n",
					"certificate_fingerprint": "5325b921edf26720953bff015b69900943315b5db69f3c6e17c25a694875c5b8",
					"driver": "lxc",
					"driver_version": "3.0.3",
					"kernel": "Linux",
					"kernel_architecture": "x86_64",
					"kernel_features": {
						"netnsid_getifaddrs": "false",
						"seccomp_listener": "false",
						"seccomp_listener_continue": "false",
						"shiftfs": "false",
						"uevent_injection": "false",
						"unpriv_fscaps": "false"
					},
					"kernel_version": "4.15.0-70-generic",
					"lxc_features": {
						"mount_injection_file": "false",
						"network_gateway_device_route": "false",
						"network_ipvlan": "false",
						"network_l2proxy": "false",
						"network_phys_macvlan_mtu": "false",
						"network_veth_router": "false",
						"seccomp_notify": "false"
					},
					"project": "default",
					"server": "lxd",
					"server_clustered": false,
					"server_name": "liopleurodon",
					"server_pid": 10847,
					"server_version": "3.18",
					"storage": "mock",
					"storage_version": ""
				}
			}
	testing.go:36: 10:39:09.147 dbug Sending request to LXD method=GET url=http://unix.socket/1.0 etag=
	testing.go:36: 10:39:09.147 dbug Handling user= method=GET url=/1.0 ip=@
	testing.go:36: 10:39:09.150 dbug Got response struct from LXD
	testing.go:36: 10:39:09.151 dbug 
			[response struct identical to the one shown above; omitted]
	testing.go:36: 10:39:09.151 info Starting shutdown sequence
	testing.go:36: 10:39:09.151 info Stopping REST API handler:
	testing.go:36: 10:39:09.151 info  - closing socket socket=/tmp/lxd-sys-os-test-159120506/unix.socket
	testing.go:36: 10:39:09.151 info Stopping /dev/lxd handler:
	testing.go:36: 10:39:09.151 info  - closing socket socket=/tmp/lxd-sys-os-test-159120506/devlxd/sock
	testing.go:36: 10:39:09.152 info Closing the database
	testing.go:36: 10:39:09.159 dbug Stop database gateway
	testing.go:36: 10:39:09.281 info Unmounting temporary filesystems
	testing.go:36: 10:39:09.281 info Done unmounting temporary filesystems
=== RUN   TestCredsSendRecv
--- PASS: TestCredsSendRecv (0.00s)
=== RUN   TestHttpRequest
--- PASS: TestHttpRequest (3.40s)
=== RUN   TestParseAddr
2019/11/15 10:39:12 Running test #0: Single port
2019/11/15 10:39:12 Running test #1: Multiple ports
2019/11/15 10:39:12 Running test #2: Port range
2019/11/15 10:39:12 Running test #3: Mixed ports and port ranges
2019/11/15 10:39:12 Running test #4: UDP
2019/11/15 10:39:12 Running test #5: Unix socket
2019/11/15 10:39:12 Running test #6: Abstract unix socket
2019/11/15 10:39:12 Running test #7: Unknown connection type
2019/11/15 10:39:12 Running test #8: Valid IPv6 address (1)
2019/11/15 10:39:12 Running test #9: Valid IPv6 address (2)
2019/11/15 10:39:12 Running test #10: Valid IPv6 address (3)
2019/11/15 10:39:12 Running test #11: Valid IPv6 address (4)
2019/11/15 10:39:12 Running test #12: Invalid IPv6 address (1)
2019/11/15 10:39:12 Running test #13: Invalid IPv6 address (2)
--- PASS: TestParseAddr (0.00s)
=== RUN   Test_removing_a_profile_deletes_associated_configuration_entries
--- PASS: Test_removing_a_profile_deletes_associated_configuration_entries (0.78s)
	testing.go:164: INFO: connected address=@000db id=1 attempt=0
PASS
ok  	github.com/lxc/lxd/lxd	(cached)
?   	github.com/lxc/lxd/lxd/apparmor	[no test files]
?   	github.com/lxc/lxd/lxd/backup	[no test files]
?   	github.com/lxc/lxd/lxd/cgroup	[no test files]
=== RUN   TestConfigLoad_Initial
--- PASS: TestConfigLoad_Initial (0.82s)
	testing.go:164: INFO: connected address=@0007a id=1 attempt=0
=== RUN   TestConfigLoad_IgnoreInvalidKeys
--- PASS: TestConfigLoad_IgnoreInvalidKeys (0.76s)
	testing.go:164: INFO: connected address=@0007b id=1 attempt=0
=== RUN   TestConfigLoad_Triggers
--- PASS: TestConfigLoad_Triggers (0.70s)
	testing.go:164: INFO: connected address=@0007c id=1 attempt=0
=== RUN   TestConfigLoad_OfflineThresholdValidator
--- PASS: TestConfigLoad_OfflineThresholdValidator (0.93s)
	testing.go:164: INFO: connected address=@0007d id=1 attempt=0
=== RUN   TestConfig_ReplaceDeleteValues
--- PASS: TestConfig_ReplaceDeleteValues (1.28s)
	testing.go:164: INFO: connected address=@0007e id=1 attempt=0
=== RUN   TestConfig_PatchKeepsValues
--- PASS: TestConfig_PatchKeepsValues (1.18s)
	testing.go:164: INFO: connected address=@0007f id=1 attempt=0
=== RUN   TestGateway_Single
2019/11/15 10:37:46.316605 [INFO]: connected address=1 id=1 attempt=0
--- PASS: TestGateway_Single (0.80s)
	testing.go:36: 10:37:45.666 dbug Initializing database gateway
	testing.go:36: 10:37:45.667 dbug Start database node id=1 address=
	testing.go:36: 10:37:46.315 dbug Found cert name=0
	testing.go:36: 10:37:46.317 dbug Stop database gateway
	testing.go:36: 10:37:46.317 warn Failed get database dump: failed to parse files response:  (2)
=== RUN   TestGateway_SingleWithNetworkAddress
2019/11/15 10:37:47.438697 [INFO]: connected address=127.0.0.1:33143 id=1 attempt=0
--- PASS: TestGateway_SingleWithNetworkAddress (1.12s)
	testing.go:36: 10:37:46.656 dbug Initializing database gateway
	testing.go:36: 10:37:46.656 dbug Start database node id=1 address=127.0.0.1:33143
	testing.go:36: 10:37:47.406 dbug Found cert name=0
	testing.go:36: 10:37:47.438 dbug Found cert name=0
	testing.go:36: 10:37:47.439 warn Dqlite client proxy TLS -> Unix: read tcp 127.0.0.1:49968->127.0.0.1:33143: use of closed network connection
	testing.go:36: 10:37:47.439 dbug Stop database gateway
	testing.go:36: 10:37:47.439 warn Dqlite server proxy Unix -> TLS: read unix @->@00081: use of closed network connection
	testing.go:36: 10:37:47.439 warn Failed get database dump: failed to parse files response:  (2)
=== RUN   TestGateway_NetworkAuth
--- PASS: TestGateway_NetworkAuth (0.91s)
	testing.go:36: 10:37:47.721 dbug Initializing database gateway
	testing.go:36: 10:37:47.721 dbug Start database node id=1 address=127.0.0.1:37995
	testing.go:36: 10:37:48.346 dbug Stop database gateway
	testing.go:36: 10:37:48.346 warn Failed get database dump: failed to parse files response:  (2)
=== RUN   TestGateway_RaftNodesNotLeader
--- PASS: TestGateway_RaftNodesNotLeader (0.98s)
	testing.go:36: 10:37:48.651 dbug Initializing database gateway
	testing.go:36: 10:37:48.651 dbug Start database node address=127.0.0.1:40803 id=1
	testing.go:36: 10:37:49.321 dbug Stop database gateway
	testing.go:36: 10:37:49.321 warn Failed get database dump: failed to parse files response:  (2)
=== RUN   TestHeartbeat
2019/11/15 10:37:51.359524 [INFO]: connected address=1 id=1 attempt=0
2019/11/15 10:37:52.611120 [INFO]: connected address=127.0.0.1:46439 id=1 attempt=0
2019/11/15 10:37:54.787667 [INFO]: connected address=1 id=1 attempt=0
2019/11/15 10:37:57.379049 [INFO]: connected address=127.0.0.1:46439 id=1 attempt=0
2019/11/15 10:37:59.549994 [INFO]: connected address=1 id=1 attempt=0
2019/11/15 10:38:01.943477 [INFO]: connected address=127.0.0.1:46439 id=1 attempt=0
--- PASS: TestHeartbeat (16.91s)
	heartbeat_test.go:127: create bootstrap node for test cluster
	testing.go:164: INFO: connected address=@00086 id=1 attempt=0
	testing.go:36: 10:37:50.578 dbug Initializing database gateway
	testing.go:36: 10:37:50.578 dbug Start database node address= id=1
	testing.go:36: 10:37:51.925 dbug Acquiring exclusive lock on cluster db
	testing.go:36: 10:37:51.925 dbug Stop database gateway
	testing.go:36: 10:37:52.084 dbug Initializing database gateway
	testing.go:36: 10:37:52.084 dbug Start database node id=1 address=127.0.0.1:46439
	testing.go:36: 10:37:52.537 dbug Releasing exclusive lock on cluster db
	testing.go:36: 10:37:52.576 dbug Found cert name=0
	testing.go:36: 10:37:52.610 dbug Found cert name=0
	heartbeat_test.go:139: adding another node to the test cluster
	testing.go:164: INFO: connected address=@0008a id=1 attempt=0
	testing.go:36: 10:37:54.006 info Kernel uid/gid map:
	testing.go:36: 10:37:54.006 info  - u 0 0 4294967295
	testing.go:36: 10:37:54.006 info  - g 0 0 4294967295
	testing.go:36: 10:37:54.006 info Configured LXD uid/gid map:
	testing.go:36: 10:37:54.006 info  - u 0 100000 65536
	testing.go:36: 10:37:54.006 info  - g 0 100000 65536
	testing.go:36: 10:37:54.007 warn Per-container AppArmor profiles are disabled because the mac_admin capability is missing
	testing.go:36: 10:37:54.007 warn CGroup memory swap accounting is disabled, swap limits will be ignored.
	testing.go:36: 10:37:54.009 dbug Initializing database gateway
	testing.go:36: 10:37:54.010 dbug Start database node id=1 address=
	testing.go:36: 10:37:55.569 dbug Acquiring exclusive lock on cluster db
	testing.go:36: 10:37:55.569 dbug Stop database gateway
	testing.go:36: 10:37:55.675 dbug Initializing database gateway
	testing.go:36: 10:37:55.676 dbug Start database node id=2 address=127.0.0.1:33049
	testing.go:36: 10:37:56.061 info Joining dqlite raft cluster target=127.0.0.1:33049 id=2 address=127.0.0.1:33049
	testing.go:36: 10:37:56.118 dbug Found cert name=0
	testing.go:36: 10:37:56.118 dbug Dqlite: connected address=127.0.0.1:46439 id=1 attempt=0
	testing.go:36: 10:37:56.149 dbug Found cert name=0
	testing.go:36: 10:37:56.322 dbug Found cert name=0
	testing.go:36: 10:37:57.316 info Migrate local data to cluster database
	testing.go:36: 10:37:57.316 dbug Releasing exclusive lock on cluster db
	testing.go:36: 10:37:57.350 dbug Found cert name=0
	testing.go:36: 10:37:57.378 dbug Found cert name=0
	testing.go:36: 10:37:57.381 eror Unaccounted raft node(s) not found in 'nodes' table for heartbeat: map[127.0.0.1:46439:{ID:1 Address:127.0.0.1:46439} 127.0.0.1:33049:{ID:2 Address:127.0.0.1:33049}]
	testing.go:36: 10:37:57.381 dbug Sending heartbeat request to 127.0.0.1:46439
	testing.go:36: 10:37:57.414 dbug Found cert name=0
	testing.go:36: 10:37:57.414 eror Empty raft node set received
	testing.go:36: 10:37:57.414 dbug Partial node list heartbeat received, skipping full update
	heartbeat_test.go:139: adding another node to the test cluster
	testing.go:36: 10:37:57.500 warn Dqlite client proxy TLS -> Unix: read tcp 127.0.0.1:35996->127.0.0.1:46439: use of closed network connection
	testing.go:36: 10:37:57.500 warn Dqlite server proxy Unix -> TLS: read unix @->@00088: use of closed network connection
	testing.go:164: INFO: connected address=@00092 id=1 attempt=0
	testing.go:36: 10:37:58.750 info Kernel uid/gid map:
	testing.go:36: 10:37:58.750 info  - u 0 0 4294967295
	testing.go:36: 10:37:58.750 info  - g 0 0 4294967295
	testing.go:36: 10:37:58.750 info Configured LXD uid/gid map:
	testing.go:36: 10:37:58.751 info  - u 0 100000 65536
	testing.go:36: 10:37:58.751 info  - g 0 100000 65536
	testing.go:36: 10:37:58.752 warn Per-container AppArmor profiles are disabled because the mac_admin capability is missing
	testing.go:36: 10:37:58.752 warn CGroup memory swap accounting is disabled, swap limits will be ignored.
	testing.go:36: 10:37:58.754 dbug Initializing database gateway
	testing.go:36: 10:37:58.754 dbug Start database node id=1 address=
	testing.go:36: 10:38:00.316 dbug Acquiring exclusive lock on cluster db
	testing.go:36: 10:38:00.316 dbug Stop database gateway
	testing.go:36: 10:38:00.500 dbug Initializing database gateway
	testing.go:36: 10:38:00.501 dbug Start database node id=3 address=127.0.0.1:34365
	testing.go:36: 10:38:00.904 info Joining dqlite raft cluster id=3 address=127.0.0.1:34365 target=127.0.0.1:34365
	testing.go:36: 10:38:00.941 dbug Found cert name=0
	testing.go:36: 10:38:00.942 dbug Dqlite: connected address=127.0.0.1:46439 id=1 attempt=0
	testing.go:36: 10:38:00.974 dbug Found cert name=0
	testing.go:36: 10:38:01.105 dbug Found cert name=0
	testing.go:36: 10:38:01.823 info Migrate local data to cluster database
	testing.go:36: 10:38:01.823 dbug Releasing exclusive lock on cluster db
	testing.go:36: 10:38:01.890 dbug Found cert name=0
	testing.go:36: 10:38:01.924 dbug Found cert name=0
	testing.go:36: 10:38:01.950 eror Unaccounted raft node(s) not found in 'nodes' table for heartbeat: map[127.0.0.1:46439:{ID:1 Address:127.0.0.1:46439} 127.0.0.1:33049:{ID:2 Address:127.0.0.1:33049} 127.0.0.1:34365:{ID:3 Address:127.0.0.1:34365}]
	testing.go:36: 10:38:01.950 dbug Sending heartbeat request to 127.0.0.1:46439
	testing.go:36: 10:38:01.950 dbug Sending heartbeat request to 127.0.0.1:33049
	testing.go:36: 10:38:02.010 dbug Found cert name=0
	testing.go:36: 10:38:02.010 eror Empty raft node set received
	testing.go:36: 10:38:02.010 dbug Partial node list heartbeat received, skipping full update
	testing.go:36: 10:38:02.010 dbug Found cert name=0
	testing.go:36: 10:38:02.010 eror Empty raft node set received
	testing.go:36: 10:38:02.010 dbug Partial node list heartbeat received, skipping full update
	testing.go:36: 10:38:02.070 warn Dqlite client proxy TLS -> Unix: read tcp 127.0.0.1:36008->127.0.0.1:46439: use of closed network connection
	testing.go:36: 10:38:02.071 warn Dqlite server proxy Unix -> TLS: read unix @->@00088: use of closed network connection
	testing.go:36: 10:38:02.182 dbug Starting heartbeat round
	testing.go:36: 10:38:02.182 dbug Heartbeat updating local raft nodes to [{ID:1 Address:127.0.0.1:46439} {ID:2 Address:127.0.0.1:33049} {ID:3 Address:127.0.0.1:34365}]
	testing.go:36: 10:38:03.796 dbug Sending heartbeat to 127.0.0.1:33049
	testing.go:36: 10:38:03.796 dbug Sending heartbeat request to 127.0.0.1:33049
	testing.go:36: 10:38:03.864 dbug Found cert name=0
	testing.go:36: 10:38:03.864 dbug Replace current raft nodes with [{ID:1 Address:127.0.0.1:46439} {ID:2 Address:127.0.0.1:33049} {ID:3 Address:127.0.0.1:34365}]
	testing.go:36: 10:38:04.215 dbug Successful heartbeat for 127.0.0.1:33049
	testing.go:36: 10:38:04.640 dbug Sending heartbeat to 127.0.0.1:34365
	testing.go:36: 10:38:04.640 dbug Sending heartbeat request to 127.0.0.1:34365
	testing.go:36: 10:38:04.681 dbug Found cert name=0
	testing.go:36: 10:38:04.681 dbug Replace current raft nodes with [{ID:2 Address:127.0.0.1:33049} {ID:3 Address:127.0.0.1:34365} {ID:1 Address:127.0.0.1:46439}]
	testing.go:36: 10:38:04.787 dbug Successful heartbeat for 127.0.0.1:34365
	testing.go:36: 10:38:04.939 dbug Completed heartbeat round
	testing.go:36: 10:38:04.943 dbug Stop database gateway
	testing.go:36: 10:38:04.995 warn Dqlite server proxy TLS -> Unix: read tcp 127.0.0.1:34365->127.0.0.1:33548: use of closed network connection
	testing.go:36: 10:38:04.995 warn Dqlite client proxy Unix -> TLS: read unix @->@00097: use of closed network connection
	testing.go:36: 10:38:04.995 warn Dqlite client proxy TLS -> Unix: read tcp 127.0.0.1:36012->127.0.0.1:46439: use of closed network connection
	testing.go:36: 10:38:04.995 warn Dqlite server proxy Unix -> TLS: read unix @->@00088: use of closed network connection
	testing.go:36: 10:38:05.405 dbug Stop database gateway
	testing.go:36: 10:38:05.471 warn Dqlite client proxy Unix -> TLS: read unix @->@0008f: use of closed network connection
	testing.go:36: 10:38:05.471 warn Dqlite client proxy TLS -> Unix: read tcp 127.0.0.1:36000->127.0.0.1:46439: use of closed network connection
	testing.go:36: 10:38:05.471 warn Dqlite server proxy TLS -> Unix: read tcp 127.0.0.1:33049->127.0.0.1:47874: use of closed network connection
	testing.go:36: 10:38:05.471 warn Dqlite server proxy Unix -> TLS: read unix @->@00088: use of closed network connection
	testing.go:36: 10:38:05.634 dbug Found cert name=0
	testing.go:36: 10:38:05.635 warn Dqlite server proxy TLS -> Unix: read tcp 127.0.0.1:33049->127.0.0.1:47902: use of closed network connection
	testing.go:36: 10:38:05.635 warn Dqlite client proxy Unix -> TLS: read unix @->@0009b: use of closed network connection
	testing.go:36: 10:38:05.668 dbug Found cert name=0
	testing.go:36: 10:38:05.853 dbug Stop database gateway
	testing.go:36: 10:38:05.858 warn Dqlite client proxy TLS -> Unix: read tcp 127.0.0.1:33566->127.0.0.1:34365: read: connection reset by peer
	testing.go:36: 10:38:05.858 warn Dqlite client proxy Unix -> TLS: read unix @->@0009c: use of closed network connection
	testing.go:36: 10:38:05.919 warn Dqlite server proxy TLS -> Unix: read tcp 127.0.0.1:46439->127.0.0.1:35994: use of closed network connection
	testing.go:36: 10:38:05.919 warn Dqlite client proxy Unix -> TLS: read unix @->@00089: use of closed network connection
	testing.go:36: 10:38:05.919 warn Dqlite client proxy Unix -> TLS: read unix @->@00099: use of closed network connection
	testing.go:36: 10:38:05.919 warn Dqlite server proxy TLS -> Unix: read tcp 127.0.0.1:46439->127.0.0.1:36016: use of closed network connection
	testing.go:36: 10:38:05.919 warn Dqlite server proxy TLS -> Unix: read tcp 127.0.0.1:46439->127.0.0.1:36004: use of closed network connection
	testing.go:36: 10:38:05.919 warn Dqlite client proxy Unix -> TLS: read unix @->@00091: use of closed network connection
=== RUN   TestBootstrap_UnmetPreconditions
=== RUN   TestBootstrap_UnmetPreconditions/inconsistent_state:_found_leftover_cluster_certificate
=== RUN   TestBootstrap_UnmetPreconditions/no_cluster.https_address_config_is_set_on_this_node
=== RUN   TestBootstrap_UnmetPreconditions/the_node_is_already_part_of_a_cluster
=== RUN   TestBootstrap_UnmetPreconditions/inconsistent_state:_found_leftover_entries_in_raft_nodes
=== RUN   TestBootstrap_UnmetPreconditions/inconsistent_state:_found_leftover_entries_in_nodes
--- PASS: TestBootstrap_UnmetPreconditions (11.05s)
    --- PASS: TestBootstrap_UnmetPreconditions/inconsistent_state:_found_leftover_cluster_certificate (1.66s)
    	testing.go:164: INFO: connected address=@0009d id=1 attempt=0
    	testing.go:36: 10:38:07.753 dbug Initializing database gateway
    	testing.go:36: 10:38:07.754 dbug Stop database gateway
    --- PASS: TestBootstrap_UnmetPreconditions/no_cluster.https_address_config_is_set_on_this_node (2.20s)
    	testing.go:164: INFO: connected address=@0009e id=1 attempt=0
    	testing.go:36: 10:38:09.101 dbug Initializing database gateway
    	testing.go:36: 10:38:09.101 dbug Start database node id=1 address=
    	testing.go:36: 10:38:09.958 dbug Stop database gateway
    	testing.go:36: 10:38:09.959 warn Failed get database dump: failed to parse files response:  (2)
    --- PASS: TestBootstrap_UnmetPreconditions/the_node_is_already_part_of_a_cluster (1.72s)
    	testing.go:164: INFO: connected address=@000a0 id=1 attempt=0
    	testing.go:36: 10:38:11.627 dbug Initializing database gateway
    	testing.go:36: 10:38:11.628 dbug Stop database gateway
    --- PASS: TestBootstrap_UnmetPreconditions/inconsistent_state:_found_leftover_entries_in_raft_nodes (2.20s)
    	testing.go:164: INFO: connected address=@000a2 id=1 attempt=0
    	testing.go:36: 10:38:13.181 dbug Initializing database gateway
    	testing.go:36: 10:38:13.182 dbug Start database node id=1 address=
    	testing.go:36: 10:38:13.848 dbug Stop database gateway
    	testing.go:36: 10:38:13.849 warn Failed get database dump: failed to parse files response:  (2)
    --- PASS: TestBootstrap_UnmetPreconditions/inconsistent_state:_found_leftover_entries_in_nodes (3.27s)
    	testing.go:164: INFO: connected address=@000a4 id=1 attempt=0
    	testing.go:36: 10:38:16.291 dbug Initializing database gateway
    	testing.go:36: 10:38:16.291 dbug Start database node id=1 address=
    	testing.go:36: 10:38:17.123 dbug Database error: &errors.errorString{s:"inconsistent state: found leftover entries in nodes"}
    	testing.go:36: 10:38:17.123 dbug Stop database gateway
    	testing.go:36: 10:38:17.124 warn Failed get database dump: failed to parse files response:  (2)
=== RUN   TestBootstrap
--- PASS: TestBootstrap (2.81s)
	testing.go:164: INFO: connected address=@000a6 id=1 attempt=0
	testing.go:36: 10:38:18.525 dbug Initializing database gateway
	testing.go:36: 10:38:18.526 dbug Start database node id=1 address=
	testing.go:36: 10:38:19.497 dbug Acquiring exclusive lock on cluster db
	testing.go:36: 10:38:19.497 dbug Stop database gateway
	testing.go:36: 10:38:19.497 warn Failed get database dump: failed to parse files response:  (2)
	testing.go:36: 10:38:19.497 dbug Initializing database gateway
	testing.go:36: 10:38:19.498 dbug Start database node id=1 address=127.0.0.1:44611
	testing.go:36: 10:38:19.919 dbug Releasing exclusive lock on cluster db
	testing.go:36: 10:38:19.921 dbug Stop database gateway
	testing.go:36: 10:38:19.921 warn Failed get database dump: failed to parse files response:  (2)
=== RUN   TestAccept_UnmetPreconditions
=== RUN   TestAccept_UnmetPreconditions/Clustering_isn't_enabled
=== RUN   TestAccept_UnmetPreconditions/The_cluster_already_has_a_member_with_name:_rusp
=== RUN   TestAccept_UnmetPreconditions/The_cluster_already_has_a_member_with_address:_5.6.7.8:666
=== RUN   TestAccept_UnmetPreconditions/The_joining_server_version_doesn't_(expected_3.18_with_DB_schema_17)
=== RUN   TestAccept_UnmetPreconditions/The_joining_server_version_doesn't_(expected_3.18_with_API_count_155)
--- PASS: TestAccept_UnmetPreconditions (10.38s)
    --- PASS: TestAccept_UnmetPreconditions/Clustering_isn't_enabled (2.00s)
    	testing.go:164: INFO: connected address=@000aa id=1 attempt=0
    	testing.go:36: 10:38:21.311 dbug Initializing database gateway
    	testing.go:36: 10:38:21.311 dbug Start database node id=1 address=
    	testing.go:36: 10:38:21.947 dbug Database error: &errors.errorString{s:"Clustering isn't enabled"}
    	testing.go:36: 10:38:21.947 dbug Stop database gateway
    	testing.go:36: 10:38:21.947 warn Failed get database dump: failed to parse files response:  (2)
    --- PASS: TestAccept_UnmetPreconditions/The_cluster_already_has_a_member_with_name:_rusp (2.15s)
    	testing.go:164: INFO: connected address=@000ac id=1 attempt=0
    	testing.go:36: 10:38:23.283 dbug Initializing database gateway
    	testing.go:36: 10:38:23.283 dbug Start database node id=1 address=
    	testing.go:36: 10:38:24.123 dbug Database error: &errors.errorString{s:"The cluster already has a member with name: rusp"}
    	testing.go:36: 10:38:24.124 dbug Stop database gateway
    	testing.go:36: 10:38:24.124 warn Failed get database dump: failed to parse files response:  (2)
    --- PASS: TestAccept_UnmetPreconditions/The_cluster_already_has_a_member_with_address:_5.6.7.8:666 (2.08s)
    	testing.go:164: INFO: connected address=@000ae id=1 attempt=0
    	testing.go:36: 10:38:25.449 dbug Initializing database gateway
    	testing.go:36: 10:38:25.449 dbug Start database node id=1 address=
    	testing.go:36: 10:38:26.165 dbug Database error: &errors.errorString{s:"The cluster already has a member with address: 5.6.7.8:666"}
    	testing.go:36: 10:38:26.165 dbug Stop database gateway
    	testing.go:36: 10:38:26.166 warn Failed get database dump: failed to parse files response:  (2)
    --- PASS: TestAccept_UnmetPreconditions/The_joining_server_version_doesn't_(expected_3.18_with_DB_schema_17) (1.99s)
    	testing.go:164: INFO: connected address=@000b1 id=1 attempt=0
    	testing.go:36: 10:38:27.456 dbug Initializing database gateway
    	testing.go:36: 10:38:27.457 dbug Start database node id=1 address=
    	testing.go:36: 10:38:28.201 dbug Database error: &errors.errorString{s:"The joining server version doesn't (expected 3.18 with DB schema 17)"}
    	testing.go:36: 10:38:28.201 dbug Stop database gateway
    	testing.go:36: 10:38:28.201 warn Failed get database dump: failed to parse files response:  (2)
    --- PASS: TestAccept_UnmetPreconditions/The_joining_server_version_doesn't_(expected_3.18_with_API_count_155) (2.16s)
    	testing.go:164: INFO: connected address=@000b3 id=1 attempt=0
    	testing.go:36: 10:38:29.606 dbug Initializing database gateway
    	testing.go:36: 10:38:29.606 dbug Start database node id=1 address=
    	testing.go:36: 10:38:30.356 dbug Database error: &errors.errorString{s:"The joining server version doesn't (expected 3.18 with API count 155)"}
    	testing.go:36: 10:38:30.357 dbug Stop database gateway
    	testing.go:36: 10:38:30.357 warn Failed get database dump: failed to parse files response:  (2)
=== RUN   TestAccept
--- PASS: TestAccept (2.48s)
	testing.go:164: INFO: connected address=@000b5 id=1 attempt=0
	testing.go:36: 10:38:31.735 dbug Initializing database gateway
	testing.go:36: 10:38:31.735 dbug Start database node address= id=1
	testing.go:36: 10:38:32.769 dbug Stop database gateway
	testing.go:36: 10:38:32.770 warn Failed get database dump: failed to parse files response:  (2)
=== RUN   TestJoin
2019/11/15 10:38:35.231146 [INFO]: connected address=1 id=1 attempt=0
2019/11/15 10:38:36.836185 [INFO]: connected address=127.0.0.1:33069 id=1 attempt=0
2019/11/15 10:38:38.908731 [INFO]: connected address=1 id=1 attempt=0
2019/11/15 10:38:41.544865 [INFO]: connected address=127.0.0.1:33069 id=1 attempt=0
--- PASS: TestJoin (9.65s)
	testing.go:164: INFO: connected address=@000b8 id=1 attempt=0
	testing.go:36: 10:38:34.129 dbug Initializing database gateway
	testing.go:36: 10:38:34.129 dbug Start database node id=1 address=
	testing.go:36: 10:38:35.963 dbug Acquiring exclusive lock on cluster db
	testing.go:36: 10:38:35.963 dbug Stop database gateway
	testing.go:36: 10:38:36.153 dbug Initializing database gateway
	testing.go:36: 10:38:36.154 dbug Start database node id=1 address=127.0.0.1:33069
	testing.go:36: 10:38:36.766 dbug Releasing exclusive lock on cluster db
	testing.go:36: 10:38:36.806 dbug Found cert name=0
	testing.go:36: 10:38:36.835 dbug Found cert name=0
	testing.go:164: INFO: connected address=@000bc id=1 attempt=0
	testing.go:36: 10:38:38.242 info Kernel uid/gid map:
	testing.go:36: 10:38:38.243 info  - u 0 0 4294967295
	testing.go:36: 10:38:38.243 info  - g 0 0 4294967295
	testing.go:36: 10:38:38.243 info Configured LXD uid/gid map:
	testing.go:36: 10:38:38.243 info  - u 0 100000 65536
	testing.go:36: 10:38:38.243 info  - g 0 100000 65536
	testing.go:36: 10:38:38.244 warn Per-container AppArmor profiles are disabled because the mac_admin capability is missing
	testing.go:36: 10:38:38.244 warn CGroup memory swap accounting is disabled, swap limits will be ignored.
	testing.go:36: 10:38:38.245 dbug Initializing database gateway
	testing.go:36: 10:38:38.245 dbug Start database node id=1 address=
	testing.go:36: 10:38:39.817 dbug Acquiring exclusive lock on cluster db
	testing.go:36: 10:38:39.817 dbug Stop database gateway
	testing.go:36: 10:38:39.973 dbug Initializing database gateway
	testing.go:36: 10:38:39.974 dbug Start database node id=2 address=127.0.0.1:45497
	testing.go:36: 10:38:40.338 info Joining dqlite raft cluster id=2 address=127.0.0.1:45497 target=127.0.0.1:45497
	testing.go:36: 10:38:40.372 dbug Found cert name=0
	testing.go:36: 10:38:40.373 dbug Dqlite: connected address=127.0.0.1:33069 id=1 attempt=0
	testing.go:36: 10:38:40.405 dbug Found cert name=0
	testing.go:36: 10:38:40.531 dbug Found cert name=0
	testing.go:36: 10:38:41.472 info Migrate local data to cluster database
	testing.go:36: 10:38:41.472 dbug Releasing exclusive lock on cluster db
	testing.go:36: 10:38:41.513 dbug Found cert name=0
	testing.go:36: 10:38:41.543 dbug Found cert name=0
	testing.go:36: 10:38:41.547 eror Unaccounted raft node(s) not found in 'nodes' table for heartbeat: map[127.0.0.1:33069:{ID:1 Address:127.0.0.1:33069} 127.0.0.1:45497:{ID:2 Address:127.0.0.1:45497}]
	testing.go:36: 10:38:41.547 dbug Sending heartbeat request to 127.0.0.1:33069
	testing.go:36: 10:38:41.578 dbug Found cert name=0
	testing.go:36: 10:38:41.578 eror Empty raft node set received
	testing.go:36: 10:38:41.578 dbug Partial node list heartbeat received, skipping full update
	testing.go:36: 10:38:41.659 warn Dqlite client proxy TLS -> Unix: read tcp 127.0.0.1:34714->127.0.0.1:33069: use of closed network connection
	testing.go:36: 10:38:41.659 warn Dqlite server proxy Unix -> TLS: read unix @->@000ba: use of closed network connection
	testing.go:36: 10:38:41.663 dbug Make node rusp leave the cluster
	testing.go:36: 10:38:41.669 info Remove node from dqlite raft cluster id=2 address=127.0.0.1:45497 target=127.0.0.1:33069
	testing.go:36: 10:38:41.729 dbug Found cert name=0
	testing.go:36: 10:38:41.730 dbug Dqlite: connected address=127.0.0.1:33069 id=1 attempt=0
	testing.go:36: 10:38:41.760 dbug Remove node rusp from the database
	testing.go:36: 10:38:41.761 warn Dqlite server proxy Unix -> TLS: read unix @->@000ba: use of closed network connection
	testing.go:36: 10:38:41.761 warn Dqlite client proxy TLS -> Unix: read tcp 127.0.0.1:34726->127.0.0.1:33069: use of closed network connection
	testing.go:36: 10:38:41.840 dbug Stop database gateway
	testing.go:36: 10:38:41.898 warn Dqlite client proxy TLS -> Unix: read tcp 127.0.0.1:34718->127.0.0.1:33069: use of closed network connection
	testing.go:36: 10:38:41.898 warn Dqlite server proxy TLS -> Unix: read tcp 127.0.0.1:45497->127.0.0.1:53798: use of closed network connection
	testing.go:36: 10:38:41.898 warn Dqlite client proxy Unix -> TLS: read unix @->@000c1: use of closed network connection
	testing.go:36: 10:38:41.898 warn Dqlite server proxy Unix -> TLS: read unix @->@000ba: use of closed network connection
	testing.go:36: 10:38:42.234 dbug Stop database gateway
	testing.go:36: 10:38:42.281 warn Dqlite server proxy TLS -> Unix: read tcp 127.0.0.1:33069->127.0.0.1:34712: use of closed network connection
	testing.go:36: 10:38:42.281 warn Dqlite server proxy TLS -> Unix: read tcp 127.0.0.1:33069->127.0.0.1:34722: use of closed network connection
	testing.go:36: 10:38:42.281 warn Dqlite client proxy Unix -> TLS: read unix @->@000bb: use of closed network connection
	testing.go:36: 10:38:42.281 warn Dqlite client proxy Unix -> TLS: read unix @->@000c3: use of closed network connection
=== RUN   TestMigrateToDqlite10
2019/11/15 10:38:43.066834 [INFO]: connected address=@1 id=1 attempt=0
--- PASS: TestMigrateToDqlite10 (0.47s)
=== RUN   TestNewNotifier
--- PASS: TestNewNotifier (1.78s)
	testing.go:164: INFO: connected address=@000c5 id=1 attempt=0
=== RUN   TestNewNotify_NotifyAllError
--- PASS: TestNewNotify_NotifyAllError (1.65s)
	testing.go:164: INFO: connected address=@000c7 id=1 attempt=0
=== RUN   TestNewNotify_NotifyAlive
--- PASS: TestNewNotify_NotifyAlive (1.63s)
	testing.go:164: INFO: connected address=@000c8 id=1 attempt=0
=== RUN   TestNotifyUpgradeCompleted
2019/11/15 10:38:50.157909 [INFO]: connected address=1 id=1 attempt=0
2019/11/15 10:38:51.466559 [INFO]: connected address=127.0.0.1:35899 id=1 attempt=0
2019/11/15 10:38:53.684302 [INFO]: connected address=1 id=1 attempt=0
2019/11/15 10:38:56.198925 [INFO]: connected address=127.0.0.1:35899 id=1 attempt=0
--- PASS: TestNotifyUpgradeCompleted (9.00s)
	heartbeat_test.go:127: create bootstrap node for test cluster
	testing.go:164: INFO: connected address=@000c9 id=1 attempt=0
	testing.go:36: 10:38:49.374 dbug Initializing database gateway
	testing.go:36: 10:38:49.374 dbug Start database node id=1 address=
	testing.go:36: 10:38:50.774 dbug Acquiring exclusive lock on cluster db
	testing.go:36: 10:38:50.774 dbug Stop database gateway
	testing.go:36: 10:38:50.943 dbug Initializing database gateway
	testing.go:36: 10:38:50.943 dbug Start database node id=1 address=127.0.0.1:35899
	testing.go:36: 10:38:51.396 dbug Releasing exclusive lock on cluster db
	testing.go:36: 10:38:51.436 dbug Found cert name=0
	testing.go:36: 10:38:51.466 dbug Found cert name=0
	heartbeat_test.go:139: adding another node to the test cluster
	testing.go:164: INFO: connected address=@000ce id=1 attempt=0
	testing.go:36: 10:38:52.862 info Kernel uid/gid map:
	testing.go:36: 10:38:52.862 info  - u 0 0 4294967295
	testing.go:36: 10:38:52.862 info  - g 0 0 4294967295
	testing.go:36: 10:38:52.862 info Configured LXD uid/gid map:
	testing.go:36: 10:38:52.862 info  - u 0 100000 65536
	testing.go:36: 10:38:52.862 info  - g 0 100000 65536
	testing.go:36: 10:38:52.863 warn Per-container AppArmor profiles are disabled because the mac_admin capability is missing
	testing.go:36: 10:38:52.863 warn CGroup memory swap accounting is disabled, swap limits will be ignored.
	testing.go:36: 10:38:52.863 dbug Initializing database gateway
	testing.go:36: 10:38:52.863 dbug Start database node id=1 address=
	testing.go:36: 10:38:54.443 dbug Acquiring exclusive lock on cluster db
	testing.go:36: 10:38:54.443 dbug Stop database gateway
	testing.go:36: 10:38:54.642 dbug Initializing database gateway
	testing.go:36: 10:38:54.643 dbug Start database node id=2 address=127.0.0.1:38779
	testing.go:36: 10:38:54.982 info Joining dqlite raft cluster id=2 address=127.0.0.1:38779 target=127.0.0.1:38779
	testing.go:36: 10:38:55.046 dbug Found cert name=0
	testing.go:36: 10:38:55.047 dbug Dqlite: connected address=127.0.0.1:35899 id=1 attempt=0
	testing.go:36: 10:38:55.078 dbug Found cert name=0
	testing.go:36: 10:38:55.204 dbug Found cert name=0
	testing.go:36: 10:38:56.136 info Migrate local data to cluster database
	testing.go:36: 10:38:56.136 dbug Releasing exclusive lock on cluster db
	testing.go:36: 10:38:56.169 dbug Found cert name=0
	testing.go:36: 10:38:56.198 dbug Found cert name=0
	testing.go:36: 10:38:56.200 eror Unaccounted raft node(s) not found in 'nodes' table for heartbeat: map[127.0.0.1:35899:{ID:1 Address:127.0.0.1:35899} 127.0.0.1:38779:{ID:2 Address:127.0.0.1:38779}]
	testing.go:36: 10:38:56.201 dbug Sending heartbeat request to 127.0.0.1:35899
	testing.go:36: 10:38:56.232 dbug Found cert name=0
	testing.go:36: 10:38:56.233 eror Empty raft node set received
	testing.go:36: 10:38:56.233 dbug Partial node list heartbeat received, skipping full update
	testing.go:36: 10:38:56.313 warn Dqlite client proxy TLS -> Unix: read tcp 127.0.0.1:34174->127.0.0.1:35899: use of closed network connection
	testing.go:36: 10:38:56.313 warn Dqlite server proxy Unix -> TLS: read unix @->@000cb: use of closed network connection
	testing.go:36: 10:38:56.316 dbug Notify node 127.0.0.1:38779 of state changes
	testing.go:36: 10:38:56.316 dbug Connecting to a remote LXD over HTTPs
	testing.go:36: 10:38:56.416 dbug Found cert name=0
	testing.go:36: 10:38:56.416 dbug Stop database gateway
	testing.go:36: 10:38:56.451 warn Dqlite client proxy TLS -> Unix: read tcp 127.0.0.1:34178->127.0.0.1:35899: use of closed network connection
	testing.go:36: 10:38:56.451 warn Dqlite client proxy Unix -> TLS: read unix @->@000d2: use of closed network connection
	testing.go:36: 10:38:56.451 warn Dqlite server proxy Unix -> TLS: read unix @->@000cb: use of closed network connection
	testing.go:36: 10:38:56.451 warn Dqlite server proxy TLS -> Unix: read tcp 127.0.0.1:38779->127.0.0.1:41740: use of closed network connection
	testing.go:36: 10:38:56.741 dbug Stop database gateway
	testing.go:36: 10:38:56.802 warn Dqlite server proxy TLS -> Unix: read tcp 127.0.0.1:35899->127.0.0.1:34172: use of closed network connection
	testing.go:36: 10:38:56.802 warn Dqlite client proxy Unix -> TLS: read unix @->@000d4: use of closed network connection
	testing.go:36: 10:38:56.802 warn Dqlite server proxy TLS -> Unix: read tcp 127.0.0.1:35899->127.0.0.1:34182: use of closed network connection
	testing.go:36: 10:38:56.802 warn Dqlite client proxy Unix -> TLS: read unix @->@000cc: use of closed network connection
=== RUN   TestMaybeUpdate_Upgrade
--- PASS: TestMaybeUpdate_Upgrade (1.61s)
	testing.go:164: INFO: connected address=@000d5 id=1 attempt=0
=== RUN   TestMaybeUpdate_NothingToDo
--- PASS: TestMaybeUpdate_NothingToDo (1.54s)
	testing.go:164: INFO: connected address=@000d6 id=1 attempt=0
PASS
ok  	github.com/lxc/lxd/lxd/cluster	(cached)
?   	github.com/lxc/lxd/lxd/cluster/raft	[no test files]
=== RUN   TestErrorList_Error
--- PASS: TestErrorList_Error (0.00s)
=== RUN   TestKey_validate
=== RUN   TestKey_validate/hello
=== RUN   TestKey_validate/yes
=== RUN   TestKey_validate/0
=== RUN   TestKey_validate/666
=== RUN   TestKey_validate/666#01
=== RUN   TestKey_validate/#00
=== RUN   TestKey_validate/#01
--- PASS: TestKey_validate (0.00s)
    --- PASS: TestKey_validate/hello (0.00s)
    --- PASS: TestKey_validate/yes (0.00s)
    --- PASS: TestKey_validate/0 (0.00s)
    --- PASS: TestKey_validate/666 (0.00s)
    --- PASS: TestKey_validate/666#01 (0.00s)
    --- PASS: TestKey_validate/#00 (0.00s)
    --- PASS: TestKey_validate/#01 (0.00s)
=== RUN   TestKey_validateError
=== RUN   TestKey_validateError/invalid_integer
=== RUN   TestKey_validateError/invalid_boolean
=== RUN   TestKey_validateError/ugh
=== RUN   TestKey_validateError/deprecated:_don't_use_this
--- PASS: TestKey_validateError (0.00s)
    --- PASS: TestKey_validateError/invalid_integer (0.00s)
    --- PASS: TestKey_validateError/invalid_boolean (0.00s)
    --- PASS: TestKey_validateError/ugh (0.00s)
    --- PASS: TestKey_validateError/deprecated:_don't_use_this (0.00s)
=== RUN   TestKey_UnexpectedKind
--- PASS: TestKey_UnexpectedKind (0.00s)
=== RUN   TestLoad
=== RUN   TestLoad/plain_load_of_regular_key
=== RUN   TestLoad/key_setter_is_ignored_upon_loading
=== RUN   TestLoad/bool_true_values_are_normalized
=== RUN   TestLoad/multiple_values_are_all_loaded
--- PASS: TestLoad (0.00s)
    --- PASS: TestLoad/plain_load_of_regular_key (0.00s)
    --- PASS: TestLoad/key_setter_is_ignored_upon_loading (0.00s)
    --- PASS: TestLoad/bool_true_values_are_normalized (0.00s)
    --- PASS: TestLoad/multiple_values_are_all_loaded (0.00s)
=== RUN   TestLoad_Error
=== RUN   TestLoad_Error/schema_has_no_key_with_the_given_name
=== RUN   TestLoad_Error/validation_fails
=== RUN   TestLoad_Error/only_the_first_of_multiple_errors_is_shown_(in_key_name_order)
--- PASS: TestLoad_Error (0.00s)
    --- PASS: TestLoad_Error/schema_has_no_key_with_the_given_name (0.00s)
    --- PASS: TestLoad_Error/validation_fails (0.00s)
    --- PASS: TestLoad_Error/only_the_first_of_multiple_errors_is_shown_(in_key_name_order) (0.00s)
=== RUN   TestChange
=== RUN   TestChange/plain_change_of_regular_key
=== RUN   TestChange/key_setter_is_honored
=== RUN   TestChange/bool_true_values_are_normalized
=== RUN   TestChange/bool_false_values_are_normalized
=== RUN   TestChange/the_special_value_'true'_is_a_passthrough_for_hidden_keys
=== RUN   TestChange/the_special_value_nil_is_converted_to_empty_string
=== RUN   TestChange/multiple_values_are_all_mutated
--- PASS: TestChange (0.00s)
    --- PASS: TestChange/plain_change_of_regular_key (0.00s)
    --- PASS: TestChange/key_setter_is_honored (0.00s)
    --- PASS: TestChange/bool_true_values_are_normalized (0.00s)
    --- PASS: TestChange/bool_false_values_are_normalized (0.00s)
    --- PASS: TestChange/the_special_value_'true'_is_a_passthrough_for_hidden_keys (0.00s)
    --- PASS: TestChange/the_special_value_nil_is_converted_to_empty_string (0.00s)
    --- PASS: TestChange/multiple_values_are_all_mutated (0.00s)
=== RUN   TestMap_ChangeReturnsChangedKeys
=== RUN   TestMap_ChangeReturnsChangedKeys/plain_single_change
=== RUN   TestMap_ChangeReturnsChangedKeys/unchanged_boolean_value,_even_if_it's_spelled_'yes'_and_not_'true'
=== RUN   TestMap_ChangeReturnsChangedKeys/unset_value
=== RUN   TestMap_ChangeReturnsChangedKeys/unchanged_value,_since_it_matches_the_default
=== RUN   TestMap_ChangeReturnsChangedKeys/multiple_changes
--- PASS: TestMap_ChangeReturnsChangedKeys (0.00s)
    --- PASS: TestMap_ChangeReturnsChangedKeys/plain_single_change (0.00s)
    --- PASS: TestMap_ChangeReturnsChangedKeys/unchanged_boolean_value,_even_if_it's_spelled_'yes'_and_not_'true' (0.00s)
    --- PASS: TestMap_ChangeReturnsChangedKeys/unset_value (0.00s)
    --- PASS: TestMap_ChangeReturnsChangedKeys/unchanged_value,_since_it_matches_the_default (0.00s)
    --- PASS: TestMap_ChangeReturnsChangedKeys/multiple_changes (0.00s)
=== RUN   TestMap_ChangeError
=== RUN   TestMap_ChangeError/cannot_set_'xxx'_to_'':_unknown_key
=== RUN   TestMap_ChangeError/cannot_set_'foo'_to_'yyy':_invalid_boolean
=== RUN   TestMap_ChangeError/cannot_set_'egg'_to_'xxx':_boom
=== RUN   TestMap_ChangeError/cannot_set_'egg':_invalid_type_int
--- PASS: TestMap_ChangeError (0.00s)
    --- PASS: TestMap_ChangeError/cannot_set_'xxx'_to_'':_unknown_key (0.00s)
    --- PASS: TestMap_ChangeError/cannot_set_'foo'_to_'yyy':_invalid_boolean (0.00s)
    --- PASS: TestMap_ChangeError/cannot_set_'egg'_to_'xxx':_boom (0.00s)
    --- PASS: TestMap_ChangeError/cannot_set_'egg':_invalid_type_int (0.00s)
=== RUN   TestMap_Dump
--- PASS: TestMap_Dump (0.00s)
=== RUN   TestMap_Getters
--- PASS: TestMap_Getters (0.00s)
=== RUN   TestMap_GettersPanic
--- PASS: TestMap_GettersPanic (0.00s)
=== RUN   TestSafeLoad_IgnoreInvalidKeys
--- PASS: TestSafeLoad_IgnoreInvalidKeys (0.00s)
=== RUN   TestSchema_Defaults
--- PASS: TestSchema_Defaults (0.00s)
=== RUN   TestSchema_Keys
--- PASS: TestSchema_Keys (0.00s)
=== RUN   TestAvailableExecutable
--- PASS: TestAvailableExecutable (0.00s)
PASS
ok  	github.com/lxc/lxd/lxd/config	(cached)
?   	github.com/lxc/lxd/lxd/daemon	[no test files]
=== RUN   TestDBTestSuite
=== RUN   TestDBTestSuite/Test_ContainerConfig
=== RUN   TestDBTestSuite/Test_ContainerProfiles
=== RUN   TestDBTestSuite/Test_ImageAliasAdd
=== RUN   TestDBTestSuite/Test_ImageAliasGet_alias_does_not_exists
=== RUN   TestDBTestSuite/Test_ImageAliasGet_alias_exists
=== RUN   TestDBTestSuite/Test_ImageExists_false
=== RUN   TestDBTestSuite/Test_ImageExists_true
=== RUN   TestDBTestSuite/Test_ImageGet_finds_image_for_fingerprint
=== RUN   TestDBTestSuite/Test_ImageGet_for_missing_fingerprint
=== RUN   TestDBTestSuite/Test_ImageSourceGetCachedFingerprint
=== RUN   TestDBTestSuite/Test_ImageSourceGetCachedFingerprint_no_match
=== RUN   TestDBTestSuite/Test_dbDevices_containers
=== RUN   TestDBTestSuite/Test_dbDevices_profiles
=== RUN   TestDBTestSuite/Test_dbProfileConfig
=== RUN   TestDBTestSuite/Test_deleting_a_container_cascades_on_related_tables
=== RUN   TestDBTestSuite/Test_deleting_a_profile_cascades_on_related_tables
=== RUN   TestDBTestSuite/Test_deleting_an_image_cascades_on_related_tables
--- PASS: TestDBTestSuite (28.92s)
    --- PASS: TestDBTestSuite/Test_ContainerConfig (1.78s)
    	testing.go:164: INFO: connected address=@00192 id=1 attempt=0
    --- PASS: TestDBTestSuite/Test_ContainerProfiles (2.52s)
    	testing.go:164: INFO: connected address=@00195 id=1 attempt=0
    --- PASS: TestDBTestSuite/Test_ImageAliasAdd (1.69s)
    	testing.go:164: INFO: connected address=@00197 id=1 attempt=0
    --- PASS: TestDBTestSuite/Test_ImageAliasGet_alias_does_not_exists (1.70s)
    	testing.go:164: INFO: connected address=@00199 id=1 attempt=0
    --- PASS: TestDBTestSuite/Test_ImageAliasGet_alias_exists (1.52s)
    	testing.go:164: INFO: connected address=@0019b id=1 attempt=0
    --- PASS: TestDBTestSuite/Test_ImageExists_false (1.65s)
    	testing.go:164: INFO: connected address=@0019d id=1 attempt=0
    --- PASS: TestDBTestSuite/Test_ImageExists_true (1.47s)
    	testing.go:164: INFO: connected address=@0019f id=1 attempt=0
    --- PASS: TestDBTestSuite/Test_ImageGet_finds_image_for_fingerprint (1.45s)
    	testing.go:164: INFO: connected address=@001a2 id=1 attempt=0
    --- PASS: TestDBTestSuite/Test_ImageGet_for_missing_fingerprint (1.63s)
    	testing.go:164: INFO: connected address=@001a5 id=1 attempt=0
    --- PASS: TestDBTestSuite/Test_ImageSourceGetCachedFingerprint (1.90s)
    	testing.go:164: INFO: connected address=@001a8 id=1 attempt=0
    --- PASS: TestDBTestSuite/Test_ImageSourceGetCachedFingerprint_no_match (1.67s)
    	testing.go:164: INFO: connected address=@001aa id=1 attempt=0
    --- PASS: TestDBTestSuite/Test_dbDevices_containers (1.79s)
    	testing.go:164: INFO: connected address=@001ac id=1 attempt=0
    --- PASS: TestDBTestSuite/Test_dbDevices_profiles (1.47s)
    	testing.go:164: INFO: connected address=@001af id=1 attempt=0
    --- PASS: TestDBTestSuite/Test_dbProfileConfig (1.52s)
    	testing.go:164: INFO: connected address=@001b2 id=1 attempt=0
    --- PASS: TestDBTestSuite/Test_deleting_a_container_cascades_on_related_tables (1.80s)
    	testing.go:164: INFO: connected address=@001b6 id=1 attempt=0
    --- PASS: TestDBTestSuite/Test_deleting_a_profile_cascades_on_related_tables (1.78s)
    	testing.go:164: INFO: connected address=@001ba id=1 attempt=0
    --- PASS: TestDBTestSuite/Test_deleting_an_image_cascades_on_related_tables (1.59s)
    	testing.go:164: INFO: connected address=@001bc id=1 attempt=0
=== RUN   TestTx_Config
--- PASS: TestTx_Config (0.23s)
=== RUN   TestTx_UpdateConfig
--- PASS: TestTx_UpdateConfig (0.32s)
=== RUN   TestTx_UpdateConfigUnsetKeys
--- PASS: TestTx_UpdateConfigUnsetKeys (1.25s)
=== RUN   TestContainerList
--- PASS: TestContainerList (1.29s)
	testing.go:164: INFO: connected address=@001c2 id=1 attempt=0
=== RUN   TestContainerList_FilterByNode
--- PASS: TestContainerList_FilterByNode (1.15s)
	testing.go:164: INFO: connected address=@001c4 id=1 attempt=0
=== RUN   TestInstanceList_ContainerWithSameNameInDifferentProjects
--- PASS: TestInstanceList_ContainerWithSameNameInDifferentProjects (1.27s)
	testing.go:164: INFO: connected address=@001c5 id=1 attempt=0
=== RUN   TestInstanceListExpanded
--- PASS: TestInstanceListExpanded (1.97s)
	testing.go:164: INFO: connected address=@001c6 id=1 attempt=0
=== RUN   TestInstanceCreate
--- PASS: TestInstanceCreate (2.40s)
	testing.go:164: INFO: connected address=@001c8 id=1 attempt=0
=== RUN   TestInstanceCreate_Snapshot
--- PASS: TestInstanceCreate_Snapshot (1.62s)
	testing.go:164: INFO: connected address=@001ce id=1 attempt=0
=== RUN   TestContainersListByNodeAddress
--- PASS: TestContainersListByNodeAddress (1.62s)
	testing.go:164: INFO: connected address=@001cf id=1 attempt=0
=== RUN   TestContainersByNodeName
--- PASS: TestContainersByNodeName (1.56s)
	testing.go:164: INFO: connected address=@001d2 id=1 attempt=0
=== RUN   TestInstancePool
--- PASS: TestInstancePool (1.82s)
	testing.go:164: INFO: connected address=@001d4 id=1 attempt=0
=== RUN   TestContainersNodeList
--- PASS: TestContainersNodeList (1.50s)
	testing.go:164: INFO: connected address=@001d6 id=1 attempt=0
=== RUN   TestContainerNodeList
--- PASS: TestContainerNodeList (1.49s)
	testing.go:164: INFO: connected address=@001d9 id=1 attempt=0
=== RUN   TestNode_Schema
--- PASS: TestNode_Schema (0.20s)
=== RUN   TestCluster_Setup
--- PASS: TestCluster_Setup (1.38s)
	testing.go:164: INFO: connected address=@001db id=1 attempt=0
=== RUN   TestImageLocate
--- PASS: TestImageLocate (1.80s)
	testing.go:164: INFO: connected address=@001dd id=1 attempt=0
=== RUN   TestLoadPreClusteringData
--- PASS: TestLoadPreClusteringData (0.01s)
=== RUN   TestImportPreClusteringData
2019/11/15 00:16:50.151385 [INFO]: connected address=@001e0 id=1 attempt=0
--- PASS: TestImportPreClusteringData (1.80s)
=== RUN   TestNetworksNodeConfigs
--- PASS: TestNetworksNodeConfigs (1.42s)
	testing.go:164: INFO: connected address=@001e2 id=1 attempt=0
=== RUN   TestNetworkCreatePending
--- PASS: TestNetworkCreatePending (1.38s)
	testing.go:164: INFO: connected address=@001e4 id=1 attempt=0
=== RUN   TestNetworksCreatePending_AlreadyDefined
--- PASS: TestNetworksCreatePending_AlreadyDefined (1.29s)
	testing.go:164: INFO: connected address=@001e7 id=1 attempt=0
=== RUN   TestNetworksCreatePending_NonExistingNode
--- PASS: TestNetworksCreatePending_NonExistingNode (1.33s)
	testing.go:164: INFO: connected address=@001e9 id=1 attempt=0
=== RUN   TestNodeAdd
--- PASS: TestNodeAdd (1.52s)
	testing.go:164: INFO: connected address=@001ec id=1 attempt=0
=== RUN   TestNodesCount
--- PASS: TestNodesCount (1.56s)
	testing.go:164: INFO: connected address=@001ee id=1 attempt=0
=== RUN   TestNodeIsOutdated_SingleNode
--- PASS: TestNodeIsOutdated_SingleNode (1.32s)
	testing.go:164: INFO: connected address=@001f0 id=1 attempt=0
=== RUN   TestNodeIsOutdated_AllNodesAtSameVersion
--- PASS: TestNodeIsOutdated_AllNodesAtSameVersion (1.43s)
	testing.go:164: INFO: connected address=@001f3 id=1 attempt=0
=== RUN   TestNodeIsOutdated_OneNodeWithHigherVersion
--- PASS: TestNodeIsOutdated_OneNodeWithHigherVersion (1.34s)
	testing.go:164: INFO: connected address=@001f5 id=1 attempt=0
=== RUN   TestNodeIsOutdated_OneNodeWithLowerVersion
--- PASS: TestNodeIsOutdated_OneNodeWithLowerVersion (1.57s)
	testing.go:164: INFO: connected address=@001f8 id=1 attempt=0
=== RUN   TestNodeName
--- PASS: TestNodeName (1.77s)
	testing.go:164: INFO: connected address=@001fa id=1 attempt=0
=== RUN   TestNodeRename
--- PASS: TestNodeRename (1.53s)
	testing.go:164: INFO: connected address=@001fc id=1 attempt=0
=== RUN   TestNodeRemove
--- PASS: TestNodeRemove (1.48s)
	testing.go:164: INFO: connected address=@001fe id=1 attempt=0
=== RUN   TestNodePending
--- PASS: TestNodePending (1.31s)
	testing.go:164: INFO: connected address=@00201 id=1 attempt=0
=== RUN   TestNodeHeartbeat
--- PASS: TestNodeHeartbeat (1.56s)
	testing.go:164: INFO: connected address=@00204 id=1 attempt=0
=== RUN   TestNodeIsEmpty_Containers
--- PASS: TestNodeIsEmpty_Containers (1.39s)
	testing.go:164: INFO: connected address=@00206 id=1 attempt=0
=== RUN   TestNodeIsEmpty_Images
--- PASS: TestNodeIsEmpty_Images (1.34s)
	testing.go:164: INFO: connected address=@00208 id=1 attempt=0
=== RUN   TestNodeIsEmpty_CustomVolumes
--- PASS: TestNodeIsEmpty_CustomVolumes (1.66s)
	testing.go:164: INFO: connected address=@0020c id=1 attempt=0
=== RUN   TestNodeWithLeastContainers
--- PASS: TestNodeWithLeastContainers (1.50s)
	testing.go:164: INFO: connected address=@0020f id=1 attempt=0
=== RUN   TestNodeWithLeastContainers_OfflineNode
--- PASS: TestNodeWithLeastContainers_OfflineNode (1.24s)
	testing.go:164: INFO: connected address=@00212 id=1 attempt=0
=== RUN   TestNodeWithLeastContainers_Pending
--- PASS: TestNodeWithLeastContainers_Pending (1.49s)
	testing.go:164: INFO: connected address=@00214 id=1 attempt=0
=== RUN   TestOperation
--- PASS: TestOperation (1.61s)
	testing.go:164: INFO: connected address=@00215 id=1 attempt=0
=== RUN   TestOperationNoProject
--- PASS: TestOperationNoProject (1.36s)
	testing.go:164: INFO: connected address=@00217 id=1 attempt=0
=== RUN   TestProjectsList
--- PASS: TestProjectsList (1.36s)
	testing.go:164: INFO: connected address=@00219 id=1 attempt=0
=== RUN   TestRaftNodes
--- PASS: TestRaftNodes (0.44s)
=== RUN   TestRaftNodeAddresses
--- PASS: TestRaftNodeAddresses (0.38s)
=== RUN   TestRaftNodeAddress
--- PASS: TestRaftNodeAddress (0.46s)
=== RUN   TestRaftNodeFirst
--- PASS: TestRaftNodeFirst (0.41s)
=== RUN   TestRaftNodeAdd
--- PASS: TestRaftNodeAdd (0.45s)
=== RUN   TestRaftNodeDelete
--- PASS: TestRaftNodeDelete (0.29s)
=== RUN   TestRaftNodeDelete_NonExisting
--- PASS: TestRaftNodeDelete_NonExisting (0.14s)
=== RUN   TestRaftNodesReplace
--- PASS: TestRaftNodesReplace (0.42s)
=== RUN   TestInstanceSnapshotList
--- PASS: TestInstanceSnapshotList (1.44s)
	testing.go:164: INFO: connected address=@0021f id=1 attempt=0
=== RUN   TestInstanceSnapshotList_FilterByInstance
--- PASS: TestInstanceSnapshotList_FilterByInstance (1.48s)
	testing.go:164: INFO: connected address=@00221 id=1 attempt=0
=== RUN   TestInstanceSnapshotList_SameNameInDifferentProjects
--- PASS: TestInstanceSnapshotList_SameNameInDifferentProjects (1.42s)
	testing.go:164: INFO: connected address=@00226 id=1 attempt=0
=== RUN   TestStoragePoolsNodeConfigs
--- PASS: TestStoragePoolsNodeConfigs (1.74s)
	testing.go:164: INFO: connected address=@00229 id=1 attempt=0
=== RUN   TestStoragePoolsCreatePending
--- PASS: TestStoragePoolsCreatePending (1.40s)
	testing.go:164: INFO: connected address=@0022c id=1 attempt=0
=== RUN   TestStoragePoolsCreatePending_OtherPool
--- PASS: TestStoragePoolsCreatePending_OtherPool (1.49s)
	testing.go:164: INFO: connected address=@0022e id=1 attempt=0
=== RUN   TestStoragePoolsCreatePending_AlreadyDefined
--- PASS: TestStoragePoolsCreatePending_AlreadyDefined (1.16s)
	testing.go:164: INFO: connected address=@0022f id=1 attempt=0
=== RUN   TestStoragePoolsCreatePending_NonExistingNode
--- PASS: TestStoragePoolsCreatePending_NonExistingNode (1.14s)
	testing.go:164: INFO: connected address=@00230 id=1 attempt=0
=== RUN   TestStoragePoolVolume_Ceph
--- PASS: TestStoragePoolVolume_Ceph (1.50s)
	testing.go:164: INFO: connected address=@00231 id=1 attempt=0
=== RUN   TestStorageVolumeNodeAddresses
--- PASS: TestStorageVolumeNodeAddresses (1.15s)
	testing.go:164: INFO: connected address=@00232 id=1 attempt=0
PASS
ok  	github.com/lxc/lxd/lxd/db	(cached)
=== RUN   TestEnsureSchema_NoClustered
--- PASS: TestEnsureSchema_NoClustered (0.01s)
=== RUN   TestEnsureSchema_ClusterNotUpgradable
=== RUN   TestEnsureSchema_ClusterNotUpgradable/a_node's_schema_version_is_behind
=== RUN   TestEnsureSchema_ClusterNotUpgradable/a_node's_number_of_API_extensions_is_behind
=== RUN   TestEnsureSchema_ClusterNotUpgradable/this_node's_schema_is_behind
=== RUN   TestEnsureSchema_ClusterNotUpgradable/this_node's_number_of_API_extensions_is_behind
=== RUN   TestEnsureSchema_ClusterNotUpgradable/inconsistent_schema_version_and_API_extensions_number
--- PASS: TestEnsureSchema_ClusterNotUpgradable (0.04s)
    --- PASS: TestEnsureSchema_ClusterNotUpgradable/a_node's_schema_version_is_behind (0.01s)
    --- PASS: TestEnsureSchema_ClusterNotUpgradable/a_node's_number_of_API_extensions_is_behind (0.01s)
    --- PASS: TestEnsureSchema_ClusterNotUpgradable/this_node's_schema_is_behind (0.01s)
    --- PASS: TestEnsureSchema_ClusterNotUpgradable/this_node's_number_of_API_extensions_is_behind (0.01s)
    --- PASS: TestEnsureSchema_ClusterNotUpgradable/inconsistent_schema_version_and_API_extensions_number (0.01s)
=== RUN   TestEnsureSchema_UpdateNodeVersion
=== RUN   TestEnsureSchema_UpdateNodeVersion/true
=== RUN   TestEnsureSchema_UpdateNodeVersion/true#01
--- PASS: TestEnsureSchema_UpdateNodeVersion (0.02s)
    --- PASS: TestEnsureSchema_UpdateNodeVersion/true (0.01s)
    --- PASS: TestEnsureSchema_UpdateNodeVersion/true#01 (0.01s)
=== RUN   TestUpdateFromV0
--- PASS: TestUpdateFromV0 (0.00s)
=== RUN   TestUpdateFromV1_Certificates
--- PASS: TestUpdateFromV1_Certificates (0.00s)
=== RUN   TestUpdateFromV1_Config
--- PASS: TestUpdateFromV1_Config (0.00s)
=== RUN   TestUpdateFromV1_Containers
--- PASS: TestUpdateFromV1_Containers (0.00s)
=== RUN   TestUpdateFromV1_Network
--- PASS: TestUpdateFromV1_Network (0.01s)
=== RUN   TestUpdateFromV1_ConfigTables
--- PASS: TestUpdateFromV1_ConfigTables (0.01s)
=== RUN   TestUpdateFromV2
--- PASS: TestUpdateFromV2 (0.00s)
=== RUN   TestUpdateFromV3
--- PASS: TestUpdateFromV3 (0.00s)
=== RUN   TestUpdateFromV5
--- PASS: TestUpdateFromV5 (0.00s)
=== RUN   TestUpdateFromV6
--- PASS: TestUpdateFromV6 (0.00s)
=== RUN   TestUpdateFromV9
--- PASS: TestUpdateFromV9 (0.01s)
=== RUN   TestUpdateFromV11
--- PASS: TestUpdateFromV11 (0.04s)
=== RUN   TestUpdateFromV14
--- PASS: TestUpdateFromV14 (0.09s)
=== RUN   TestUpdateFromV15
--- PASS: TestUpdateFromV15 (0.09s)
PASS
ok  	github.com/lxc/lxd/lxd/db/cluster	(cached)
=== RUN   TestOpen
--- PASS: TestOpen (0.00s)
=== RUN   TestEnsureSchema
--- PASS: TestEnsureSchema (0.14s)
=== RUN   TestUpdateFromV36_RaftNodes
--- PASS: TestUpdateFromV36_RaftNodes (0.04s)
=== RUN   TestUpdateFromV36_DropTables
--- PASS: TestUpdateFromV36_DropTables (0.05s)
=== RUN   TestUpdateFromV37_CopyCoreHTTPSAddress
--- PASS: TestUpdateFromV37_CopyCoreHTTPSAddress (0.05s)
=== RUN   TestUpdateFromV37_NotClustered
--- PASS: TestUpdateFromV37_NotClustered (0.03s)
PASS
ok  	github.com/lxc/lxd/lxd/db/node	(cached)
=== RUN   TestSelectConfig
--- PASS: TestSelectConfig (0.00s)
=== RUN   TestSelectConfig_WithFilters
--- PASS: TestSelectConfig_WithFilters (0.00s)
=== RUN   TestUpdateConfig_NewKeys
--- PASS: TestUpdateConfig_NewKeys (0.00s)
=== RUN   TestDeleteConfig_Delete
--- PASS: TestDeleteConfig_Delete (0.00s)
=== RUN   TestCount
=== RUN   TestCount/0
=== RUN   TestCount/1
=== RUN   TestCount/2
--- PASS: TestCount (0.00s)
    --- PASS: TestCount/0 (0.00s)
    --- PASS: TestCount/1 (0.00s)
    --- PASS: TestCount/2 (0.00s)
=== RUN   TestCountAll
--- PASS: TestCountAll (0.00s)
=== RUN   TestDump
--- PASS: TestDump (0.00s)
=== RUN   TestDumpTablePatches
--- PASS: TestDumpTablePatches (0.00s)
=== RUN   TestDumpTableConfig
--- PASS: TestDumpTableConfig (0.00s)
=== RUN   TestDumpTableStoragePoolsConfig
--- PASS: TestDumpTableStoragePoolsConfig (0.00s)
=== RUN   TestDumpParseSchema
=== RUN   TestDumpParseSchema/local
=== RUN   TestDumpParseSchema/global
--- PASS: TestDumpParseSchema (0.00s)
    --- PASS: TestDumpParseSchema/local (0.00s)
    --- PASS: TestDumpParseSchema/global (0.00s)
=== RUN   TestSelectObjects_Error
=== RUN   TestSelectObjects_Error/SELECT_id,_name_FROM_test
--- PASS: TestSelectObjects_Error (0.00s)
    --- PASS: TestSelectObjects_Error/SELECT_id,_name_FROM_test (0.00s)
=== RUN   TestSelectObjects
--- PASS: TestSelectObjects (0.00s)
=== RUN   TestUpsertObject_Error
=== RUN   TestUpsertObject_Error/columns_length_is_zero
=== RUN   TestUpsertObject_Error/columns_length_does_not_match_values_length
--- PASS: TestUpsertObject_Error (0.00s)
    --- PASS: TestUpsertObject_Error/columns_length_is_zero (0.00s)
    --- PASS: TestUpsertObject_Error/columns_length_does_not_match_values_length (0.00s)
=== RUN   TestUpsertObject_Insert
--- PASS: TestUpsertObject_Insert (0.00s)
=== RUN   TestUpsertObject_Update
--- PASS: TestUpsertObject_Update (0.00s)
=== RUN   TestDeleteObject_Error
--- PASS: TestDeleteObject_Error (0.00s)
=== RUN   TestDeleteObject_Deleted
--- PASS: TestDeleteObject_Deleted (0.00s)
=== RUN   TestDeleteObject_NotDeleted
--- PASS: TestDeleteObject_NotDeleted (0.00s)
=== RUN   TestSelectURIs
--- PASS: TestSelectURIs (0.00s)
=== RUN   TestStrings_Error
=== RUN   TestStrings_Error/garbage
=== RUN   TestStrings_Error/SELECT_id,_name_FROM_test
--- PASS: TestStrings_Error (0.00s)
    --- PASS: TestStrings_Error/garbage (0.00s)
    --- PASS: TestStrings_Error/SELECT_id,_name_FROM_test (0.00s)
=== RUN   TestStrings
--- PASS: TestStrings (0.00s)
=== RUN   TestIntegers_Error
=== RUN   TestIntegers_Error/garbage
=== RUN   TestIntegers_Error/SELECT_id,_name_FROM_test
--- PASS: TestIntegers_Error (0.00s)
    --- PASS: TestIntegers_Error/garbage (0.00s)
    --- PASS: TestIntegers_Error/SELECT_id,_name_FROM_test (0.00s)
=== RUN   TestIntegers
--- PASS: TestIntegers (0.00s)
=== RUN   TestInsertStrings
--- PASS: TestInsertStrings (0.00s)
=== RUN   TestTransaction_BeginError
--- PASS: TestTransaction_BeginError (0.00s)
=== RUN   TestTransaction_FunctionError
--- PASS: TestTransaction_FunctionError (0.00s)
PASS
ok  	github.com/lxc/lxd/lxd/db/query	(cached)
=== RUN   TestNewFromMap
--- PASS: TestNewFromMap (0.00s)
=== RUN   TestNewFromMap_MissingVersions
--- PASS: TestNewFromMap_MissingVersions (0.00s)
=== RUN   TestSchemaEnsure_VersionMoreRecentThanExpected
--- PASS: TestSchemaEnsure_VersionMoreRecentThanExpected (0.00s)
=== RUN   TestSchemaEnsure_FreshStatementError
--- PASS: TestSchemaEnsure_FreshStatementError (0.00s)
=== RUN   TestSchemaEnsure_MissingVersion
--- PASS: TestSchemaEnsure_MissingVersion (0.00s)
=== RUN   TestSchemaEnsure_ZeroUpdates
--- PASS: TestSchemaEnsure_ZeroUpdates (0.00s)
=== RUN   TestSchemaEnsure_ApplyAllUpdates
--- PASS: TestSchemaEnsure_ApplyAllUpdates (0.00s)
=== RUN   TestSchemaEnsure_ApplyAfterInitialDumpCreation
--- PASS: TestSchemaEnsure_ApplyAfterInitialDumpCreation (0.00s)
=== RUN   TestSchemaEnsure_OnlyApplyMissing
--- PASS: TestSchemaEnsure_OnlyApplyMissing (0.00s)
=== RUN   TestSchemaEnsure_FailingUpdate
--- PASS: TestSchemaEnsure_FailingUpdate (0.00s)
=== RUN   TestSchemaEnsure_FailingHook
--- PASS: TestSchemaEnsure_FailingHook (0.00s)
=== RUN   TestSchemaEnsure_CheckGracefulAbort
--- PASS: TestSchemaEnsure_CheckGracefulAbort (0.00s)
=== RUN   TestSchemaDump
--- PASS: TestSchemaDump (0.00s)
=== RUN   TestSchemaDump_MissingUpdatees
--- PASS: TestSchemaDump_MissingUpdatees (0.00s)
=== RUN   TestSchema_Trim
--- PASS: TestSchema_Trim (0.00s)
=== RUN   TestSchema_ExeciseUpdate
--- PASS: TestSchema_ExeciseUpdate (0.00s)
=== RUN   TestSchema_File_NotExists
--- PASS: TestSchema_File_NotExists (0.00s)
=== RUN   TestSchema_File_Garbage
--- PASS: TestSchema_File_Garbage (0.00s)
=== RUN   TestSchema_File
--- PASS: TestSchema_File (0.00s)
=== RUN   TestSchema_File_Hook
--- PASS: TestSchema_File_Hook (0.00s)
=== RUN   TestDotGo
--- PASS: TestDotGo (0.00s)
PASS
ok  	github.com/lxc/lxd/lxd/db/schema	(cached)
?   	github.com/lxc/lxd/lxd/device	[no test files]
=== RUN   TestSortableDevices
--- PASS: TestSortableDevices (0.00s)
PASS
ok  	github.com/lxc/lxd/lxd/device/config	(cached)
?   	github.com/lxc/lxd/lxd/dnsmasq	[no test files]
=== RUN   TestEndpoints_ClusterCreateTCPSocket
--- PASS: TestEndpoints_ClusterCreateTCPSocket (0.30s)
	testing.go:36: 01:10:26.383 info Starting cluster handler:
	testing.go:36: 01:10:26.383 info  - binding cluster socket socket=127.0.0.1:54321
	testing.go:36: 01:10:26.383 info Starting /dev/lxd handler:
	testing.go:36: 01:10:26.383 info  - binding devlxd socket socket=/tmp/lxd-endpoints-test-371203462/devlxd/sock
	testing.go:36: 01:10:26.383 info REST API daemon:
	testing.go:36: 01:10:26.383 info  - binding Unix socket socket=/tmp/lxd-endpoints-test-371203462/unix.socket
	testing.go:36: 01:10:26.383 info  - binding TCP socket socket=127.0.0.1:12345
	testing.go:36: 01:10:26.679 info Stopping REST API handler:
	testing.go:36: 01:10:26.679 info  - closing socket socket=127.0.0.1:12345
	testing.go:36: 01:10:26.680 info  - closing socket socket=/tmp/lxd-endpoints-test-371203462/unix.socket
	testing.go:36: 01:10:26.680 info Stopping cluster handler:
	testing.go:36: 01:10:26.680 info  - closing socket socket=127.0.0.1:54321
	testing.go:36: 01:10:26.680 info Stopping /dev/lxd handler:
	testing.go:36: 01:10:26.680 info  - closing socket socket=/tmp/lxd-endpoints-test-371203462/devlxd/sock
=== RUN   TestEndpoints_ClusterUpdateAddressIsCovered
--- PASS: TestEndpoints_ClusterUpdateAddressIsCovered (0.15s)
	testing.go:36: 01:10:26.683 info Starting /dev/lxd handler:
	testing.go:36: 01:10:26.683 info  - binding devlxd socket socket=/tmp/lxd-endpoints-test-835321133/devlxd/sock
	testing.go:36: 01:10:26.683 info REST API daemon:
	testing.go:36: 01:10:26.683 info  - binding Unix socket socket=/tmp/lxd-endpoints-test-835321133/unix.socket
	testing.go:36: 01:10:26.683 info  - binding TCP socket socket=[::]:12345
	testing.go:36: 01:10:26.683 info Update cluster address
	testing.go:36: 01:10:26.823 info Stopping REST API handler:
	testing.go:36: 01:10:26.823 info  - closing socket socket=[::]:12345
	testing.go:36: 01:10:26.823 info  - closing socket socket=/tmp/lxd-endpoints-test-835321133/unix.socket
	testing.go:36: 01:10:26.823 info Stopping /dev/lxd handler:
	testing.go:36: 01:10:26.823 info  - closing socket socket=/tmp/lxd-endpoints-test-835321133/devlxd/sock
=== RUN   TestEndpoints_DevLxdCreateUnixSocket
--- PASS: TestEndpoints_DevLxdCreateUnixSocket (0.01s)
	testing.go:36: 01:10:26.831 info Starting /dev/lxd handler:
	testing.go:36: 01:10:26.831 info  - binding devlxd socket socket=/tmp/lxd-endpoints-test-073644200/devlxd/sock
	testing.go:36: 01:10:26.831 info REST API daemon:
	testing.go:36: 01:10:26.831 info  - binding Unix socket socket=/tmp/lxd-endpoints-test-073644200/unix.socket
	testing.go:36: 01:10:26.832 info Stopping REST API handler:
	testing.go:36: 01:10:26.832 info  - closing socket socket=/tmp/lxd-endpoints-test-073644200/unix.socket
	testing.go:36: 01:10:26.832 info Stopping /dev/lxd handler:
	testing.go:36: 01:10:26.832 info  - closing socket socket=/tmp/lxd-endpoints-test-073644200/devlxd/sock
=== RUN   TestEndpoints_LocalCreateUnixSocket
--- PASS: TestEndpoints_LocalCreateUnixSocket (0.01s)
	testing.go:36: 01:10:26.844 info Starting /dev/lxd handler:
	testing.go:36: 01:10:26.844 info  - binding devlxd socket socket=/tmp/lxd-endpoints-test-798814695/devlxd/sock
	testing.go:36: 01:10:26.844 info REST API daemon:
	testing.go:36: 01:10:26.844 info  - binding Unix socket socket=/tmp/lxd-endpoints-test-798814695/unix.socket
	testing.go:36: 01:10:26.844 info Stopping REST API handler:
	testing.go:36: 01:10:26.844 info  - closing socket socket=/tmp/lxd-endpoints-test-798814695/unix.socket
	testing.go:36: 01:10:26.844 info Stopping /dev/lxd handler:
	testing.go:36: 01:10:26.844 info  - closing socket socket=/tmp/lxd-endpoints-test-798814695/devlxd/sock
=== RUN   TestEndpoints_LocalSocketBasedActivation
--- PASS: TestEndpoints_LocalSocketBasedActivation (0.02s)
	testing.go:36: 01:10:26.855 info Starting /dev/lxd handler:
	testing.go:36: 01:10:26.855 info  - binding devlxd socket socket=/tmp/lxd-endpoints-test-468816561/devlxd/sock
	testing.go:36: 01:10:26.855 info REST API daemon:
	testing.go:36: 01:10:26.855 info  - binding Unix socket socket=/tmp/lxd-endpoints-test852702746 inherited=true
	testing.go:36: 01:10:26.856 info Stopping REST API handler:
	testing.go:36: 01:10:26.856 info  - closing socket socket=/tmp/lxd-endpoints-test852702746
	testing.go:36: 01:10:26.856 info Stopping /dev/lxd handler:
	testing.go:36: 01:10:26.856 info  - closing socket socket=/tmp/lxd-endpoints-test-468816561/devlxd/sock
=== RUN   TestEndpoints_LocalUnknownUnixGroup
--- PASS: TestEndpoints_LocalUnknownUnixGroup (0.01s)
=== RUN   TestEndpoints_LocalAlreadyRunning
--- PASS: TestEndpoints_LocalAlreadyRunning (0.01s)
	testing.go:36: 01:10:26.887 info Starting /dev/lxd handler:
	testing.go:36: 01:10:26.887 info  - binding devlxd socket socket=/tmp/lxd-endpoints-test-285476875/devlxd/sock
	testing.go:36: 01:10:26.887 info REST API daemon:
	testing.go:36: 01:10:26.887 info  - binding Unix socket socket=/tmp/lxd-endpoints-test-285476875/unix.socket
	testing.go:36: 01:10:26.889 dbug Connecting to a local LXD over a Unix socket
	testing.go:36: 01:10:26.889 dbug Sending request to LXD method=GET url=http://unix.socket/1.0 etag=
	testing.go:36: 01:10:26.889 dbug Got response struct from LXD
	testing.go:36: 01:10:26.890 dbug 
			{
				"config": null,
				"api_extensions": null,
				"api_status": "",
				"api_version": "",
				"auth": "",
				"public": false,
				"auth_methods": null,
				"environment": {
					"addresses": null,
					"architectures": null,
					"certificate": "",
					"certificate_fingerprint": "",
					"driver": "",
					"driver_version": "",
					"kernel": "",
					"kernel_architecture": "",
					"kernel_features": null,
					"kernel_version": "",
					"lxc_features": null,
					"project": "",
					"server": "",
					"server_clustered": false,
					"server_name": "",
					"server_pid": 0,
					"server_version": "",
					"storage": "",
					"storage_version": ""
				}
			}
	testing.go:36: 01:10:26.894 info Stopping REST API handler:
	testing.go:36: 01:10:26.894 info  - closing socket socket=/tmp/lxd-endpoints-test-285476875/unix.socket
	testing.go:36: 01:10:26.894 info Stopping /dev/lxd handler:
	testing.go:36: 01:10:26.894 info  - closing socket socket=/tmp/lxd-endpoints-test-285476875/devlxd/sock
=== RUN   TestEndpoints_NetworkCreateTCPSocket
--- PASS: TestEndpoints_NetworkCreateTCPSocket (0.20s)
	testing.go:36: 01:10:26.900 info Starting /dev/lxd handler:
	testing.go:36: 01:10:26.900 info  - binding devlxd socket socket=/tmp/lxd-endpoints-test-148119669/devlxd/sock
	testing.go:36: 01:10:26.900 info REST API daemon:
	testing.go:36: 01:10:26.900 info  - binding Unix socket socket=/tmp/lxd-endpoints-test-148119669/unix.socket
	testing.go:36: 01:10:26.900 info  - binding TCP socket socket=127.0.0.1:38009
	testing.go:36: 01:10:27.096 info Stopping REST API handler:
	testing.go:36: 01:10:27.096 info  - closing socket socket=127.0.0.1:38009
	testing.go:36: 01:10:27.096 info  - closing socket socket=/tmp/lxd-endpoints-test-148119669/unix.socket
	testing.go:36: 01:10:27.096 info Stopping /dev/lxd handler:
	testing.go:36: 01:10:27.096 info  - closing socket socket=/tmp/lxd-endpoints-test-148119669/devlxd/sock
=== RUN   TestEndpoints_NetworkUpdateCert
--- PASS: TestEndpoints_NetworkUpdateCert (0.21s)
	testing.go:36: 01:10:27.100 info Starting /dev/lxd handler:
	testing.go:36: 01:10:27.100 info  - binding devlxd socket socket=/tmp/lxd-endpoints-test-603286864/devlxd/sock
	testing.go:36: 01:10:27.100 info REST API daemon:
	testing.go:36: 01:10:27.100 info  - binding Unix socket socket=/tmp/lxd-endpoints-test-603286864/unix.socket
	testing.go:36: 01:10:27.100 info  - binding TCP socket socket=127.0.0.1:33299
	testing.go:36: 01:10:27.295 info Stopping REST API handler:
	testing.go:36: 01:10:27.295 info  - closing socket socket=127.0.0.1:33299
	testing.go:36: 01:10:27.295 info  - closing socket socket=/tmp/lxd-endpoints-test-603286864/unix.socket
	testing.go:36: 01:10:27.295 info Stopping /dev/lxd handler:
	testing.go:36: 01:10:27.295 info  - closing socket socket=/tmp/lxd-endpoints-test-603286864/devlxd/sock
=== RUN   TestEndpoints_NetworkSocketBasedActivation
--- PASS: TestEndpoints_NetworkSocketBasedActivation (0.14s)
	testing.go:36: 01:10:27.310 info Starting /dev/lxd handler:
	testing.go:36: 01:10:27.310 info  - binding devlxd socket socket=/tmp/lxd-endpoints-test-087063151/devlxd/sock
	testing.go:36: 01:10:27.310 info REST API daemon:
	testing.go:36: 01:10:27.310 info  - binding TCP socket socket=127.0.0.1:45417 inherited=true
	testing.go:36: 01:10:27.440 info Stopping REST API handler:
	testing.go:36: 01:10:27.440 info  - closing socket socket=127.0.0.1:45417
	testing.go:36: 01:10:27.440 info Stopping /dev/lxd handler:
	testing.go:36: 01:10:27.440 info  - closing socket socket=/tmp/lxd-endpoints-test-087063151/devlxd/sock
=== RUN   TestEndpoints_NetworkUpdateAddress
--- PASS: TestEndpoints_NetworkUpdateAddress (0.14s)
	testing.go:36: 01:10:27.447 info Starting /dev/lxd handler:
	testing.go:36: 01:10:27.447 info  - binding devlxd socket socket=/tmp/lxd-endpoints-test-061856002/devlxd/sock
	testing.go:36: 01:10:27.447 info REST API daemon:
	testing.go:36: 01:10:27.447 info  - binding Unix socket socket=/tmp/lxd-endpoints-test-061856002/unix.socket
	testing.go:36: 01:10:27.447 info  - binding TCP socket socket=127.0.0.1:33335
	testing.go:36: 01:10:27.447 info Update network address
	testing.go:36: 01:10:27.447 info  - closing socket socket=127.0.0.1:33335
	testing.go:36: 01:10:27.447 info  - binding TCP socket socket=127.0.0.1:46523
	testing.go:36: 01:10:27.576 info Stopping REST API handler:
	testing.go:36: 01:10:27.576 info  - closing socket socket=127.0.0.1:46523
	testing.go:36: 01:10:27.576 info  - closing socket socket=/tmp/lxd-endpoints-test-061856002/unix.socket
	testing.go:36: 01:10:27.576 info Stopping /dev/lxd handler:
	testing.go:36: 01:10:27.576 info  - closing socket socket=/tmp/lxd-endpoints-test-061856002/devlxd/sock
PASS
ok  	github.com/lxc/lxd/lxd/endpoints	(cached)
?   	github.com/lxc/lxd/lxd/events	[no test files]
?   	github.com/lxc/lxd/lxd/instance/instancetype	[no test files]
?   	github.com/lxc/lxd/lxd/instance/operationlock	[no test files]
?   	github.com/lxc/lxd/lxd/iptables	[no test files]
?   	github.com/lxc/lxd/lxd/maas	[no test files]
?   	github.com/lxc/lxd/lxd/migration	[no test files]
=== RUN   TestConfigLoad_Initial
--- PASS: TestConfigLoad_Initial (0.22s)
=== RUN   TestConfigLoad_IgnoreInvalidKeys
--- PASS: TestConfigLoad_IgnoreInvalidKeys (0.60s)
=== RUN   TestConfigLoad_Triggers
--- PASS: TestConfigLoad_Triggers (0.29s)
=== RUN   TestConfig_ReplaceDeleteValues
--- PASS: TestConfig_ReplaceDeleteValues (0.48s)
=== RUN   TestConfig_PatchKeepsValues
--- PASS: TestConfig_PatchKeepsValues (0.53s)
=== RUN   TestHTTPSAddress
--- PASS: TestHTTPSAddress (1.21s)
=== RUN   TestClusterAddress
--- PASS: TestClusterAddress (0.32s)
=== RUN   TestDetermineRaftNode
=== RUN   TestDetermineRaftNode/no_cluster.https_address_set
=== RUN   TestDetermineRaftNode/cluster.https_address_set_and_and_no_raft_nodes_rows
=== RUN   TestDetermineRaftNode/cluster.https_address_set_and_matching_the_one_and_only_raft_nodes_row
=== RUN   TestDetermineRaftNode/cluster.https_address_set_and_matching_one_of_many_raft_nodes_rows
=== RUN   TestDetermineRaftNode/core.cluster_set_and_no_matching_raft_nodes_row
--- PASS: TestDetermineRaftNode (2.37s)
    --- PASS: TestDetermineRaftNode/no_cluster.https_address_set (0.36s)
    --- PASS: TestDetermineRaftNode/cluster.https_address_set_and_and_no_raft_nodes_rows (0.46s)
    --- PASS: TestDetermineRaftNode/cluster.https_address_set_and_matching_the_one_and_only_raft_nodes_row (0.51s)
    --- PASS: TestDetermineRaftNode/cluster.https_address_set_and_matching_one_of_many_raft_nodes_rows (0.50s)
    --- PASS: TestDetermineRaftNode/core.cluster_set_and_no_matching_raft_nodes_row (0.54s)
PASS
ok  	github.com/lxc/lxd/lxd/node	(cached)
?   	github.com/lxc/lxd/lxd/operations	[no test files]
?   	github.com/lxc/lxd/lxd/project	[no test files]
?   	github.com/lxc/lxd/lxd/rbac	[no test files]
?   	github.com/lxc/lxd/lxd/resources	[no test files]
?   	github.com/lxc/lxd/lxd/response	[no test files]
?   	github.com/lxc/lxd/lxd/rsync	[no test files]
=== RUN   TestMountFlagsToOpts
--- PASS: TestMountFlagsToOpts (0.00s)
PASS
ok  	github.com/lxc/lxd/lxd/seccomp	(cached)
?   	github.com/lxc/lxd/lxd/state	[no test files]
?   	github.com/lxc/lxd/lxd/storage	[no test files]
=== RUN   TestGetVolumeMountPath
--- PASS: TestGetVolumeMountPath (0.00s)
PASS
ok  	github.com/lxc/lxd/lxd/storage/drivers	(cached)
?   	github.com/lxc/lxd/lxd/storage/memorypipe	[no test files]
?   	github.com/lxc/lxd/lxd/storage/quota	[no test files]
?   	github.com/lxc/lxd/lxd/sys	[no test files]
=== RUN   TestGroup_Add
--- PASS: TestGroup_Add (0.00s)
=== RUN   TestGroup_StopUngracefully
--- PASS: TestGroup_StopUngracefully (0.00s)
=== RUN   TestTask_ExecuteImmediately
--- PASS: TestTask_ExecuteImmediately (0.00s)
=== RUN   TestTask_ExecutePeriodically
--- PASS: TestTask_ExecutePeriodically (0.25s)
=== RUN   TestTask_Reset
--- PASS: TestTask_Reset (0.25s)
=== RUN   TestTask_ZeroInterval
--- PASS: TestTask_ZeroInterval (0.10s)
=== RUN   TestTask_ScheduleError
--- PASS: TestTask_ScheduleError (0.10s)
=== RUN   TestTask_ScheduleTemporaryError
--- PASS: TestTask_ScheduleTemporaryError (0.00s)
=== RUN   TestTask_SkipFirst
--- PASS: TestTask_SkipFirst (0.40s)
PASS
ok  	github.com/lxc/lxd/lxd/task	(cached)
?   	github.com/lxc/lxd/lxd/template	[no test files]
?   	github.com/lxc/lxd/lxd/ucred	[no test files]
=== RUN   Test_CompareConfigsMismatch
=== RUN   Test_CompareConfigsMismatch/different_values_for_keys:_foo
=== RUN   Test_CompareConfigsMismatch/different_values_for_keys:_egg,_foo
--- PASS: Test_CompareConfigsMismatch (0.00s)
    --- PASS: Test_CompareConfigsMismatch/different_values_for_keys:_foo (0.00s)
    --- PASS: Test_CompareConfigsMismatch/different_values_for_keys:_egg,_foo (0.00s)
=== RUN   Test_CompareConfigs
--- PASS: Test_CompareConfigs (0.00s)
=== RUN   TestInMemoryNetwork
--- PASS: TestInMemoryNetwork (0.00s)
=== RUN   TestCanonicalNetworkAddress
=== RUN   TestCanonicalNetworkAddress/127.0.0.1
=== RUN   TestCanonicalNetworkAddress/foo.bar
=== RUN   TestCanonicalNetworkAddress/192.168.1.1:443
=== RUN   TestCanonicalNetworkAddress/f921:7358:4510:3fce:ac2e:844:2a35:54e
--- PASS: TestCanonicalNetworkAddress (0.00s)
    --- PASS: TestCanonicalNetworkAddress/127.0.0.1 (0.00s)
    --- PASS: TestCanonicalNetworkAddress/foo.bar (0.00s)
    --- PASS: TestCanonicalNetworkAddress/192.168.1.1:443 (0.00s)
    --- PASS: TestCanonicalNetworkAddress/f921:7358:4510:3fce:ac2e:844:2a35:54e (0.00s)
=== RUN   TestIsAddressCovered
=== RUN   TestIsAddressCovered/127.0.0.1:8443-127.0.0.1:8443
=== RUN   TestIsAddressCovered/garbage-127.0.0.1:8443
=== RUN   TestIsAddressCovered/127.0.0.1:8444-garbage
=== RUN   TestIsAddressCovered/127.0.0.1:8444-127.0.0.1:8443
=== RUN   TestIsAddressCovered/127.0.0.1:8443-0.0.0.0:8443
=== RUN   TestIsAddressCovered/[::1]:8443-0.0.0.0:8443
=== RUN   TestIsAddressCovered/:8443-0.0.0.0:8443
=== RUN   TestIsAddressCovered/127.0.0.1:8443-[::]:8443
=== RUN   TestIsAddressCovered/[::1]:8443-[::]:8443
=== RUN   TestIsAddressCovered/[::1]:8443-:8443
=== RUN   TestIsAddressCovered/:8443-[::]:8443
=== RUN   TestIsAddressCovered/0.0.0.0:8443-[::]:8443
--- PASS: TestIsAddressCovered (0.00s)
    --- PASS: TestIsAddressCovered/127.0.0.1:8443-127.0.0.1:8443 (0.00s)
    --- PASS: TestIsAddressCovered/garbage-127.0.0.1:8443 (0.00s)
    --- PASS: TestIsAddressCovered/127.0.0.1:8444-garbage (0.00s)
    --- PASS: TestIsAddressCovered/127.0.0.1:8444-127.0.0.1:8443 (0.00s)
    --- PASS: TestIsAddressCovered/127.0.0.1:8443-0.0.0.0:8443 (0.00s)
    --- PASS: TestIsAddressCovered/[::1]:8443-0.0.0.0:8443 (0.00s)
    --- PASS: TestIsAddressCovered/:8443-0.0.0.0:8443 (0.00s)
    --- PASS: TestIsAddressCovered/127.0.0.1:8443-[::]:8443 (0.00s)
    --- PASS: TestIsAddressCovered/[::1]:8443-[::]:8443 (0.00s)
    --- PASS: TestIsAddressCovered/[::1]:8443-:8443 (0.00s)
    --- PASS: TestIsAddressCovered/:8443-[::]:8443 (0.00s)
    --- PASS: TestIsAddressCovered/0.0.0.0:8443-[::]:8443 (0.00s)
=== RUN   TestListenImplicitIPv6Wildcard
--- PASS: TestListenImplicitIPv6Wildcard (0.00s)
PASS
ok  	github.com/lxc/lxd/lxd/util	(cached)
?   	github.com/lxc/lxd/lxd/vsock	[no test files]
?   	github.com/lxc/lxd/lxd-agent	[no test files]
?   	github.com/lxc/lxd/lxd-benchmark	[no test files]
?   	github.com/lxc/lxd/lxd-benchmark/benchmark	[no test files]
?   	github.com/lxc/lxd/lxd-p2c	[no test files]
=== RUN   TestGetAllXattr
--- PASS: TestGetAllXattr (0.01s)
=== RUN   TestURLEncode
--- PASS: TestURLEncode (0.00s)
=== RUN   TestFileCopy
--- PASS: TestFileCopy (0.00s)
=== RUN   TestDirCopy
--- PASS: TestDirCopy (0.00s)
=== RUN   TestReaderToChannel
--- PASS: TestReaderToChannel (0.01s)
=== RUN   TestGetSnapshotExpiry
--- PASS: TestGetSnapshotExpiry (0.00s)
=== RUN   TestKeyPairAndCA
--- PASS: TestKeyPairAndCA (0.04s)
=== RUN   TestGenerateMemCert
--- PASS: TestGenerateMemCert (0.03s)
PASS
ok  	github.com/lxc/lxd/shared	(cached)
?   	github.com/lxc/lxd/shared/api	[no test files]
?   	github.com/lxc/lxd/shared/cancel	[no test files]
?   	github.com/lxc/lxd/shared/cmd	[no test files]
?   	github.com/lxc/lxd/shared/containerwriter	[no test files]
?   	github.com/lxc/lxd/shared/dnsutil	[no test files]
?   	github.com/lxc/lxd/shared/eagain	[no test files]
?   	github.com/lxc/lxd/shared/generate	[no test files]
=== RUN   TestPackages
--- PASS: TestPackages (0.05s)
=== RUN   TestParse
--- PASS: TestParse (0.00s)
PASS
ok  	github.com/lxc/lxd/shared/generate/db	(cached)
?   	github.com/lxc/lxd/shared/generate/file	[no test files]
=== RUN   TestParse
--- PASS: TestParse (0.00s)
PASS
ok  	github.com/lxc/lxd/shared/generate/lex	(cached)
?   	github.com/lxc/lxd/shared/i18n	[no test files]
=== RUN   TestIdmapSetAddSafe_split
--- PASS: TestIdmapSetAddSafe_split (0.00s)
=== RUN   TestIdmapSetAddSafe_lower
--- PASS: TestIdmapSetAddSafe_lower (0.00s)
=== RUN   TestIdmapSetAddSafe_upper
--- PASS: TestIdmapSetAddSafe_upper (0.00s)
=== RUN   TestIdmapSetIntersects
--- PASS: TestIdmapSetIntersects (0.00s)
PASS
ok  	github.com/lxc/lxd/shared/idmap	(cached)
?   	github.com/lxc/lxd/shared/ioprogress	[no test files]
?   	github.com/lxc/lxd/shared/log15	[no test files]
?   	github.com/lxc/lxd/shared/log15/stack	[no test files]
?   	github.com/lxc/lxd/shared/log15/term	[no test files]
?   	github.com/lxc/lxd/shared/logger	[no test files]
?   	github.com/lxc/lxd/shared/logging	[no test files]
?   	github.com/lxc/lxd/shared/netutils	[no test files]
=== RUN   TestReleaseTestSuite
=== RUN   TestReleaseTestSuite/TestGetLSBRelease
=== RUN   TestReleaseTestSuite/TestGetLSBReleaseInvalidLine
=== RUN   TestReleaseTestSuite/TestGetLSBReleaseNoQuotes
=== RUN   TestReleaseTestSuite/TestGetLSBReleaseSingleQuotes
=== RUN   TestReleaseTestSuite/TestGetLSBReleaseSkipCommentsEmpty
--- PASS: TestReleaseTestSuite (0.01s)
    --- PASS: TestReleaseTestSuite/TestGetLSBRelease (0.00s)
    --- PASS: TestReleaseTestSuite/TestGetLSBReleaseInvalidLine (0.00s)
    --- PASS: TestReleaseTestSuite/TestGetLSBReleaseNoQuotes (0.00s)
    --- PASS: TestReleaseTestSuite/TestGetLSBReleaseSingleQuotes (0.00s)
    --- PASS: TestReleaseTestSuite/TestGetLSBReleaseSkipCommentsEmpty (0.00s)
PASS
ok  	github.com/lxc/lxd/shared/osarch	(cached)
?   	github.com/lxc/lxd/shared/simplestreams	[no test files]
?   	github.com/lxc/lxd/shared/subtest	[no test files]
?   	github.com/lxc/lxd/shared/termios	[no test files]
?   	github.com/lxc/lxd/shared/units	[no test files]
=== RUN   TestVersionTestSuite
=== RUN   TestVersionTestSuite/TestCompareEqual
=== RUN   TestVersionTestSuite/TestCompareNewer
=== RUN   TestVersionTestSuite/TestCompareOlder
=== RUN   TestVersionTestSuite/TestNewVersion
=== RUN   TestVersionTestSuite/TestNewVersionInvalid
=== RUN   TestVersionTestSuite/TestNewVersionNoPatch
=== RUN   TestVersionTestSuite/TestParseDashes
=== RUN   TestVersionTestSuite/TestParseFail
=== RUN   TestVersionTestSuite/TestParseParentheses
=== RUN   TestVersionTestSuite/TestString
--- PASS: TestVersionTestSuite (0.00s)
    --- PASS: TestVersionTestSuite/TestCompareEqual (0.00s)
    --- PASS: TestVersionTestSuite/TestCompareNewer (0.00s)
    --- PASS: TestVersionTestSuite/TestCompareOlder (0.00s)
    --- PASS: TestVersionTestSuite/TestNewVersion (0.00s)
    --- PASS: TestVersionTestSuite/TestNewVersionInvalid (0.00s)
    --- PASS: TestVersionTestSuite/TestNewVersionNoPatch (0.00s)
    --- PASS: TestVersionTestSuite/TestParseDashes (0.00s)
    --- PASS: TestVersionTestSuite/TestParseFail (0.00s)
    --- PASS: TestVersionTestSuite/TestParseParentheses (0.00s)
    --- PASS: TestVersionTestSuite/TestString (0.00s)
PASS
ok  	github.com/lxc/lxd/shared/version	(cached)
?   	github.com/lxc/lxd/test/deps	[no test files]
?   	github.com/lxc/lxd/test/macaroon-identity	[no test files]
make: *** [Makefile:152: check] Error 1

@stgraber stgraber commented Nov 15, 2019

Thanks, I'm getting the same result here. We should probably fix that test so it's skipped when not run as root.
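
For what it's worth, a minimal sketch of such a skip guard, assuming the test in question is a plain Go test (the package and test names below are placeholders, not actual LXD code):

package shared_test

import (
	"os"
	"testing"
)

// TestRootOnlyFeature is a hypothetical root-only test; the guard is the
// relevant part: skip rather than fail when not running as root.
func TestRootOnlyFeature(t *testing.T) {
	if os.Geteuid() != 0 {
		t.Skip("must be run as root")
	}
	// ... privileged test body ...
}

With a guard like that, go test (and therefore make check) would report the test as skipped instead of failed on unprivileged runs.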

Anyway, your best bet is to do:

sudo -E -s
make check

Which will then run that test as root and should go fine (it does here anyway).

@DBaum1 DBaum1 commented Nov 15, 2019

@stgraber The result is the same when run as root.

@stgraber stgraber commented Nov 15, 2019

That's weird. Anyway, I wouldn't worry about that too much; what's more interesting for this change is the system tests.

Do those work for you if you do:

sudo -E -s
cd test
LXD_TMPFS=1 LXD_VERBOSE=1 ./main.sh

@DBaum1 DBaum1 commented Nov 16, 2019

That fails as well.

Results
(base) root@liopleurodon:~/go/src/github.com/lxc/lxd/test# LXD_TMPFS=1 LXD_VERBOSE=1 ./main.sh
+ export DEBUG=
+ [ -n 1 ]
+ DEBUG=--verbose
+ [ -n  ]
+ [ -z  ]
+ LXD_BACKEND=dir
+ LXD_NETNS=
+ LXD_ALT_CERT=
+ export SHELLCHECK_OPTS=-e SC2230
+ import_subdir_files includes
+ test includes
+ local file
+ . includes/check.sh
+ . includes/clustering.sh
+ . includes/external_auth.sh
+ . includes/lxc.sh
+ . includes/lxd.sh
+ . includes/net.sh
+ . includes/setup.sh
+ . includes/storage.sh
+ echo ==> Checking for dependencies
==> Checking for dependencies
+ check_dependencies lxd lxc curl dnsmasq jq git xgettext sqlite3 msgmerge msgfmt shuf setfacl uuidgen socat dig
+ local dep missing
+ missing=
+ which lxd
+ which lxc
+ which curl
+ which dnsmasq
+ which jq
+ which git
+ which xgettext
+ which sqlite3
+ which msgmerge
+ which msgfmt
+ which shuf
+ which setfacl
+ which uuidgen
+ which socat
+ which dig
+ [  ]
+ [ root != root ]
+ [ -n  ]
+ available_storage_backends
+ local backend backends storage_backends
+ sort
+ backends=dir
+ storage_backends=btrfs lvm zfs
+ [ -n  ]
+ which btrfs
+ backends=dir btrfs
+ which lvm
+ backends=dir btrfs lvm
+ which zfs
+ backends=dir btrfs lvm zfs
+ echo dir btrfs lvm zfs
+ echo ==> Available storage backends: dir btrfs lvm zfs
==> Available storage backends: dir btrfs lvm zfs
+ [ dir != random ]
+ storage_backend_available dir
+ local backends
+ available_storage_backends
+ local backend backends storage_backends
+ backends=dir
+ storage_backends=btrfs lvm zfs
+ [ -n  ]
+ which btrfs
+ backends=dir btrfs
+ which lvm
+ backends=dir btrfs lvm
+ which zfs
+ backends=dir btrfs lvm zfs
+ echo dir btrfs lvm zfs
+ backends=dir btrfs lvm zfs
+ [  btrfs lvm zfs != dir btrfs lvm zfs ]
+ echo ==> Using storage backend dir
==> Using storage backend dir
+ import_storage_backends
+ local backend
+ available_storage_backends
+ local backend backends storage_backends
+ backends=dir
+ storage_backends=btrfs lvm zfs
+ [ -n  ]
+ which btrfs
+ backends=dir btrfs
+ which lvm
+ backends=dir btrfs lvm
+ which zfs
+ backends=dir btrfs lvm zfs
+ echo dir btrfs lvm zfs
+ . backends/dir.sh
+ . backends/btrfs.sh
+ . backends/lvm.sh
+ . backends/zfs.sh
+ TEST_CURRENT=setup
+ TEST_RESULT=failure
+ trap cleanup EXIT HUP INT TERM
+ import_subdir_files suites
+ test suites
+ local file
+ . suites/backup.sh
+ . suites/basic.sh
+ . suites/clustering.sh
+ . suites/concurrent.sh
+ . suites/config.sh
+ . suites/console.sh
+ . suites/container_devices_disk.sh
+ . suites/container_devices_gpu.sh
+ . suites/container_devices_infiniband_physical.sh
+ . suites/container_devices_infiniband_sriov.sh
+ . suites/container_devices_nic_bridged.sh
+ . suites/container_devices_nic_bridged_filtering.sh
+ . suites/container_devices_nic_ipvlan.sh
+ . suites/container_devices_nic_macvlan.sh
+ . suites/container_devices_nic_p2p.sh
+ . suites/container_devices_nic_physical.sh
+ . suites/container_devices_nic_routed.sh
+ . suites/container_devices_nic_sriov.sh
+ . suites/container_devices_proxy.sh
+ . suites/container_devices_unix.sh
+ . suites/container_local_cross_pool_handling.sh
+ . suites/database_update.sh
+ . suites/deps.sh
+ . suites/devlxd.sh
+ . suites/exec.sh
+ . suites/external_auth.sh
+ . suites/fdleak.sh
+ . suites/filemanip.sh
+ . suites/fuidshift.sh
+ . suites/idmap.sh
+ . suites/image.sh
+ . suites/image_auto_update.sh
+ . suites/image_prefer_cached.sh
+ . suites/incremental_copy.sh
+ . suites/init_auto.sh
+ . suites/init_dump.sh
+ . suites/init_interactive.sh
+ . suites/init_preseed.sh
+ . suites/kernel_limits.sh
+ . suites/lxc-to-lxd.sh
+ . suites/migration.sh
+ . suites/network.sh
+ . suites/pki.sh
+ . suites/projects.sh
+ . suites/query.sh
+ . suites/remote.sh
+ . suites/resources.sh
+ . suites/security.sh
+ . suites/serverconfig.sh
+ . suites/snapshots.sh
+ . suites/sql.sh
+ . suites/static_analysis.sh
+ . suites/storage.sh
+ . suites/storage_driver_ceph.sh
+ . suites/storage_driver_cephfs.sh
+ . suites/storage_local_volume_handling.sh
+ . suites/storage_profiles.sh
+ . suites/storage_snapshots.sh
+ . suites/storage_volume_attach.sh
+ . suites/template.sh
+ pwd
+ mktemp -d -p /home/dinah/go/src/github.com/lxc/lxd/test tmp.XXX
+ TEST_DIR=/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc
+ chmod +x /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc
+ [ -n 1 ]
+ mount -t tmpfs tmpfs /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc -o mode=0751
+ mktemp -d -p /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc XXX
+ LXD_CONF=/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/6bR
+ export LXD_CONF
+ mktemp -d -p /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc XXX
+ LXD_DIR=/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf
+ export LXD_DIR
+ chmod +x /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf
+ spawn_lxd /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf true
+ set +x
==> Setting up directory backend in /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf
==> Spawning lxd in /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf
==> Spawned LXD (PID is 16352)
==> Confirming lxd is responsive
INFO[11-16|00:08:08] LXD 3.18 is starting in normal mode      path=/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf
INFO[11-16|00:08:08] Kernel uid/gid map: 
INFO[11-16|00:08:08]  - u 0 0 4294967295 
INFO[11-16|00:08:08]  - g 0 0 4294967295 
INFO[11-16|00:08:08] Configured LXD uid/gid map: 
INFO[11-16|00:08:08]  - u 0 1000000 65536 
INFO[11-16|00:08:08]  - g 0 1000000 65536 
WARN[11-16|00:08:08] Couldn't find the CGroup blkio.weight, I/O weight limits will be ignored. 
WARN[11-16|00:08:08] CGroup memory swap accounting is disabled, swap limits will be ignored. 
INFO[11-16|00:08:08] Kernel features: 
INFO[11-16|00:08:08]  - netnsid-based network retrieval: yes 
INFO[11-16|00:08:08]  - uevent injection: yes 
INFO[11-16|00:08:08]  - seccomp listener: yes 
INFO[11-16|00:08:08]  - seccomp listener continue syscalls: yes 
INFO[11-16|00:08:08]  - unprivileged file capabilities: yes 
INFO[11-16|00:08:08]  - shiftfs support: yes 
INFO[11-16|00:08:08] Initializing local database 
INFO[11-16|00:08:08] Starting /dev/lxd handler: 
INFO[11-16|00:08:08]  - binding devlxd socket                 socket=/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/devlxd/sock
INFO[11-16|00:08:08] REST API daemon: 
INFO[11-16|00:08:08]  - binding Unix socket                   socket=/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/unix.socket
INFO[11-16|00:08:08] Initializing global database 
INFO[11-16|00:08:08] Initializing storage pools 
INFO[11-16|00:08:08] Initializing networks 
INFO[11-16|00:08:08] Pruning leftover image files 
INFO[11-16|00:08:08] Done pruning leftover image files 
INFO[11-16|00:08:08] Loading daemon configuration 
INFO[11-16|00:08:08] Started seccomp handler                  path=/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/seccomp.socket
INFO[11-16|00:08:08] Pruning expired images 
INFO[11-16|00:08:08] Done pruning expired images 
INFO[11-16|00:08:08] Pruning expired container backups 
INFO[11-16|00:08:08] Done pruning expired container backups 
INFO[11-16|00:08:08] Expiring log files 
INFO[11-16|00:08:08] Done expiring log files 
INFO[11-16|00:08:08] Updating images 
INFO[11-16|00:08:08] Done updating images 
INFO[11-16|00:08:08] Updating instance types 
INFO[11-16|00:08:08] Done updating instance types 
==> Binding to network
+ eval /home/dinah/go/bin/lxc "config" "set" "core.https_address" "127.0.0.1:54153" --verbose
+ /home/dinah/go/bin/lxc config set core.https_address 127.0.0.1:54153 --verbose
If this is your first time running LXD on this machine, you should also run: lxd init
To start your first container, try: lxc launch ubuntu:18.04

INFO[11-16|00:08:09] Update network address 
INFO[11-16|00:08:09]  - binding TCP socket                    socket=127.0.0.1:54153
+ echo 127.0.0.1:54153
+ echo ==> Bound to 127.0.0.1:54153
==> Bound to 127.0.0.1:54153
+ break
+ echo ==> Setting trust password
==> Setting trust password
+ LXD_DIR=/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf lxc config set core.trust_password foo
+ LXC_LOCAL=1 lxc_remote config set core.trust_password foo
+ set +x
+ eval /home/dinah/go/bin/lxc "config" "set" "core.trust_password" "foo" --verbose
+ /home/dinah/go/bin/lxc config set core.trust_password foo --verbose
+ [ -n --verbose ]
+ set -x
+ [  =  ]
+ echo ==> Setting up networking
==> Setting up networking
+ LXD_DIR=/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf lxc profile device add default eth0 nic nictype=p2p name=eth0
+ LXC_LOCAL=1 lxc_remote profile device add default eth0 nic nictype=p2p name=eth0
+ set +x
+ eval /home/dinah/go/bin/lxc "profile" "device" "add" "default" "eth0" "nic" "nictype=p2p" "name=eth0" --verbose
+ /home/dinah/go/bin/lxc profile device add default eth0 nic nictype=p2p name=eth0 --verbose
Device eth0 added to default
+ [ true = true ]
+ echo ==> Configuring storage backend
==> Configuring storage backend
+ dir_configure /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf
+ local LXD_DIR
+ LXD_DIR=/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf
+ echo ==> Configuring directory backend in /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf
==> Configuring directory backend in /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf
+ basename /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf
+ lxc storage create lxdtest-yaf dir
+ LXC_LOCAL=1 lxc_remote storage create lxdtest-yaf dir
+ set +x
+ eval /home/dinah/go/bin/lxc "storage" "create" "lxdtest-yaf" "dir" --verbose
+ /home/dinah/go/bin/lxc storage create lxdtest-yaf dir --verbose
Storage pool lxdtest-yaf created
+ basename /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf
+ lxc profile device add default root disk path=/ pool=lxdtest-yaf
+ LXC_LOCAL=1 lxc_remote profile device add default root disk path=/ pool=lxdtest-yaf
+ set +x
+ eval /home/dinah/go/bin/lxc "profile" "device" "add" "default" "root" "disk" "path=/" "pool=lxdtest-yaf" --verbose
+ /home/dinah/go/bin/lxc profile device add default root disk path=/ pool=lxdtest-yaf --verbose
Device root added to default
+ cat /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/lxd.addr
+ LXD_ADDR=127.0.0.1:54153
+ export LXD_ADDR
+ start_external_auth_daemon /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf
+ cd macaroon-identity
+ go get -d ./...
+ go build ./...
+ local credentials_file tcp_port
+ credentials_file=/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/macaroon-identity-credentials.csv
+ local_tcp_port
+ which python3
+ cat
+ python3
+ return
+ tcp_port=44853
+ cat
+ set +x
+ macaroon-identity/macaroon-identity -endpoint localhost:44853 -creds /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/macaroon-identity-credentials.csv
==> TEST BEGIN: checking dependencies
2019/11/16 00:08:10 auth - running at http://127.0.0.1:44853
==> TEST DONE: checking dependencies (0s)
==> TEST BEGIN: static analysis
On branch master
Your branch is up to date with 'origin/master'.

Untracked files:
  (use "git add <file>..." to include in what will be committed)

	test/tmp.jIc/

nothing added to commit but untracked files present (use "git add" to track)
github.com/snapcore/snapd (download)
cd .
git clone https://github.com/snapcore/snapd /home/dinah/go/src/github.com/snapcore/snapd
cd /home/dinah/go/src/github.com/snapcore/snapd
git submodule update --init --recursive
cd /home/dinah/go/src/github.com/snapcore/snapd
git show-ref
cd /home/dinah/go/src/github.com/snapcore/snapd
git submodule update --init --recursive
github.com/jessevdk/go-flags (download)
cd .
git clone https://github.com/jessevdk/go-flags /home/dinah/go/src/github.com/jessevdk/go-flags
cd /home/dinah/go/src/github.com/jessevdk/go-flags
git submodule update --init --recursive
cd /home/dinah/go/src/github.com/jessevdk/go-flags
git show-ref
cd /home/dinah/go/src/github.com/jessevdk/go-flags
git submodule update --init --recursive
WORK=/tmp/go-build327687907
mkdir -p $WORK/b001/
cat >$WORK/b001/importcfg.link << 'EOF' # internal
packagefile github.com/snapcore/snapd/i18n/xgettext-go=/home/dinah/.cache/go-build/f4/f4867b34151e1ad4bc7bede8a754e223945ad23ba7c0e237d99f3c2d9c38e229-d
packagefile bytes=/usr/lib/go-1.10/pkg/linux_amd64/bytes.a
packagefile fmt=/usr/lib/go-1.10/pkg/linux_amd64/fmt.a
packagefile github.com/jessevdk/go-flags=/home/dinah/.cache/go-build/00/00021410c8cb461095d254886a520abbf73438e0c7e1250dd6f2946afc75dbd6-d
packagefile go/ast=/usr/lib/go-1.10/pkg/linux_amd64/go/ast.a
packagefile go/parser=/usr/lib/go-1.10/pkg/linux_amd64/go/parser.a
packagefile go/token=/usr/lib/go-1.10/pkg/linux_amd64/go/token.a
packagefile io=/usr/lib/go-1.10/pkg/linux_amd64/io.a
packagefile io/ioutil=/usr/lib/go-1.10/pkg/linux_amd64/io/ioutil.a
packagefile log=/usr/lib/go-1.10/pkg/linux_amd64/log.a
packagefile os=/usr/lib/go-1.10/pkg/linux_amd64/os.a
packagefile sort=/usr/lib/go-1.10/pkg/linux_amd64/sort.a
packagefile strings=/usr/lib/go-1.10/pkg/linux_amd64/strings.a
packagefile time=/usr/lib/go-1.10/pkg/linux_amd64/time.a
packagefile runtime=/usr/lib/go-1.10/pkg/linux_amd64/runtime.a
packagefile errors=/usr/lib/go-1.10/pkg/linux_amd64/errors.a
packagefile internal/cpu=/usr/lib/go-1.10/pkg/linux_amd64/internal/cpu.a
packagefile unicode=/usr/lib/go-1.10/pkg/linux_amd64/unicode.a
packagefile unicode/utf8=/usr/lib/go-1.10/pkg/linux_amd64/unicode/utf8.a
packagefile math=/usr/lib/go-1.10/pkg/linux_amd64/math.a
packagefile reflect=/usr/lib/go-1.10/pkg/linux_amd64/reflect.a
packagefile strconv=/usr/lib/go-1.10/pkg/linux_amd64/strconv.a
packagefile sync=/usr/lib/go-1.10/pkg/linux_amd64/sync.a
packagefile bufio=/usr/lib/go-1.10/pkg/linux_amd64/bufio.a
packagefile path=/usr/lib/go-1.10/pkg/linux_amd64/path.a
packagefile path/filepath=/usr/lib/go-1.10/pkg/linux_amd64/path/filepath.a
packagefile syscall=/usr/lib/go-1.10/pkg/linux_amd64/syscall.a
packagefile go/scanner=/usr/lib/go-1.10/pkg/linux_amd64/go/scanner.a
packagefile sync/atomic=/usr/lib/go-1.10/pkg/linux_amd64/sync/atomic.a
packagefile internal/poll=/usr/lib/go-1.10/pkg/linux_amd64/internal/poll.a
packagefile internal/testlog=/usr/lib/go-1.10/pkg/linux_amd64/internal/testlog.a
packagefile runtime/internal/atomic=/usr/lib/go-1.10/pkg/linux_amd64/runtime/internal/atomic.a
packagefile runtime/internal/sys=/usr/lib/go-1.10/pkg/linux_amd64/runtime/internal/sys.a
packagefile internal/race=/usr/lib/go-1.10/pkg/linux_amd64/internal/race.a
EOF
mkdir -p $WORK/b001/exe/
cd .
BUILD_PATH_PREFIX_MAP='/tmp/go-build=$WORK:' /usr/lib/go-1.10/pkg/tool/linux_amd64/link -o $WORK/b001/exe/a.out -importcfg $WORK/b001/importcfg.link -buildmode=exe -buildid=CyZcAVxDaVq2idmJ1E77/UHbzCFjG2gz9DInG79Pl/ZfeVxVHa1pFzlbDgFgCJ/CyZcAVxDaVq2idmJ1E77 -extld=gcc /home/dinah/.cache/go-build/f4/f4867b34151e1ad4bc7bede8a754e223945ad23ba7c0e237d99f3c2d9c38e229-d
/usr/lib/go-1.10/pkg/tool/linux_amd64/buildid -w $WORK/b001/exe/a.out # internal
mkdir -p /home/dinah/go/bin/
mv $WORK/b001/exe/a.out /home/dinah/go/bin/xgettext-go
rm -r $WORK/b001/
............................................................................................. done.
................................................................................................ done.
.................................................................................................. done.
................................................................................................. done.
.................................................................................................. done.
...................................................................................................... done.
................................................................................................ done.
................................................................................................ done.
.................................................................................................. done.
........................................................................................................... done.
.................................................................................................... done.
............................................................................................... done.
............................................................................................... done.
.............................................................................................. done.
............................................................................................. done.
...................................................................................................... done.
................................................................................................ done.
............................................................................................. done.
..................................................................................................... done.
.............................................................................................. done.
................................................................................................. done.
................................................................................................ done.
............................................................................................... done.
............................................................................................. done.
....................................................................................................... done.
==> TEST DONE: static analysis (35s)
==> TEST BEGIN: database schema updates
==> Setting up directory backend in /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN
==> Spawning lxd in /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN
==> Spawned LXD (PID is 18576)
==> Confirming lxd is responsive
INFO[11-16|00:08:45] LXD 3.18 is starting in normal mode      path=/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN
INFO[11-16|00:08:45] Kernel uid/gid map: 
INFO[11-16|00:08:45]  - u 0 0 4294967295 
INFO[11-16|00:08:45]  - g 0 0 4294967295 
INFO[11-16|00:08:45] Configured LXD uid/gid map: 
INFO[11-16|00:08:45]  - u 0 1000000 65536 
INFO[11-16|00:08:45]  - g 0 1000000 65536 
WARN[11-16|00:08:45] Couldn't find the CGroup blkio.weight, I/O weight limits will be ignored. 
WARN[11-16|00:08:45] CGroup memory swap accounting is disabled, swap limits will be ignored. 
INFO[11-16|00:08:45] Kernel features: 
INFO[11-16|00:08:45]  - netnsid-based network retrieval: yes 
INFO[11-16|00:08:45]  - uevent injection: yes 
INFO[11-16|00:08:45]  - seccomp listener: yes 
INFO[11-16|00:08:45]  - seccomp listener continue syscalls: yes 
INFO[11-16|00:08:45]  - unprivileged file capabilities: yes 
INFO[11-16|00:08:45]  - shiftfs support: yes 
INFO[11-16|00:08:45] Initializing local database 
INFO[11-16|00:08:45] Updating the LXD database schema. Backup made as "local.db.bak" 
INFO[11-16|00:08:45] Starting /dev/lxd handler: 
INFO[11-16|00:08:45]  - binding devlxd socket                 socket=/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/devlxd/sock
INFO[11-16|00:08:45] REST API daemon: 
INFO[11-16|00:08:45]  - binding Unix socket                   socket=/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/unix.socket
INFO[11-16|00:08:45] Initializing global database 
INFO[11-16|00:08:45] Migrating data from local to global database 
INFO[11-16|00:08:45] Updating the LXD global schema. Backup made as "global.bak" 
INFO[11-16|00:08:45] Initializing storage pools 
INFO[11-16|00:08:45] Applying patch: shrink_logs_db_file 
INFO[11-16|00:08:45] Applying patch: invalid_profile_names 
INFO[11-16|00:08:45] Applying patch: leftover_profile_config 
INFO[11-16|00:08:45] Applying patch: network_permissions 
INFO[11-16|00:08:45] Applying patch: storage_api 
INFO[11-16|00:08:45] Applying patch: storage_api_v1 
INFO[11-16|00:08:45] Applying patch: storage_api_dir_cleanup 
INFO[11-16|00:08:45] Applying patch: storage_api_lvm_keys 
INFO[11-16|00:08:45] Applying patch: storage_api_keys 
INFO[11-16|00:08:45] Applying patch: storage_api_update_storage_configs 
INFO[11-16|00:08:45] Applying patch: storage_api_lxd_on_btrfs 
INFO[11-16|00:08:45] Applying patch: storage_api_lvm_detect_lv_size 
INFO[11-16|00:08:45] Applying patch: storage_api_insert_zfs_driver 
INFO[11-16|00:08:45] Applying patch: storage_zfs_noauto 
INFO[11-16|00:08:45] Applying patch: storage_zfs_volume_size 
INFO[11-16|00:08:45] Applying patch: network_dnsmasq_hosts 
INFO[11-16|00:08:45] Applying patch: storage_api_dir_bind_mount 
INFO[11-16|00:08:45] Applying patch: fix_uploaded_at 
INFO[11-16|00:08:45] Applying patch: storage_api_ceph_size_remove 
INFO[11-16|00:08:45] Applying patch: devices_new_naming_scheme 
INFO[11-16|00:08:45] Applying patch: storage_api_permissions 
INFO[11-16|00:08:45] Applying patch: container_config_regen 
INFO[11-16|00:08:45] Applying patch: lvm_node_specific_config_keys 
INFO[11-16|00:08:45] Applying patch: candid_rename_config_key 
INFO[11-16|00:08:45] Applying patch: move_backups 
INFO[11-16|00:08:45] Applying patch: storage_api_rename_container_snapshots_dir 
INFO[11-16|00:08:45] Applying patch: storage_api_rename_container_snapshots_links 
INFO[11-16|00:08:45] Applying patch: fix_lvm_pool_volume_names 
INFO[11-16|00:08:45] Applying patch: storage_api_rename_container_snapshots_dir_again 
INFO[11-16|00:08:45] Applying patch: storage_api_rename_container_snapshots_links_again 
INFO[11-16|00:08:45] Applying patch: storage_api_rename_container_snapshots_dir_again_again 
INFO[11-16|00:08:45] Applying patch: clustering_add_roles 
INFO[11-16|00:08:45] Initializing networks 
INFO[11-16|00:08:45] Pruning leftover image files 
INFO[11-16|00:08:45] Done pruning leftover image files 
INFO[11-16|00:08:45] Loading daemon configuration 
INFO[11-16|00:08:45] Started seccomp handler                  path=/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/seccomp.socket
INFO[11-16|00:08:45] Error decoding certificate for cert1: %!s(<nil>) 
INFO[11-16|00:08:45] Pruning expired images 
INFO[11-16|00:08:45] Done pruning expired images 
INFO[11-16|00:08:45] Pruning expired container backups 
INFO[11-16|00:08:45] Done pruning expired container backups 
INFO[11-16|00:08:45] Expiring log files 
INFO[11-16|00:08:45] Done expiring log files 
INFO[11-16|00:08:45] Updating images 
INFO[11-16|00:08:45] Done updating images 
INFO[11-16|00:08:45] Updating instance types 
INFO[11-16|00:08:45] Done updating instance types 
==> Binding to network
+ eval /home/dinah/go/bin/lxc "config" "set" "core.https_address" "127.0.0.1:43653" --verbose
+ /home/dinah/go/bin/lxc config set core.https_address 127.0.0.1:43653 --verbose
INFO[11-16|00:08:45] Update network address 
INFO[11-16|00:08:45]  - binding TCP socket                    socket=127.0.0.1:43653
+ echo 127.0.0.1:43653
+ echo ==> Bound to 127.0.0.1:43653
==> Bound to 127.0.0.1:43653
+ break
+ echo ==> Setting trust password
==> Setting trust password
+ LXD_DIR=/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN lxc config set core.trust_password foo
+ LXC_LOCAL=1 lxc_remote config set core.trust_password foo
+ set +x
+ eval /home/dinah/go/bin/lxc "config" "set" "core.trust_password" "foo" --verbose
+ /home/dinah/go/bin/lxc config set core.trust_password foo --verbose
+ [ -n --verbose ]
+ set -x
+ [  =  ]
+ echo ==> Setting up networking
==> Setting up networking
+ LXD_DIR=/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN lxc profile device add default eth0 nic nictype=p2p name=eth0
+ LXC_LOCAL=1 lxc_remote profile device add default eth0 nic nictype=p2p name=eth0
+ set +x
+ eval /home/dinah/go/bin/lxc "profile" "device" "add" "default" "eth0" "nic" "nictype=p2p" "name=eth0" --verbose
+ /home/dinah/go/bin/lxc profile device add default eth0 nic nictype=p2p name=eth0 --verbose
Device eth0 added to default
+ [ true = true ]
+ echo ==> Configuring storage backend
==> Configuring storage backend
+ dir_configure /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN
+ local LXD_DIR
+ LXD_DIR=/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN
+ echo ==> Configuring directory backend in /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN
==> Configuring directory backend in /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN
+ basename /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN
+ lxc storage create lxdtest-7YN dir
+ LXC_LOCAL=1 lxc_remote storage create lxdtest-7YN dir
+ set +x
+ eval /home/dinah/go/bin/lxc "storage" "create" "lxdtest-7YN" "dir" --verbose
+ /home/dinah/go/bin/lxc storage create lxdtest-7YN dir --verbose
Storage pool lxdtest-7YN created
+ basename /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN
+ lxc profile device add default root disk path=/ pool=lxdtest-7YN
+ LXC_LOCAL=1 lxc_remote profile device add default root disk path=/ pool=lxdtest-7YN
+ set +x
+ eval /home/dinah/go/bin/lxc "profile" "device" "add" "default" "root" "disk" "path=/" "pool=lxdtest-7YN" --verbose
+ /home/dinah/go/bin/lxc profile device add default root disk path=/ pool=lxdtest-7YN --verbose
Device root added to default
+ expected_tables=5
+ sqlite3 /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/database/local.db .dump
+ grep -c CREATE TABLE
+ tables=5
+ [ 5 -eq 5 ]
+ LXD_DIR=/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN lxd sql local SELECT * FROM test
+ grep -q 1
+ grep -q cert1
+ LXD_DIR=/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN lxd sql global SELECT * FROM certificates
+ grep -q 1
+ LXD_DIR=/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN lxd sql global SELECT * FROM test
+ [ -e /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/database/patch.local.sql ]
+ [ -e /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/database/patch.global.sql ]
+ kill_lxd /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN
+ local LXD_DIR daemon_dir daemon_pid check_leftovers lxd_backend
+ daemon_dir=/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN
+ LXD_DIR=/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN
+ [ ! -f /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/lxd.pid ]
+ cat /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/lxd.pid
+ daemon_pid=18576
+ check_leftovers=false
+ storage_backend /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN
+ cat /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/lxd.backend
+ lxd_backend=dir
+ echo ==> Killing LXD at /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN
==> Killing LXD at /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN
+ [ -e /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/unix.socket ]
+ echo ==> Deleting all containers
==> Deleting all containers
+ lxc list --fast --force-local
+ LXC_LOCAL=1 lxc_remote list --fast --force-local
+ set +x
+ tail -n+3
+ grep ^| 
+ cut -d  -f2
+ eval /home/dinah/go/bin/lxc "list" "--fast" --verbose
+ /home/dinah/go/bin/lxc list --fast --verbose
+ echo ==> Deleting all images
==> Deleting all images
+ lxc image list --force-local
+ LXC_LOCAL=1 lxc_remote image list --force-local
+ tail -n+3
+ set +x
+ grep ^| 
+ cut -d| -f3
+ sed s/^ //g
+ eval /home/dinah/go/bin/lxc "image" "list" --verbose
+ /home/dinah/go/bin/lxc image list --verbose
+ echo ==> Deleting all networks
==> Deleting all networks
+ lxc network list --force-local
+ LXC_LOCAL=1 lxc_remote network list --force-local
+ set +x
+ grep YES
+ grep ^| 
+ cut -d  -f2
+ eval /home/dinah/go/bin/lxc "network" "list" --verbose
+ /home/dinah/go/bin/lxc network list --verbose
+ echo ==> Deleting all profiles
==> Deleting all profiles
+ lxc profile list --force-local
+ LXC_LOCAL=1 lxc_remote profile list --force-local
+ set +x
+ tail -n+3
+ grep ^| 
+ cut -d  -f2
+ eval /home/dinah/go/bin/lxc "profile" "list" --verbose
+ /home/dinah/go/bin/lxc profile list --verbose
+ lxc profile delete default --force-local
+ LXC_LOCAL=1 lxc_remote profile delete default --force-local
+ set +x
+ eval /home/dinah/go/bin/lxc "profile" "delete" "default" --verbose
+ /home/dinah/go/bin/lxc profile delete default --verbose
Error: The 'default' profile cannot be deleted
+ true
+ lxc profile delete docker --force-local
+ LXC_LOCAL=1 lxc_remote profile delete docker --force-local
+ set +x
+ eval /home/dinah/go/bin/lxc "profile" "delete" "docker" --verbose
+ /home/dinah/go/bin/lxc profile delete docker --verbose
Profile docker deleted
+ echo ==> Clearing config of default profile
==> Clearing config of default profile
+ printf config: {}\ndevices: {}
+ lxc profile edit default
+ LXC_LOCAL=1 lxc_remote profile edit default
+ set +x
+ eval /home/dinah/go/bin/lxc "profile" "edit" "default" --verbose
+ /home/dinah/go/bin/lxc profile edit default --verbose
+ echo ==> Deleting all storage pools
==> Deleting all storage pools
+ lxc storage list --force-local
+ LXC_LOCAL=1 lxc_remote storage list --force-local
+ set +x
+ tail -n+3
+ grep ^| 
+ cut -d  -f2
+ eval /home/dinah/go/bin/lxc "storage" "list" --verbose
+ /home/dinah/go/bin/lxc storage list --verbose
+ lxc storage delete lxdtest-7YN --force-local
+ LXC_LOCAL=1 lxc_remote storage delete lxdtest-7YN --force-local
+ set +x
+ eval /home/dinah/go/bin/lxc "storage" "delete" "lxdtest-7YN" --verbose
+ /home/dinah/go/bin/lxc storage delete lxdtest-7YN --verbose
Storage pool lxdtest-7YN deleted
+ echo ==> Checking for locked DB tables
==> Checking for locked DB tables
+ echo .tables
+ sqlite3 /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/local.db
+ lxd shutdown
INFO[11-16|00:08:46] Asked to shutdown by API, shutting down containers 
INFO[11-16|00:08:46] Starting shutdown sequence 
INFO[11-16|00:08:46] Stopping REST API handler: 
INFO[11-16|00:08:46]  - closing socket                        socket=127.0.0.1:43653
INFO[11-16|00:08:46]  - closing socket                        socket=/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/unix.socket
INFO[11-16|00:08:46] Stopping /dev/lxd handler: 
INFO[11-16|00:08:46]  - closing socket                        socket=/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/devlxd/sock
WARN[11-16|00:08:46] Failed to update instance types: Get https://us.images.linuxcontainers.org/meta/instance-types/azure.yaml: context canceled 
INFO[11-16|00:08:46] Closing the database 
INFO[11-16|00:08:46] Unmounting temporary filesystems 
INFO[11-16|00:08:46] Done unmounting temporary filesystems 
+ sleep 2
+ find /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN -name shmounts -exec umount -l {} ;
+ find /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN -name devlxd -exec umount -l {} ;
+ check_leftovers=true
+ [ -n  ]
+ [ true = true ]
+ echo ==> Checking for leftover files
==> Checking for leftover files
+ rm -f /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/containers/lxc-monitord.log
+ apparmor_parser --help
+ grep -q -- --print-cache.dir
+ apparmor_parser -L /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/security/apparmor/cache --print-cache-dir
+ apparmor_cache_dir=/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/security/apparmor/cache/26b63962.0
+ rm -f /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/security/apparmor/cache/26b63962.0/.features
+ check_empty /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/containers/
+ find /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/containers/
+ wc -l
+ [ 1 -gt 1 ]
+ check_empty /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/devices/
+ find /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/devices/
+ wc -l
+ [ 1 -gt 1 ]
+ check_empty /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/images/
+ find /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/images/
+ wc -l
+ [ 1 -gt 1 ]
+ check_empty /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/security/apparmor/cache/26b63962.0
+ find /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/security/apparmor/cache/26b63962.0
+ wc -l
+ [ 0 -gt 1 ]
+ check_empty /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/security/apparmor/profiles/
+ find /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/security/apparmor/profiles/
+ wc -l
+ [ 0 -gt 1 ]
+ check_empty /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/security/seccomp/
+ find /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/security/seccomp/
+ wc -l
+ [ 0 -gt 1 ]
+ check_empty /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/shmounts/
+ find /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/shmounts/
+ wc -l
+ [ 1 -gt 1 ]
+ check_empty /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/snapshots/
+ find /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/snapshots/
+ wc -l
+ [ 1 -gt 1 ]
+ echo ==> Checking for leftover DB entries
==> Checking for leftover DB entries
+ check_empty_table /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/database/global/db.bin instances
+ [ instances = profiles ]
+ sqlite3 /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/database/global/db.bin SELECT * FROM instances;
+ [ -n  ]
+ check_empty_table /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/database/global/db.bin instances_config
+ [ instances_config = profiles ]
+ sqlite3 /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/database/global/db.bin SELECT * FROM instances_config;
+ [ -n  ]
+ check_empty_table /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/database/global/db.bin instances_devices
+ [ instances_devices = profiles ]
+ sqlite3 /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/database/global/db.bin SELECT * FROM instances_devices;
+ [ -n  ]
+ check_empty_table /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/database/global/db.bin instances_devices_config
+ [ instances_devices_config = profiles ]
+ sqlite3 /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/database/global/db.bin SELECT * FROM instances_devices_config;
+ [ -n  ]
+ check_empty_table /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/database/global/db.bin instances_profiles
+ [ instances_profiles = profiles ]
+ sqlite3 /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/database/global/db.bin SELECT * FROM instances_profiles;
+ [ -n  ]
+ check_empty_table /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/database/global/db.bin images
+ [ images = profiles ]
+ sqlite3 /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/database/global/db.bin SELECT * FROM images;
+ [ -n  ]
+ check_empty_table /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/database/global/db.bin images_aliases
+ [ images_aliases = profiles ]
+ sqlite3 /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/database/global/db.bin SELECT * FROM images_aliases;
+ [ -n  ]
+ check_empty_table /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/database/global/db.bin images_properties
+ [ images_properties = profiles ]
+ sqlite3 /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/database/global/db.bin SELECT * FROM images_properties;
+ [ -n  ]
+ check_empty_table /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/database/global/db.bin images_source
+ [ images_source = profiles ]
+ sqlite3 /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/database/global/db.bin SELECT * FROM images_source;
+ [ -n  ]
+ check_empty_table /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/database/global/db.bin images_nodes
+ [ images_nodes = profiles ]
+ sqlite3 /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/database/global/db.bin SELECT * FROM images_nodes;
+ [ -n  ]
+ check_empty_table /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/database/global/db.bin networks
+ [ networks = profiles ]
+ sqlite3 /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/database/global/db.bin SELECT * FROM networks;
+ [ -n  ]
+ check_empty_table /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/database/global/db.bin networks_config
+ [ networks_config = profiles ]
+ sqlite3 /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/database/global/db.bin SELECT * FROM networks_config;
+ [ -n  ]
+ check_empty_table /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/database/global/db.bin profiles
+ [ profiles = profiles ]
+ sqlite3 /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/database/global/db.bin SELECT * FROM profiles WHERE name != 'default';
+ [ -n  ]
+ return 0
+ check_empty_table /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/database/global/db.bin profiles_config
+ [ profiles_config = profiles ]
+ sqlite3 /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/database/global/db.bin SELECT * FROM profiles_config;
+ [ -n  ]
+ check_empty_table /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/database/global/db.bin profiles_devices
+ [ profiles_devices = profiles ]
+ sqlite3 /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/database/global/db.bin SELECT * FROM profiles_devices;
+ [ -n  ]
+ check_empty_table /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/database/global/db.bin profiles_devices_config
+ [ profiles_devices_config = profiles ]
+ sqlite3 /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/database/global/db.bin SELECT * FROM profiles_devices_config;
+ [ -n  ]
+ check_empty_table /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/database/global/db.bin storage_pools
+ [ storage_pools = profiles ]
+ sqlite3 /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/database/global/db.bin SELECT * FROM storage_pools;
+ [ -n  ]
+ check_empty_table /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/database/global/db.bin storage_pools_nodes
+ [ storage_pools_nodes = profiles ]
+ sqlite3 /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/database/global/db.bin SELECT * FROM storage_pools_nodes;
+ [ -n  ]
+ check_empty_table /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/database/global/db.bin storage_pools_config
+ [ storage_pools_config = profiles ]
+ sqlite3 /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/database/global/db.bin SELECT * FROM storage_pools_config;
+ [ -n  ]
+ check_empty_table /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/database/global/db.bin storage_volumes
+ [ storage_volumes = profiles ]
+ sqlite3 /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/database/global/db.bin SELECT * FROM storage_volumes;
+ [ -n  ]
+ check_empty_table /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/database/global/db.bin storage_volumes_config
+ [ storage_volumes_config = profiles ]
+ sqlite3 /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN/database/global/db.bin SELECT * FROM storage_volumes_config;
+ [ -n  ]
+ dir_teardown /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN
+ local LXD_DIR
+ LXD_DIR=/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN
+ echo ==> Tearing down directory backend in /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN
==> Tearing down directory backend in /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN
+ wipe /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN
+ which btrfs
+ rm -Rf /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN
+ [ -d /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN ]
+ local pid
+ ps aux
+ grep lxc-monitord
+ grep /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN
+ awk {print $2}
+ read -r pid
+ mountpoint -q /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN
+ rm -Rf /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN
+ sed \|^/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/7YN|d -i /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/daemons
+ date +%s
+ END_TIME=1573862928
+ echo ==> TEST DONE: database schema updates (3s)
==> TEST DONE: database schema updates (3s)
+ run_test test_database_restore database restore
+ TEST_CURRENT=test_database_restore
+ TEST_CURRENT_DESCRIPTION=database restore
+ echo ==> TEST BEGIN: database restore
==> TEST BEGIN: database restore
+ date +%s
+ START_TIME=1573862928
+ test_database_restore
+ mktemp -d -p /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc XXX
+ LXD_RESTORE_DIR=/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr
+ spawn_lxd /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr true
+ set +x
==> Setting up directory backend in /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr
==> Spawning lxd in /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr
==> Spawned LXD (PID is 18874)
==> Confirming lxd is responsive
INFO[11-16|00:08:48] LXD 3.18 is starting in normal mode      path=/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr
INFO[11-16|00:08:48] Kernel uid/gid map: 
INFO[11-16|00:08:48]  - u 0 0 4294967295 
INFO[11-16|00:08:48]  - g 0 0 4294967295 
INFO[11-16|00:08:48] Configured LXD uid/gid map: 
INFO[11-16|00:08:48]  - u 0 1000000 65536 
INFO[11-16|00:08:48]  - g 0 1000000 65536 
WARN[11-16|00:08:48] Couldn't find the CGroup blkio.weight, I/O weight limits will be ignored. 
WARN[11-16|00:08:48] CGroup memory swap accounting is disabled, swap limits will be ignored. 
INFO[11-16|00:08:48] Kernel features: 
INFO[11-16|00:08:48]  - netnsid-based network retrieval: yes 
INFO[11-16|00:08:48]  - uevent injection: yes 
INFO[11-16|00:08:48]  - seccomp listener: yes 
INFO[11-16|00:08:48]  - seccomp listener continue syscalls: yes 
INFO[11-16|00:08:48]  - unprivileged file capabilities: yes 
INFO[11-16|00:08:48]  - shiftfs support: yes 
INFO[11-16|00:08:48] Initializing local database 
INFO[11-16|00:08:48] Starting /dev/lxd handler: 
INFO[11-16|00:08:48]  - binding devlxd socket                 socket=/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/devlxd/sock
INFO[11-16|00:08:48] REST API daemon: 
INFO[11-16|00:08:48]  - binding Unix socket                   socket=/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/unix.socket
INFO[11-16|00:08:48] Initializing global database 
INFO[11-16|00:08:48] Initializing storage pools 
INFO[11-16|00:08:48] Initializing networks 
INFO[11-16|00:08:48] Pruning leftover image files 
INFO[11-16|00:08:48] Done pruning leftover image files 
INFO[11-16|00:08:48] Loading daemon configuration 
INFO[11-16|00:08:48] Started seccomp handler                  path=/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/seccomp.socket
INFO[11-16|00:08:48] Pruning expired images 
INFO[11-16|00:08:48] Done pruning expired images 
INFO[11-16|00:08:48] Pruning expired container backups 
INFO[11-16|00:08:48] Done pruning expired container backups 
INFO[11-16|00:08:48] Updating images 
INFO[11-16|00:08:48] Expiring log files 
INFO[11-16|00:08:48] Done updating images 
INFO[11-16|00:08:48] Done expiring log files 
INFO[11-16|00:08:48] Updating instance types 
INFO[11-16|00:08:48] Done updating instance types 
==> Binding to network
+ eval /home/dinah/go/bin/lxc "config" "set" "core.https_address" "127.0.0.1:34875" --verbose
+ /home/dinah/go/bin/lxc config set core.https_address 127.0.0.1:34875 --verbose
INFO[11-16|00:08:49] Update network address 
INFO[11-16|00:08:49]  - binding TCP socket                    socket=127.0.0.1:34875
+ echo 127.0.0.1:34875
+ echo ==> Bound to 127.0.0.1:34875
==> Bound to 127.0.0.1:34875
+ break
+ echo ==> Setting trust password
==> Setting trust password
+ LXD_DIR=/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr lxc config set core.trust_password foo
+ LXC_LOCAL=1 lxc_remote config set core.trust_password foo
+ set +x
+ eval /home/dinah/go/bin/lxc "config" "set" "core.trust_password" "foo" --verbose
+ /home/dinah/go/bin/lxc config set core.trust_password foo --verbose
+ [ -n --verbose ]
+ set -x
+ [  =  ]
+ echo ==> Setting up networking
==> Setting up networking
+ LXD_DIR=/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr lxc profile device add default eth0 nic nictype=p2p name=eth0
+ LXC_LOCAL=1 lxc_remote profile device add default eth0 nic nictype=p2p name=eth0
+ set +x
+ eval /home/dinah/go/bin/lxc "profile" "device" "add" "default" "eth0" "nic" "nictype=p2p" "name=eth0" --verbose
+ /home/dinah/go/bin/lxc profile device add default eth0 nic nictype=p2p name=eth0 --verbose
Device eth0 added to default
+ [ true = true ]
+ echo ==> Configuring storage backend
==> Configuring storage backend
+ dir_configure /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr
+ local LXD_DIR
+ LXD_DIR=/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr
+ echo ==> Configuring directory backend in /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr
==> Configuring directory backend in /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr
+ basename /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr
+ lxc storage create lxdtest-ixr dir
+ LXC_LOCAL=1 lxc_remote storage create lxdtest-ixr dir
+ set +x
+ eval /home/dinah/go/bin/lxc "storage" "create" "lxdtest-ixr" "dir" --verbose
+ /home/dinah/go/bin/lxc storage create lxdtest-ixr dir --verbose
Storage pool lxdtest-ixr created
+ basename /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr
+ lxc profile device add default root disk path=/ pool=lxdtest-ixr
+ LXC_LOCAL=1 lxc_remote profile device add default root disk path=/ pool=lxdtest-ixr
+ set +x
+ eval /home/dinah/go/bin/lxc "profile" "device" "add" "default" "root" "disk" "path=/" "pool=lxdtest-ixr" --verbose
+ /home/dinah/go/bin/lxc profile device add default root disk path=/ pool=lxdtest-ixr --verbose
Device root added to default
+ set -e
+ LXD_DIR=/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr
+ lxc config set core.https_allowed_credentials true
+ LXC_LOCAL=1 lxc_remote config set core.https_allowed_credentials true
+ set +x
+ eval /home/dinah/go/bin/lxc "config" "set" "core.https_allowed_credentials" "true" --verbose
+ /home/dinah/go/bin/lxc config set core.https_allowed_credentials true --verbose
+ shutdown_lxd /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr
+ local LXD_DIR
+ daemon_dir=/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr
+ LXD_DIR=/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr
+ cat /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/lxd.pid
+ daemon_pid=18874
+ echo ==> Killing LXD at /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr
==> Killing LXD at /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr
+ lxd shutdown
INFO[11-16|00:08:49] Asked to shutdown by API, shutting down containers 
INFO[11-16|00:08:49] Starting shutdown sequence 
INFO[11-16|00:08:49] Stopping REST API handler: 
INFO[11-16|00:08:49]  - closing socket                        socket=127.0.0.1:34875
INFO[11-16|00:08:49]  - closing socket                        socket=/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/unix.socket
INFO[11-16|00:08:49] Stopping /dev/lxd handler: 
INFO[11-16|00:08:49]  - closing socket                        socket=/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/devlxd/sock
WARN[11-16|00:08:49] Failed to update instance types: Get https://us.images.linuxcontainers.org/meta/instance-types/.yaml: context canceled 
INFO[11-16|00:08:49] Closing the database 
INFO[11-16|00:08:49] Unmounting temporary filesystems 
INFO[11-16|00:08:49] Done unmounting temporary filesystems 
+ sleep 0.5
+ cat
+ LXD_DIR=/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr lxd --logfile /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/lxd.log --verbose
INFO[11-16|00:08:50] LXD 3.18 is starting in normal mode      path=/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr
INFO[11-16|00:08:50] Kernel uid/gid map: 
INFO[11-16|00:08:50]  - u 0 0 4294967295 
INFO[11-16|00:08:50]  - g 0 0 4294967295 
INFO[11-16|00:08:50] Configured LXD uid/gid map: 
INFO[11-16|00:08:50]  - u 0 1000000 65536 
INFO[11-16|00:08:50]  - g 0 1000000 65536 
WARN[11-16|00:08:50] Couldn't find the CGroup blkio.weight, I/O weight limits will be ignored. 
WARN[11-16|00:08:50] CGroup memory swap accounting is disabled, swap limits will be ignored. 
INFO[11-16|00:08:50] Kernel features: 
INFO[11-16|00:08:50]  - netnsid-based network retrieval: yes 
INFO[11-16|00:08:50]  - uevent injection: yes 
INFO[11-16|00:08:50]  - seccomp listener: yes 
INFO[11-16|00:08:50]  - seccomp listener continue syscalls: yes 
INFO[11-16|00:08:50]  - unprivileged file capabilities: yes 
INFO[11-16|00:08:50]  - shiftfs support: yes 
INFO[11-16|00:08:50] Initializing local database 
INFO[11-16|00:08:50] Starting /dev/lxd handler: 
INFO[11-16|00:08:50]  - binding devlxd socket                 socket=/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/devlxd/sock
INFO[11-16|00:08:50] REST API daemon: 
INFO[11-16|00:08:50]  - binding Unix socket                   socket=/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/unix.socket
INFO[11-16|00:08:50]  - binding TCP socket                    socket=127.0.0.1:34875
INFO[11-16|00:08:50] Initializing global database 
INFO[11-16|00:08:50] Updating the LXD global schema. Backup made as "global.bak" 
EROR[11-16|00:08:50] Failed to start the daemon: failed to open cluster database: failed to ensure schema: failed to execute queries from /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/database/patch.global.sql: no such table: broken 
INFO[11-16|00:08:50] Starting shutdown sequence 
INFO[11-16|00:08:50] Stopping REST API handler: 
INFO[11-16|00:08:50]  - closing socket                        socket=127.0.0.1:34875
INFO[11-16|00:08:50]  - closing socket                        socket=/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/unix.socket
INFO[11-16|00:08:50] Stopping /dev/lxd handler: 
INFO[11-16|00:08:50]  - closing socket                        socket=/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/devlxd/sock
Error: failed to open cluster database: failed to ensure schema: failed to execute queries from /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/database/patch.global.sql: no such table: broken
+ rm -f /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/database/patch.global.sql
+ rm -rf /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/database/global
+ cp -a /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/database/global.bak /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/database/global
+ respawn_lxd /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr true
+ set +x
==> Spawning lxd in /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr
==> Spawned LXD (PID is 19014)
==> Confirming lxd is responsive
INFO[11-16|00:08:50] LXD 3.18 is starting in normal mode      path=/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr
INFO[11-16|00:08:50] Kernel uid/gid map: 
INFO[11-16|00:08:50]  - u 0 0 4294967295 
INFO[11-16|00:08:50]  - g 0 0 4294967295 
INFO[11-16|00:08:50] Configured LXD uid/gid map: 
INFO[11-16|00:08:50]  - u 0 1000000 65536 
INFO[11-16|00:08:50]  - g 0 1000000 65536 
WARN[11-16|00:08:50] Couldn't find the CGroup blkio.weight, I/O weight limits will be ignored. 
WARN[11-16|00:08:50] CGroup memory swap accounting is disabled, swap limits will be ignored. 
INFO[11-16|00:08:50] Kernel features: 
INFO[11-16|00:08:50]  - netnsid-based network retrieval: yes 
INFO[11-16|00:08:50]  - uevent injection: yes 
INFO[11-16|00:08:50]  - seccomp listener: yes 
INFO[11-16|00:08:50]  - seccomp listener continue syscalls: yes 
INFO[11-16|00:08:50]  - unprivileged file capabilities: yes 
INFO[11-16|00:08:50]  - shiftfs support: yes 
INFO[11-16|00:08:50] Initializing local database 
INFO[11-16|00:08:50] Starting /dev/lxd handler: 
INFO[11-16|00:08:50]  - binding devlxd socket                 socket=/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/devlxd/sock
INFO[11-16|00:08:50] REST API daemon: 
INFO[11-16|00:08:50]  - binding Unix socket                   socket=/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/unix.socket
INFO[11-16|00:08:50]  - binding TCP socket                    socket=127.0.0.1:34875
INFO[11-16|00:08:50] Initializing global database 
INFO[11-16|00:08:50] Initializing storage pools 
INFO[11-16|00:08:50] Initializing networks 
INFO[11-16|00:08:50] Pruning leftover image files 
INFO[11-16|00:08:50] Done pruning leftover image files 
INFO[11-16|00:08:50] Loading daemon configuration 
INFO[11-16|00:08:50] Started seccomp handler                  path=/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/seccomp.socket
INFO[11-16|00:08:50] Pruning expired images 
INFO[11-16|00:08:50] Done pruning expired images 
INFO[11-16|00:08:50] Pruning expired container backups 
INFO[11-16|00:08:50] Done pruning expired container backups 
INFO[11-16|00:08:50] Updating images 
INFO[11-16|00:08:50] Updating instance types 
INFO[11-16|00:08:50] Done updating images 
INFO[11-16|00:08:50] Expiring log files 
INFO[11-16|00:08:50] Done updating instance types 
INFO[11-16|00:08:50] Done expiring log files 
+ eval /home/dinah/go/bin/lxc "config" "get" "core.https_allowed_credentials" --verbose
+ /home/dinah/go/bin/lxc config get core.https_allowed_credentials --verbose
==> Killing LXD at /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr
==> Deleting all containers
+ eval /home/dinah/go/bin/lxc "list" "--fast" --verbose
+ /home/dinah/go/bin/lxc list --fast --verbose
==> Deleting all images
+ eval /home/dinah/go/bin/lxc "image" "list" --verbose
+ /home/dinah/go/bin/lxc image list --verbose
==> Deleting all networks
+ eval /home/dinah/go/bin/lxc "network" "list" --verbose
+ /home/dinah/go/bin/lxc network list --verbose
==> Deleting all profiles
+ eval /home/dinah/go/bin/lxc "profile" "list" --verbose
+ /home/dinah/go/bin/lxc profile list --verbose
+ eval /home/dinah/go/bin/lxc "profile" "delete" "default" --verbose
+ /home/dinah/go/bin/lxc profile delete default --verbose
Error: The 'default' profile cannot be deleted
+ true
+ echo ==> Clearing config of default profile
==> Clearing config of default profile
+ printf config: {}\ndevices: {}
+ lxc profile edit default
+ LXC_LOCAL=1 lxc_remote profile edit default
+ set +x
+ eval /home/dinah/go/bin/lxc "profile" "edit" "default" --verbose
+ /home/dinah/go/bin/lxc profile edit default --verbose
+ echo ==> Deleting all storage pools
==> Deleting all storage pools
+ lxc storage list --force-local
+ LXC_LOCAL=1 lxc_remote storage list --force-local
+ set +x
+ grep ^| 
+ tail -n+3
+ cut -d  -f2
+ eval /home/dinah/go/bin/lxc "storage" "list" --verbose
+ /home/dinah/go/bin/lxc storage list --verbose
+ lxc storage delete lxdtest-ixr --force-local
+ LXC_LOCAL=1 lxc_remote storage delete lxdtest-ixr --force-local
+ set +x
+ eval /home/dinah/go/bin/lxc "storage" "delete" "lxdtest-ixr" --verbose
+ /home/dinah/go/bin/lxc storage delete lxdtest-ixr --verbose
Storage pool lxdtest-ixr deleted
+ echo ==> Checking for locked DB tables
==> Checking for locked DB tables
+ echo .tables
+ sqlite3 /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/local.db
+ lxd shutdown
INFO[11-16|00:08:51] Asked to shutdown by API, shutting down containers 
INFO[11-16|00:08:51] Starting shutdown sequence 
INFO[11-16|00:08:51] Stopping REST API handler: 
INFO[11-16|00:08:51]  - closing socket                        socket=127.0.0.1:34875
INFO[11-16|00:08:51]  - closing socket                        socket=/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/unix.socket
INFO[11-16|00:08:51] Stopping /dev/lxd handler: 
INFO[11-16|00:08:51]  - closing socket                        socket=/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/devlxd/sock
WARN[11-16|00:08:51] Failed to update instance types: Get https://us.images.linuxcontainers.org/meta/instance-types/.yaml: context canceled 
INFO[11-16|00:08:51] Closing the database 
INFO[11-16|00:08:51] Unmounting temporary filesystems 
INFO[11-16|00:08:51] Done unmounting temporary filesystems 
+ sleep 2
+ find /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr -name shmounts -exec umount -l {} ;
+ find /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr -name devlxd -exec umount -l {} ;
+ check_leftovers=true
+ [ -n  ]
+ [ true = true ]
+ echo ==> Checking for leftover files
==> Checking for leftover files
+ rm -f /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/containers/lxc-monitord.log
+ apparmor_parser --help
+ grep -q -- --print-cache.dir
+ apparmor_parser -L /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/security/apparmor/cache --print-cache-dir
+ apparmor_cache_dir=/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/security/apparmor/cache/26b63962.0
+ rm -f /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/security/apparmor/cache/26b63962.0/.features
+ check_empty /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/containers/
+ find /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/containers/
+ wc -l
+ [ 1 -gt 1 ]
+ check_empty /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/devices/
+ find /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/devices/
+ wc -l
+ [ 1 -gt 1 ]
+ check_empty /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/images/
+ find /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/images/
+ wc -l
+ [ 1 -gt 1 ]
+ check_empty /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/security/apparmor/cache/26b63962.0
+ find /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/security/apparmor/cache/26b63962.0
+ wc -l
+ [ 0 -gt 1 ]
+ check_empty /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/security/apparmor/profiles/
+ find /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/security/apparmor/profiles/
+ wc -l
+ [ 0 -gt 1 ]
+ check_empty /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/security/seccomp/
+ wc -l
+ find /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/security/seccomp/
+ [ 0 -gt 1 ]
+ check_empty /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/shmounts/
+ find /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/shmounts/
+ wc -l
+ [ 1 -gt 1 ]
+ check_empty /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/snapshots/
+ find /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/snapshots/
+ wc -l
+ [ 1 -gt 1 ]
+ echo ==> Checking for leftover DB entries
==> Checking for leftover DB entries
+ check_empty_table /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/database/global/db.bin instances
+ [ instances = profiles ]
+ sqlite3 /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/database/global/db.bin SELECT * FROM instances;
+ [ -n  ]
+ check_empty_table /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/database/global/db.bin instances_config
+ [ instances_config = profiles ]
+ sqlite3 /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/database/global/db.bin SELECT * FROM instances_config;
+ [ -n  ]
+ check_empty_table /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/database/global/db.bin instances_devices
+ [ instances_devices = profiles ]
+ sqlite3 /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/database/global/db.bin SELECT * FROM instances_devices;
+ [ -n  ]
+ check_empty_table /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/database/global/db.bin instances_devices_config
+ [ instances_devices_config = profiles ]
+ sqlite3 /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/database/global/db.bin SELECT * FROM instances_devices_config;
+ [ -n  ]
+ check_empty_table /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/database/global/db.bin instances_profiles
+ [ instances_profiles = profiles ]
+ sqlite3 /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/database/global/db.bin SELECT * FROM instances_profiles;
+ [ -n  ]
+ check_empty_table /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/database/global/db.bin images
+ [ images = profiles ]
+ sqlite3 /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/database/global/db.bin SELECT * FROM images;
+ [ -n  ]
+ check_empty_table /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/database/global/db.bin images_aliases
+ [ images_aliases = profiles ]
+ sqlite3 /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/database/global/db.bin SELECT * FROM images_aliases;
+ [ -n  ]
+ check_empty_table /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/database/global/db.bin images_properties
+ [ images_properties = profiles ]
+ sqlite3 /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/database/global/db.bin SELECT * FROM images_properties;
+ [ -n  ]
+ check_empty_table /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/database/global/db.bin images_source
+ [ images_source = profiles ]
+ sqlite3 /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/database/global/db.bin SELECT * FROM images_source;
+ [ -n  ]
+ check_empty_table /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/database/global/db.bin images_nodes
+ [ images_nodes = profiles ]
+ sqlite3 /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/database/global/db.bin SELECT * FROM images_nodes;
+ [ -n  ]
+ check_empty_table /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/database/global/db.bin networks
+ [ networks = profiles ]
+ sqlite3 /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/database/global/db.bin SELECT * FROM networks;
+ [ -n  ]
+ check_empty_table /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/database/global/db.bin networks_config
+ [ networks_config = profiles ]
+ sqlite3 /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/database/global/db.bin SELECT * FROM networks_config;
+ [ -n  ]
+ check_empty_table /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/database/global/db.bin profiles
+ [ profiles = profiles ]
+ sqlite3 /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/database/global/db.bin SELECT * FROM profiles WHERE name != 'default';
+ [ -n  ]
+ return 0
+ check_empty_table /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/database/global/db.bin profiles_config
+ [ profiles_config = profiles ]
+ sqlite3 /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/database/global/db.bin SELECT * FROM profiles_config;
+ [ -n  ]
+ check_empty_table /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/database/global/db.bin profiles_devices
+ [ profiles_devices = profiles ]
+ sqlite3 /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/database/global/db.bin SELECT * FROM profiles_devices;
+ [ -n  ]
+ check_empty_table /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/database/global/db.bin profiles_devices_config
+ [ profiles_devices_config = profiles ]
+ sqlite3 /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/database/global/db.bin SELECT * FROM profiles_devices_config;
+ [ -n  ]
+ check_empty_table /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/database/global/db.bin storage_pools
+ [ storage_pools = profiles ]
+ sqlite3 /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/database/global/db.bin SELECT * FROM storage_pools;
+ [ -n  ]
+ check_empty_table /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/database/global/db.bin storage_pools_nodes
+ [ storage_pools_nodes = profiles ]
+ sqlite3 /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/database/global/db.bin SELECT * FROM storage_pools_nodes;
+ [ -n  ]
+ check_empty_table /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/database/global/db.bin storage_pools_config
+ [ storage_pools_config = profiles ]
+ sqlite3 /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/database/global/db.bin SELECT * FROM storage_pools_config;
+ [ -n  ]
+ check_empty_table /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/database/global/db.bin storage_volumes
+ [ storage_volumes = profiles ]
+ sqlite3 /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/database/global/db.bin SELECT * FROM storage_volumes;
+ [ -n  ]
+ check_empty_table /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/database/global/db.bin storage_volumes_config
+ [ storage_volumes_config = profiles ]
+ sqlite3 /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr/database/global/db.bin SELECT * FROM storage_volumes_config;
+ [ -n  ]
+ dir_teardown /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr
+ local LXD_DIR
+ LXD_DIR=/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr
+ echo ==> Tearing down directory backend in /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr
==> Tearing down directory backend in /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr
+ wipe /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr
+ which btrfs
+ rm -Rf /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr
+ [ -d /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr ]
+ local pid
+ ps aux
+ grep /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr
+ awk {print $2}
+ read -r pid
+ grep lxc-monitord
+ mountpoint -q /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr
+ rm -Rf /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr
+ sed \|^/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/ixr|d -i /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/daemons
+ date +%s
+ END_TIME=1573862933
+ echo ==> TEST DONE: database restore (5s)
==> TEST DONE: database restore (5s)
+ run_test test_sql lxd sql
+ TEST_CURRENT=test_sql
+ TEST_CURRENT_DESCRIPTION=lxd sql
+ echo ==> TEST BEGIN: lxd sql
==> TEST BEGIN: lxd sql
+ date +%s
+ START_TIME=1573862933
+ test_sql
+ lxd sql foo SELECT * FROM CONFIG
Description:
  Execute a SQL query against the LXD local or global database

  The local database is specific to the LXD cluster member you target the
  command to, and contains member-specific data (such as the member network
  address).

  The global database is common to all LXD members in the cluster, and contains
  cluster-specific data (such as profiles, containers, etc).

  If you are running a non-clustered LXD instance, the same applies, as that
  instance is effectively a single-member cluster.

  If <query> is the special value "-", then the query is read from
  standard input.

  If <query> is the special value ".dump", the the command returns a SQL text
  dump of the given database.

  If <query> is the special value ".schema", the the command returns the SQL
  text schema of the given database.

  This internal command is mostly useful for debugging and disaster
  recovery. The LXD team will occasionally provide hotfixes to users as a
  set of database queries to fix some data inconsistency.

  This command targets the global LXD database and works in both local
  and cluster mode.

Usage:
  lxd sql <local|global> <query> [flags]

Global Flags:
  -d, --debug     Show all debug messages
  -h, --help      Print help
      --logfile   Path to the log file
      --syslog    Log to syslog
      --trace     Log tracing targets
  -v, --verbose   Show all information messages
      --version   Print version number
Error: Invalid database type
+ lxd sql global 
Error: No query provided
+ lxd sql local SELECT * FROM config
+ grep -q core.https_address
+ lxd sql global SELECT * FROM config
+ grep -q core.trust_password
+ lxd sql global INSERT INTO config(key,value) VALUES('core.https_allowed_credentials','true')
+ grep -q Rows affected: 1
+ lxd sql global DELETE FROM config WHERE key='core.https_allowed_credentials'
+ grep -q Rows affected: 1
+ lxd sql global -
+ echo SELECT * FROM config
+ grep -q core.trust_password
+ lxd sql global SELECT * FROM config; SELECT * FROM instances
+ grep -q => Query 0
+ SQLITE_DUMP=/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/dump.db
+ lxd sql local .dump
+ sqlite3 /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/dump.db
+ sqlite3 /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/dump.db SELECT * FROM patches
+ grep -q invalid_profile_names
+ rm -f /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/dump.db
+ SQLITE_DUMP=/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/dump.db
+ lxd sql local .schema
+ sqlite3 /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/dump.db
+ sqlite3 /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/dump.db SELECT * FROM schema
+ grep -q 1
+ sqlite3 /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/dump.db SELECT * FROM patches
+ wc -l
+ [ 0 = 0 ]
+ rm -f /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/dump.db
+ SQLITE_DUMP=/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/dump.db
+ lxd sql global .dump
+ sqlite3 /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/dump.db
+ sqlite3 /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/dump.db SELECT * FROM profiles
+ grep -q Default LXD profile
+ rm -f /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/dump.db
+ SQLITE_DUMP=/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/dump.db
+ lxd sql global .schema
+ sqlite3 /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/dump.db
+ sqlite3 /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/dump.db SELECT * FROM schema
+ grep -q 1
+ sqlite3 /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/dump.db SELECT * FROM profiles
+ wc -l
+ [ 0 = 0 ]
+ rm -f /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/dump.db
+ SQLITE_SYNC=/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/database/global/db.bin
+ echo SYNC /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/database/global/db.bin
SYNC /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/database/global/db.bin
+ lxd sql global .sync
+ grep -q 1
+ sqlite3 /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/database/global/db.bin SELECT * FROM schema
+ sqlite3 /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/database/global/db.bin SELECT * FROM profiles
+ grep -q Default LXD profile
+ date +%s
+ END_TIME=1573862933
+ echo ==> TEST DONE: lxd sql (0s)
==> TEST DONE: lxd sql (0s)
+ run_test test_basic_usage basic usage
+ TEST_CURRENT=test_basic_usage
+ TEST_CURRENT_DESCRIPTION=basic usage
+ echo ==> TEST BEGIN: basic usage
==> TEST BEGIN: basic usage
+ date +%s
+ START_TIME=1573862933
+ test_basic_usage
+ local lxd_backend
+ storage_backend /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf
+ cat /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/lxd.backend
+ lxd_backend=dir
+ ensure_import_testimage
+ lxc image alias list
+ LXC_LOCAL=1 lxc_remote image alias list
+ set +x
+ grep -q ^| testimage\s*|.*$
+ eval /home/dinah/go/bin/lxc "image" "alias" "list" --verbose
+ /home/dinah/go/bin/lxc image alias list --verbose
+ [ -e  ]
+ [ ! -e /bin/busybox ]
+ ldd /bin/busybox
+ deps/import-busybox --alias testimage
Image imported as: d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7
Setup alias: testimage
+ ensure_has_localhost_remote 127.0.0.1:54153
+ local addr=127.0.0.1:54153
+ lxc remote list
+ LXC_LOCAL=1 lxc_remote remote list
+ set +x
+ grep -q localhost
+ eval /home/dinah/go/bin/lxc "remote" "list" --verbose
+ /home/dinah/go/bin/lxc remote list --verbose
+ lxc remote add localhost https://127.0.0.1:54153 --accept-certificate --password foo
+ LXC_LOCAL=1 lxc_remote remote add localhost https://127.0.0.1:54153 --accept-certificate --password foo
+ set +x
+ eval /home/dinah/go/bin/lxc "remote" "add" "localhost" "https://127.0.0.1:54153" "--accept-certificate" "--password" "foo" --verbose
+ /home/dinah/go/bin/lxc remote add localhost https://127.0.0.1:54153 --accept-certificate --password foo --verbose
Generating a client certificate. This may take a minute...
2019/11/16 00:08:55 http: TLS handshake error from 127.0.0.1:50136: EOF
2019/11/16 00:08:55 http: TLS handshake error from 127.0.0.1:50138: remote error: tls: bad certificate
Client certificate stored at server:  localhost
+ lxc image info testimage
+ LXC_LOCAL=1 lxc_remote image info testimage
+ set +x
+ grep ^Fingerprint
+ cut -d  -f2
+ eval /home/dinah/go/bin/lxc "image" "info" "testimage" --verbose
+ /home/dinah/go/bin/lxc image info testimage --verbose
+ sum=d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7
+ lxc image export testimage /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/
+ LXC_LOCAL=1 lxc_remote image export testimage /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/
+ set +x
+ eval /home/dinah/go/bin/lxc "image" "export" "testimage" "/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/" --verbose
+ /home/dinah/go/bin/lxc image export testimage /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/ --verbose
Image exported successfully!           
+ sha256sum /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7.tar.xz
+ cut -d  -f1
+ [ d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7 = d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7 ]
+ lxc image show d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7
+ LXC_LOCAL=1 lxc_remote image show d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7
+ set +x
+ eval /home/dinah/go/bin/lxc "image" "show" "d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7" --verbose
+ /home/dinah/go/bin/lxc image show d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7 --verbose
auto_update: false
properties:
  architecture: x86_64
  description: Busybox x86_64
  name: busybox-x86_64
  os: Busybox
public: false
expires_at: 1970-01-01T00:00:00Z
+ lxc image alias create a/b/ d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7
+ LXC_LOCAL=1 lxc_remote image alias create a/b/ d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7
+ set +x
+ eval /home/dinah/go/bin/lxc "image" "alias" "create" "a/b/" "d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7" --verbose
+ /home/dinah/go/bin/lxc image alias create a/b/ d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7 --verbose
+ lxc image alias delete a/b/
+ LXC_LOCAL=1 lxc_remote image alias delete a/b/
+ set +x
+ eval /home/dinah/go/bin/lxc "image" "alias" "delete" "a/b/" --verbose
+ /home/dinah/go/bin/lxc image alias delete a/b/ --verbose
+ lxc image alias create foo d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7
+ LXC_LOCAL=1 lxc_remote image alias create foo d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7
+ set +x
+ eval /home/dinah/go/bin/lxc "image" "alias" "create" "foo" "d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7" --verbose
+ /home/dinah/go/bin/lxc image alias create foo d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7 --verbose
+ lxc image alias create bar d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7
+ LXC_LOCAL=1 lxc_remote image alias create bar d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7
+ set +x
+ eval /home/dinah/go/bin/lxc "image" "alias" "create" "bar" "d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7" --verbose
+ /home/dinah/go/bin/lxc image alias create bar d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7 --verbose
+ lxc image alias list local:
+ grep -q foo
+ LXC_LOCAL=1 lxc_remote image alias list local:
+ set +x
+ eval /home/dinah/go/bin/lxc "image" "alias" "list" "local:" --verbose
+ /home/dinah/go/bin/lxc image alias list local: --verbose
+ lxc image alias list local:
+ LXC_LOCAL=1 lxc_remote image alias list local:
+ grep -q bar
+ set +x
+ eval /home/dinah/go/bin/lxc "image" "alias" "list" "local:" --verbose
+ /home/dinah/go/bin/lxc image alias list local: --verbose
+ lxc image alias list local: foo
+ grep -q -v bar
+ LXC_LOCAL=1 lxc_remote image alias list local: foo
+ set +x
+ eval /home/dinah/go/bin/lxc "image" "alias" "list" "local:" "foo" --verbose
+ /home/dinah/go/bin/lxc image alias list local: foo --verbose
+ lxc image alias list local: d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7
+ LXC_LOCAL=1 lxc_remote image alias list local: d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7
+ grep -q foo
+ set +x
+ eval /home/dinah/go/bin/lxc "image" "alias" "list" "local:" "d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7" --verbose
+ /home/dinah/go/bin/lxc image alias list local: d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7 --verbose
+ lxc image alias list local: non-existent
+ LXC_LOCAL=1 lxc_remote image alias list local: non-existent
+ set +x
+ grep -q -v non-existent
+ eval /home/dinah/go/bin/lxc "image" "alias" "list" "local:" "non-existent" --verbose
+ /home/dinah/go/bin/lxc image alias list local: non-existent --verbose
+ lxc image alias delete foo
+ LXC_LOCAL=1 lxc_remote image alias delete foo
+ set +x
+ eval /home/dinah/go/bin/lxc "image" "alias" "delete" "foo" --verbose
+ /home/dinah/go/bin/lxc image alias delete foo --verbose
+ lxc image alias delete bar
+ LXC_LOCAL=1 lxc_remote image alias delete bar
+ set +x
+ eval /home/dinah/go/bin/lxc "image" "alias" "delete" "bar" --verbose
+ /home/dinah/go/bin/lxc image alias delete bar --verbose
+ lxc image alias create foo d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7
+ LXC_LOCAL=1 lxc_remote image alias create foo d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7
+ set +x
+ eval /home/dinah/go/bin/lxc "image" "alias" "create" "foo" "d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7" --verbose
+ /home/dinah/go/bin/lxc image alias create foo d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7 --verbose
+ lxc image alias rename foo bar
+ LXC_LOCAL=1 lxc_remote image alias rename foo bar
+ set +x
+ eval /home/dinah/go/bin/lxc "image" "alias" "rename" "foo" "bar" --verbose
+ /home/dinah/go/bin/lxc image alias rename foo bar --verbose
+ lxc image alias list
+ LXC_LOCAL=1 lxc_remote image alias list
+ set +x
+ grep -qv foo
+ eval /home/dinah/go/bin/lxc "image" "alias" "list" --verbose
+ /home/dinah/go/bin/lxc image alias list --verbose
+ lxc image alias delete bar
+ LXC_LOCAL=1 lxc_remote image alias delete bar
+ set +x
+ eval /home/dinah/go/bin/lxc "image" "alias" "delete" "bar" --verbose
+ /home/dinah/go/bin/lxc image alias delete bar --verbose
+ lxc image list --format table
+ grep -q testimage
+ LXC_LOCAL=1 lxc_remote image list --format table
+ set +x
+ eval /home/dinah/go/bin/lxc "image" "list" "--format" "table" --verbose
+ /home/dinah/go/bin/lxc image list --format table --verbose
+ lxc image list --format json
+ jq .[]|select(.alias[0].name="testimage")
+ grep -q "name": "testimage"
+ LXC_LOCAL=1 lxc_remote image list --format json
+ set +x
+ eval /home/dinah/go/bin/lxc "image" "list" "--format" "json" --verbose
+ /home/dinah/go/bin/lxc image list --format json --verbose
+ lxc image delete testimage
+ LXC_LOCAL=1 lxc_remote image delete testimage
+ set +x
+ eval /home/dinah/go/bin/lxc "image" "delete" "testimage" --verbose
+ /home/dinah/go/bin/lxc image delete testimage --verbose
+ my_curl -f -X GET https://127.0.0.1:54153/1.0
+ curl -k -s --cert /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/6bR/client.crt --key /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/6bR/client.key -f -X GET https://127.0.0.1:54153/1.0
{"type":"sync","status":"Success","status_code":200,"operation":"","error_code":0,"error":"","metadata":{"config":{"core.https_address":"127.0.0.1:54153","core.trust_password":true},"api_extensions":["storage_zfs_remove_snapshots","container_host_shutdown_timeout","container_stop_priority","container_syscall_filtering","auth_pki","container_last_used_at","etag","patch","usb_devices","https_allowed_credentials","image_compression_algorithm","directory_manipulation","container_cpu_time","storage_zfs_use_refquota","storage_lvm_mount_options","network","profile_usedby","container_push","container_exec_recording","certificate_update","container_exec_signal_handling","gpu_devices","container_image_properties","migration_progress","id_map","network_firewall_filtering","network_routes","storage","file_delete","file_append","network_dhcp_expiry","storage_lvm_vg_rename","storage_lvm_thinpool_rename","network_vlan","image_create_aliases","container_stateless_copy","container_only_migration","storage_zfs_clone_copy","unix_device_rename","storage_lvm_use_thinpool","storage_rsync_bwlimit","network_vxlan_interface","storage_btrfs_mount_options","entity_description","image_force_refresh","storage_lvm_lv_resizing","id_map_base","file_symlinks","container_push_target","network_vlan_physical","storage_images_delete","container_edit_metadata","container_snapshot_stateful_migration","storage_driver_ceph","storage_ceph_user_name","resource_limits","storage_volatile_initial_source","storage_ceph_force_osd_reuse","storage_block_filesystem_btrfs","resources","kernel_limits","storage_api_volume_rename","macaroon_authentication","network_sriov","console","restrict_devlxd","migration_pre_copy","infiniband","maas_network","devlxd_events","proxy","network_dhcp_gateway","file_get_symlink","network_leases","unix_device_hotplug","storage_api_local_volume_handling","operation_description","clustering","event_lifecycle","storage_api_remote_volume_handling","nvidia_runtime","container_mount_propagation","container_backup","devlxd_images","container_local_cross_pool_handling","proxy_unix","proxy_udp","clustering_join","proxy_tcp_udp_multi_port_handling","network_state","proxy_unix_dac_properties","container_protection_delete","unix_priv_drop","pprof_http","proxy_haproxy_protocol","network_hwaddr","proxy_nat","network_nat_order","container_full","candid_authentication","backup_compression","candid_config","nvidia_runtime_config","storage_api_volume_snapshots","storage_unmapped","projects","candid_config_key","network_vxlan_ttl","container_incremental_copy","usb_optional_vendorid","snapshot_scheduling","container_copy_project","clustering_server_address","clustering_image_replication","container_protection_shift","snapshot_expiry","container_backup_override_pool","snapshot_expiry_creation","network_leases_location","resources_cpu_socket","resources_gpu","resources_numa","kernel_features","id_map_current","event_location","storage_api_remote_volume_snapshots","network_nat_address","container_nic_routes","rbac","cluster_internal_copy","seccomp_notify","lxc_features","container_nic_ipvlan","network_vlan_sriov","storage_cephfs","container_nic_ipfilter","resources_v2","container_exec_user_group_cwd","container_syscall_intercept","container_disk_shift","storage_shifted","resources_infiniband","daemon_storage","instances","image_types","resources_disk_sata","clustering_roles","images_expiry","resources_network_firmware","backup_compression_algorithm","ceph_data_pool_name","container_syscall_intercept_mount","compression_squashfs","cont
ainer_raw_mount","container_nic_routed","container_syscall_intercept_mount_fuse"],"api_status":"stable","api_version":"1.0","auth":"trusted","public":false,"auth_methods":["tls"],"environment":{"addresses":["127.0.0.1:54153"],"architectures":["x86_64","i686"],"certificate":"-----BEGIN CERTIFICATE-----\nMIIFzjCCA7igAwIBAgIRAKnCQRdpkZ86oXYOd9hGrPgwCwYJKoZIhvcNAQELMB4x\nHDAaBgNVBAoTE2xpbnV4Y29udGFpbmVycy5vcmcwHhcNMTUwNzE1MDQ1NjQ0WhcN\nMjUwNzEyMDQ1NjQ0WjAeMRwwGgYDVQQKExNsaW51eGNvbnRhaW5lcnMub3JnMIIC\nIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAyViJkCzoxa1NYilXqGJog6xz\nlSm4xt8KIzayc0JdB9VxEdIVdJqUzBAUtyCS4KZ9MbPmMEOX9NbBASL0tRK58/7K\nScq99Kj4XbVMLU1P/y5aW0ymnF0OpKbG6unmgAI2k/duRlbYHvGRdhlswpKl0Yst\nl8i2kXOK0Rxcz90FewcEXGSnIYW21sz8YpBLfIZqOx6XEV36mOdi3MLrhUSAhXDw\nPay33Y7NonCQUBtiO7BT938cqI14FJrWdKon1UnODtzONcVBLTWtoe7D41+mx7EE\nTaq5OPxBSe0DD6KQcPOZ7ZSJEhIqVKMvzLyiOJpyShmhm4OuGNoAG6jAuSij/9Kc\naLU4IitcrvFOuAo8M9OpiY9ZCR7Gb/qaPAXPAxE7Ci3f9DDNKXtPXDjhj3YG01+h\nfNXMW3kCkMImn0A/+mZUMdCL87GWN2AN3Do5qaIc5XVEt1gp+LVqJeMoZ/lAeZWT\nIbzcnkneOzE25m+bjw3r3WlR26amhyrWNwjGzRkgfEpw336kniX/GmwaCNgdNk+g\n5aIbVxIHO0DbgkDBtdljR3VOic4djW/LtUIYIQ2egnPPyRR3fcFI+x5EQdVQYUXf\njpGIwovUDyG0Lkam2tpdeEXvLMZr8+Lhzu+H6vUFSj3cz6gcw/Xepw40FOkYdAI9\nLYB6nwpZLTVaOqZCJ2ECAwEAAaOCAQkwggEFMA4GA1UdDwEB/wQEAwIAoDATBgNV\nHSUEDDAKBggrBgEFBQcDATAMBgNVHRMBAf8EAjAAMIHPBgNVHREEgccwgcSCCVVi\ndW50dVByb4IRMTAuMTY3LjE2MC4xODMvMjSCHzIwMDE6MTVjMDo2NzM1OmVlMDA6\nOmU6ZTMxMy8xMjiCKWZkNTc6Yzg3ZDpmMWVlOmVlMDA6MjFkOjdkZmY6ZmUwOToz\nNzUzLzY0gikyMDAxOjE1YzA6NjczNTplZTAwOjIxZDo3ZGZmOmZlMDk6Mzc1My82\nNIIbZmU4MDo6MjFkOjdkZmY6ZmUwOTozNzUzLzY0ghAxOTIuMTY4LjEyMi4xLzI0\nMAsGCSqGSIb3DQEBCwOCAgEAmcJUSBH7cLw3auEEV1KewtdqY1ARVB/pafAtbe9F\n7ZKBbxUcS7cP3P1hRs5FH1bH44bIJKHxckctNUPqvC+MpXSryKinQ5KvGPNjGdlW\n6EPlQr23btizC6hRdQ6RjEkCnQxhyTLmQ9n78nt47hjA96rFAhCUyfPdv9dI4Zux\nbBTJekhCx5taamQKoxr7tql4Y2TchVlwASZvOfar8I0GxBRFT8w9IjckOSLoT9/s\nOhlvXpeoxxFT7OHwqXEXdRUvw/8MGBo6JDnw+J/NGDBw3Z0goebG4FMT//xGSHia\nczl3A0M0flk4/45L7N6vctwSqi+NxVaJRKeiYPZyzOO9K/d+No+WVBPwKmyP8icQ\nb7FGTelPJOUolC6kmoyM+vyaNUoU4nz6lgOSHAtuqGNDWZWuX/gqzZw77hzDIgkN\nqisOHZWPVlG/iUh1JBkbglBaPeaa3zf0XwSdgwwf4v8Z+YtEiRqkuFgQY70eQKI/\nCIkj1p0iW5IBEsEAGUGklz4ZwqJwH3lQIqDBzIgHe3EP4cXaYsx6oYhPSDdHLPv4\nHMZhl05DP75CEkEWRD0AIaL7SHdyuYUmCZ2zdrMI7TEDrAqcUuPbYpHcdJ2wnYmi\n2G8XHJibfu4PCpIm1J8kPL8rqpdgW3moKR8Mp0HJQOH4tSBr1Ep7xNLP1wg6PIe+\np7U=\n-----END CERTIFICATE-----\n","certificate_fingerprint":"5325b921edf26720953bff015b69900943315b5db69f3c6e17c25a694875c5b8","driver":"lxc","driver_version":"3.0.3","kernel":"Linux","kernel_architecture":"x86_64","kernel_features":{"netnsid_getifaddrs":"true","seccomp_listener":"true","seccomp_listener_continue":"true","shiftfs":"true","uevent_injection":"true","unpriv_fscaps":"true"},"kernel_version":"5.0.0-36-generic","lxc_features":{"mount_injection_file":"false","network_gateway_device_route":"false","network_ipvlan":"false","network_l2proxy":"false","network_phys_macvlan_mtu":"false","network_veth_router":"false","seccomp_notify":"false"},"project":"default","server":"lxd","server_clustered":false,"server_name":"liopleurodon","server_pid":16352,"server_version":"3.18","storage":"dir","storage_version":"1"}}}
+ my_curl -f -X GET https://127.0.0.1:54153/1.0/containers
+ curl -k -s --cert /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/6bR/client.crt --key /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/6bR/client.key -f -X GET https://127.0.0.1:54153/1.0/containers
{"type":"sync","status":"Success","status_code":200,"operation":"","error_code":0,"error":"","metadata":[]}
+ mv /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7.tar.xz /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/testimage.tar.xz
+ lxc image import /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/testimage.tar.xz --alias testimage user.foo=bar --public
+ LXC_LOCAL=1 lxc_remote image import /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/testimage.tar.xz --alias testimage user.foo=bar --public
+ set +x
+ eval /home/dinah/go/bin/lxc "image" "import" "/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/testimage.tar.xz" "--alias" "testimage" "user.foo=bar" "--public" --verbose
+ /home/dinah/go/bin/lxc image import /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/testimage.tar.xz --alias testimage user.foo=bar --public --verbose
+ lxc image show testimage
+ LXC_LOCAL=1 lxc_remote image show testimage
+ set +x
+ grep -q user.foo: bar
+ eval /home/dinah/go/bin/lxc "image" "show" "testimage" --verbose
+ /home/dinah/go/bin/lxc image show testimage --verbose
+ lxc image show testimage
+ grep -q public: true
+ LXC_LOCAL=1 lxc_remote image show testimage
+ set +x
+ eval /home/dinah/go/bin/lxc "image" "show" "testimage" --verbose
+ /home/dinah/go/bin/lxc image show testimage --verbose
+ lxc image delete testimage
+ LXC_LOCAL=1 lxc_remote image delete testimage
+ set +x
+ eval /home/dinah/go/bin/lxc "image" "delete" "testimage" --verbose
+ /home/dinah/go/bin/lxc image delete testimage --verbose
+ lxc image import /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/testimage.tar.xz --alias testimage
+ LXC_LOCAL=1 lxc_remote image import /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/testimage.tar.xz --alias testimage
+ set +x
+ eval /home/dinah/go/bin/lxc "image" "import" "/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/testimage.tar.xz" "--alias" "testimage" --verbose
+ /home/dinah/go/bin/lxc image import /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/testimage.tar.xz --alias testimage --verbose
+ rm /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/testimage.tar.xz
+ lxc image export testimage /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/
+ LXC_LOCAL=1 lxc_remote image export testimage /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/
+ set +x
+ eval /home/dinah/go/bin/lxc "image" "export" "testimage" "/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/" --verbose
+ /home/dinah/go/bin/lxc image export testimage /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/ --verbose
Image exported successfully!           
+ sha256sum /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7.tar.xz
+ cut -d  -f1
+ [ d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7 = d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7 ]
+ rm /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7.tar.xz
+ lxc image export testimage /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/foo
+ LXC_LOCAL=1 lxc_remote image export testimage /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/foo
+ set +x
+ eval /home/dinah/go/bin/lxc "image" "export" "testimage" "/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/foo" --verbose
+ /home/dinah/go/bin/lxc image export testimage /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/foo --verbose
Image exported successfully!           
+ sha256sum /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/foo.tar.xz
+ cut -d  -f1
+ [ d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7 = d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7 ]
+ rm /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/foo.tar.xz
+ deps/import-busybox --split --alias splitimage
Image imported as: 847f9111bd14d63c5d0ffd59e6fdaf0a46c91e738eeb39829c5889149b0a660e
Setup alias: splitimage
+ lxc image info splitimage
+ LXC_LOCAL=1 lxc_remote image info splitimage
+ set +x
+ grep ^Fingerprint
+ cut -d  -f2
+ eval /home/dinah/go/bin/lxc "image" "info" "splitimage" --verbose
+ /home/dinah/go/bin/lxc image info splitimage --verbose
+ sum=847f9111bd14d63c5d0ffd59e6fdaf0a46c91e738eeb39829c5889149b0a660e
+ lxc image export splitimage /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf
+ LXC_LOCAL=1 lxc_remote image export splitimage /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf
+ set +x
+ eval /home/dinah/go/bin/lxc "image" "export" "splitimage" "/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf" --verbose
+ /home/dinah/go/bin/lxc image export splitimage /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf --verbose
Image exported successfully!           
+ cat /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/meta-847f9111bd14d63c5d0ffd59e6fdaf0a46c91e738eeb39829c5889149b0a660e.tar.xz /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/847f9111bd14d63c5d0ffd59e6fdaf0a46c91e738eeb39829c5889149b0a660e.tar.xz
+ sha256sum
+ cut -d  -f1
+ [ 847f9111bd14d63c5d0ffd59e6fdaf0a46c91e738eeb39829c5889149b0a660e = 847f9111bd14d63c5d0ffd59e6fdaf0a46c91e738eeb39829c5889149b0a660e ]
+ rm /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/847f9111bd14d63c5d0ffd59e6fdaf0a46c91e738eeb39829c5889149b0a660e.tar.xz
+ rm /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/meta-847f9111bd14d63c5d0ffd59e6fdaf0a46c91e738eeb39829c5889149b0a660e.tar.xz
+ lxc image delete splitimage
+ LXC_LOCAL=1 lxc_remote image delete splitimage
+ set +x
+ eval /home/dinah/go/bin/lxc "image" "delete" "splitimage" --verbose
+ /home/dinah/go/bin/lxc image delete splitimage --verbose
+ deps/import-busybox --split --filename --alias splitimage
Image imported as: 847f9111bd14d63c5d0ffd59e6fdaf0a46c91e738eeb39829c5889149b0a660e
Setup alias: splitimage
+ lxc image export splitimage /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf
+ LXC_LOCAL=1 lxc_remote image export splitimage /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf
+ set +x
+ eval /home/dinah/go/bin/lxc "image" "export" "splitimage" "/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf" --verbose
+ /home/dinah/go/bin/lxc image export splitimage /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf --verbose
Image exported successfully!           
+ cat /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/meta-847f9111bd14d63c5d0ffd59e6fdaf0a46c91e738eeb39829c5889149b0a660e.tar.xz /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/847f9111bd14d63c5d0ffd59e6fdaf0a46c91e738eeb39829c5889149b0a660e.tar.xz
+ sha256sum
+ cut -d  -f1
+ [ 847f9111bd14d63c5d0ffd59e6fdaf0a46c91e738eeb39829c5889149b0a660e = 847f9111bd14d63c5d0ffd59e6fdaf0a46c91e738eeb39829c5889149b0a660e ]
+ rm /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/847f9111bd14d63c5d0ffd59e6fdaf0a46c91e738eeb39829c5889149b0a660e.tar.xz
+ rm /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/meta-847f9111bd14d63c5d0ffd59e6fdaf0a46c91e738eeb39829c5889149b0a660e.tar.xz
+ lxc image delete splitimage
+ LXC_LOCAL=1 lxc_remote image delete splitimage
+ set +x
+ eval /home/dinah/go/bin/lxc "image" "delete" "splitimage" --verbose
+ /home/dinah/go/bin/lxc image delete splitimage --verbose
+ lxc init testimage foo
+ LXC_LOCAL=1 lxc_remote init testimage foo
+ set +x
+ eval /home/dinah/go/bin/lxc "init" "testimage" "foo" --verbose
+ /home/dinah/go/bin/lxc init testimage foo --verbose
Creating foo
INFO[11-16|00:08:58] Creating container                       name=foo ephemeral=false project=default
INFO[11-16|00:08:58] Created container                        project=default name=foo ephemeral=false
+ lxc list                                
+ LXC_LOCAL=1 lxc_remote list
+ set +x
+ grep foo
+ grep STOPPED
+ eval /home/dinah/go/bin/lxc "list" --verbose
+ /home/dinah/go/bin/lxc list --verbose
| foo  | STOPPED |      |      | CONTAINER | 0         |
+ lxc list fo
+ LXC_LOCAL=1 lxc_remote list fo
+ set +x
+ grep foo
+ grep STOPPED
+ eval /home/dinah/go/bin/lxc "list" "fo" --verbose
+ /home/dinah/go/bin/lxc list fo --verbose
| foo  | STOPPED |      |      | CONTAINER | 0         |
+ lxc list --format json
+ LXC_LOCAL=1 lxc_remote list --format json
+ set +x
+ grep "name": "foo"
+ jq .[]|select(.name="foo")
+ eval /home/dinah/go/bin/lxc "list" "--format" "json" --verbose
+ /home/dinah/go/bin/lxc list --format json --verbose
  "name": "foo",
+ lxc list --columns=nsp --fast
+ LXC_LOCAL=1 lxc_remote list --columns=nsp --fast
+ set +x
+ eval /home/dinah/go/bin/lxc "list" "--columns=nsp" "--fast" --verbose
+ /home/dinah/go/bin/lxc list --columns=nsp --fast --verbose
Error: Can't specify --fast with --columns
+ lxc move foo bar
+ LXC_LOCAL=1 lxc_remote move foo bar
+ set +x
+ eval /home/dinah/go/bin/lxc "move" "foo" "bar" --verbose
+ /home/dinah/go/bin/lxc move foo bar --verbose
INFO[11-16|00:08:58] Renaming container                       newname=bar project=default name=foo created=2019-11-16T00:08:58+0000 ephemeral=false used=1970-01-01T00:00:00+0000
INFO[11-16|00:08:58] Renamed container                        project=default name=foo created=2019-11-16T00:08:58+0000 ephemeral=false used=1970-01-01T00:00:00+0000 newname=bar
+ lxc list
+ LXC_LOCAL=1 lxc_remote list
+ set +x
+ grep -v foo
+ eval /home/dinah/go/bin/lxc "list" --verbose
+ /home/dinah/go/bin/lxc list --verbose
+------+---------+------+------+-----------+-----------+
| NAME |  STATE  | IPV4 | IPV6 |   TYPE    | SNAPSHOTS |
+------+---------+------+------+-----------+-----------+
| bar  | STOPPED |      |      | CONTAINER | 0         |
+------+---------+------+------+-----------+-----------+
+ lxc list
+ grep bar
+ LXC_LOCAL=1 lxc_remote list
+ set +x
+ eval /home/dinah/go/bin/lxc "list" --verbose
+ /home/dinah/go/bin/lxc list --verbose
| bar  | STOPPED |      |      | CONTAINER | 0         |
+ lxc rename bar foo
+ LXC_LOCAL=1 lxc_remote rename bar foo
+ set +x
+ eval /home/dinah/go/bin/lxc "rename" "bar" "foo" --verbose
+ /home/dinah/go/bin/lxc rename bar foo --verbose
INFO[11-16|00:08:58] Renaming container                       project=default name=bar created=2019-11-16T00:08:58+0000 ephemeral=false used=1970-01-01T00:00:00+0000 newname=foo
INFO[11-16|00:08:58] Renamed container                        project=default name=bar created=2019-11-16T00:08:58+0000 ephemeral=false used=1970-01-01T00:00:00+0000 newname=foo
+ lxc list
+ LXC_LOCAL=1 lxc_remote list
+ set +x
+ grep -v bar
+ eval /home/dinah/go/bin/lxc "list" --verbose
+ /home/dinah/go/bin/lxc list --verbose
+------+---------+------+------+-----------+-----------+
| NAME |  STATE  | IPV4 | IPV6 |   TYPE    | SNAPSHOTS |
+------+---------+------+------+-----------+-----------+
| foo  | STOPPED |      |      | CONTAINER | 0         |
+------+---------+------+------+-----------+-----------+
+ lxc list
+ LXC_LOCAL=1 lxc_remote list
+ set +x
+ grep foo
+ eval /home/dinah/go/bin/lxc "list" --verbose
+ /home/dinah/go/bin/lxc list --verbose
| foo  | STOPPED |      |      | CONTAINER | 0         |
+ lxc rename foo bar
+ LXC_LOCAL=1 lxc_remote rename foo bar
+ set +x
+ eval /home/dinah/go/bin/lxc "rename" "foo" "bar" --verbose
+ /home/dinah/go/bin/lxc rename foo bar --verbose
INFO[11-16|00:08:58] Renaming container                       project=default name=foo created=2019-11-16T00:08:58+0000 ephemeral=false used=1970-01-01T00:00:00+0000 newname=bar
INFO[11-16|00:08:58] Renamed container                        used=1970-01-01T00:00:00+0000 newname=bar project=default name=foo created=2019-11-16T00:08:58+0000 ephemeral=false
+ lxc copy bar foo
+ LXC_LOCAL=1 lxc_remote copy bar foo
+ set +x
+ eval /home/dinah/go/bin/lxc "copy" "bar" "foo" --verbose
+ /home/dinah/go/bin/lxc copy bar foo --verbose
INFO[11-16|00:08:58] Creating container                       name=foo ephemeral=false project=default
INFO[11-16|00:08:58] Created container                        project=default name=foo ephemeral=false
+ lxc delete foo
+ LXC_LOCAL=1 lxc_remote delete foo
+ set +x
+ eval /home/dinah/go/bin/lxc "delete" "foo" --verbose
+ /home/dinah/go/bin/lxc delete foo --verbose
INFO[11-16|00:08:58] Deleting container                       created=2019-11-16T00:08:58+0000 ephemeral=false used=1970-01-01T00:00:00+0000 project=default name=foo
INFO[11-16|00:08:58] Deleted container                        ephemeral=false used=1970-01-01T00:00:00+0000 project=default name=foo created=2019-11-16T00:08:58+0000
+ gen_cert client3
+ [ -f /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/6bR/client3.crt ]
+ mv /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/6bR/client.crt /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/6bR/client.crt.bak
+ mv /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/6bR/client.key /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/6bR/client.key.bak
+ echo y
+ uuidgen
+ lxc_remote remote add ae7ff170-a76b-4747-aa8d-9b76f17ab9f2 https://0.0.0.0
+ set +x
+ eval /home/dinah/go/bin/lxc "remote" "add" "ae7ff170-a76b-4747-aa8d-9b76f17ab9f2" "https://0.0.0.0" --verbose
+ /home/dinah/go/bin/lxc remote add ae7ff170-a76b-4747-aa8d-9b76f17ab9f2 https://0.0.0.0 --verbose
Generating a client certificate. This may take a minute...
Error: Get https://0.0.0.0:8443: Unable to connect to: 0.0.0.0:8443
+ true
+ mv /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/6bR/client.crt /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/6bR/client3.crt
+ mv /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/6bR/client.key /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/6bR/client3.key
+ mv /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/6bR/client.crt.bak /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/6bR/client.crt
+ mv /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/6bR/client.key.bak /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/6bR/client.key
+ curl -k -s -X GET https://127.0.0.1:54153/1.0/containers/foo
+ grep 403
WARN[11-16|00:08:59] Rejecting request from untrusted client  ip=127.0.0.1:50160
{"error":"not authorized","error_code":403,"type":"error"}
+ lxc publish bar --alias=foo-image prop1=val1
+ LXC_LOCAL=1 lxc_remote publish bar --alias=foo-image prop1=val1
+ set +x
+ eval /home/dinah/go/bin/lxc "publish" "bar" "--alias=foo-image" "prop1=val1" --verbose
+ /home/dinah/go/bin/lxc publish bar --alias=foo-image prop1=val1 --verbose
INFO[11-16|00:08:59] Exporting container                      project=default name=bar created=2019-11-16T00:08:58+0000 ephemeral=false used=1970-01-01T00:00:00+0000
Publishing container: Image pack: 100% (15.23MB/s)
INFO[11-16|00:08:59] Exported container                       name=bar created=2019-11-16T00:08:58+0000 ephemeral=false used=1970-01-01T00:00:00+0000 project=default
Container published with fingerprint: ff769d89609cbe68fe675d04e5f406b43e8e490fd2fed7327d3cd15ebb0eb258
+ lxc image show foo-image
+ LXC_LOCAL=1 lxc_remote image show foo-image
+ set +x
+ grep val1
+ eval /home/dinah/go/bin/lxc "image" "show" "foo-image" --verbose
+ /home/dinah/go/bin/lxc image show foo-image --verbose
  prop1: val1
+ curl -k -s --cert /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/6bR/client3.crt --key /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/6bR/client3.key -X GET https://127.0.0.1:54153/1.0/images
+ grep /1.0/images/
+ lxc image delete foo-image
+ LXC_LOCAL=1 lxc_remote image delete foo-image
+ set +x
+ eval /home/dinah/go/bin/lxc "image" "delete" "foo-image" --verbose
+ /home/dinah/go/bin/lxc image delete foo-image --verbose
+ lxc publish bar --alias=foo-image --alias=foo-image2
+ LXC_LOCAL=1 lxc_remote publish bar --alias=foo-image --alias=foo-image2
+ set +x
+ eval /home/dinah/go/bin/lxc "publish" "bar" "--alias=foo-image" "--alias=foo-image2" --verbose
+ /home/dinah/go/bin/lxc publish bar --alias=foo-image --alias=foo-image2 --verbose
INFO[11-16|00:08:59] Exporting container                      project=default name=bar created=2019-11-16T00:08:58+0000 ephemeral=false used=1970-01-01T00:00:00+0000
Publishing container: Image pack: 100% (15.09MB/s)
INFO[11-16|00:08:59] Exported container                       project=default name=bar created=2019-11-16T00:08:58+0000 ephemeral=false used=1970-01-01T00:00:00+0000
Container published with fingerprint: 5a06c479e60d061b7296ccbf4f71abf34d650c546e2290db205a912d28e7ebb5
+ lxc launch testimage baz
+ LXC_LOCAL=1 lxc_remote launch testimage baz
+ set +x
+ eval /home/dinah/go/bin/lxc "launch" "testimage" "baz" --verbose
+ /home/dinah/go/bin/lxc launch testimage baz --verbose
Creating baz
INFO[11-16|00:08:59] Creating container                       project=default name=baz ephemeral=false
INFO[11-16|00:08:59] Created container                        project=default name=baz ephemeral=false
Starting baz                               
INFO[11-16|00:08:59] Starting container                       name=baz action=start created=2019-11-16T00:08:59+0000 ephemeral=false used=1970-01-01T00:00:00+0000 stateful=false project=default
EROR[11-16|00:09:00] Failed starting container                action=start created=2019-11-16T00:08:59+0000 ephemeral=false used=1970-01-01T00:00:00+0000 stateful=false project=default name=baz
Error: Failed to run: /home/dinah/go/bin/lxd forkstart baz /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/containers /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/logs/baz/lxc.conf: 
Try `lxc info --show-log local:baz` for more info
+ cleanup
+ set +ex
==> Cleaning up
==> Killing LXD at /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf
==> Deleting all containers
INFO[11-16|00:09:00] Deleting container                       project=default name=bar created=2019-11-16T00:08:58+0000 ephemeral=false used=1970-01-01T00:00:00+0000
INFO[11-16|00:09:00] Deleted container                        name=bar created=2019-11-16T00:08:58+0000 ephemeral=false used=1970-01-01T00:00:00+0000 project=default
INFO[11-16|00:09:00] Deleting container                       project=default name=baz created=2019-11-16T00:08:59+0000 ephemeral=false used=2019-11-16T00:09:00+0000
Error: remove /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/storage-pools/lxdtest-yaf/containers/baz/rootfs: device or resource busy
==> Deleting all images
==> Deleting all networks
==> Deleting all profiles
Error: The 'default' profile cannot be deleted
==> Clearing config of default profile
Error: At least one container relies on this profile's root disk device
==> Deleting all storage pools
Error: storage pool "lxdtest-yaf" has volumes attached to it
==> Checking for locked DB tables
INFO[11-16|00:09:01] Asked to shutdown by API, shutting down containers 
INFO[11-16|00:09:01] Starting shutdown sequence 
INFO[11-16|00:09:01] Stopping REST API handler: 
INFO[11-16|00:09:01]  - closing socket                        socket=127.0.0.1:54153
INFO[11-16|00:09:01]  - closing socket                        socket=/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/unix.socket
INFO[11-16|00:09:01] Stopping /dev/lxd handler: 
INFO[11-16|00:09:01]  - closing socket                        socket=/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/devlxd/sock
INFO[11-16|00:09:01] Closing the database 
WARN[11-16|00:09:01] Failed to delete operation 2a1f6eaf-2e8f-41c4-8aec-d40013225d12: failed to begin transaction: sql: database is closed 
WARN[11-16|00:09:01] Failed to delete operation f97fdc01-aa73-4f23-aa13-e5208baa90ca: failed to begin transaction: sql: database is closed 
INFO[11-16|00:09:01] Unmounting temporary filesystems 
INFO[11-16|00:09:01] Done unmounting temporary filesystems 
==> Checking for leftover files
/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/containers/ is not empty, content:
/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/containers/
/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/containers/baz
/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/devices/ is not empty, content:
/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/devices/
/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/devices/baz
/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/security/apparmor/cache/26b63962.0 is not empty, content:
/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/security/apparmor/cache/26b63962.0
/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/security/apparmor/cache/26b63962.0/lxd-baz
/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/security/apparmor/profiles/ is not empty, content:
/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/security/apparmor/profiles/
/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/security/apparmor/profiles/lxd-baz
/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/security/seccomp/ is not empty, content:
/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/security/seccomp/
/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/security/seccomp/baz
==> Checking for leftover DB entries
DB table instances is not empty, content:
3|1|baz|2|0|0|2019-11-16 00:08:59.534380145+00:00|0|2019-11-16 00:09:00.634247876+00:00||1|0001-01-01 00:00:00+00:00
DB table instances_config is not empty, content:
21|3|image.name|busybox-x86_64
22|3|image.os|Busybox
23|3|image.architecture|x86_64
24|3|image.description|Busybox x86_64
25|3|volatile.base_image|d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7
26|3|volatile.idmap.next|[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":65536}]
27|3|volatile.idmap.base|0
28|3|volatile.last_state.idmap|[]
29|3|volatile.eth0.hwaddr|00:16:3e:bd:4b:4a
31|3|volatile.idmap.current|[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":65536}]
34|3|volatile.last_state.power|STOPPED
DB table instances_profiles is not empty, content:
3|3|1|1
DB table profiles_devices is not empty, content:
2|1|root|2
3|1|eth0|1
DB table profiles_devices_config is not empty, content:
3|2|path|/
4|2|pool|lxdtest-yaf
5|3|nictype|p2p
6|3|name|eth0
DB table storage_pools is not empty, content:
1|lxdtest-yaf|dir||1
DB table storage_pools_nodes is not empty, content:
1|1|1
DB table storage_pools_config is not empty, content:
1|1|1|source|/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/storage-pools/lxdtest-yaf
DB table storage_volumes is not empty, content:
3|baz|1|1|0||0|1
==> Tearing down directory backend in /home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf
rm: cannot remove '/home/dinah/go/src/github.com/lxc/lxd/test/tmp.jIc/yaf/storage-pools/lxdtest-yaf/containers/baz/rootfs': Device or resource busy


==> TEST DONE: basic usage
==> Test result: failure

@DBaum1 DBaum1 commented Nov 16, 2019

I am not running it in a VM or container.


@stgraber stgraber commented Nov 16, 2019

What OS and kernel is that system running?

The error above looks suspiciously like a kernel bug from the 5.1 kernel.
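
For anyone triaging a similar failure, both data points come from stock commands; nothing below is LXD-specific:

# Print the distribution release and the running kernel version,
# e.g. "Ubuntu 18.04.3 LTS" and "5.0.0-36-generic"
lsb_release -ds
uname -r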


@DBaum1 DBaum1 commented Nov 16, 2019

That might be the problem - I updated my system a couple days ago, so now I'm running Ubuntu 19.04, kernel v5.0.0-36-generic. I can revert to an earlier version.


@stgraber stgraber commented Nov 16, 2019

Hmm, no, that should be fine, it's very similar to what we run on Jenkins actually.

Ah, I wonder if it's just path traversal being a problem. Can you try running chmod +x /home/dinah and see if that takes care of the problem?
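
A minimal way to check the traversal theory before changing anything, assuming namei from util-linux is installed (the path below is just this reporter's layout):

# Show the permission bits of every directory component leading to the test tree;
# each component needs the execute (x) bit to be traversable by the container's uids.
namei -l /home/dinah/go/src/github.com/lxc/lxd/test
# Granting traversal alone on the home directory is enough; it does not make the
# directory listable or its contents readable to other users.
chmod o+x /home/dinah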


@DBaum1 DBaum1 commented Nov 16, 2019

chmod +x /home/dinah seems to help a lot - it is getting past basic usage and failing on container devices - nic - bridged.
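
As a side note, re-running just the failing test is much faster than the whole suite. This assumes main.sh accepts an individual test name as its first argument (true on current master, but worth confirming against your checkout's test/README):

# Hypothetical invocation: run only the bridged NIC device test
sudo -E ./main.sh container_devices_nic_bridged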

Results
root@liopleurodon:~/go/src/github.com/lxc/lxd/test# ./main.sh
==> Checking for dependencies
==> Available storage backends: dir btrfs lvm zfs
==> Using storage backend dir
==> Setting up directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/Gke
==> Spawning lxd in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/Gke
==> Spawned LXD (PID is 4919)
==> Confirming lxd is responsive
WARN[11-16|20:18:55] CGroup memory swap accounting is disabled, swap limits will be ignored. 
==> Binding to network
If this is your first time running LXD on this machine, you should also run: lxd init
To start your first container, try: lxc launch ubuntu:18.04

==> Bound to 127.0.0.1:35483
==> Setting trust password
==> Setting up networking
Device eth0 added to default
==> Configuring storage backend
==> Configuring directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/Gke
Storage pool lxdtest-Gke created
Device root added to default
==> TEST BEGIN: checking dependencies
2019/11/16 20:19:10 auth - running at http://127.0.0.1:42867
==> TEST DONE: checking dependencies (0s)
==> TEST BEGIN: static analysis
On branch master
Your branch is up to date with 'origin/master'.

Changes to be committed:
  (use "git reset HEAD <file>..." to unstage)

	modified:   po/bg.po
	modified:   po/de.po
	modified:   po/el.po
	modified:   po/es.po
	modified:   po/fa.po
	modified:   po/fi.po
	modified:   po/fr.po
	modified:   po/hi.po
	modified:   po/id.po
	modified:   po/it.po
	modified:   po/ja.po
	modified:   po/ko.po
	modified:   po/nb_NO.po
	modified:   po/nl.po
	modified:   po/pa.po
	modified:   po/pl.po
	modified:   po/pt_BR.po
	modified:   po/ru.po
	modified:   po/sl.po
	modified:   po/sr.po
	modified:   po/sv.po
	modified:   po/te.po
	modified:   po/tr.po
	modified:   po/uk.po
	modified:   po/zh_Hans.po

Untracked files:
  (use "git add <file>..." to include in what will be committed)

	test/tmp.b2f/

WORK=/tmp/go-build301354816
rm -r $WORK/b001/
................................................................................................. done.
.......................................................................................................... done.
.............................................................................................................. done.
........................................................................................................... done.
............................................................................................................. done.
................................................................................................. done.
.............................................................................................. done.
.................................................................................................. done.
............................................................................................... done.
................................................................................................ done.
................................................................................................ done.
..................................................................................................... done.
.............................................................................................. done.
................................................................................................ done.
.............................................................................................. done.
.................................................................................................. done.
............................................................................................. done.
................................................................................................. done.
................................................................................................. done.
............................................................................................... done.
................................................................................................. done.
................................................................................................ done.
.................................................................................................. done.
..................................................................................................... done.
............................................................................................... done.
==> TEST DONE: static analysis (36s)
==> TEST BEGIN: database schema updates
==> Setting up directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/KvU
==> Spawning lxd in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/KvU
==> Spawned LXD (PID is 6982)
==> Confirming lxd is responsive
WARN[11-16|20:19:47] CGroup memory swap accounting is disabled, swap limits will be ignored. 
==> Binding to network
==> Bound to 127.0.0.1:43253
==> Setting trust password
==> Setting up networking
Device eth0 added to default
==> Configuring storage backend
==> Configuring directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/KvU
Storage pool lxdtest-KvU created
Device root added to default
==> Killing LXD at /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/KvU
==> Deleting all containers
==> Deleting all images
==> Deleting all networks
==> Deleting all profiles
Error: The 'default' profile cannot be deleted
Profile docker deleted
==> Clearing config of default profile
==> Deleting all storage pools
Storage pool lxdtest-KvU deleted
==> Checking for locked DB tables
WARN[11-16|20:19:52] Failed to update instance types: Get https://images.linuxcontainers.org/meta/instance-types/aws.yaml: context canceled 
==> Checking for leftover files
==> Checking for leftover DB entries
==> Tearing down directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/KvU
==> TEST DONE: database schema updates (8s)
==> TEST BEGIN: database restore
==> Setting up directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/2rS
==> Spawning lxd in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/2rS
==> Spawned LXD (PID is 7286)
==> Confirming lxd is responsive
WARN[11-16|20:19:55] CGroup memory swap accounting is disabled, swap limits will be ignored. 
==> Binding to network
==> Bound to 127.0.0.1:42935
==> Setting trust password
==> Setting up networking
Device eth0 added to default
==> Configuring storage backend
==> Configuring directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/2rS
Storage pool lxdtest-2rS created
Device root added to default
==> Killing LXD at /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/2rS
WARN[11-16|20:19:59] Failed to update instance types: Get https://us.images.linuxcontainers.org/meta/instance-types/aws.yaml: context canceled 
WARN[11-16|20:19:59] CGroup memory swap accounting is disabled, swap limits will be ignored. 
EROR[11-16|20:20:00] Failed to start the daemon: failed to open cluster database: failed to ensure schema: failed to execute queries from /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/2rS/database/patch.global.sql: no such table: broken 
Error: failed to open cluster database: failed to ensure schema: failed to execute queries from /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/2rS/database/patch.global.sql: no such table: broken
==> Spawning lxd in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/2rS
==> Spawned LXD (PID is 7426)
==> Confirming lxd is responsive
WARN[11-16|20:20:00] CGroup memory swap accounting is disabled, swap limits will be ignored. 
==> Killing LXD at /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/2rS
==> Deleting all containers
==> Deleting all images
==> Deleting all networks
==> Deleting all profiles
Error: The 'default' profile cannot be deleted
==> Clearing config of default profile
==> Deleting all storage pools
Storage pool lxdtest-2rS deleted
==> Checking for locked DB tables
WARN[11-16|20:20:01] Failed to update instance types: Get https://images.linuxcontainers.org/meta/instance-types/.yaml: context canceled 
==> Checking for leftover files
==> Checking for leftover DB entries
==> Tearing down directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/2rS
==> TEST DONE: database restore (10s)
==> TEST BEGIN: lxd sql
Description:
  Execute a SQL query against the LXD local or global database

  The local database is specific to the LXD cluster member you target the
  command to, and contains member-specific data (such as the member network
  address).

  The global database is common to all LXD members in the cluster, and contains
  cluster-specific data (such as profiles, containers, etc).

  If you are running a non-clustered LXD instance, the same applies, as that
  instance is effectively a single-member cluster.

  If <query> is the special value "-", then the query is read from
  standard input.

  If <query> is the special value ".dump", the the command returns a SQL text
  dump of the given database.

  If <query> is the special value ".schema", the the command returns the SQL
  text schema of the given database.

  This internal command is mostly useful for debugging and disaster
  recovery. The LXD team will occasionally provide hotfixes to users as a
  set of database queries to fix some data inconsistency.

  This command targets the global LXD database and works in both local
  and cluster mode.

Usage:
  lxd sql <local|global> <query> [flags]

Global Flags:
  -d, --debug     Show all debug messages
  -h, --help      Print help
      --logfile   Path to the log file
      --syslog    Log to syslog
      --trace     Log tracing targets
  -v, --verbose   Show all information messages
      --version   Print version number
Error: Invalid database type
Error: No query provided
SYNC /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/Gke/database/global/db.bin
==> TEST DONE: lxd sql (2s)
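As a usage note for the help text above, the query is passed as a single argument, so a hypothetical one-off query against a table seen in the leftover-DB checks would look like:

lxd sql global "SELECT * FROM storage_pools"
lxd sql global .schema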
==> TEST BEGIN: basic usage
Image imported as: d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7
Setup alias: testimage
Generating a client certificate. This may take a minute...
2019/11/16 20:20:07 http: TLS handshake error from 127.0.0.1:47450: EOF
2019/11/16 20:20:07 http: TLS handshake error from 127.0.0.1:47452: remote error: tls: bad certificate
Client certificate stored at server:  localhost
Image exported successfully!           
auto_update: false
properties:
  architecture: x86_64
  description: Busybox x86_64
  name: busybox-x86_64
  os: Busybox
public: false
expires_at: 1970-01-01T00:00:00Z
{"type":"sync","status":"Success","status_code":200,"operation":"","error_code":0,"error":"","metadata":{"config":{"core.https_address":"127.0.0.1:35483","core.trust_password":true},"api_extensions":["storage_zfs_remove_snapshots","container_host_shutdown_timeout","container_stop_priority","container_syscall_filtering","auth_pki","container_last_used_at","etag","patch","usb_devices","https_allowed_credentials","image_compression_algorithm","directory_manipulation","container_cpu_time","storage_zfs_use_refquota","storage_lvm_mount_options","network","profile_usedby","container_push","container_exec_recording","certificate_update","container_exec_signal_handling","gpu_devices","container_image_properties","migration_progress","id_map","network_firewall_filtering","network_routes","storage","file_delete","file_append","network_dhcp_expiry","storage_lvm_vg_rename","storage_lvm_thinpool_rename","network_vlan","image_create_aliases","container_stateless_copy","container_only_migration","storage_zfs_clone_copy","unix_device_rename","storage_lvm_use_thinpool","storage_rsync_bwlimit","network_vxlan_interface","storage_btrfs_mount_options","entity_description","image_force_refresh","storage_lvm_lv_resizing","id_map_base","file_symlinks","container_push_target","network_vlan_physical","storage_images_delete","container_edit_metadata","container_snapshot_stateful_migration","storage_driver_ceph","storage_ceph_user_name","resource_limits","storage_volatile_initial_source","storage_ceph_force_osd_reuse","storage_block_filesystem_btrfs","resources","kernel_limits","storage_api_volume_rename","macaroon_authentication","network_sriov","console","restrict_devlxd","migration_pre_copy","infiniband","maas_network","devlxd_events","proxy","network_dhcp_gateway","file_get_symlink","network_leases","unix_device_hotplug","storage_api_local_volume_handling","operation_description","clustering","event_lifecycle","storage_api_remote_volume_handling","nvidia_runtime","container_mount_propagation","container_backup","devlxd_images","container_local_cross_pool_handling","proxy_unix","proxy_udp","clustering_join","proxy_tcp_udp_multi_port_handling","network_state","proxy_unix_dac_properties","container_protection_delete","unix_priv_drop","pprof_http","proxy_haproxy_protocol","network_hwaddr","proxy_nat","network_nat_order","container_full","candid_authentication","backup_compression","candid_config","nvidia_runtime_config","storage_api_volume_snapshots","storage_unmapped","projects","candid_config_key","network_vxlan_ttl","container_incremental_copy","usb_optional_vendorid","snapshot_scheduling","container_copy_project","clustering_server_address","clustering_image_replication","container_protection_shift","snapshot_expiry","container_backup_override_pool","snapshot_expiry_creation","network_leases_location","resources_cpu_socket","resources_gpu","resources_numa","kernel_features","id_map_current","event_location","storage_api_remote_volume_snapshots","network_nat_address","container_nic_routes","rbac","cluster_internal_copy","seccomp_notify","lxc_features","container_nic_ipvlan","network_vlan_sriov","storage_cephfs","container_nic_ipfilter","resources_v2","container_exec_user_group_cwd","container_syscall_intercept","container_disk_shift","storage_shifted","resources_infiniband","daemon_storage","instances","image_types","resources_disk_sata","clustering_roles","images_expiry","resources_network_firmware","backup_compression_algorithm","ceph_data_pool_name","container_syscall_intercept_mount","compression_squashfs","cont
ainer_raw_mount","container_nic_routed","container_syscall_intercept_mount_fuse"],"api_status":"stable","api_version":"1.0","auth":"trusted","public":false,"auth_methods":["tls"],"environment":{"addresses":["127.0.0.1:35483"],"architectures":["x86_64","i686"],"certificate":"-----BEGIN CERTIFICATE-----\nMIIFzjCCA7igAwIBAgIRAKnCQRdpkZ86oXYOd9hGrPgwCwYJKoZIhvcNAQELMB4x\nHDAaBgNVBAoTE2xpbnV4Y29udGFpbmVycy5vcmcwHhcNMTUwNzE1MDQ1NjQ0WhcN\nMjUwNzEyMDQ1NjQ0WjAeMRwwGgYDVQQKExNsaW51eGNvbnRhaW5lcnMub3JnMIIC\nIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAyViJkCzoxa1NYilXqGJog6xz\nlSm4xt8KIzayc0JdB9VxEdIVdJqUzBAUtyCS4KZ9MbPmMEOX9NbBASL0tRK58/7K\nScq99Kj4XbVMLU1P/y5aW0ymnF0OpKbG6unmgAI2k/duRlbYHvGRdhlswpKl0Yst\nl8i2kXOK0Rxcz90FewcEXGSnIYW21sz8YpBLfIZqOx6XEV36mOdi3MLrhUSAhXDw\nPay33Y7NonCQUBtiO7BT938cqI14FJrWdKon1UnODtzONcVBLTWtoe7D41+mx7EE\nTaq5OPxBSe0DD6KQcPOZ7ZSJEhIqVKMvzLyiOJpyShmhm4OuGNoAG6jAuSij/9Kc\naLU4IitcrvFOuAo8M9OpiY9ZCR7Gb/qaPAXPAxE7Ci3f9DDNKXtPXDjhj3YG01+h\nfNXMW3kCkMImn0A/+mZUMdCL87GWN2AN3Do5qaIc5XVEt1gp+LVqJeMoZ/lAeZWT\nIbzcnkneOzE25m+bjw3r3WlR26amhyrWNwjGzRkgfEpw336kniX/GmwaCNgdNk+g\n5aIbVxIHO0DbgkDBtdljR3VOic4djW/LtUIYIQ2egnPPyRR3fcFI+x5EQdVQYUXf\njpGIwovUDyG0Lkam2tpdeEXvLMZr8+Lhzu+H6vUFSj3cz6gcw/Xepw40FOkYdAI9\nLYB6nwpZLTVaOqZCJ2ECAwEAAaOCAQkwggEFMA4GA1UdDwEB/wQEAwIAoDATBgNV\nHSUEDDAKBggrBgEFBQcDATAMBgNVHRMBAf8EAjAAMIHPBgNVHREEgccwgcSCCVVi\ndW50dVByb4IRMTAuMTY3LjE2MC4xODMvMjSCHzIwMDE6MTVjMDo2NzM1OmVlMDA6\nOmU6ZTMxMy8xMjiCKWZkNTc6Yzg3ZDpmMWVlOmVlMDA6MjFkOjdkZmY6ZmUwOToz\nNzUzLzY0gikyMDAxOjE1YzA6NjczNTplZTAwOjIxZDo3ZGZmOmZlMDk6Mzc1My82\nNIIbZmU4MDo6MjFkOjdkZmY6ZmUwOTozNzUzLzY0ghAxOTIuMTY4LjEyMi4xLzI0\nMAsGCSqGSIb3DQEBCwOCAgEAmcJUSBH7cLw3auEEV1KewtdqY1ARVB/pafAtbe9F\n7ZKBbxUcS7cP3P1hRs5FH1bH44bIJKHxckctNUPqvC+MpXSryKinQ5KvGPNjGdlW\n6EPlQr23btizC6hRdQ6RjEkCnQxhyTLmQ9n78nt47hjA96rFAhCUyfPdv9dI4Zux\nbBTJekhCx5taamQKoxr7tql4Y2TchVlwASZvOfar8I0GxBRFT8w9IjckOSLoT9/s\nOhlvXpeoxxFT7OHwqXEXdRUvw/8MGBo6JDnw+J/NGDBw3Z0goebG4FMT//xGSHia\nczl3A0M0flk4/45L7N6vctwSqi+NxVaJRKeiYPZyzOO9K/d+No+WVBPwKmyP8icQ\nb7FGTelPJOUolC6kmoyM+vyaNUoU4nz6lgOSHAtuqGNDWZWuX/gqzZw77hzDIgkN\nqisOHZWPVlG/iUh1JBkbglBaPeaa3zf0XwSdgwwf4v8Z+YtEiRqkuFgQY70eQKI/\nCIkj1p0iW5IBEsEAGUGklz4ZwqJwH3lQIqDBzIgHe3EP4cXaYsx6oYhPSDdHLPv4\nHMZhl05DP75CEkEWRD0AIaL7SHdyuYUmCZ2zdrMI7TEDrAqcUuPbYpHcdJ2wnYmi\n2G8XHJibfu4PCpIm1J8kPL8rqpdgW3moKR8Mp0HJQOH4tSBr1Ep7xNLP1wg6PIe+\np7U=\n-----END CERTIFICATE-----\n","certificate_fingerprint":"5325b921edf26720953bff015b69900943315b5db69f3c6e17c25a694875c5b8","driver":"lxc","driver_version":"3.0.3","kernel":"Linux","kernel_architecture":"x86_64","kernel_features":{"netnsid_getifaddrs":"false","seccomp_listener":"false","seccomp_listener_continue":"false","shiftfs":"false","uevent_injection":"false","unpriv_fscaps":"true"},"kernel_version":"4.15.0-70-generic","lxc_features":{"mount_injection_file":"false","network_gateway_device_route":"false","network_ipvlan":"false","network_l2proxy":"false","network_phys_macvlan_mtu":"false","network_veth_router":"false","seccomp_notify":"false"},"project":"default","server":"lxd","server_clustered":false,"server_name":"liopleurodon","server_pid":4919,"server_version":"3.18","storage":"dir","storage_version":"1"}}}
{"type":"sync","status":"Success","status_code":200,"operation":"","error_code":0,"error":"","metadata":[]}
Image exported successfully!           
Image exported successfully!           
Image imported as: 847f9111bd14d63c5d0ffd59e6fdaf0a46c91e738eeb39829c5889149b0a660e
Setup alias: splitimage
Image exported successfully!           
Image imported as: 847f9111bd14d63c5d0ffd59e6fdaf0a46c91e738eeb39829c5889149b0a660e
Setup alias: splitimage
Image exported successfully!           
Creating foo
| foo  | STOPPED |      |      | CONTAINER | 0         |
| foo  | STOPPED |      |      | CONTAINER | 0         |
  "name": "foo",
Error: Can't specify --fast with --columns
+------+---------+------+------+-----------+-----------+
| NAME |  STATE  | IPV4 | IPV6 |   TYPE    | SNAPSHOTS |
+------+---------+------+------+-----------+-----------+
| bar  | STOPPED |      |      | CONTAINER | 0         |
+------+---------+------+------+-----------+-----------+
| bar  | STOPPED |      |      | CONTAINER | 0         |
+------+---------+------+------+-----------+-----------+
| NAME |  STATE  | IPV4 | IPV6 |   TYPE    | SNAPSHOTS |
+------+---------+------+------+-----------+-----------+
| foo  | STOPPED |      |      | CONTAINER | 0         |
+------+---------+------+------+-----------+-----------+
| foo  | STOPPED |      |      | CONTAINER | 0         |
Generating a client certificate. This may take a minute...
Error: Get https://0.0.0.0:8443: Unable to connect to: 0.0.0.0:8443
WARN[11-16|20:20:14] Rejecting request from untrusted client  ip=127.0.0.1:47474
{"error":"not authorized","error_code":403,"type":"error"}
Container published with fingerprint: c3d9a48d68c2243c85457235f4e21571936ef472004767129e5b0733557ecb45
  prop1: val1
Container published with fingerprint: e2ec0b29c06f5cd52fd22e5135e27e3b728875aba6a48ebdb939c7c7ec6f4323
Creating baz
Starting baz                              
WARN[11-16|20:20:18] Detected poll(POLLNVAL) event. 
Container published with fingerprint: 5fed95f115d712536ea30a9e35a2e4ee3914441ddd52f570de424e84d23a0f5d
Container published with fingerprint: 0173edb5a21978bae93d1abfe9a7eddab5b16d3c52489f8a37cb3fb00d851f2b
  prop: val1
Profile priv created
Creating barpriv
Container published with fingerprint: 715ebb593fa0806e1b8db91f4c9f2867cd57a2309eac76dd3c695a20357bc9dc
  prop1: val1
Profile priv deleted
Container published with fingerprint: e2ec0b29c06f5cd52fd22e5135e27e3b728875aba6a48ebdb939c7c7ec6f4323
{"type":"sync","status":"Success","status_code":200,"operation":"","error_code":0,"error":"","metadata":["/1.0/images/e2ec0b29c06f5cd52fd22e5135e27e3b728875aba6a48ebdb939c7c7ec6f4323"]}
Error: unknown shorthand flag: 'a' in -abc
Creating abc-
Error: Create instance: Container name isn't a valid hostname
Creating 1234
Error: Create instance: Container name isn't a valid hostname
Creating 12test
Error: Create instance: Container name isn't a valid hostname
Creating a_b_c
Error: Create instance: Container name isn't a valid hostname
Creating aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
Error: Create instance: Container name isn't a valid hostname
Container published with fingerprint: e2ec0b29c06f5cd52fd22e5135e27e3b728875aba6a48ebdb939c7c7ec6f4323
Creating bar2
| bar2 | STOPPED |      |      | CONTAINER | 0         |
Creating foo
Creating the instance                     
Instance name is: lenient-sunbird         
Starting lenient-sunbird
{"type":"sync","status":"Success","status_code":200,"operation":"","error_code":0,"error":"","metadata":{"id":"4d0ff941-cd49-4ddc-bf0b-304870a0a0e9","class":"task","description":"Creating container","created_at":"2019-11-16T20:20:32.600807561Z","updated_at":"2019-11-16T20:20:32.600807561Z","status":"Success","status_code":200,"resources":{"containers":["/1.0/containers/nonetype"],"instances":["/1.0/instances/nonetype"]},"metadata":null,"may_cancel":false,"err":"","location":"none"}}
{"type":"sync","status":"Success","status_code":200,"operation":"","error_code":0,"error":"","metadata":{"id":"563987bd-eca4-442d-b4e2-e2eab119e1f9","class":"task","description":"Creating container","created_at":"2019-11-16T20:20:33.332240795Z","updated_at":"2019-11-16T20:20:33.332240795Z","status":"Success","status_code":200,"resources":{"containers":["/1.0/containers/configtest"],"instances":["/1.0/instances/configtest"]},"metadata":null,"may_cancel":false,"err":"","location":"none"}}
==> Setting up directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/cAy
==> Spawning lxd in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/cAy
==> Spawned LXD (PID is 9308)
==> Confirming lxd is responsive
WARN[11-16|20:20:33] CGroup memory swap accounting is disabled, swap limits will be ignored. 
==> Binding to network
==> Bound to 127.0.0.1:57243
==> Setting trust password
==> Setting up networking
Device eth0 added to default
==> Configuring storage backend
==> Configuring directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/cAy
Storage pool lxdtest-cAy created
Device root added to default
Image imported as: d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7
Setup alias: testimage
Creating autostart
==> Killing LXD at /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/cAy
==> Spawning lxd in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/cAy
==> Spawned LXD (PID is 9490)
==> Confirming lxd is responsive
WARN[11-16|20:20:41] CGroup memory swap accounting is disabled, swap limits will be ignored. 
==> Killing LXD at /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/cAy
==> Spawning lxd in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/cAy
==> Spawned LXD (PID is 9873)
==> Confirming lxd is responsive
WARN[11-16|20:20:54] CGroup memory swap accounting is disabled, swap limits will be ignored. 
==> Killing LXD at /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/cAy
==> Deleting all containers
==> Deleting all images
==> Deleting all networks
==> Deleting all profiles
Error: The 'default' profile cannot be deleted
==> Clearing config of default profile
==> Deleting all storage pools
Storage pool lxdtest-cAy deleted
==> Checking for locked DB tables
==> Checking for leftover files
==> Checking for leftover DB entries
==> Tearing down directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/cAy
Creating foo
Starting foo                              
| foo  | RUNNING |      |      | CONTAINER | 0         |
Creating test-limits
Starting test-limits                      
Creating last-used-at-test
1970-01-01T00:00:00Z                      
2019-11-16T20:21:15.696650548Z
WARN[11-16|20:21:18] Detected poll(POLLNVAL) event.
/root
BEST_BAND=meshuggah
21: eth0@if22: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue qlen 1000
abc
WARN[11-16|20:21:20] Detected poll(POLLNVAL) event.
abc
WARN[11-16|20:21:20] Detected poll(POLLNVAL) event.
WARN[11-16|20:21:20] Detected poll(POLLNVAL) event.
WARN[11-16|20:21:20] Detected poll(POLLNVAL) event.
foo
Creating deleterunning
Starting deleterunning                     
{"error":"container is running","error_code":400,"type":"error"}
Creating lxd-apparmor-test
Starting lxd-apparmor-test                 
Creating lxd-seccomp-test
Starting lxd-seccomp-test                 
Profile unconfined created
Creating foo2
                                          
The container you are starting doesn't have any network attached to it.
  To create a new network, use: lxc network create
  To attach a network to a container, use: lxc network attach
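For example, assuming lxdbr0 as a hypothetical bridge name, with the foo2 container from this run:

lxc network create lxdbr0
lxc network attach lxdbr0 foo2 eth0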

Profile unconfined deleted
Creating configtest
Creating c1                               
Starting c1                               
Creating c2
Starting c2                               
Creating c3
Starting c3                               
WARN[11-16|20:21:49] Detected poll(POLLNVAL) event. 
WARN[11-16|20:21:49] Detected poll(POLLNVAL) event.
Container published with fingerprint: b6801668ffd3eca925a4ff8333d15c4718022534b0a6046f67e6c0483381a0ff
Container published with fingerprint: 27950b4a39941c192a2ffc1f8652ac7c2e2fc5aec5f587cd04b63c0461cd862f
Container published with fingerprint: e95d74037723dbdee0f6005fc5b73e79f523af35d05f705510e983e4aa941412
Creating c1
Creating c2                               
| c1   | RUNNING |      |      | CONTAINER | 0         |
| c2   | RUNNING |      |      | CONTAINER | 0         |
Error: Both --all and container name given
| c1   | STOPPED |      |      | CONTAINER | 0         |
| c2   | STOPPED |      |      | CONTAINER | 0         |
Creating foo
Starting foo                              
Error: The 'default' profile cannot be renamed
Error: The 'default' profile cannot be deleted
==> TEST DONE: basic usage (135s)
==> TEST BEGIN: remote url handling
2019/11/16 20:22:22 http: TLS handshake error from 127.0.0.1:47556: EOF
2019/11/16 20:22:22 http: TLS handshake error from 127.0.0.1:47558: remote error: tls: bad certificate
config:
  core.https_address: 127.0.0.1:35483
  core.trust_password: true
api_extensions:
- storage_zfs_remove_snapshots
- container_host_shutdown_timeout
- container_stop_priority
- container_syscall_filtering
- auth_pki
- container_last_used_at
- etag
- patch
- usb_devices
- https_allowed_credentials
- image_compression_algorithm
- directory_manipulation
- container_cpu_time
- storage_zfs_use_refquota
- storage_lvm_mount_options
- network
- profile_usedby
- container_push
- container_exec_recording
- certificate_update
- container_exec_signal_handling
- gpu_devices
- container_image_properties
- migration_progress
- id_map
- network_firewall_filtering
- network_routes
- storage
- file_delete
- file_append
- network_dhcp_expiry
- storage_lvm_vg_rename
- storage_lvm_thinpool_rename
- network_vlan
- image_create_aliases
- container_stateless_copy
- container_only_migration
- storage_zfs_clone_copy
- unix_device_rename
- storage_lvm_use_thinpool
- storage_rsync_bwlimit
- network_vxlan_interface
- storage_btrfs_mount_options
- entity_description
- image_force_refresh
- storage_lvm_lv_resizing
- id_map_base
- file_symlinks
- container_push_target
- network_vlan_physical
- storage_images_delete
- container_edit_metadata
- container_snapshot_stateful_migration
- storage_driver_ceph
- storage_ceph_user_name
- resource_limits
- storage_volatile_initial_source
- storage_ceph_force_osd_reuse
- storage_block_filesystem_btrfs
- resources
- kernel_limits
- storage_api_volume_rename
- macaroon_authentication
- network_sriov
- console
- restrict_devlxd
- migration_pre_copy
- infiniband
- maas_network
- devlxd_events
- proxy
- network_dhcp_gateway
- file_get_symlink
- network_leases
- unix_device_hotplug
- storage_api_local_volume_handling
- operation_description
- clustering
- event_lifecycle
- storage_api_remote_volume_handling
- nvidia_runtime
- container_mount_propagation
- container_backup
- devlxd_images
- container_local_cross_pool_handling
- proxy_unix
- proxy_udp
- clustering_join
- proxy_tcp_udp_multi_port_handling
- network_state
- proxy_unix_dac_properties
- container_protection_delete
- unix_priv_drop
- pprof_http
- proxy_haproxy_protocol
- network_hwaddr
- proxy_nat
- network_nat_order
- container_full
- candid_authentication
- backup_compression
- candid_config
- nvidia_runtime_config
- storage_api_volume_snapshots
- storage_unmapped
- projects
- candid_config_key
- network_vxlan_ttl
- container_incremental_copy
- usb_optional_vendorid
- snapshot_scheduling
- container_copy_project
- clustering_server_address
- clustering_image_replication
- container_protection_shift
- snapshot_expiry
- container_backup_override_pool
- snapshot_expiry_creation
- network_leases_location
- resources_cpu_socket
- resources_gpu
- resources_numa
- kernel_features
- id_map_current
- event_location
- storage_api_remote_volume_snapshots
- network_nat_address
- container_nic_routes
- rbac
- cluster_internal_copy
- seccomp_notify
- lxc_features
- container_nic_ipvlan
- network_vlan_sriov
- storage_cephfs
- container_nic_ipfilter
- resources_v2
- container_exec_user_group_cwd
- container_syscall_intercept
- container_disk_shift
- storage_shifted
- resources_infiniband
- daemon_storage
- instances
- image_types
- resources_disk_sata
- clustering_roles
- images_expiry
- resources_network_firmware
- backup_compression_algorithm
- ceph_data_pool_name
- container_syscall_intercept_mount
- compression_squashfs
- container_raw_mount
- container_nic_routed
- container_syscall_intercept_mount_fuse
api_status: stable
api_version: "1.0"
auth: trusted
public: false
auth_methods:
- tls
environment:
  addresses:
  - 127.0.0.1:35483
  architectures:
  - x86_64
  - i686
  certificate: |
    -----BEGIN CERTIFICATE-----
    MIIFzjCCA7igAwIBAgIRAKnCQRdpkZ86oXYOd9hGrPgwCwYJKoZIhvcNAQELMB4x
    HDAaBgNVBAoTE2xpbnV4Y29udGFpbmVycy5vcmcwHhcNMTUwNzE1MDQ1NjQ0WhcN
    MjUwNzEyMDQ1NjQ0WjAeMRwwGgYDVQQKExNsaW51eGNvbnRhaW5lcnMub3JnMIIC
    IjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAyViJkCzoxa1NYilXqGJog6xz
    lSm4xt8KIzayc0JdB9VxEdIVdJqUzBAUtyCS4KZ9MbPmMEOX9NbBASL0tRK58/7K
    Scq99Kj4XbVMLU1P/y5aW0ymnF0OpKbG6unmgAI2k/duRlbYHvGRdhlswpKl0Yst
    l8i2kXOK0Rxcz90FewcEXGSnIYW21sz8YpBLfIZqOx6XEV36mOdi3MLrhUSAhXDw
    Pay33Y7NonCQUBtiO7BT938cqI14FJrWdKon1UnODtzONcVBLTWtoe7D41+mx7EE
    Taq5OPxBSe0DD6KQcPOZ7ZSJEhIqVKMvzLyiOJpyShmhm4OuGNoAG6jAuSij/9Kc
    aLU4IitcrvFOuAo8M9OpiY9ZCR7Gb/qaPAXPAxE7Ci3f9DDNKXtPXDjhj3YG01+h
    fNXMW3kCkMImn0A/+mZUMdCL87GWN2AN3Do5qaIc5XVEt1gp+LVqJeMoZ/lAeZWT
    IbzcnkneOzE25m+bjw3r3WlR26amhyrWNwjGzRkgfEpw336kniX/GmwaCNgdNk+g
    5aIbVxIHO0DbgkDBtdljR3VOic4djW/LtUIYIQ2egnPPyRR3fcFI+x5EQdVQYUXf
    jpGIwovUDyG0Lkam2tpdeEXvLMZr8+Lhzu+H6vUFSj3cz6gcw/Xepw40FOkYdAI9
    LYB6nwpZLTVaOqZCJ2ECAwEAAaOCAQkwggEFMA4GA1UdDwEB/wQEAwIAoDATBgNV
    HSUEDDAKBggrBgEFBQcDATAMBgNVHRMBAf8EAjAAMIHPBgNVHREEgccwgcSCCVVi
    dW50dVByb4IRMTAuMTY3LjE2MC4xODMvMjSCHzIwMDE6MTVjMDo2NzM1OmVlMDA6
    OmU6ZTMxMy8xMjiCKWZkNTc6Yzg3ZDpmMWVlOmVlMDA6MjFkOjdkZmY6ZmUwOToz
    NzUzLzY0gikyMDAxOjE1YzA6NjczNTplZTAwOjIxZDo3ZGZmOmZlMDk6Mzc1My82
    NIIbZmU4MDo6MjFkOjdkZmY6ZmUwOTozNzUzLzY0ghAxOTIuMTY4LjEyMi4xLzI0
    MAsGCSqGSIb3DQEBCwOCAgEAmcJUSBH7cLw3auEEV1KewtdqY1ARVB/pafAtbe9F
    7ZKBbxUcS7cP3P1hRs5FH1bH44bIJKHxckctNUPqvC+MpXSryKinQ5KvGPNjGdlW
    6EPlQr23btizC6hRdQ6RjEkCnQxhyTLmQ9n78nt47hjA96rFAhCUyfPdv9dI4Zux
    bBTJekhCx5taamQKoxr7tql4Y2TchVlwASZvOfar8I0GxBRFT8w9IjckOSLoT9/s
    OhlvXpeoxxFT7OHwqXEXdRUvw/8MGBo6JDnw+J/NGDBw3Z0goebG4FMT//xGSHia
    czl3A0M0flk4/45L7N6vctwSqi+NxVaJRKeiYPZyzOO9K/d+No+WVBPwKmyP8icQ
    b7FGTelPJOUolC6kmoyM+vyaNUoU4nz6lgOSHAtuqGNDWZWuX/gqzZw77hzDIgkN
    qisOHZWPVlG/iUh1JBkbglBaPeaa3zf0XwSdgwwf4v8Z+YtEiRqkuFgQY70eQKI/
    CIkj1p0iW5IBEsEAGUGklz4ZwqJwH3lQIqDBzIgHe3EP4cXaYsx6oYhPSDdHLPv4
    HMZhl05DP75CEkEWRD0AIaL7SHdyuYUmCZ2zdrMI7TEDrAqcUuPbYpHcdJ2wnYmi
    2G8XHJibfu4PCpIm1J8kPL8rqpdgW3moKR8Mp0HJQOH4tSBr1Ep7xNLP1wg6PIe+
    p7U=
    -----END CERTIFICATE-----
  certificate_fingerprint: 5325b921edf26720953bff015b69900943315b5db69f3c6e17c25a694875c5b8
  driver: lxc
  driver_version: 3.0.3
  kernel: Linux
  kernel_architecture: x86_64
  kernel_features:
    netnsid_getifaddrs: "false"
    seccomp_listener: "false"
    seccomp_listener_continue: "false"
    shiftfs: "false"
    uevent_injection: "false"
    unpriv_fscaps: "true"
  kernel_version: 4.15.0-70-generic
  lxc_features:
    mount_injection_file: "false"
    network_gateway_device_route: "false"
    network_ipvlan: "false"
    network_l2proxy: "false"
    network_phys_macvlan_mtu: "false"
    network_veth_router: "false"
    seccomp_notify: "false"
  project: default
  server: lxd
  server_clustered: false
  server_name: liopleurodon
  server_pid: 4919
  server_version: "3.18"
  storage: dir
  storage_version: "1"
2019/11/16 20:22:22 http: TLS handshake error from 127.0.0.1:47570: EOF
2019/11/16 20:22:22 http: TLS handshake error from 127.0.0.1:47572: remote error: tls: bad certificate
Client certificate stored at server:  test
config:
  core.https_address: 127.0.0.1:35483
  core.trust_password: true
api_extensions:
- storage_zfs_remove_snapshots
- container_host_shutdown_timeout
- container_stop_priority
- container_syscall_filtering
- auth_pki
- container_last_used_at
- etag
- patch
- usb_devices
- https_allowed_credentials
- image_compression_algorithm
- directory_manipulation
- container_cpu_time
- storage_zfs_use_refquota
- storage_lvm_mount_options
- network
- profile_usedby
- container_push
- container_exec_recording
- certificate_update
- container_exec_signal_handling
- gpu_devices
- container_image_properties
- migration_progress
- id_map
- network_firewall_filtering
- network_routes
- storage
- file_delete
- file_append
- network_dhcp_expiry
- storage_lvm_vg_rename
- storage_lvm_thinpool_rename
- network_vlan
- image_create_aliases
- container_stateless_copy
- container_only_migration
- storage_zfs_clone_copy
- unix_device_rename
- storage_lvm_use_thinpool
- storage_rsync_bwlimit
- network_vxlan_interface
- storage_btrfs_mount_options
- entity_description
- image_force_refresh
- storage_lvm_lv_resizing
- id_map_base
- file_symlinks
- container_push_target
- network_vlan_physical
- storage_images_delete
- container_edit_metadata
- container_snapshot_stateful_migration
- storage_driver_ceph
- storage_ceph_user_name
- resource_limits
- storage_volatile_initial_source
- storage_ceph_force_osd_reuse
- storage_block_filesystem_btrfs
- resources
- kernel_limits
- storage_api_volume_rename
- macaroon_authentication
- network_sriov
- console
- restrict_devlxd
- migration_pre_copy
- infiniband
- maas_network
- devlxd_events
- proxy
- network_dhcp_gateway
- file_get_symlink
- network_leases
- unix_device_hotplug
- storage_api_local_volume_handling
- operation_description
- clustering
- event_lifecycle
- storage_api_remote_volume_handling
- nvidia_runtime
- container_mount_propagation
- container_backup
- devlxd_images
- container_local_cross_pool_handling
- proxy_unix
- proxy_udp
- clustering_join
- proxy_tcp_udp_multi_port_handling
- network_state
- proxy_unix_dac_properties
- container_protection_delete
- unix_priv_drop
- pprof_http
- proxy_haproxy_protocol
- network_hwaddr
- proxy_nat
- network_nat_order
- container_full
- candid_authentication
- backup_compression
- candid_config
- nvidia_runtime_config
- storage_api_volume_snapshots
- storage_unmapped
- projects
- candid_config_key
- network_vxlan_ttl
- container_incremental_copy
- usb_optional_vendorid
- snapshot_scheduling
- container_copy_project
- clustering_server_address
- clustering_image_replication
- container_protection_shift
- snapshot_expiry
- container_backup_override_pool
- snapshot_expiry_creation
- network_leases_location
- resources_cpu_socket
- resources_gpu
- resources_numa
- kernel_features
- id_map_current
- event_location
- storage_api_remote_volume_snapshots
- network_nat_address
- container_nic_routes
- rbac
- cluster_internal_copy
- seccomp_notify
- lxc_features
- container_nic_ipvlan
- network_vlan_sriov
- storage_cephfs
- container_nic_ipfilter
- resources_v2
- container_exec_user_group_cwd
- container_syscall_intercept
- container_disk_shift
- storage_shifted
- resources_infiniband
- daemon_storage
- instances
- image_types
- resources_disk_sata
- clustering_roles
- images_expiry
- resources_network_firmware
- backup_compression_algorithm
- ceph_data_pool_name
- container_syscall_intercept_mount
- compression_squashfs
- container_raw_mount
- container_nic_routed
- container_syscall_intercept_mount_fuse
api_status: stable
api_version: "1.0"
auth: trusted
public: false
auth_methods:
- tls
environment:
  addresses:
  - 127.0.0.1:35483
  architectures:
  - x86_64
  - i686
  certificate: |
    -----BEGIN CERTIFICATE-----
    MIIFzjCCA7igAwIBAgIRAKnCQRdpkZ86oXYOd9hGrPgwCwYJKoZIhvcNAQELMB4x
    HDAaBgNVBAoTE2xpbnV4Y29udGFpbmVycy5vcmcwHhcNMTUwNzE1MDQ1NjQ0WhcN
    MjUwNzEyMDQ1NjQ0WjAeMRwwGgYDVQQKExNsaW51eGNvbnRhaW5lcnMub3JnMIIC
    IjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAyViJkCzoxa1NYilXqGJog6xz
    lSm4xt8KIzayc0JdB9VxEdIVdJqUzBAUtyCS4KZ9MbPmMEOX9NbBASL0tRK58/7K
    Scq99Kj4XbVMLU1P/y5aW0ymnF0OpKbG6unmgAI2k/duRlbYHvGRdhlswpKl0Yst
    l8i2kXOK0Rxcz90FewcEXGSnIYW21sz8YpBLfIZqOx6XEV36mOdi3MLrhUSAhXDw
    Pay33Y7NonCQUBtiO7BT938cqI14FJrWdKon1UnODtzONcVBLTWtoe7D41+mx7EE
    Taq5OPxBSe0DD6KQcPOZ7ZSJEhIqVKMvzLyiOJpyShmhm4OuGNoAG6jAuSij/9Kc
    aLU4IitcrvFOuAo8M9OpiY9ZCR7Gb/qaPAXPAxE7Ci3f9DDNKXtPXDjhj3YG01+h
    fNXMW3kCkMImn0A/+mZUMdCL87GWN2AN3Do5qaIc5XVEt1gp+LVqJeMoZ/lAeZWT
    IbzcnkneOzE25m+bjw3r3WlR26amhyrWNwjGzRkgfEpw336kniX/GmwaCNgdNk+g
    5aIbVxIHO0DbgkDBtdljR3VOic4djW/LtUIYIQ2egnPPyRR3fcFI+x5EQdVQYUXf
    jpGIwovUDyG0Lkam2tpdeEXvLMZr8+Lhzu+H6vUFSj3cz6gcw/Xepw40FOkYdAI9
    LYB6nwpZLTVaOqZCJ2ECAwEAAaOCAQkwggEFMA4GA1UdDwEB/wQEAwIAoDATBgNV
    HSUEDDAKBggrBgEFBQcDATAMBgNVHRMBAf8EAjAAMIHPBgNVHREEgccwgcSCCVVi
    dW50dVByb4IRMTAuMTY3LjE2MC4xODMvMjSCHzIwMDE6MTVjMDo2NzM1OmVlMDA6
    OmU6ZTMxMy8xMjiCKWZkNTc6Yzg3ZDpmMWVlOmVlMDA6MjFkOjdkZmY6ZmUwOToz
    NzUzLzY0gikyMDAxOjE1YzA6NjczNTplZTAwOjIxZDo3ZGZmOmZlMDk6Mzc1My82
    NIIbZmU4MDo6MjFkOjdkZmY6ZmUwOTozNzUzLzY0ghAxOTIuMTY4LjEyMi4xLzI0
    MAsGCSqGSIb3DQEBCwOCAgEAmcJUSBH7cLw3auEEV1KewtdqY1ARVB/pafAtbe9F
    7ZKBbxUcS7cP3P1hRs5FH1bH44bIJKHxckctNUPqvC+MpXSryKinQ5KvGPNjGdlW
    6EPlQr23btizC6hRdQ6RjEkCnQxhyTLmQ9n78nt47hjA96rFAhCUyfPdv9dI4Zux
    bBTJekhCx5taamQKoxr7tql4Y2TchVlwASZvOfar8I0GxBRFT8w9IjckOSLoT9/s
    OhlvXpeoxxFT7OHwqXEXdRUvw/8MGBo6JDnw+J/NGDBw3Z0goebG4FMT//xGSHia
    czl3A0M0flk4/45L7N6vctwSqi+NxVaJRKeiYPZyzOO9K/d+No+WVBPwKmyP8icQ
    b7FGTelPJOUolC6kmoyM+vyaNUoU4nz6lgOSHAtuqGNDWZWuX/gqzZw77hzDIgkN
    qisOHZWPVlG/iUh1JBkbglBaPeaa3zf0XwSdgwwf4v8Z+YtEiRqkuFgQY70eQKI/
    CIkj1p0iW5IBEsEAGUGklz4ZwqJwH3lQIqDBzIgHe3EP4cXaYsx6oYhPSDdHLPv4
    HMZhl05DP75CEkEWRD0AIaL7SHdyuYUmCZ2zdrMI7TEDrAqcUuPbYpHcdJ2wnYmi
    2G8XHJibfu4PCpIm1J8kPL8rqpdgW3moKR8Mp0HJQOH4tSBr1Ep7xNLP1wg6PIe+
    p7U=
    -----END CERTIFICATE-----
  certificate_fingerprint: 5325b921edf26720953bff015b69900943315b5db69f3c6e17c25a694875c5b8
  driver: lxc
  driver_version: 3.0.3
  kernel: Linux
  kernel_architecture: x86_64
  kernel_features:
    netnsid_getifaddrs: "false"
    seccomp_listener: "false"
    seccomp_listener_continue: "false"
    shiftfs: "false"
    uevent_injection: "false"
    unpriv_fscaps: "true"
  kernel_version: 4.15.0-70-generic
  lxc_features:
    mount_injection_file: "false"
    network_gateway_device_route: "false"
    network_ipvlan: "false"
    network_l2proxy: "false"
    network_phys_macvlan_mtu: "false"
    network_veth_router: "false"
    seccomp_notify: "false"
  project: default
  server: lxd
  server_clustered: false
  server_name: liopleurodon
  server_pid: 4919
  server_version: "3.18"
  storage: dir
  storage_version: "1"
Error: Invalid protocol: foo
==> TEST DONE: remote url handling (5s)
==> TEST BEGIN: remote administration
2019/11/16 20:22:26 http: TLS handshake error from 127.0.0.1:47620: EOF
2019/11/16 20:22:26 http: TLS handshake error from 127.0.0.1:47622: remote error: tls: bad certificate
WARN[11-16|20:22:27] Bad trust password                       url=/1.0/certificates ip=127.0.0.1:47630
Error: not authorized
Error: The remote "badpass" doesn't exist
2019/11/16 20:22:27 http: TLS handshake error from 127.0.0.1:47632: EOF
2019/11/16 20:22:27 http: TLS handshake error from 127.0.0.1:47634: remote error: tls: bad certificate
Client certificate stored at server:  foo
| foo             | https://127.0.0.1:35483                  | lxd           | tls         | NO     | NO     |
| bar (default) | https://127.0.0.1:35483                  | lxd           | tls         | NO     | NO     |
+---------------+------------------------------------------+---------------+-------------+--------+--------+
|     NAME      |                   URL                    |   PROTOCOL    |  AUTH TYPE  | PUBLIC | STATIC |
+---------------+------------------------------------------+---------------+-------------+--------+--------+
| bar (default) | https://127.0.0.1:35483                  | lxd           | tls         | NO     | NO     |
+---------------+------------------------------------------+---------------+-------------+--------+--------+
| images        | https://images.linuxcontainers.org       | simplestreams | none        | YES    | NO     |
+---------------+------------------------------------------+---------------+-------------+--------+--------+
| local         | unix://                                  | lxd           | file access | NO     | YES    |
+---------------+------------------------------------------+---------------+-------------+--------+--------+
| localhost     | https://127.0.0.1:35483                  | lxd           | tls         | NO     | NO     |
+---------------+------------------------------------------+---------------+-------------+--------+--------+
| ubuntu        | https://cloud-images.ubuntu.com/releases | simplestreams | none        | YES    | YES    |
+---------------+------------------------------------------+---------------+-------------+--------+--------+
| ubuntu-daily  | https://cloud-images.ubuntu.com/daily    | simplestreams | none        | YES    | YES    |
+---------------+------------------------------------------+---------------+-------------+--------+--------+
Error: Can't remove the default remote
2019/11/16 20:22:27 http: TLS handshake error from 127.0.0.1:47646: EOF
2019/11/16 20:22:27 http: TLS handshake error from 127.0.0.1:47648: remote error: tls: bad certificate
Certificate fingerprint: 5325b921edf26720953bff015b69900943315b5db69f3c6e17c25a694875c5b8
ok (y/n)? Generating a client certificate. This may take a minute...
Error: Get https://0.0.0.0:8443: Unable to connect to: 0.0.0.0:8443
2019/11/16 20:22:28 http: TLS handshake error from 127.0.0.1:47662: EOF
2019/11/16 20:22:28 http: TLS handshake error from 127.0.0.1:47664: remote error: tls: bad certificate
==> TEST DONE: remote administration (6s)
==> TEST BEGIN: remote usage
==> Setting up directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/pUg
==> Spawning lxd in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/pUg
==> Spawned LXD (PID is 14570)
==> Confirming lxd is responsive
WARN[11-16|20:22:32] CGroup memory swap accounting is disabled, swap limits will be ignored. 
==> Binding to network
==> Bound to 127.0.0.1:55867
==> Setting trust password
==> Setting up networking
Device eth0 added to default
==> Configuring storage backend
==> Configuring directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/pUg
Storage pool lxdtest-pUg created
Device root added to default
2019/11/16 20:22:36 http: TLS handshake error from 127.0.0.1:47644: EOF
2019/11/16 20:22:36 http: TLS handshake error from 127.0.0.1:47646: remote error: tls: bad certificate
Client certificate stored at server:  lxd2
Image exported successfully!           
Error: not found
Image copied successfully!          
Image copied successfully!          
Fingerprint: d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7
Size: 0.82MB
Architecture: x86_64
Type: container
Public: yes
Timestamps:
    Created: 2019/11/13 18:56 UTC
    Uploaded: 2019/11/16 20:22 UTC
    Expires: never
    Last used: never
Properties:
    architecture: x86_64
    description: Busybox x86_64
    name: busybox-x86_64
    os: Busybox
Aliases:
    - testimage
Cached: no
Auto update: disabled
Image copied successfully!          
Image copied successfully!          
Image copied successfully!          
Creating c1
Creating pub                              
Container published with fingerprint: bd8109a3e02a3d10a596c0632fbf3ad0c3bcf20f370e56fe977d833df6de2dc5
Error: not found
2019/11/16 20:22:46 http: TLS handshake error from 127.0.0.1:47948: EOF
2019/11/16 20:22:46 http: TLS handshake error from 127.0.0.1:47950: remote error: tls: bad certificate
Creating pub
Creating c2                                
Creating c1
Creating c1                               
Starting c1                               
| c1   | RUNNING |      |      | CONTAINER | 0         |
Name: c1
Location: none
Remote: https://127.0.0.1:55867
Architecture: x86_64
Created: 2019/11/16 20:22 UTC
Status: Running
Type: container
Profiles: default
Pid: 15337
Ips:
  lo:	inet	127.0.0.1
  lo:	inet6	::1
  eth0:	inet6	fe80::216:3eff:fef9:996e	veth537f06f9
Resources:
  Processes: 1
  CPU usage:
    CPU usage (in seconds): 0
  Memory usage:
    Memory (current): 2.06MB
    Memory (peak): 5.10MB
  Network usage:
    eth0:
      Bytes received: 90B
      Bytes sent: 90B
      Packets received: 1
      Packets sent: 1
    lo:
      Bytes received: 0B
      Bytes sent: 0B
      Packets received: 0
      Packets sent: 0
474
3472
==> Killing LXD at /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/pUg
==> Deleting all containers
==> Deleting all images
==> Deleting all networks
==> Deleting all profiles
Error: The 'default' profile cannot be deleted
==> Clearing config of default profile
==> Deleting all storage pools
Storage pool lxdtest-pUg deleted
==> Checking for locked DB tables
==> Checking for leftover files
==> Checking for leftover DB entries
==> Tearing down directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/pUg
==> TEST DONE: remote usage (41s)
==> TEST BEGIN: clustering enable
==> Setting up directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/Ryg
==> Spawning lxd in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/Ryg
==> Spawned LXD (PID is 15754)
==> Confirming lxd is responsive
WARN[11-16|20:23:13] CGroup memory swap accounting is disabled, swap limits will be ignored. 
==> Binding to network
==> Bound to 127.0.0.1:49199
==> Setting trust password
==> Setting up networking
Device eth0 added to default
Image imported as: d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7
Setup alias: testimage
Storage pool default created
Device root added to default
Creating c1
Starting c1                               
EROR[11-16|20:23:21] Failed to get leader node address: Node is not clustered 
Clustering enabled
Error: This LXD instance is already clustered
==> Killing LXD at /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/Ryg
==> Deleting all containers
==> Deleting all images
==> Deleting all networks
==> Deleting all profiles
Error: The 'default' profile cannot be deleted
==> Clearing config of default profile
==> Deleting all storage pools
Storage pool default deleted
==> Checking for locked DB tables
WARN[11-16|20:23:25] Dqlite client proxy TLS -> Unix: read tcp 127.0.0.1:40978->127.0.0.1:49199: use of closed network connection 
WARN[11-16|20:23:25] Dqlite server proxy Unix -> TLS: read unix @->@0001f: use of closed network connection 
==> Checking for leftover files
==> Checking for leftover DB entries
==> Tearing down directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/Ryg
==> TEST DONE: clustering enable (14s)
==> TEST BEGIN: clustering membership
==> Setup clustering bridge br4881
==> Setup clustering netns lxd48811
==> Spawn bootstrap cluster node in lxd48811 with storage driver dir
==> Setting up directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/5Ga
==> Spawning lxd in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/5Ga
==> Spawned LXD (PID is 16514)
==> Confirming lxd is responsive
WARN[11-16|20:23:28] CGroup memory swap accounting is disabled, swap limits will be ignored. 
WARN[11-16|20:23:31] Failed to update instance types: Get https://images.linuxcontainers.org/meta/instance-types/.yaml: lookup images.linuxcontainers.org on 127.0.0.53:53: read udp 127.0.0.1:47981->127.0.0.53:53: read: connection refused 
==> Setting trust password
EROR[11-16|20:23:32] Failed to get leader node address: Node is not clustered 
==> Setup clustering netns lxd48812
==> Spawn additional cluster node in lxd48812 with storage driver dir
==> Setting up directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/Qxo
==> Spawning lxd in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/Qxo
==> Spawned LXD (PID is 16673)
==> Confirming lxd is responsive
WARN[11-16|20:23:33] CGroup memory swap accounting is disabled, swap limits will be ignored. 
WARN[11-16|20:23:36] Failed to update instance types: Get https://images.linuxcontainers.org/meta/instance-types/.yaml: lookup images.linuxcontainers.org on 127.0.0.53:53: read udp 127.0.0.1:59283->127.0.0.53:53: read: connection refused 
==> Setting trust password
EROR[11-16|20:23:38] Failed to get leader node address: Node is not clustered 
EROR[11-16|20:23:40] Unaccounted raft node(s) not found in 'nodes' table for heartbeat: map[10.1.1.101:8443:{ID:1 Address:10.1.1.101:8443} 10.1.1.102:8443:{ID:2 Address:10.1.1.102:8443}] 
EROR[11-16|20:23:40] Empty raft node set received 
WARN[11-16|20:23:40] Dqlite client proxy TLS -> Unix: read tcp 10.1.1.102:42522->10.1.1.101:8443: use of closed network connection 
WARN[11-16|20:23:40] Dqlite server proxy Unix -> TLS: read unix @->@00022: use of closed network connection 
Storage pool pool1 pending on member node1
Network net1 pending on member node2
==> Setup clustering netns lxd48813
==> Spawn additional cluster node in lxd48813 with storage driver dir
==> Setting up directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/RJ8
==> Spawning lxd in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/RJ8
==> Spawned LXD (PID is 16863)
==> Confirming lxd is responsive
WARN[11-16|20:23:41] CGroup memory swap accounting is disabled, swap limits will be ignored. 
==> Setting trust password
WARN[11-16|20:23:45] Failed to update instance types: Get https://images.linuxcontainers.org/meta/instance-types/.yaml: lookup images.linuxcontainers.org on 127.0.0.53:53: read udp 127.0.0.1:49352->127.0.0.53:53: read: connection refused 
EROR[11-16|20:23:46] Failed to get leader node address: Node is not clustered 
EROR[11-16|20:23:48] Unaccounted raft node(s) not found in 'nodes' table for heartbeat: map[10.1.1.101:8443:{ID:1 Address:10.1.1.101:8443} 10.1.1.102:8443:{ID:2 Address:10.1.1.102:8443} 10.1.1.103:8443:{ID:3 Address:10.1.1.103:8443}] 
EROR[11-16|20:23:48] Empty raft node set received 
EROR[11-16|20:23:48] Empty raft node set received 
WARN[11-16|20:23:48] Dqlite server proxy Unix -> TLS: read unix @->@00022: use of closed network connection 
WARN[11-16|20:23:48] Dqlite client proxy TLS -> Unix: read tcp 10.1.1.103:33114->10.1.1.101:8443: use of closed network connection 
==> Setup clustering netns lxd48814
==> Spawn additional cluster node in lxd48814 with storage driver dir
==> Setting up directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/be0
==> Spawning lxd in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/be0
==> Spawned LXD (PID is 16999)
==> Confirming lxd is responsive
WARN[11-16|20:23:48] CGroup memory swap accounting is disabled, swap limits will be ignored. 
WARN[11-16|20:23:52] Failed to update instance types: Get https://images.linuxcontainers.org/meta/instance-types/.yaml: lookup images.linuxcontainers.org on 127.0.0.53:53: read udp 127.0.0.1:35447->127.0.0.53:53: read: connection refused 
==> Setting trust password
EROR[11-16|20:23:54] Failed to get leader node address: Node is not clustered 
EROR[11-16|20:23:54] Unaccounted raft node(s) not found in 'nodes' table for heartbeat: map[10.1.1.102:8443:{ID:2 Address:10.1.1.102:8443} 10.1.1.103:8443:{ID:3 Address:10.1.1.103:8443} 10.1.1.101:8443:{ID:1 Address:10.1.1.101:8443}] 
EROR[11-16|20:23:54] Empty raft node set received 
EROR[11-16|20:23:54] Empty raft node set received 
EROR[11-16|20:23:54] Empty raft node set received 
==> Setup clustering netns lxd48815
==> Spawn additional cluster node in lxd48815 with storage driver dir
==> Setting up directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/H4p
==> Spawning lxd in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/H4p
==> Spawned LXD (PID is 17133)
==> Confirming lxd is responsive
WARN[11-16|20:23:54] CGroup memory swap accounting is disabled, swap limits will be ignored. 
WARN[11-16|20:23:58] Failed to update instance types: Get https://images.linuxcontainers.org/meta/instance-types/.yaml: lookup images.linuxcontainers.org on 127.0.0.53:53: read udp 127.0.0.1:50782->127.0.0.53:53: read: connection refused 
==> Setting trust password
EROR[11-16|20:24:00] Failed to get leader node address: Node is not clustered 
EROR[11-16|20:24:00] Unaccounted raft node(s) not found in 'nodes' table for heartbeat: map[10.1.1.101:8443:{ID:1 Address:10.1.1.101:8443} 10.1.1.102:8443:{ID:2 Address:10.1.1.102:8443} 10.1.1.103:8443:{ID:3 Address:10.1.1.103:8443}] 
EROR[11-16|20:24:00] Empty raft node set received 
EROR[11-16|20:24:00] Empty raft node set received 
EROR[11-16|20:24:00] Empty raft node set received 
+-------+-------------------------+----------+--------+-------------------+
| NAME  |           URL           | DATABASE | STATE  |      MESSAGE      |
+-------+-------------------------+----------+--------+-------------------+
| node1 | https://10.1.1.101:8443 | YES      | ONLINE | fully operational |
+-------+-------------------------+----------+--------+-------------------+
| node2 | https://10.1.1.102:8443 | YES      | ONLINE | fully operational |
+-------+-------------------------+----------+--------+-------------------+
| node3 | https://10.1.1.103:8443 | YES      | ONLINE | fully operational |
+-------+-------------------------+----------+--------+-------------------+
| node4 | https://10.1.1.104:8443 | NO       | ONLINE | fully operational |
+-------+-------------------------+----------+--------+-------------------+
| node5 | https://10.1.1.105:8443 | NO       | ONLINE | fully operational |
+-------+-------------------------+----------+--------+-------------------+
2019/11/16 20:24:01 http: TLS handshake error from 10.1.1.1:34566: EOF
2019/11/16 20:24:01 http: TLS handshake error from 10.1.1.1:34568: remote error: tls: bad certificate
Client certificate stored at server:  cluster
WARN[11-16|20:24:02] Dqlite client proxy TLS -> Unix: read tcp 10.1.1.103:33122->10.1.1.101:8443: use of closed network connection 
WARN[11-16|20:24:02] Dqlite server proxy Unix -> TLS: read unix @->@00022: use of closed network connection 
WARN[11-16|20:24:02] Dqlite server proxy TLS -> Unix: read tcp 10.1.1.103:8443->10.1.1.1:46218: use of closed network connection 
WARN[11-16|20:24:02] Dqlite client proxy Unix -> TLS: read unix @->@0002d: use of closed network connection 
WARN[11-16|20:24:02] Dqlite server proxy Unix -> TLS: read unix @->@00022: use of closed network connection 
WARN[11-16|20:24:02] Dqlite client proxy TLS -> Unix: read tcp 10.1.1.103:33118->10.1.1.101:8443: use of closed network connection 
WARN[11-16|20:24:06] Excluding offline node from refresh: {ID:2 Address:10.1.1.102:8443 RaftID:2 Raft:true LastHeartbeat:2019-11-16 20:23:43.11165916 +0000 UTC Online:false updated:false} 
WARN[11-16|20:24:06] Excluding offline node from refresh: {ID:3 Address:10.1.1.103:8443 RaftID:3 Raft:true LastHeartbeat:2019-11-16 20:23:46 +0000 UTC Online:false updated:false} 
WARN[11-16|20:24:06] Excluding offline node from refresh: {ID:3 Address:10.1.1.103:8443 RaftID:3 Raft:true LastHeartbeat:2019-11-16 20:23:46 +0000 UTC Online:false updated:false} 
WARN[11-16|20:24:07] Excluding offline node from refresh: {ID:3 Address:10.1.1.103:8443 RaftID:3 Raft:true LastHeartbeat:2019-11-16 20:23:46 +0000 UTC Online:false updated:false} 
WARN[11-16|20:24:07] Excluding offline node from refresh: {ID:3 Address:10.1.1.103:8443 RaftID:3 Raft:true LastHeartbeat:2019-11-16 20:23:46 +0000 UTC Online:false updated:false} 
WARN[11-16|20:24:19] Excluding offline node from refresh: {ID:3 Address:10.1.1.103:8443 RaftID:3 Raft:true LastHeartbeat:2019-11-16 20:23:46 +0000 UTC Online:false updated:false} 
WARN[11-16|20:24:19] Excluding offline node from refresh: {ID:3 Address:10.1.1.103:8443 RaftID:3 Raft:true LastHeartbeat:2019-11-16 20:23:46 +0000 UTC Online:false updated:false} 
WARN[11-16|20:24:19] Excluding offline node from refresh: {ID:3 Address:10.1.1.103:8443 RaftID:3 Raft:true LastHeartbeat:2019-11-16 20:23:46 +0000 UTC Online:false updated:false} 
WARN[11-16|20:24:19] Excluding offline node from refresh: {ID:3 Address:10.1.1.103:8443 RaftID:3 Raft:true LastHeartbeat:2019-11-16 20:23:46 +0000 UTC Online:false updated:false} 
WARN[11-16|20:24:25] Excluding offline node from refresh: {ID:3 Address:10.1.1.103:8443 RaftID:3 Raft:true LastHeartbeat:2019-11-16 20:23:46 +0000 UTC Online:false updated:false} 
WARN[11-16|20:24:28] Excluding offline node from refresh: {ID:3 Address:10.1.1.103:8443 RaftID:3 Raft:true LastHeartbeat:2019-11-16 20:23:46 +0000 UTC Online:false updated:false} 
WARN[11-16|20:24:28] Excluding offline node from refresh: {ID:3 Address:10.1.1.103:8443 RaftID:3 Raft:true LastHeartbeat:2019-11-16 20:23:46 +0000 UTC Online:false updated:false} 
WARN[11-16|20:24:28] Excluding offline node from refresh: {ID:3 Address:10.1.1.103:8443 RaftID:3 Raft:true LastHeartbeat:2019-11-16 20:23:46 +0000 UTC Online:false updated:false} 
+-------+-------------------------+----------+---------+----------------------------------+
| NAME  |           URL           | DATABASE |  STATE  |             MESSAGE              |
+-------+-------------------------+----------+---------+----------------------------------+
| node1 | https://10.1.1.101:8443 | YES      | ONLINE  | fully operational                |
+-------+-------------------------+----------+---------+----------------------------------+
| node2 | https://10.1.1.102:8443 | YES      | ONLINE  | fully operational                |
+-------+-------------------------+----------+---------+----------------------------------+
| node3 | https://10.1.1.103:8443 | YES      | OFFLINE | no heartbeat since 46.989496963s |
+-------+-------------------------+----------+---------+----------------------------------+
| node4 | https://10.1.1.104:8443 | NO       | ONLINE  | fully operational                |
+-------+-------------------------+----------+---------+----------------------------------+
| node5 | https://10.1.1.105:8443 | NO       | ONLINE  | fully operational                |
+-------+-------------------------+----------+---------+----------------------------------+
WARN[11-16|20:24:33] Dqlite client proxy TLS -> Unix: read tcp 10.1.1.105:33640->10.1.1.101:8443: use of closed network connection 
WARN[11-16|20:24:33] Dqlite server proxy Unix -> TLS: read unix @->@00022: use of closed network connection 
WARN[11-16|20:24:33] Dqlite client proxy TLS -> Unix: read tcp 10.1.1.104:38920->10.1.1.101:8443: use of closed network connection 
WARN[11-16|20:24:33] Dqlite server proxy Unix -> TLS: read unix @->@00022: use of closed network connection 
WARN[11-16|20:24:33] Dqlite client proxy TLS -> Unix: read tcp 10.1.1.102:42530->10.1.1.101:8443: use of closed network connection 
WARN[11-16|20:24:33] Dqlite server proxy Unix -> TLS: read unix @->@00022: use of closed network connection 
WARN[11-16|20:24:33] Dqlite server proxy TLS -> Unix: read tcp 10.1.1.102:8443->10.1.1.1:32874: use of closed network connection 
WARN[11-16|20:24:33] Dqlite client proxy Unix -> TLS: read unix @->@00027: use of closed network connection 
WARN[11-16|20:24:33] Dqlite server proxy Unix -> TLS: read unix @->@00022: use of closed network connection 
WARN[11-16|20:24:33] Dqlite client proxy TLS -> Unix: read tcp 10.1.1.102:42526->10.1.1.101:8443: use of closed network connection 
WARN[11-16|20:24:33] Failed to get events from node 10.1.1.104:8443: Unable to connect to: 10.1.1.104:8443 
WARN[11-16|20:24:33] Failed to get events from node 10.1.1.105:8443: Unable to connect to: 10.1.1.105:8443 
WARN[11-16|20:24:33] Dqlite client proxy TLS -> Unix: read tcp 10.1.1.101:40520->10.1.1.101:8443: use of closed network connection 
WARN[11-16|20:24:33] Dqlite server proxy Unix -> TLS: read unix @->@00022: use of closed network connection 
==> Teardown clustering netns lxd48811
==> Teardown clustering netns lxd48812
==> Teardown clustering netns lxd48813
==> Teardown clustering netns lxd48814
==> Teardown clustering netns lxd48815
==> Teardown clustering bridge br4881
==> Killing LXD at /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/5Ga
==> Tearing down directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/5Ga
==> Killing LXD at /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/Qxo
==> Tearing down directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/Qxo
==> Killing LXD at /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/RJ8
==> Tearing down directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/RJ8
==> Killing LXD at /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/be0
==> Tearing down directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/be0
==> Killing LXD at /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/H4p
==> Tearing down directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/H4p
==> TEST DONE: clustering membership (68s)
==> TEST BEGIN: clustering containers
==> Setup clustering bridge br4881
==> Setup clustering netns lxd48811
==> Spawn bootstrap cluster node in lxd48811 with storage driver dir
==> Setting up directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/aYi
==> Spawning lxd in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/aYi
==> Spawned LXD (PID is 17836)
==> Confirming lxd is responsive
WARN[11-16|20:24:36] CGroup memory swap accounting is disabled, swap limits will be ignored. 
WARN[11-16|20:24:39] Failed to update instance types: Get https://images.linuxcontainers.org/meta/instance-types/.yaml: lookup images.linuxcontainers.org on 127.0.0.53:53: read udp 127.0.0.1:34246->127.0.0.53:53: read: connection refused 
==> Setting trust password
EROR[11-16|20:24:40] Failed to get leader node address: Node is not clustered 
==> Setup clustering netns lxd48812
==> Spawn additional cluster node in lxd48812 with storage driver dir
==> Setting up directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/o37
==> Spawning lxd in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/o37
==> Spawned LXD (PID is 17972)
==> Confirming lxd is responsive
WARN[11-16|20:24:41] CGroup memory swap accounting is disabled, swap limits will be ignored. 
WARN[11-16|20:24:43] Failed to update instance types: Get https://images.linuxcontainers.org/meta/instance-types/.yaml: lookup images.linuxcontainers.org on 127.0.0.53:53: read udp 127.0.0.1:48561->127.0.0.53:53: read: connection refused 
==> Setting trust password
EROR[11-16|20:24:45] Failed to get leader node address: Node is not clustered 
EROR[11-16|20:24:47] Unaccounted raft node(s) not found in 'nodes' table for heartbeat: map[10.1.1.101:8443:{ID:1 Address:10.1.1.101:8443} 10.1.1.102:8443:{ID:2 Address:10.1.1.102:8443}] 
EROR[11-16|20:24:47] Empty raft node set received 
WARN[11-16|20:24:47] Dqlite client proxy TLS -> Unix: read tcp 10.1.1.102:42878->10.1.1.101:8443: use of closed network connection 
WARN[11-16|20:24:47] Dqlite server proxy Unix -> TLS: read unix @->@00035: use of closed network connection 
==> Setup clustering netns lxd48813
==> Spawn additional cluster node in lxd48813 with storage driver dir
==> Setting up directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/kb1
==> Spawning lxd in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/kb1
==> Spawned LXD (PID is 18110)
==> Confirming lxd is responsive
WARN[11-16|20:24:47] CGroup memory swap accounting is disabled, swap limits will be ignored. 
WARN[11-16|20:24:51] Failed to update instance types: Get https://images.linuxcontainers.org/meta/instance-types/.yaml: lookup images.linuxcontainers.org on 127.0.0.53:53: read udp 127.0.0.1:36634->127.0.0.53:53: read: connection refused 
==> Setting trust password
EROR[11-16|20:24:52] Failed to get leader node address: Node is not clustered 
EROR[11-16|20:24:54] Unaccounted raft node(s) not found in 'nodes' table for heartbeat: map[10.1.1.101:8443:{ID:1 Address:10.1.1.101:8443} 10.1.1.102:8443:{ID:2 Address:10.1.1.102:8443} 10.1.1.103:8443:{ID:3 Address:10.1.1.103:8443}] 
EROR[11-16|20:24:54] Empty raft node set received 
EROR[11-16|20:24:54] Empty raft node set received 
WARN[11-16|20:24:54] Dqlite client proxy TLS -> Unix: read tcp 10.1.1.103:33462->10.1.1.101:8443: use of closed network connection 
WARN[11-16|20:24:54] Dqlite server proxy Unix -> TLS: read unix @->@00035: use of closed network connection 
Image imported as: d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7
Setup alias: testimage
Creating foo
The container you are starting doesn't have any network attached to it.
  To create a new network, use: lxc network create
  To attach a network to a container, use: lxc network attach

Error: Node still has the following containers: foo
Error: not found
Error: not found
Image exported successfully!
Creating bar
The container you are starting doesn't have any network attached to it.
  To create a new network, use: lxc network create
  To attach a network to a container, use: lxc network attach

Starting bar
Network lxd4881 deleted
WARN[11-16|20:25:29] Dqlite client proxy TLS -> Unix: read tcp 10.1.1.102:42886->10.1.1.101:8443: use of closed network connection 
WARN[11-16|20:25:29] Dqlite server proxy Unix -> TLS: read unix @->@00035: use of closed network connection 
WARN[11-16|20:25:29] Dqlite client proxy TLS -> Unix: read tcp 10.1.1.102:42882->10.1.1.101:8443: use of closed network connection 
WARN[11-16|20:25:29] Dqlite server proxy TLS -> Unix: read tcp 10.1.1.102:8443->10.1.1.1:33230: use of closed network connection 
WARN[11-16|20:25:29] Dqlite server proxy Unix -> TLS: read unix @->@00035: use of closed network connection 
WARN[11-16|20:25:29] Dqlite client proxy Unix -> TLS: read unix @->@0003a: use of closed network connection 
WARN[11-16|20:25:30] Failed to get events from node 10.1.1.102:8443: Unable to connect to: 10.1.1.102:8443 
[last message repeated 11 more times until 20:25:36]
WARN[11-16|20:25:44] Excluding offline node from refresh: {ID:2 Address:10.1.1.102:8443 RaftID:2 Raft:true LastHeartbeat:2019-11-16 20:25:24.767115515 +0000 UTC Online:false updated:false} 
[last message repeated 3 more times until 20:25:56]
Creating bar
The container you are starting doesn't have any network attached to it.
  To create a new network, use: lxc network create
  To attach a network to a container, use: lxc network attach

Starting bar
Creating egg
The container you are starting doesn't have any network attached to it.
  To create a new network, use: lxc network create
  To attach a network to a container, use: lxc network attach

Starting egg
WARN[11-16|20:26:04] Excluding offline node from refresh: {ID:2 Address:10.1.1.102:8443 RaftID:2 Raft:true LastHeartbeat:2019-11-16 20:25:24.767115515 +0000 UTC Online:false updated:false} 
WARN[11-16|20:26:05] Excluding offline node from refresh: {ID:2 Address:10.1.1.102:8443 RaftID:2 Raft:true LastHeartbeat:2019-11-16 20:25:24.767115515 +0000 UTC Online:false updated:false} 
WARN[11-16|20:26:09] Dqlite client proxy TLS -> Unix: read tcp 10.1.1.103:33472->10.1.1.101:8443: use of closed network connection 
WARN[11-16|20:26:09] Dqlite server proxy Unix -> TLS: read unix @->@00035: use of closed network connection 
WARN[11-16|20:26:10] Dqlite client proxy TLS -> Unix: read tcp 10.1.1.103:33466->10.1.1.101:8443: use of closed network connection 
WARN[11-16|20:26:10] Dqlite server proxy Unix -> TLS: read unix @->@00035: use of closed network connection 
WARN[11-16|20:26:10] Dqlite server proxy TLS -> Unix: read tcp 10.1.1.103:8443->10.1.1.1:46566: use of closed network connection 
WARN[11-16|20:26:10] Dqlite client proxy Unix -> TLS: read unix @->@00040: use of closed network connection 
WARN[11-16|20:26:13] Dqlite client proxy TLS -> Unix: read tcp 10.1.1.101:40876->10.1.1.101:8443: use of closed network connection 
WARN[11-16|20:26:13] Failed to get current cluster nodes: failed to begin transaction: failed to create dqlite connection: no available dqlite leader server found 
WARN[11-16|20:26:13] Dqlite server proxy Unix -> TLS: read unix @->@00035: use of closed network connection 
WARN[11-16|20:26:13] Failed to get current cluster nodes: failed to begin transaction: failed to create dqlite connection: no available dqlite leader server found 
==> Teardown clustering netns lxd48811
==> Teardown clustering netns lxd48812
==> Teardown clustering netns lxd48813
==> Teardown clustering bridge br4881
==> Killing LXD at /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/aYi
==> Tearing down directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/aYi
==> Killing LXD at /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/o37
==> Tearing down directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/o37
==> Killing LXD at /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/kb1
==> Tearing down directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/kb1
==> TEST DONE: clustering containers (100s)
==> TEST BEGIN: clustering storage
==> Setup clustering bridge br4881
==> Setup clustering netns lxd48811
==> Spawn bootstrap cluster node in lxd48811 with storage driver dir
==> Setting up directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/RjW
==> Spawning lxd in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/RjW
==> Spawned LXD (PID is 19680)
==> Confirming lxd is responsive
WARN[11-16|20:26:15] CGroup memory swap accounting is disabled, swap limits will be ignored. 
WARN[11-16|20:26:18] Failed to update instance types: Get https://images.linuxcontainers.org/meta/instance-types/.yaml: lookup images.linuxcontainers.org on 127.0.0.53:53: read udp 127.0.0.1:44347->127.0.0.53:53: read: connection refused 
==> Setting trust password
EROR[11-16|20:26:20] Failed to get leader node address: Node is not clustered 
==> Setup clustering netns lxd48812
==> Spawn additional cluster node in lxd48812 with storage driver dir
==> Setting up directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/y2z
==> Spawning lxd in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/y2z
==> Spawned LXD (PID is 19828)
==> Confirming lxd is responsive
WARN[11-16|20:26:21] CGroup memory swap accounting is disabled, swap limits will be ignored. 
WARN[11-16|20:26:24] Failed to update instance types: Get https://images.linuxcontainers.org/meta/instance-types/.yaml: lookup images.linuxcontainers.org on 127.0.0.53:53: read udp 127.0.0.1:43378->127.0.0.53:53: read: connection refused 
==> Setting trust password
EROR[11-16|20:26:26] Failed to get leader node address: Node is not clustered 
EROR[11-16|20:26:27] Unaccounted raft node(s) not found in 'nodes' table for heartbeat: map[10.1.1.101:8443:{ID:1 Address:10.1.1.101:8443} 10.1.1.102:8443:{ID:2 Address:10.1.1.102:8443}] 
EROR[11-16|20:26:27] Empty raft node set received 
WARN[11-16|20:26:27] Dqlite client proxy TLS -> Unix: read tcp 10.1.1.102:43412->10.1.1.101:8443: use of closed network connection 
WARN[11-16|20:26:27] Dqlite server proxy Unix -> TLS: read unix @->@00044: use of closed network connection 
Error: the key size cannot be used with DIR storage pools
Storage pool pool1 pending on member node1
Storage pool pool1 pending on member node2
Error: Config key 'source' is node-specific
Storage pool pool1 created
Storage pool pool1 deleted
Storage volume web created
Renamed storage volume from "web" to "webbaz"
Renamed storage volume from "webbaz" to "web"
Storage volume web created
Error: more than one node has a volume named web
Error: more than one node has a volume named web
Error: more than one node has a volume named web
Renamed storage volume from "web" to "webbaz"
Renamed storage volume from "web" to "webbaz"
Storage volume webbaz deleted
Storage volume webbaz deleted
Storage pool data deleted
WARN[11-16|20:26:32] Dqlite client proxy TLS -> Unix: read tcp 10.1.1.102:43420->10.1.1.101:8443: use of closed network connection 
WARN[11-16|20:26:32] Dqlite server proxy Unix -> TLS: read unix @->@00044: use of closed network connection 
WARN[11-16|20:26:32] Dqlite client proxy TLS -> Unix: read tcp 10.1.1.102:43416->10.1.1.101:8443: use of closed network connection 
WARN[11-16|20:26:32] Dqlite client proxy Unix -> TLS: read unix @->@00049: use of closed network connection 
WARN[11-16|20:26:32] Dqlite server proxy TLS -> Unix: read tcp 10.1.1.102:8443->10.1.1.1:33764: use of closed network connection 
WARN[11-16|20:26:32] Dqlite server proxy Unix -> TLS: read unix @->@00044: use of closed network connection 
WARN[11-16|20:26:32] Dqlite client proxy TLS -> Unix: read tcp 10.1.1.101:41410->10.1.1.101:8443: use of closed network connection 
WARN[11-16|20:26:32] Dqlite server proxy Unix -> TLS: read unix @->@00044: use of closed network connection 
WARN[11-16|20:26:32] Failed to get current cluster nodes: failed to begin transaction: sql: database is closed 
==> Teardown clustering netns lxd48811
==> Teardown clustering netns lxd48812
==> Teardown clustering bridge br4881
==> Killing LXD at /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/RjW
==> Tearing down directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/RjW
==> Killing LXD at /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/y2z
==> Tearing down directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/y2z
==> TEST DONE: clustering storage (18s)
==> TEST BEGIN: clustering network
==> Setup clustering bridge br4881
==> Setup clustering netns lxd48811
==> Spawn bootstrap cluster node in lxd48811 with storage driver dir
==> Setting up directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/4Wp
==> Spawning lxd in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/4Wp
==> Spawned LXD (PID is 20500)
==> Confirming lxd is responsive
WARN[11-16|20:26:34] CGroup memory swap accounting is disabled, swap limits will be ignored. 
==> Setting trust password
WARN[11-16|20:26:37] Failed to update instance types: Get https://images.linuxcontainers.org/meta/instance-types/.yaml: lookup images.linuxcontainers.org on 127.0.0.53:53: read udp 127.0.0.1:43542->127.0.0.53:53: read: connection refused 
EROR[11-16|20:26:38] Failed to get leader node address: Node is not clustered 
==> Setup clustering netns lxd48812
==> Spawn additional cluster node in lxd48812 with storage driver dir
==> Setting up directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/k8l
==> Spawning lxd in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/k8l
==> Spawned LXD (PID is 20775)
==> Confirming lxd is responsive
WARN[11-16|20:26:39] CGroup memory swap accounting is disabled, swap limits will be ignored. 
WARN[11-16|20:26:42] Failed to update instance types: Get https://images.linuxcontainers.org/meta/instance-types/.yaml: lookup images.linuxcontainers.org on 127.0.0.53:53: read udp 127.0.0.1:41830->127.0.0.53:53: read: connection refused 
==> Setting trust password
EROR[11-16|20:26:44] Failed to get leader node address: Node is not clustered 
EROR[11-16|20:26:45] Unaccounted raft node(s) not found in 'nodes' table for heartbeat: map[10.1.1.102:8443:{ID:2 Address:10.1.1.102:8443} 10.1.1.101:8443:{ID:1 Address:10.1.1.101:8443}] 
EROR[11-16|20:26:45] Empty raft node set received 
WARN[11-16|20:26:45] Dqlite client proxy TLS -> Unix: read tcp 10.1.1.102:43502->10.1.1.101:8443: use of closed network connection 
WARN[11-16|20:26:45] Dqlite server proxy Unix -> TLS: read unix @->@0004d: use of closed network connection 
Error: Config key 'ipv4.address' may not be used as node-specific key
Network lxd4881x pending on member node1
Network lxd4881x pending on member node2
Error: The network already defined on node node2
Error: Config key 'bridge.external_interfaces' is node-specific
Network lxd4881x created
Error: Renaming a network not supported in LXD clusters
Network lxd4881x deleted
Network lxd4881 deleted
WARN[11-16|20:26:52] Dqlite client proxy TLS -> Unix: read tcp 10.1.1.102:43510->10.1.1.101:8443: use of closed network connection 
WARN[11-16|20:26:52] Dqlite server proxy Unix -> TLS: read unix @->@0004d: use of closed network connection 
WARN[11-16|20:26:52] Dqlite server proxy TLS -> Unix: read tcp 10.1.1.102:8443->10.1.1.1:33854: use of closed network connection 
WARN[11-16|20:26:52] Dqlite client proxy Unix -> TLS: read unix @->@00052: use of closed network connection 
WARN[11-16|20:26:52] Dqlite client proxy TLS -> Unix: read tcp 10.1.1.102:43506->10.1.1.101:8443: use of closed network connection 
WARN[11-16|20:26:52] Dqlite server proxy Unix -> TLS: read unix @->@0004d: use of closed network connection 
WARN[11-16|20:26:53] Dqlite client proxy TLS -> Unix: read tcp 10.1.1.101:41500->10.1.1.101:8443: use of closed network connection 
WARN[11-16|20:26:53] Dqlite server proxy Unix -> TLS: read unix @->@0004d: use of closed network connection 
==> Teardown clustering netns lxd48811
==> Teardown clustering netns lxd48812
==> Teardown clustering bridge br4881
==> Killing LXD at /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/4Wp
==> Tearing down directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/4Wp
==> Killing LXD at /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/k8l
==> Tearing down directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/k8l
==> TEST DONE: clustering network (21s)
==> TEST BEGIN: clustering publish
==> Setup clustering bridge br4881
==> Setup clustering netns lxd48811
==> Spawn bootstrap cluster node in lxd48811 with storage driver dir
==> Setting up directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/mLm
==> Spawning lxd in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/mLm
==> Spawned LXD (PID is 21521)
==> Confirming lxd is responsive
WARN[11-16|20:26:55] CGroup memory swap accounting is disabled, swap limits will be ignored. 
==> Setting trust password
WARN[11-16|20:26:58] Failed to update instance types: Get https://images.linuxcontainers.org/meta/instance-types/.yaml: lookup images.linuxcontainers.org on 127.0.0.53:53: read udp 127.0.0.1:54720->127.0.0.53:53: read: connection refused 
EROR[11-16|20:26:59] Failed to get leader node address: Node is not clustered 
==> Setup clustering netns lxd48812
==> Spawn additional cluster node in lxd48812 with storage driver dir
==> Setting up directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/leZ
==> Spawning lxd in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/leZ
==> Spawned LXD (PID is 21714)
==> Confirming lxd is responsive
WARN[11-16|20:27:00] CGroup memory swap accounting is disabled, swap limits will be ignored. 
WARN[11-16|20:27:03] Failed to update instance types: Get https://images.linuxcontainers.org/meta/instance-types/.yaml: lookup images.linuxcontainers.org on 127.0.0.53:53: read udp 127.0.0.1:40027->127.0.0.53:53: read: connection refused 
==> Setting trust password
EROR[11-16|20:27:04] Failed to get leader node address: Node is not clustered 
EROR[11-16|20:27:06] Unaccounted raft node(s) not found in 'nodes' table for heartbeat: map[10.1.1.101:8443:{ID:1 Address:10.1.1.101:8443} 10.1.1.102:8443:{ID:2 Address:10.1.1.102:8443}] 
EROR[11-16|20:27:06] Empty raft node set received 
WARN[11-16|20:27:06] Dqlite client proxy TLS -> Unix: read tcp 10.1.1.102:43574->10.1.1.101:8443: use of closed network connection 
WARN[11-16|20:27:06] Dqlite server proxy Unix -> TLS: read unix @->@00056: use of closed network connection 
Image imported as: d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7
Setup alias: testimage
Creating foo
The container you are starting doesn't have any network attached to it.
  To create a new network, use: lxc network create
  To attach a network to a container, use: lxc network attach

Container published with fingerprint: 8c24e476240b9d5651b5c5fc90fe9dbaf1d176befce2eb23f4f2f282df4da756
Container published with fingerprint: 8c24e476240b9d5651b5c5fc90fe9dbaf1d176befce2eb23f4f2f282df4da756
WARN[11-16|20:27:15] Dqlite client proxy TLS -> Unix: read tcp 10.1.1.102:43582->10.1.1.101:8443: use of closed network connection 
WARN[11-16|20:27:15] Dqlite server proxy Unix -> TLS: read unix @->@00056: use of closed network connection 
WARN[11-16|20:27:15] Dqlite client proxy TLS -> Unix: read tcp 10.1.1.102:43578->10.1.1.101:8443: use of closed network connection 
WARN[11-16|20:27:15] Dqlite client proxy Unix -> TLS: read unix @->@0005b: use of closed network connection 
WARN[11-16|20:27:15] Dqlite server proxy TLS -> Unix: read tcp 10.1.1.102:8443->10.1.1.1:33926: use of closed network connection 
WARN[11-16|20:27:15] Dqlite server proxy Unix -> TLS: read unix @->@00056: use of closed network connection 
WARN[11-16|20:27:15] Failed to delete operation 285aa21d-8764-49ee-b47b-b91adcd734bb: failed to begin transaction: sql: database is closed 
WARN[11-16|20:27:20] Dqlite client proxy TLS -> Unix: read tcp 10.1.1.101:41572->10.1.1.101:8443: use of closed network connection 
WARN[11-16|20:27:20] Failed to delete operation 62789ad0-55fb-42cc-b236-1a739ca3cfac: failed to begin transaction: failed to create dqlite connection: no available dqlite leader server found 
WARN[11-16|20:27:20] Failed to delete operation 4f966127-59bc-47f5-9662-a5583acb82d8: failed to begin transaction: failed to create dqlite connection: no available dqlite leader server found 
WARN[11-16|20:27:20] Failed to get current cluster nodes: failed to begin transaction: failed to create dqlite connection: no available dqlite leader server found 
2019/11/16 20:27:20 http: response.WriteHeader on hijacked connection
2019/11/16 20:27:20 http: response.Write on hijacked connection
WARN[11-16|20:27:20] Failed to get current cluster nodes: failed to begin transaction: failed to create dqlite connection: no available dqlite leader server found 
WARN[11-16|20:27:20] Failed to delete operation b4f9b00c-b289-41cb-80e9-dac51ffb7e89: failed to begin transaction: failed to create dqlite connection: no available dqlite leader server found 
WARN[11-16|20:27:20] Dqlite server proxy Unix -> TLS: read unix @->@00056: use of closed network connection 
WARN[11-16|20:27:20] Failed to get current cluster nodes: failed to begin transaction: failed to create dqlite connection: no available dqlite leader server found 
==> Teardown clustering netns lxd48811
==> Teardown clustering netns lxd48812
==> Teardown clustering bridge br4881
==> Killing LXD at /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/mLm
==> Tearing down directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/mLm
==> Killing LXD at /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/leZ
==> Tearing down directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/leZ
==> TEST DONE: clustering publish (27s)
==> TEST BEGIN: clustering profiles
==> Setup clustering bridge br4881
==> Setup clustering netns lxd48811
==> Spawn bootstrap cluster node in lxd48811 with storage driver dir
==> Setting up directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/8mQ
==> Spawning lxd in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/8mQ
==> Spawned LXD (PID is 22262)
==> Confirming lxd is responsive
WARN[11-16|20:27:21] CGroup memory swap accounting is disabled, swap limits will be ignored. 
WARN[11-16|20:27:25] Failed to update instance types: Get https://images.linuxcontainers.org/meta/instance-types/.yaml: lookup images.linuxcontainers.org on 127.0.0.53:53: read udp 127.0.0.1:46563->127.0.0.53:53: read: connection refused 
==> Setting trust password
EROR[11-16|20:27:26] Failed to get leader node address: Node is not clustered 
==> Setup clustering netns lxd48812
==> Spawn additional cluster node in lxd48812 with storage driver dir
==> Setting up directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/uLh
==> Spawning lxd in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/uLh
==> Spawned LXD (PID is 22422)
==> Confirming lxd is responsive
WARN[11-16|20:27:27] CGroup memory swap accounting is disabled, swap limits will be ignored. 
WARN[11-16|20:27:30] Failed to update instance types: Get https://images.linuxcontainers.org/meta/instance-types/.yaml: lookup images.linuxcontainers.org on 127.0.0.53:53: read udp 127.0.0.1:38661->127.0.0.53:53: read: connection refused 
==> Setting trust password
EROR[11-16|20:27:32] Failed to get leader node address: Node is not clustered 
EROR[11-16|20:27:33] Unaccounted raft node(s) not found in 'nodes' table for heartbeat: map[10.1.1.101:8443:{ID:1 Address:10.1.1.101:8443} 10.1.1.102:8443:{ID:2 Address:10.1.1.102:8443}] 
EROR[11-16|20:27:33] Empty raft node set received 
WARN[11-16|20:27:33] Dqlite client proxy TLS -> Unix: read tcp 10.1.1.102:43684->10.1.1.101:8443: use of closed network connection 
WARN[11-16|20:27:33] Dqlite server proxy Unix -> TLS: read unix @->@0005f: use of closed network connection 
Profile web created
Image imported as: d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7
Setup alias: testimage
Creating c1
The container you are starting doesn't have any network attached to it.
  To create a new network, use: lxc network create
  To attach a network to a container, use: lxc network attach

Starting c1
Creating c2
The container you are starting doesn't have any network attached to it.
  To create a new network, use: lxc network create
  To attach a network to a container, use: lxc network attach

Starting c2
WARN[11-16|20:27:48] Dqlite client proxy TLS -> Unix: read tcp 10.1.1.102:43692->10.1.1.101:8443: use of closed network connection 
WARN[11-16|20:27:48] Dqlite server proxy Unix -> TLS: read unix @->@0005f: use of closed network connection 
WARN[11-16|20:27:48] Dqlite client proxy Unix -> TLS: read unix @->@00064: use of closed network connection 
WARN[11-16|20:27:48] Dqlite client proxy TLS -> Unix: read tcp 10.1.1.102:43688->10.1.1.101:8443: use of closed network connection 
WARN[11-16|20:27:48] Dqlite server proxy Unix -> TLS: read unix @->@0005f: use of closed network connection 
WARN[11-16|20:27:48] Dqlite server proxy TLS -> Unix: read tcp 10.1.1.102:8443->10.1.1.1:34036: use of closed network connection 
WARN[11-16|20:27:53] Dqlite client proxy TLS -> Unix: read tcp 10.1.1.101:41682->10.1.1.101:8443: use of closed network connection 
WARN[11-16|20:27:53] Dqlite server proxy Unix -> TLS: read unix @->@0005f: use of closed network connection 
WARN[11-16|20:27:53] Failed to get current cluster nodes: failed to begin transaction: failed to create dqlite connection: no available dqlite leader server found 
WARN[11-16|20:27:53] Failed to delete operation 43e0be76-6f62-44db-96dd-96cc20703d4f: failed to begin transaction: failed to create dqlite connection: no available dqlite leader server found 
WARN[11-16|20:27:53] Failed to delete operation 475e90e6-8671-4f32-88ed-6976f011ab21: failed to begin transaction: failed to create dqlite connection: no available dqlite leader server found 
WARN[11-16|20:27:53] Failed to get current cluster nodes: failed to begin transaction: failed to create dqlite connection: no available dqlite leader server found 
WARN[11-16|20:27:53] Failed to get current cluster nodes: failed to begin transaction: failed to create dqlite connection: no available dqlite leader server found 
==> Teardown clustering netns lxd48811
==> Teardown clustering netns lxd48812
==> Teardown clustering bridge br4881
==> Killing LXD at /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/8mQ
==> Tearing down directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/8mQ
==> Killing LXD at /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/uLh
==> Tearing down directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/uLh
==> TEST DONE: clustering profiles (34s)
==> TEST BEGIN: clustering join api
==> Setup clustering bridge br4881
==> Setup clustering netns lxd48811
==> Spawn bootstrap cluster node in lxd48811 with storage driver dir
==> Setting up directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/AkM
==> Spawning lxd in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/AkM
==> Spawned LXD (PID is 23103)
==> Confirming lxd is responsive
WARN[11-16|20:27:55] CGroup memory swap accounting is disabled, swap limits will be ignored. 
WARN[11-16|20:27:58] Failed to update instance types: Get https://images.linuxcontainers.org/meta/instance-types/.yaml: lookup images.linuxcontainers.org on 127.0.0.53:53: read udp 127.0.0.1:33666->127.0.0.53:53: read: connection refused 
==> Setting trust password
EROR[11-16|20:28:00] Failed to get leader node address: Node is not clustered 
==> Setup clustering netns lxd48812
==> Setting up directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/9xn
==> Spawning lxd in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/9xn
==> Spawned LXD (PID is 23321)
==> Confirming lxd is responsive
WARN[11-16|20:28:01] CGroup memory swap accounting is disabled, swap limits will be ignored. 
WARN[11-16|20:28:05] Failed to update instance types: Get https://images.linuxcontainers.org/meta/instance-types/.yaml: lookup images.linuxcontainers.org on 127.0.0.53:53: read udp 127.0.0.1:60407->127.0.0.53:53: read: connection refused 
==> Setting trust password
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  2834  100   481  100  2353   4008  19608 --:--:-- --:--:-- --:--:-- 23616
EROR[11-16|20:28:06] Failed to get leader node address: Node is not clustered 
EROR[11-16|20:28:08] Unaccounted raft node(s) not found in 'nodes' table for heartbeat: map[10.1.1.101:8443:{ID:1 Address:10.1.1.101:8443} 10.1.1.102:8443:{ID:2 Address:10.1.1.102:8443}] 
EROR[11-16|20:28:08] Empty raft node set received 
WARN[11-16|20:28:08] Dqlite client proxy TLS -> Unix: read tcp 10.1.1.102:43794->10.1.1.101:8443: use of closed network connection 
WARN[11-16|20:28:08] Dqlite server proxy Unix -> TLS: read unix @->@00068: use of closed network connection 
{"type":"sync","status":"Success","status_code":200,"operation":"","error_code":0,"error":"","metadata":{"id":"b56d0a07-5942-4889-a8b9-c6fcb4887c84","class":"task","description":"Joining cluster","created_at":"2019-11-16T20:28:05.881662261Z","updated_at":"2019-11-16T20:28:05.881662261Z","status":"Success","status_code":200,"resources":{"cluster":null},"metadata":null,"may_cancel":false,"err":"","location":"node2"}}
WARN[11-16|20:28:08] Dqlite client proxy TLS -> Unix: read tcp 10.1.1.102:43802->10.1.1.101:8443: use of closed network connection 
WARN[11-16|20:28:08] Dqlite server proxy Unix -> TLS: read unix @->@00068: use of closed network connection 
WARN[11-16|20:28:08] Dqlite client proxy Unix -> TLS: read unix @->@0006d: use of closed network connection 
WARN[11-16|20:28:08] Dqlite client proxy TLS -> Unix: read tcp 10.1.1.102:43798->10.1.1.101:8443: use of closed network connection 
WARN[11-16|20:28:08] Dqlite server proxy TLS -> Unix: read tcp 10.1.1.102:8443->10.1.1.1:34146: use of closed network connection 
WARN[11-16|20:28:08] Dqlite server proxy Unix -> TLS: read unix @->@00068: use of closed network connection 
WARN[11-16|20:28:08] Dqlite client proxy TLS -> Unix: read tcp 10.1.1.101:41792->10.1.1.101:8443: use of closed network connection 
WARN[11-16|20:28:08] Dqlite server proxy Unix -> TLS: read unix @->@00068: use of closed network connection 
==> Teardown clustering netns lxd48811
==> Teardown clustering netns lxd48812
==> Teardown clustering bridge br4881
==> Killing LXD at /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/9xn
==> Tearing down directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/9xn
==> Killing LXD at /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/AkM
==> Tearing down directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/AkM
==> TEST DONE: clustering join api (15s)
==> TEST BEGIN: clustering shutdown
==> Setup clustering bridge br4881
==> Setup clustering netns lxd48811
==> Spawn bootstrap cluster node in lxd48811 with storage driver dir
==> Setting up directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/wBM
==> Spawning lxd in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/wBM
==> Spawned LXD (PID is 23665)
==> Confirming lxd is responsive
WARN[11-16|20:28:10] CGroup memory swap accounting is disabled, swap limits will be ignored. 
WARN[11-16|20:28:13] Failed to update instance types: Get https://images.linuxcontainers.org/meta/instance-types/.yaml: lookup images.linuxcontainers.org on 127.0.0.53:53: read udp 127.0.0.1:54416->127.0.0.53:53: read: connection refused 
==> Setting trust password
EROR[11-16|20:28:14] Failed to get leader node address: Node is not clustered 
==> Setup clustering netns lxd48812
==> Spawn additional cluster node in lxd48812 with storage driver dir
==> Setting up directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/C1u
==> Spawning lxd in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/C1u
==> Spawned LXD (PID is 23913)
==> Confirming lxd is responsive
WARN[11-16|20:28:16] CGroup memory swap accounting is disabled, swap limits will be ignored. 
WARN[11-16|20:28:19] Failed to update instance types: Get https://images.linuxcontainers.org/meta/instance-types/.yaml: lookup images.linuxcontainers.org on 127.0.0.53:53: read udp 127.0.0.1:35389->127.0.0.53:53: read: connection refused 
==> Setting trust password
EROR[11-16|20:28:21] Failed to get leader node address: Node is not clustered 
EROR[11-16|20:28:22] Unaccounted raft node(s) not found in 'nodes' table for heartbeat: map[10.1.1.101:8443:{ID:1 Address:10.1.1.101:8443} 10.1.1.102:8443:{ID:2 Address:10.1.1.102:8443}] 
EROR[11-16|20:28:22] Empty raft node set received 
WARN[11-16|20:28:22] Dqlite server proxy Unix -> TLS: read unix @->@00071: use of closed network connection 
WARN[11-16|20:28:22] Dqlite client proxy TLS -> Unix: read tcp 10.1.1.102:43840->10.1.1.101:8443: use of closed network connection 
==> Setup clustering netns lxd48813
==> Spawn additional cluster node in lxd48813 with storage driver dir
==> Setting up directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/ETv
==> Spawning lxd in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/ETv
==> Spawned LXD (PID is 24051)
==> Confirming lxd is responsive
WARN[11-16|20:28:23] CGroup memory swap accounting is disabled, swap limits will be ignored. 
WARN[11-16|20:28:27] Failed to update instance types: Get https://images.linuxcontainers.org/meta/instance-types/.yaml: lookup images.linuxcontainers.org on 127.0.0.53:53: read udp 127.0.0.1:49337->127.0.0.53:53: read: connection refused 
==> Setting trust password
EROR[11-16|20:28:28] Failed to get leader node address: Node is not clustered 
EROR[11-16|20:28:30] Unaccounted raft node(s) not found in 'nodes' table for heartbeat: map[10.1.1.101:8443:{ID:1 Address:10.1.1.101:8443} 10.1.1.102:8443:{ID:2 Address:10.1.1.102:8443} 10.1.1.103:8443:{ID:3 Address:10.1.1.103:8443}] 
EROR[11-16|20:28:30] Empty raft node set received 
EROR[11-16|20:28:30] Empty raft node set received 
WARN[11-16|20:28:30] Dqlite client proxy TLS -> Unix: read tcp 10.1.1.103:34426->10.1.1.101:8443: use of closed network connection 
WARN[11-16|20:28:30] Dqlite server proxy Unix -> TLS: read unix @->@00071: use of closed network connection 
Image imported as: d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7
Setup alias: testimage
Creating foo
The container you are starting doesn't have any network attached to it.
  To create a new network, use: lxc network create
  To attach a network to a container, use: lxc network attach

Starting foo
WARN[11-16|20:28:36] Dqlite client proxy TLS -> Unix: read tcp 10.1.1.102:43848->10.1.1.101:8443: use of closed network connection 
WARN[11-16|20:28:36] Dqlite server proxy Unix -> TLS: read unix @->@00071: use of closed network connection 
WARN[11-16|20:28:36] Dqlite client proxy TLS -> Unix: read tcp 10.1.1.102:43844->10.1.1.101:8443: use of closed network connection 
WARN[11-16|20:28:36] Dqlite server proxy Unix -> TLS: read unix @->@00071: use of closed network connection 
WARN[11-16|20:28:36] Dqlite server proxy TLS -> Unix: read tcp 10.1.1.102:8443->10.1.1.1:34192: use of closed network connection 
WARN[11-16|20:28:36] Dqlite client proxy Unix -> TLS: read unix @->@00076: use of closed network connection 
WARN[11-16|20:28:36] Dqlite client proxy TLS -> Unix: read tcp 10.1.1.103:34434->10.1.1.101:8443: use of closed network connection 
WARN[11-16|20:28:36] Dqlite server proxy Unix -> TLS: read unix @->@00071: use of closed network connection 
WARN[11-16|20:28:36] Failed to get events from node 10.1.1.102:8443: Unable to connect to: 10.1.1.102:8443 
WARN[11-16|20:28:36] Dqlite client proxy TLS -> Unix: read tcp 10.1.1.103:34430->10.1.1.101:8443: use of closed network connection 
WARN[11-16|20:28:36] Dqlite server proxy Unix -> TLS: read unix @->@00071: use of closed network connection 
WARN[11-16|20:28:36] Dqlite server proxy TLS -> Unix: read tcp 10.1.1.103:8443->10.1.1.1:47530: use of closed network connection 
WARN[11-16|20:28:36] Dqlite client proxy Unix -> TLS: read unix @->@0007c: use of closed network connection 
WARN[11-16|20:28:39] Dqlite client proxy TLS -> Unix: read tcp 10.1.1.101:41838->10.1.1.101:8443: use of closed network connection 
WARN[11-16|20:28:39] Dqlite server proxy Unix -> TLS: read unix @->@00071: use of closed network connection 
WARN[11-16|20:28:49] Failed to get current cluster nodes: failed to begin transaction: failed to create dqlite connection: no available dqlite leader server found 
WARN[11-16|20:29:00] Failed to get current cluster nodes: failed to begin transaction: failed to create dqlite connection: no available dqlite leader server found 
WARN[11-16|20:29:08] Failed to delete operation 6e7263ce-aa92-4668-af65-28899753bc68: failed to begin transaction: failed to create dqlite connection: no available dqlite leader server found 
WARN[11-16|20:29:08] Failed to delete operation 803c698a-ae26-49f4-9f43-fa19e216ef81: failed to begin transaction: failed to create dqlite connection: no available dqlite leader server found 
WARN[11-16|20:29:08] Failed to get current cluster nodes: failed to begin transaction: failed to create dqlite connection: no available dqlite leader server found 
WARN[11-16|20:29:08] Failed to update heartbeat: failed to begin transaction: failed to create dqlite connection: no available dqlite leader server found 
WARN[11-16|20:29:08] Failed to delete operation 4d329e85-66da-4db5-920a-e130eb9b1e22: failed to begin transaction: failed to create dqlite connection: no available dqlite leader server found 
EROR[11-16|20:29:08] Error refreshing forkdns: failed to begin transaction: failed to create dqlite connection: no available dqlite leader server found 
WARN[11-16|20:29:08] Failed to get current cluster nodes: failed to begin transaction: failed to create dqlite connection: no available dqlite leader server found 
[last message repeated 5 more times until 20:29:13]
EROR[11-16|20:29:13] Failed to load instances for snapshot expiry err="failed to begin transaction: failed to create dqlite connection: no available dqlite leader server found"
EROR[11-16|20:29:13] Failed to load containers for scheduled snapshots err="failed to begin transaction: failed to create dqlite connection: no available dqlite leader server found"
WARN[11-16|20:29:14] Failed to get current cluster nodes: failed to begin transaction: failed to create dqlite connection: no available dqlite leader server found 
[last message repeated 23 more times, once per second, until 20:29:37]
==> Teardown clustering netns lxd48811
==> Teardown clustering netns lxd48812
==> Teardown clustering netns lxd48813
==> Teardown clustering bridge br4881
==> Killing LXD at /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/C1u
==> Tearing down directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/C1u
==> Killing LXD at /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/wBM
==> Tearing down directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/wBM
==> Killing LXD at /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/ETv
==> Tearing down directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/ETv
==> TEST DONE: clustering shutdown (88s)
==> TEST BEGIN: clustering projects
==> Setup clustering bridge br4881
==> Setup clustering netns lxd48811
==> Spawn bootstrap cluster node in lxd48811 with storage driver dir
==> Setting up directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/wG1
==> Spawning lxd in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/wG1
==> Spawned LXD (PID is 24717)
==> Confirming lxd is responsive
WARN[11-16|20:29:39] CGroup memory swap accounting is disabled, swap limits will be ignored. 
WARN[11-16|20:29:42] Failed to update instance types: Get https://images.linuxcontainers.org/meta/instance-types/.yaml: lookup images.linuxcontainers.org on 127.0.0.53:53: read udp 127.0.0.1:35882->127.0.0.53:53: read: connection refused 
==> Setting trust password
EROR[11-16|20:29:44] Failed to get leader node address: Node is not clustered 
==> Setup clustering netns lxd48812
==> Spawn additional cluster node in lxd48812 with storage driver dir
==> Setting up directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/cQS
==> Spawning lxd in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/cQS
==> Spawned LXD (PID is 24849)
==> Confirming lxd is responsive
WARN[11-16|20:29:45] CGroup memory swap accounting is disabled, swap limits will be ignored. 
WARN[11-16|20:29:49] Failed to update instance types: Get https://images.linuxcontainers.org/meta/instance-types/.yaml: lookup images.linuxcontainers.org on 127.0.0.53:53: read udp 127.0.0.1:55951->127.0.0.53:53: read: connection refused 
==> Setting trust password
EROR[11-16|20:29:50] Failed to get leader node address: Node is not clustered 
EROR[11-16|20:29:52] Unaccounted raft node(s) not found in 'nodes' table for heartbeat: map[10.1.1.101:8443:{ID:1 Address:10.1.1.101:8443} 10.1.1.102:8443:{ID:2 Address:10.1.1.102:8443}] 
EROR[11-16|20:29:52] Empty raft node set received 
WARN[11-16|20:29:52] Dqlite client proxy TLS -> Unix: read tcp 10.1.1.102:44414->10.1.1.101:8443: use of closed network connection 
WARN[11-16|20:29:52] Dqlite server proxy Unix -> TLS: read unix @->@00080: use of closed network connection 
Project p1 created
Device root added to default
Image imported as: d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7
Setup alias: testimage
Creating c1
The container you are starting doesn't have any network attached to it.
  To create a new network, use: lxc network create
  To attach a network to a container, use: lxc network attach

WARN[11-16|20:29:57] Dqlite client proxy TLS -> Unix: read tcp 10.1.1.102:44422->10.1.1.101:8443: use of closed network connection 
WARN[11-16|20:29:57] Dqlite server proxy Unix -> TLS: read unix @->@00080: use of closed network connection 
WARN[11-16|20:29:57] Dqlite server proxy TLS -> Unix: read tcp 10.1.1.102:8443->10.1.1.1:34766: use of closed network connection 
WARN[11-16|20:29:57] Dqlite client proxy Unix -> TLS: read unix @->@00085: use of closed network connection 
WARN[11-16|20:29:57] Dqlite client proxy TLS -> Unix: read tcp 10.1.1.102:44418->10.1.1.101:8443: use of closed network connection 
WARN[11-16|20:29:57] Dqlite server proxy Unix -> TLS: read unix @->@00080: use of closed network connection 
WARN[11-16|20:29:58] Dqlite client proxy TLS -> Unix: read tcp 10.1.1.101:42412->10.1.1.101:8443: use of closed network connection 
WARN[11-16|20:29:58] Dqlite server proxy Unix -> TLS: read unix @->@00080: use of closed network connection 
==> Teardown clustering netns lxd48811
==> Teardown clustering netns lxd48812
==> Teardown clustering bridge br4881
==> Killing LXD at /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/wG1
==> Tearing down directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/wG1
==> Killing LXD at /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/cQS
==> Tearing down directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/cQS
==> TEST DONE: clustering projects (21s)
==> TEST BEGIN: clustering address
==> Setup clustering bridge br4881
==> Setup clustering netns lxd48811
==> Spawn bootstrap cluster node in lxd48811 with storage driver dir
==> Setting up directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/fw7
==> Spawning lxd in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/fw7
==> Spawned LXD (PID is 25367)
==> Confirming lxd is responsive
WARN[11-16|20:29:59] CGroup memory swap accounting is disabled, swap limits will be ignored. 
==> Setting trust password
WARN[11-16|20:30:02] Failed to update instance types: Get https://images.linuxcontainers.org/meta/instance-types/.yaml: lookup images.linuxcontainers.org on 127.0.0.53:53: read udp 127.0.0.1:37961->127.0.0.53:53: read: connection refused 
EROR[11-16|20:30:04] Failed to get leader node address: Node is not clustered 
2019/11/16 20:30:05 http: TLS handshake error from 10.1.1.1:36380: EOF
2019/11/16 20:30:05 http: TLS handshake error from 10.1.1.1:36382: remote error: tls: bad certificate
Client certificate stored at server:  cluster
==> Setup clustering netns lxd48812
==> Spawn additional cluster node in lxd48812 with storage driver dir
==> Setting up directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/AKA
==> Spawning lxd in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/AKA
==> Spawned LXD (PID is 25594)
==> Confirming lxd is responsive
WARN[11-16|20:30:05] CGroup memory swap accounting is disabled, swap limits will be ignored. 
==> Setting trust password
WARN[11-16|20:30:09] Failed to update instance types: Get https://images.linuxcontainers.org/meta/instance-types/.yaml: lookup images.linuxcontainers.org on 127.0.0.53:53: read udp 127.0.0.1:47828->127.0.0.53:53: read: connection refused 
EROR[11-16|20:30:10] Failed to get leader node address: Node is not clustered 
EROR[11-16|20:30:13] Unaccounted raft node(s) not found in 'nodes' table for heartbeat: map[10.1.1.102:8444:{ID:2 Address:10.1.1.102:8444} 10.1.1.101:8444:{ID:1 Address:10.1.1.101:8444}] 
EROR[11-16|20:30:13] Empty raft node set received 
WARN[11-16|20:30:13] Dqlite client proxy TLS -> Unix: read tcp 10.1.1.102:39824->10.1.1.101:8444: use of closed network connection 
WARN[11-16|20:30:13] Dqlite server proxy Unix -> TLS: read unix @->@00089: use of closed network connection 
Error: Changing cluster.https_address is currently not supported
Image imported as: d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7
Setup alias: testimage
Creating c1
The container you are starting doesn't have any network attached to it.
  To create a new network, use: lxc network create
  To attach a network to a container, use: lxc network attach

WARN[11-16|20:30:18] Dqlite client proxy TLS -> Unix: read tcp 10.1.1.102:39832->10.1.1.101:8444: use of closed network connection 
WARN[11-16|20:30:18] Dqlite server proxy Unix -> TLS: read unix @->@00089: use of closed network connection 
WARN[11-16|20:30:18] Dqlite server proxy TLS -> Unix: read tcp 10.1.1.102:8444->10.1.1.1:36250: use of closed network connection 
WARN[11-16|20:30:18] Dqlite client proxy TLS -> Unix: read tcp 10.1.1.102:39828->10.1.1.101:8444: use of closed network connection 
WARN[11-16|20:30:18] Dqlite client proxy Unix -> TLS: read unix @->@0008e: use of closed network connection 
WARN[11-16|20:30:18] Dqlite server proxy Unix -> TLS: read unix @->@00089: use of closed network connection 
WARN[11-16|20:30:18] Dqlite client proxy TLS -> Unix: read tcp 10.1.1.101:43984->10.1.1.101:8444: use of closed network connection 
WARN[11-16|20:30:18] Dqlite server proxy Unix -> TLS: read unix @->@00089: use of closed network connection 
==> Teardown clustering netns lxd48811
==> Teardown clustering netns lxd48812
==> Teardown clustering bridge br4881
==> Killing LXD at /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/fw7
==> Tearing down directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/fw7
==> Killing LXD at /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/AKA
==> Tearing down directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/AKA
==> TEST DONE: clustering address (20s)
==> TEST BEGIN: clustering image replication
==> Setup clustering bridge br4881
==> Setup clustering netns lxd48811
==> Spawn bootstrap cluster node in lxd48811 with storage driver dir
==> Setting up directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/9UF
==> Spawning lxd in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/9UF
==> Spawned LXD (PID is 26037)
==> Confirming lxd is responsive
WARN[11-16|20:30:20] CGroup memory swap accounting is disabled, swap limits will be ignored. 
WARN[11-16|20:30:23] Failed to update instance types: Get https://images.linuxcontainers.org/meta/instance-types/.yaml: lookup images.linuxcontainers.org on 127.0.0.53:53: read udp 127.0.0.1:59497->127.0.0.53:53: read: connection refused 
==> Setting trust password
EROR[11-16|20:30:24] Failed to get leader node address: Node is not clustered 
==> Setup clustering netns lxd48812
==> Spawn additional cluster node in lxd48812 with storage driver dir
==> Setting up directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/mEw
==> Spawning lxd in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/mEw
==> Spawned LXD (PID is 26340)
==> Confirming lxd is responsive
WARN[11-16|20:30:25] CGroup memory swap accounting is disabled, swap limits will be ignored. 
WARN[11-16|20:30:29] Failed to update instance types: Get https://images.linuxcontainers.org/meta/instance-types/.yaml: lookup images.linuxcontainers.org on 127.0.0.53:53: read udp 127.0.0.1:49918->127.0.0.53:53: read: connection refused 
==> Setting trust password
EROR[11-16|20:30:30] Failed to get leader node address: Node is not clustered 
EROR[11-16|20:30:32] Unaccounted raft node(s) not found in 'nodes' table for heartbeat: map[10.1.1.101:8443:{ID:1 Address:10.1.1.101:8443} 10.1.1.102:8443:{ID:2 Address:10.1.1.102:8443}] 
EROR[11-16|20:30:32] Empty raft node set received 
WARN[11-16|20:30:32] Dqlite client proxy TLS -> Unix: read tcp 10.1.1.102:44618->10.1.1.101:8443: use of closed network connection 
WARN[11-16|20:30:32] Dqlite server proxy Unix -> TLS: read unix @->@00092: use of closed network connection 
Image imported as: d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7
Setup alias: testimage
==> Setup clustering netns lxd48813
==> Spawn additional cluster node in lxd48813 with storage driver dir
==> Setting up directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/A4n
==> Spawning lxd in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/A4n
==> Spawned LXD (PID is 26557)
==> Confirming lxd is responsive
WARN[11-16|20:30:35] CGroup memory swap accounting is disabled, swap limits will be ignored. 
WARN[11-16|20:30:38] Failed to update instance types: Get https://images.linuxcontainers.org/meta/instance-types/.yaml: lookup images.linuxcontainers.org on 127.0.0.53:53: read udp 127.0.0.1:38139->127.0.0.53:53: read: connection refused 
==> Setting trust password
EROR[11-16|20:30:40] Failed to get leader node address: Node is not clustered 
EROR[11-16|20:30:41] Unaccounted raft node(s) not found in 'nodes' table for heartbeat: map[10.1.1.101:8443:{ID:1 Address:10.1.1.101:8443} 10.1.1.102:8443:{ID:2 Address:10.1.1.102:8443} 10.1.1.103:8443:{ID:3 Address:10.1.1.103:8443}] 
EROR[11-16|20:30:41] Empty raft node set received 
EROR[11-16|20:30:41] Empty raft node set received 
WARN[11-16|20:30:41] Dqlite client proxy TLS -> Unix: read tcp 10.1.1.103:35212->10.1.1.101:8443: use of closed network connection 
WARN[11-16|20:30:41] Dqlite server proxy Unix -> TLS: read unix @->@00092: use of closed network connection 
Image imported as: d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7
Setup alias: testimage
Image imported as: d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7
Setup alias: testimage
Creating c1
                                           
The container you are starting doesn't have any network attached to it.
  To create a new network, use: lxc network create
  To attach a network to a container, use: lxc network attach

Starting c1
WARN[11-16|20:30:53] Detected poll(POLLNVAL) event. 
Container published with fingerprint: e3959cf2d0e189bf2741592f5e8f5c14a08718fa24dd9bf4efdcb9c44a60349e
Image imported as: d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7
Setup alias: testimage
WARN[11-16|20:31:02] Dqlite client proxy TLS -> Unix: read tcp 10.1.1.101:42616->10.1.1.101:8443: use of closed network connection 
WARN[11-16|20:31:02] Dqlite server proxy Unix -> TLS: read unix @->@00092: use of closed network connection 
WARN[11-16|20:31:02] Dqlite client proxy Unix -> TLS: read unix @->@00099: use of closed network connection 
WARN[11-16|20:31:02] Dqlite server proxy Unix -> TLS: read unix @->@00095: use of closed network connection 
WARN[11-16|20:31:02] Dqlite client proxy Unix -> TLS: read unix @->@00098: use of closed network connection 
WARN[11-16|20:31:02] Dqlite server proxy TLS -> Unix: read tcp 10.1.1.101:8443->10.1.1.1:44626: use of closed network connection 
WARN[11-16|20:31:02] Dqlite client proxy Unix -> TLS: read unix @->@0009f: use of closed network connection 
WARN[11-16|20:31:02] Dqlite client proxy Unix -> TLS: read unix @->@0009e: use of closed network connection 
WARN[11-16|20:31:02] Dqlite client proxy TLS -> Unix: read tcp 10.1.1.101:34970->10.1.1.102:8443: use of closed network connection 
WARN[11-16|20:31:02] Dqlite server proxy TLS -> Unix: read tcp 10.1.1.101:8443->10.1.1.1:35220: use of closed network connection 
WARN[11-16|20:31:02] Dqlite server proxy TLS -> Unix: read tcp 10.1.1.101:8443->10.1.1.1:44622: use of closed network connection 
WARN[11-16|20:31:02] Failed to delete operation 55a061e9-ffdb-4107-b8a5-56fbdca0238d: failed to receive response: failed to receive header: failed to receive header: EOF 
WARN[11-16|20:31:02] Dqlite client proxy TLS -> Unix: read tcp 10.1.1.101:48316->10.1.1.103:8443: use of closed network connection 
WARN[11-16|20:31:02] Dqlite server proxy TLS -> Unix: read tcp 10.1.1.101:8443->10.1.1.1:35216: use of closed network connection 
WARN[11-16|20:31:02] Dqlite server proxy Unix -> TLS: read unix @->@0009b: use of closed network connection 
WARN[11-16|20:31:02] Failed to delete operation 51939035-3dcc-4905-96b4-d349e0484688: failed to begin transaction: sql: database is closed 
WARN[11-16|20:31:02] Failed to delete operation 4cef417b-efec-4c90-bcfd-5c4a89078192: failed to begin transaction: failed to create dqlite connection: no available dqlite leader server found 
WARN[11-16|20:31:02] Failed to get current cluster nodes: failed to begin transaction: failed to create dqlite connection: no available dqlite leader server found 
==> Teardown clustering netns lxd48811
==> Teardown clustering netns lxd48812
==> Teardown clustering netns lxd48813
==> Teardown clustering bridge br4881
==> Killing LXD at /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/9UF
==> Tearing down directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/9UF
==> Killing LXD at /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/mEw
==> Tearing down directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/mEw
==> Killing LXD at /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/A4n
==> Tearing down directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/A4n
==> TEST DONE: clustering image replication (44s)
==> TEST BEGIN: clustering DNS
test1.lxd.		0	IN	A	10.140.78.145
145.78.140.10.in-addr.arpa. 0	IN	PTR	test1.lxd.
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 27708
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 34667
==> TEST DONE: clustering DNS (1s)
==> TEST BEGIN: clustering recovery
==> Setup clustering bridge br4881
==> Setup clustering netns lxd48811
==> Spawn bootstrap cluster node in lxd48811 with storage driver dir
==> Setting up directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/pQ8
==> Spawning lxd in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/pQ8
==> Spawned LXD (PID is 27790)
==> Confirming lxd is responsive
WARN[11-16|20:31:05] CGroup memory swap accounting is disabled, swap limits will be ignored. 
WARN[11-16|20:31:08] Failed to update instance types: Get https://images.linuxcontainers.org/meta/instance-types/.yaml: lookup images.linuxcontainers.org on 127.0.0.53:53: read udp 127.0.0.1:37540->127.0.0.53:53: read: connection refused 
==> Setting trust password
EROR[11-16|20:31:09] Failed to get leader node address: Node is not clustered 
==> Setup clustering netns lxd48812
==> Spawn additional cluster node in lxd48812 with storage driver dir
==> Setting up directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/eaQ
==> Spawning lxd in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/eaQ
==> Spawned LXD (PID is 28127)
==> Confirming lxd is responsive
WARN[11-16|20:31:10] CGroup memory swap accounting is disabled, swap limits will be ignored. 
WARN[11-16|20:31:14] Failed to update instance types: Get https://images.linuxcontainers.org/meta/instance-types/.yaml: lookup images.linuxcontainers.org on 127.0.0.53:53: read udp 127.0.0.1:44769->127.0.0.53:53: read: connection refused 
==> Setting trust password
EROR[11-16|20:31:16] Failed to get leader node address: Node is not clustered 
EROR[11-16|20:31:17] Unaccounted raft node(s) not found in 'nodes' table for heartbeat: map[10.1.1.101:8443:{ID:1 Address:10.1.1.101:8443} 10.1.1.102:8443:{ID:2 Address:10.1.1.102:8443}] 
EROR[11-16|20:31:17] Empty raft node set received 
WARN[11-16|20:31:17] Dqlite client proxy TLS -> Unix: read tcp 10.1.1.102:44880->10.1.1.101:8443: use of closed network connection 
WARN[11-16|20:31:17] Dqlite server proxy Unix -> TLS: read unix @->@000a1: use of closed network connection 
Project p1 created
Error: The LXD daemon is running, please stop it first.
WARN[11-16|20:31:18] Dqlite client proxy TLS -> Unix: read tcp 10.1.1.102:44888->10.1.1.101:8443: use of closed network connection 
WARN[11-16|20:31:18] Dqlite server proxy Unix -> TLS: read unix @->@000a1: use of closed network connection 
WARN[11-16|20:31:18] Dqlite server proxy TLS -> Unix: read tcp 10.1.1.102:8443->10.1.1.1:35232: use of closed network connection 
WARN[11-16|20:31:18] Dqlite client proxy Unix -> TLS: read unix @->@000a6: use of closed network connection 
WARN[11-16|20:31:18] Dqlite server proxy Unix -> TLS: read unix @->@000a1: use of closed network connection 
WARN[11-16|20:31:18] Dqlite client proxy TLS -> Unix: read tcp 10.1.1.102:44884->10.1.1.101:8443: use of closed network connection 
WARN[11-16|20:31:18] Dqlite client proxy TLS -> Unix: read tcp 10.1.1.101:42878->10.1.1.101:8443: use of closed network connection 
WARN[11-16|20:31:18] Dqlite server proxy Unix -> TLS: read unix @->@000a1: use of closed network connection 
==> Setting up directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/pQ8
==> Spawning lxd in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/pQ8
==> Spawned LXD (PID is 28274)
==> Confirming lxd is responsive
WARN[11-16|20:31:19] CGroup memory swap accounting is disabled, swap limits will be ignored. 
WARN[11-16|20:31:20] Failed to get events from node 10.1.1.102:8443: Unable to connect to: 10.1.1.102:8443 
WARN[11-16|20:31:20] Failed to update instance types: Get https://images.linuxcontainers.org/meta/instance-types/.yaml: lookup images.linuxcontainers.org on 127.0.0.53:53: read udp 127.0.0.1:57956->127.0.0.53:53: read: connection refused 
==> Setting trust password
WARN[11-16|20:31:21] Could not notify node 10.1.1.102:8443 
WARN[11-16|20:31:21] Failed to get events from node 10.1.1.102:8443: Unable to connect to: 10.1.1.102:8443 
EROR[11-16|20:31:21] Unaccounted raft node(s) not found in 'nodes' table for heartbeat: map[10.1.1.101:8443:{ID:1 Address:10.1.1.101:8443}] 
WARN[11-16|20:31:21] Dqlite client proxy TLS -> Unix: read tcp 10.1.1.101:42914->10.1.1.101:8443: use of closed network connection 
WARN[11-16|20:31:21] Dqlite server proxy Unix -> TLS: read unix @->@000a9: use of closed network connection 
==> Teardown clustering netns lxd48811
==> Teardown clustering netns lxd48812
==> Teardown clustering bridge br4881
==> Killing LXD at /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/pQ8
==> Tearing down directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/pQ8
==> Killing LXD at /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/eaQ
==> Tearing down directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/eaQ
==> TEST DONE: clustering recovery (17s)
==> TEST BEGIN: default project
Creating c1
==> TEST DONE: default project (1s)       
==> TEST BEGIN: projects CRUD operations
Project foo created
Error: Error inserting foo into database: Add project to database: This project already exists
Project foo renamed to bar
Project foo created
Error: A project named 'foo' already exists
Project foo deleted
Project bar deleted
==> TEST DONE: projects CRUD operations (1s)
==> TEST BEGIN: containers inside projects
Project foo created
Image imported as: d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7
Setup alias: testimage
Device root added to default
Creating c1
                                          
The container you are starting doesn't have any network attached to it.
  To create a new network, use: lxc network create
  To attach a network to a container, use: lxc network attach

Error: not found
Error: not found
Error: Only empty projects can be removed
Error: Features can only be changed on empty projects
Error: Features can only be changed on empty projects
Creating c1
Project foo deleted                        
==> TEST DONE: containers inside projects (14s)
==> TEST BEGIN: snapshots inside projects
Project foo created
Image imported as: d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7
Setup alias: testimage
Device root added to default
Creating c1
Retrieving image: Unpack: 42% (7.87MB/s)
WARN[11-16|20:31:40] Failed to delete operation 36b99138-71d5-40af-9478-3be7f0915b7c: query deleted 0 rows instead of 1 
                                          
The container you are starting doesn't have any network attached to it.
  To create a new network, use: lxc network create
  To attach a network to a container, use: lxc network attach

WARN[11-16|20:31:40] Failed to delete operation 98f03532-a967-4757-8b13-fafe2155c4e3: query deleted 0 rows instead of 1 
WARN[11-16|20:31:40] Failed to delete operation b5877ccb-698e-4bae-96b8-2579115b25f3: query deleted 0 rows instead of 1 
Creating c1
Project foo deleted                       
==> TEST DONE: snapshots inside projects (7s)
==> TEST BEGIN: backups inside projects
Project foo created
WARN[11-16|20:31:45] Failed to delete operation e5edb7fd-cf4a-45ad-a96c-d8d4cbb29789: query deleted 0 rows instead of 1 
WARN[11-16|20:31:45] Failed to delete operation 5640e45d-e0cd-4cba-8467-88e3cfe5af70: query deleted 0 rows instead of 1 
WARN[11-16|20:31:46] Failed to delete operation ad0fef0c-995f-4e7a-94e0-21672d635ba8: query deleted 0 rows instead of 1 
WARN[11-16|20:31:46] Failed to delete operation c292d462-8251-48bd-a870-9dbb28c987f0: query deleted 0 rows instead of 1 
WARN[11-16|20:31:46] Failed to delete operation a07dd99b-5fde-4d14-9a86-e8d0cbd09a49: query deleted 0 rows instead of 1 
Image imported as: d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7
Setup alias: testimage
Device root added to default
Creating c1
Retrieving image: Unpack: 19% (34.90MB/s)
WARN[11-16|20:31:47] Failed to delete operation d4f209bc-0211-4679-8fe6-3c6a3e60f019: query deleted 0 rows instead of 1 
                                           
The container you are starting doesn't have any network attached to it.
  To create a new network, use: lxc network create
  To attach a network to a container, use: lxc network attach

WARN[11-16|20:31:47] Failed to delete operation a7063f8c-aa10-4270-b7e3-32af9734977d: query deleted 0 rows instead of 1 
Backup exported successfully!           
WARN[11-16|20:31:48] Failed to delete operation a4520e16-89cb-4eba-af25-a97807f13ace: query deleted 0 rows instead of 1 
Name: c1                               
Location: none
Remote: unix://
Architecture: x86_64
Created: 2019/11/16 20:31 UTC
Status: Stopped
Type: container
Profiles: default
Project foo deleted
==> TEST DONE: backups inside projects (4s)
==> TEST BEGIN: profiles inside projects
Project foo created
Profile p1 created
Profile p1 created
Profile p1 deleted
Project foo deleted
Profile p1 deleted
==> TEST DONE: profiles inside projects (0s)
==> TEST BEGIN: profiles from the global default project
Project foo created
WARN[11-16|20:31:50] Failed to delete operation 3fe4ebf3-6fb6-4582-b71b-04c1e0800f8f: query deleted 0 rows instead of 1 
WARN[11-16|20:31:50] Failed to delete operation 4da82a62-86fd-4210-a77f-6c7619af1c9d: query deleted 0 rows instead of 1 
Image imported as: d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7
Creating c1
WARN[11-16|20:31:52] Failed to delete operation c455d389-88dc-47a2-96e1-f93f29bd0621: query deleted 0 rows instead of 1 
Creating c1
Retrieving image: Unpack: 100% (9.66MB/s)
WARN[11-16|20:31:52] Failed to delete operation 3f0037db-dc36-4fe1-b116-9f95cf7118eb: query deleted 0 rows instead of 1 
WARN[11-16|20:31:52] Failed to delete operation d39ca9da-cd24-4076-b3df-1716e80767a4: query deleted 0 rows instead of 1 
WARN[11-16|20:31:53] Failed to delete operation 0df68ee0-5010-4345-a8fe-58ea45343d4d: query deleted 0 rows instead of 1 
WARN[11-16|20:31:53] Failed to delete operation b5474081-10c6-4010-b428-e22a2c084f54: query deleted 0 rows instead of 1 
Project foo deleted
==> TEST DONE: profiles from the global default project (4s)
==> TEST BEGIN: images inside projects
Project foo created
WARN[11-16|20:31:53] Failed to delete operation b101ba7a-d6ff-4185-80ff-46131ade78ec: query deleted 0 rows instead of 1 
WARN[11-16|20:31:53] Failed to delete operation 79137e40-18de-47af-a670-2e87182e9808: query deleted 0 rows instead of 1 
WARN[11-16|20:31:53] Failed to delete operation 780bba9e-448d-457a-ad37-fd63a58c52b8: query deleted 0 rows instead of 1 
Image imported as: d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7
Image imported as: d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7
Setup alias: foo-image
WARN[11-16|20:31:56] Failed to delete operation bd6432b4-6715-4cb0-9b1b-5738735daa00: query deleted 0 rows instead of 1 
Project foo deleted
==> TEST DONE: images inside projects (3s)
==> TEST BEGIN: images from the global default project
Project foo created
WARN[11-16|20:31:56] Failed to delete operation 6de74218-bd7c-472b-8e69-a1c9982a86d1: query deleted 0 rows instead of 1 
WARN[11-16|20:31:58] Failed to delete operation ffc2f09f-6be6-49a7-8dc4-c4cde9716afa: query deleted 0 rows instead of 1 
Image imported as: d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7
Setup alias: foo-image
WARN[11-16|20:31:58] Failed to delete operation 98ebfe71-0a54-48c0-91bb-532b8a87fc47: query deleted 0 rows instead of 1 
Project foo deleted
==> TEST DONE: images from the global default project (2s)
==> TEST BEGIN: projects and storage pools
Storage volume vol created
Project foo created
Storage volume vol deleted
Project foo deleted
==> TEST DONE: projects and storage pools (1s)
==> TEST BEGIN: projects and networks
WARN[11-16|20:31:59] Failed to delete operation b5dd4337-af38-42a9-8874-c9e62f3cd640: query deleted 0 rows instead of 1 
WARN[11-16|20:31:59] Failed to delete operation 7e26a384-eaf7-4fcb-92aa-64e2f42fa1c3: query deleted 0 rows instead of 1 
WARN[11-16|20:32:01] Failed to delete operation 5829dab0-473e-4ccf-81e5-0cb0a4ceeeab: query deleted 0 rows instead of 1 
WARN[11-16|20:32:01] Failed to delete operation 6f7a543a-2951-47f5-b378-10ecbba5e7a9: query deleted 0 rows instead of 1 
WARN[11-16|20:32:02] Failed to delete operation dbc2d679-b719-4c63-b580-fdabf24c716f: query deleted 0 rows instead of 1 
Network lxdt4881 created
Project foo created
WARN[11-16|20:32:03] Failed to delete operation e6c1a10a-735a-41d6-b977-8f07e85cbaa3: query deleted 0 rows instead of 1 
Image imported as: d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7
Setup alias: testimage
Device root added to default
Creating c1
Project foo deleted                       
Network lxdt4881 deleted
==> TEST DONE: projects and networks (5s)
==> TEST BEGIN: container devices - disk
Image imported as: d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7
Setup alias: testimage
Creating foo
Starting foo                              
mkfs.fat 4.1 (2017-01-24)
WARN[11-16|20:32:08] Failed to delete operation 5b3a04d0-ff93-41e3-8916-a094f9133156: query deleted 0 rows instead of 1 
Creating foo-priv
Starting foo-priv                         
WARN[11-16|20:32:09] Failed to delete operation 73925185-d663-497e-90d5-d19c61dc06b8: query deleted 0 rows instead of 1 
WARN[11-16|20:32:09] Failed to delete operation a382ce6d-813c-459f-bc92-03d50d86b3a0: query deleted 0 rows instead of 1 
WARN[11-16|20:32:09] Failed to delete operation c6ab5fda-4058-477f-b106-cd6c600b89e0: query deleted 0 rows instead of 1 
Device loop_raw_mount_options added to foo-priv
Device loop_raw_mount_options removed from foo-priv
Device loop_raw_mount_options added to foo-priv
Device loop_raw_mount_options removed from foo-priv
Device loop_raw_mount_options added to foo-priv
Device loop_raw_mount_options removed from foo-priv
==> TEST DONE: container devices - disk (14s)
==> TEST BEGIN: container devices - nic - p2p
Creating nt4881
Starting nt4881                           
192.0.2.10 scope link 
2001:db8::10 metric 1024 pref medium
class htb 1:10 root prio 0 rate 1Mbit ceil 1Mbit burst 1600b cburst 1600b 
 police 0x1 rate 2Mbit burst 1Mb mtu 64Kb action drop overhead 0b 
1400
WARN[11-16|20:32:21] Detected poll(POLLNVAL) event. 
1400
0a:92:a7:0d:b7:d9
WARN[11-16|20:32:21] Detected poll(POLLNVAL) event. 
WARN[11-16|20:32:21] Detected poll(POLLNVAL) event. 
PING 192.0.2.10 (192.0.2.10) 56(84) bytes of data.
64 bytes from 192.0.2.10: icmp_seq=1 ttl=64 time=0.113 ms
64 bytes from 192.0.2.10: icmp_seq=2 ttl=64 time=0.061 ms

--- 192.0.2.10 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 2ms
rtt min/avg/max/mdev = 0.061/0.087/0.113/0.026 ms
WARN[11-16|20:32:23] Detected poll(POLLNVAL) event. 
PING 2001:db8::10(2001:db8::10) 56 data bytes
64 bytes from 2001:db8::10: icmp_seq=2 ttl=64 time=0.081 ms

--- 2001:db8::10 ping statistics ---
2 packets transmitted, 1 received, 50% packet loss, time 12ms
rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms
Device eth0 added to nt4881
192.0.2.30 scope link linkdown 
2001:db8::30 metric 1024 linkdown pref medium
class htb 1:10 root prio 0 rate 3Mbit ceil 3Mbit burst 1599b cburst 1599b 
 police 0x1 rate 4Mbit burst 1Mb mtu 64Kb action drop overhead 0b 
1401
WARN[11-16|20:32:27] Detected poll(POLLNVAL) event. 
1401
0a:92:a7:0d:b7:d9
WARN[11-16|20:32:27] Detected poll(POLLNVAL) event. 
Device eth0 removed from nt4881
192.0.2.10 scope link linkdown 
2001:db8::10 metric 1024 linkdown pref medium
class htb 1:10 root prio 0 rate 1Mbit ceil 1Mbit burst 1600b cburst 1600b 
 police 0x1 rate 2Mbit burst 1Mb mtu 64Kb action drop overhead 0b 
1400
1400
Device eth0 added to nt4881
192.0.2.20 scope link linkdown 
2001:db8::20 metric 1024 linkdown pref medium
class htb 1:10 root prio 0 rate 3Mbit ceil 3Mbit burst 1599b cburst 1599b 
 police 0x1 rate 4Mbit burst 1Mb mtu 64Kb action drop overhead 0b 
1402
0a:92:a7:0d:b7:d9
WARN[11-16|20:32:30] Detected poll(POLLNVAL) event. 
Device eth0 removed from nt4881
Profile nt4881 deleted
Creating nt4881
Starting nt4881                           
Device eth0 added to nt4881
192.0.2.20 dev veth59487c9f scope link linkdown 
2001:db8::20 dev veth59487c9f metric 1024 linkdown pref medium
192.0.2.30 dev veth59487c9f scope link linkdown 
2001:db8::30 dev veth59487c9f metric 1024 linkdown pref medium
Device eth0 removed from nt4881
Device eth0 added to nt4881
192.0.2.20 dev veth7739c410 scope link 
2001:db8::20 dev veth7739c410 metric 1024 pref medium
192.0.2.30 dev veth7739c410 scope link 
2001:db8::30 dev veth7739c410 metric 1024 pref medium
Device eth0 removed from nt4881
Device eth0 added to nt4881
192.0.2.20 dev veth813a2b1d scope link linkdown 
2001:db8::20 dev veth813a2b1d metric 1024 linkdown pref medium
Device eth0 removed from nt4881
Device eth1 added to nt4881
==> TEST DONE: container devices - nic - p2p (27s)
==> TEST BEGIN: container devices - nic - bridged
Network lxdt4881 created
Creating nt4881
Starting nt4881                           
192.0.2.17 scope link 
2001:db8::17 metric 1024 pref medium
class htb 1:10 root prio 0 rate 1Mbit ceil 1Mbit burst 1600b cburst 1600b 
 police 0x1 rate 2Mbit burst 1Mb mtu 64Kb action drop overhead 0b 
1400
1400
0a:92:a7:0d:b7:d9
WARN[11-16|20:32:59] Detected poll(POLLNVAL) event. 
WARN[11-16|20:32:59] Detected poll(POLLNVAL) event. 
WARN[11-16|20:33:00] Detected poll(POLLNVAL) event. 
PING 192.0.2.17 (192.0.2.17) 56(84) bytes of data.
64 bytes from 192.0.2.17: icmp_seq=1 ttl=64 time=0.055 ms
64 bytes from 192.0.2.17: icmp_seq=2 ttl=64 time=0.066 ms

--- 192.0.2.17 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 2ms
rtt min/avg/max/mdev = 0.055/0.060/0.066/0.009 ms
WARN[11-16|20:33:01] Detected poll(POLLNVAL) event. 
PING 2001:db8::17(2001:db8::17) 56 data bytes
64 bytes from 2001:db8::17: icmp_seq=1 ttl=64 time=0.170 ms
64 bytes from 2001:db8::17: icmp_seq=2 ttl=64 time=0.088 ms

--- 2001:db8::17 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 23ms
rtt min/avg/max/mdev = 0.088/0.129/0.170/0.041 ms
Device eth0 added to nt4881
192.0.2.27 scope link linkdown 
2001:db8::27 metric 1024 linkdown pref medium
class htb 1:10 root prio 0 rate 3Mbit ceil 3Mbit burst 1599b cburst 1599b 
 police 0x1 rate 4Mbit burst 1Mb mtu 64Kb action drop overhead 0b 
1401
WARN[11-16|20:33:04] Detected poll(POLLNVAL) event. 
1401
0a:92:a7:0d:b7:d9
WARN[11-16|20:33:05] Detected poll(POLLNVAL) event. 
Device eth0 removed from nt4881
192.0.2.17 scope link 
2001:db8::17 metric 1024 linkdown pref medium
class htb 1:10 root prio 0 rate 1Mbit ceil 1Mbit burst 1600b cburst 1600b 
 police 0x1 rate 2Mbit burst 1Mb mtu 64Kb action drop overhead 0b 
1400
WARN[11-16|20:33:05] Detected poll(POLLNVAL) event. 
1400
Device eth0 added to nt4881
Error: Invalid devices: Invalid value for device option parent: Required value
Error: Invalid devices: Invalid device option: invalid.option
Error: Invalid devices: Invalid value for device option ipv4.routes: invalid CIDR address: 192.0.2.1/33
Error: Invalid devices: Invalid value for device option ipv6.routes: invalid CIDR address: 2001:db8::1/129
192.0.2.27 scope link linkdown 
2001:db8::27 metric 1024 linkdown pref medium
class htb 1:10 root prio 0 rate 3Mbit ceil 3Mbit burst 1599b cburst 1599b 
 police 0x1 rate 4Mbit burst 1Mb mtu 64Kb action drop overhead 0b 
1402
WARN[11-16|20:33:09] Detected poll(POLLNVAL) event. 
0a:92:a7:0d:b7:d9
Error: Failed to run: ip link set dev lxdt4881 mtu 1405: RTNETLINK answers: Invalid argument
==> Cleaning up
==> Killing LXD at /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/Gke
==> Deleting all containers
==> Deleting all images
==> Deleting all networks
Network lxdt4881 deleted
==> Deleting all profiles
Error: The 'default' profile cannot be deleted
Profile nt4881 deleted
==> Clearing config of default profile
==> Deleting all storage pools
Storage pool lxdtest-Gke deleted
==> Checking for locked DB tables
WARN[11-16|20:33:13] Failed to delete operation 4cc52cb5-05f6-4eeb-a424-48f3141df5ad: failed to begin transaction: sql: database is closed 
WARN[11-16|20:33:13] Failed to delete operation fa181e37-46fd-4487-b08e-38b9e3822368: failed to begin transaction: sql: database is closed 
==> Checking for leftover files
==> Checking for leftover DB entries
==> Tearing down directory backend in /home/lxd/go/src/github.com/lxc/lxd/test/tmp.b2f/Gke

==> TEST DONE: container devices - nic - bridged
==> Test result: failure

I decided to go back to using kernel v4.15.0-70-generic with Ubuntu 19.04.

@stgraber stgraber commented Nov 17, 2019

Do you have an lxdbr0 bridge on your system? We'd normally expect the testsuite to work without it, but it may have regressed in that regard without us noticing.
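
To rule that out, something like this should show whether the bridge exists (standard lxc commands; the create is optional and only there to test whether the missing bridge is what's tripping things up):

lxc network list            # look for an lxdbr0 entry of type bridge
lxc network create lxdbr0   # optional: create a managed bridge, then re-run the failing test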

@stgraber stgraber commented Nov 17, 2019

On the upside, you're way past the clustering tests :)

@DBaum1 DBaum1 commented Nov 18, 2019

I do not have an lxdbr0 bridge. After commenting out some tests, I've found that I can run all tests except container devices - nic - bridged (as stated above), id mapping, migration, and attaching storage volumes.
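
Side note on narrowing this down: instead of commenting tests out, main.sh can usually run a single test when given its name as the first argument (the names come from the run_test calls in main.sh, so treat this invocation as approximate):

sudo -E ./main.sh migration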

id mapping
==> Checking for dependencies
==> Available storage backends: dir btrfs lvm zfs
==> Using storage backend dir
==> Setting up directory backend in /home/test/go/src/github.com/lxc/lxd/test/tmp.w0t/Coc
==> Spawning lxd in /home/test/go/src/github.com/lxc/lxd/test/tmp.w0t/Coc
==> Spawned LXD (PID is 29883)
==> Confirming lxd is responsive
WARN[11-18|08:43:15] CGroup memory swap accounting is disabled, swap limits will be ignored. 
==> Binding to network
If this is your first time running LXD on this machine, you should also run: lxd init
To start your first container, try: lxc launch ubuntu:18.04

==> Bound to 127.0.0.1:59375
==> Setting trust password
==> Setting up networking
Device eth0 added to default
==> Configuring storage backend
==> Configuring directory backend in /home/test/go/src/github.com/lxc/lxd/test/tmp.w0t/Coc
Storage pool lxdtest-Coc created
Device root added to default
==> TEST BEGIN: id mapping
2019/11/18 08:43:20 auth - running at http://127.0.0.1:52093
Image imported as: d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7
Setup alias: testimage
Creating idmap
Starting idmap                            
==> Cleaning up
==> Killing LXD at /home/test/go/src/github.com/lxc/lxd/test/tmp.w0t/Coc
==> Deleting all containers
==> Deleting all images
==> Deleting all networks
==> Deleting all profiles
Error: The 'default' profile cannot be deleted
==> Clearing config of default profile
==> Deleting all storage pools
Storage pool lxdtest-Coc deleted
==> Checking for locked DB tables
WARN[11-18|08:43:27] Failed to delete operation e40035eb-7968-451e-bf33-3e52bc67d5e7: failed to begin transaction: sql: database is closed 
==> Checking for leftover files
==> Checking for leftover DB entries
==> Tearing down directory backend in /home/test/go/src/github.com/lxc/lxd/test/tmp.w0t/Coc


==> TEST DONE: id mapping
==> Test result: failure
migration
root@liopleurodon:~/go/src/github.com/lxc/lxd/test# ./main.sh 
==> Checking for dependencies
==> Available storage backends: dir btrfs lvm zfs
==> Using storage backend dir
==> Setting up directory backend in /home/test/go/src/github.com/lxc/lxd/test/tmp.TBb/7Wt
==> Spawning lxd in /home/test/go/src/github.com/lxc/lxd/test/tmp.TBb/7Wt
==> Spawned LXD (PID is 22746)
==> Confirming lxd is responsive
WARN[11-18|08:33:12] CGroup memory swap accounting is disabled, swap limits will be ignored. 
==> Binding to network
If this is your first time running LXD on this machine, you should also run: lxd init
To start your first container, try: lxc launch ubuntu:18.04

==> Bound to 127.0.0.1:57497
==> Setting trust password
==> Setting up networking
Device eth0 added to default
==> Configuring storage backend
==> Configuring directory backend in /home/test/go/src/github.com/lxc/lxd/test/tmp.TBb/7Wt
Storage pool lxdtest-7Wt created
Device root added to default
==> TEST BEGIN: migration
2019/11/18 08:33:17 auth - running at http://127.0.0.1:50885
==> Setting up directory backend in /home/test/go/src/github.com/lxc/lxd/test/tmp.TBb/nkJ
==> Spawning lxd in /home/test/go/src/github.com/lxc/lxd/test/tmp.TBb/nkJ
==> Spawned LXD (PID is 22971)
==> Confirming lxd is responsive
WARN[11-18|08:33:17] CGroup memory swap accounting is disabled, swap limits will be ignored. 
==> Binding to network
==> Bound to 127.0.0.1:33709
==> Setting trust password
==> Setting up networking
Device eth0 added to default
==> Configuring storage backend
==> Configuring directory backend in /home/test/go/src/github.com/lxc/lxd/test/tmp.TBb/nkJ
Storage pool lxdtest-nkJ created
Device root added to default
Generating a client certificate. This may take a minute...
2019/11/18 08:33:21 http: TLS handshake error from 127.0.0.1:45936: EOF
2019/11/18 08:33:21 http: TLS handshake error from 127.0.0.1:45938: remote error: tls: bad certificate
Client certificate stored at server:  l1
2019/11/18 08:33:21 http: TLS handshake error from 127.0.0.1:45558: EOF
2019/11/18 08:33:21 http: TLS handshake error from 127.0.0.1:45560: remote error: tls: bad certificate
Client certificate stored at server:  l2
Image imported as: d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7
Setup alias: testimage
Creating nonlive
  user.tester: foo                         
| nonlive2 | RUNNING |      |      | CONTAINER | 1         |
Creating cccp
Starting cccp                              
Creating cccp
Starting cccp                             
Creating cccp
Starting cccp                              
| nonlive | RUNNING |      |      | CONTAINER | 1         |
Creating cccp
Error: not found                          
Creating cccp
Error: not found                          
Creating cccp
Error: not found                          
Creating c1
Error: The remote "l1" doesn't exist       
Error: The remote "l2" doesn't exist
Error: Failed instance creation: Cannot refresh a running container
Error: not found
Error: Failed to fetch snapshot "snap0" of instance "c2" in project "default": No such object
+------+---------+------+------+-----------+-----------+
| NAME |  STATE  | IPV4 | IPV6 |   TYPE    | SNAPSHOTS |
+------+---------+------+------+-----------+-----------+
| c2   | STOPPED |      |      | CONTAINER | 1         |
+------+---------+------+------+-----------+-----------+
architecture: x86_64
config:
  image.architecture: x86_64
  image.description: Busybox x86_64
  image.name: busybox-x86_64
  image.os: Busybox
  volatile.base_image: d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7
  volatile.eth0.hwaddr: 00:16:3e:90:20:5a
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":65536}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":65536}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":65536}]'
  volatile.last_state.power: STOPPED
devices: {}
ephemeral: false
profiles:
- default
expires_at: 0001-01-01T00:00:00Z
architecture: x86_64
config:
  image.architecture: x86_64
  image.description: Busybox x86_64
  image.name: busybox-x86_64
  image.os: Busybox
  volatile.base_image: d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7
  volatile.eth0.hwaddr: 00:16:3e:90:20:5a
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":65536}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":65536}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":65536}]'
devices: {}
ephemeral: false
profiles:
- default
expires_at: 0001-01-01T00:00:00Z
Error: Failed to fetch snapshot "snap1" of instance "c2" in project "default": No such object
Storage volume vol1 created
Storage volume vol2 created
Storage volume copied successfully!
Storage volume moved successfully!
Error: not found
Storage volume copied successfully!
Storage volume copied successfully!
Storage volume moved successfully!
Storage volume vol2 deleted
Storage volume vol3 deleted
Storage volume vol4 deleted
Storage volume vol5 deleted
Storage volume vol6 deleted
Storage volume vol1 created
Storage volume vol2 created
Storage volume copied successfully!
Storage volume moved successfully!
Error: not found
Storage volume copied successfully!
Storage volume copied successfully!
Storage volume moved successfully!
Storage volume vol2 deleted
Storage volume vol3 deleted
Storage volume vol4 deleted
Storage volume vol5 deleted
Storage volume vol6 deleted
Storage volume vol1 created
Storage volume vol2 created
Storage volume copied successfully!
Storage volume moved successfully!
Error: not found
Storage volume copied successfully!
Storage volume copied successfully!
Storage volume moved successfully!
Storage volume vol2 deleted
Storage volume vol3 deleted
Storage volume vol4 deleted
Storage volume vol5 deleted
Storage volume vol6 deleted
Project proj created
Creating c1
Project proj deleted                      
==> CRIU: starting testing live-migration
WARN[11-18|08:35:58] Failed to delete operation f73a0f07-7963-4cfd-8bbf-e089aa9ed6a1: query deleted 0 rows instead of 1 
Creating migratee
Starting migratee                          
WARN[11-18|08:36:00] Failed to delete operation c9c58af9-86fd-44cc-a8a1-f61985c78270: query deleted 0 rows instead of 1 
WARN[11-18|08:36:02] Failed to delete operation 9960de92-b9db-468f-b4cd-d4c6065858c1: query deleted 0 rows instead of 1 
WARN[11-18|08:36:02] Failed to delete operation 8d77e91b-8376-4573-9aeb-0eaf4d61bff7: query deleted 0 rows instead of 1 
EROR[11-18|08:36:05] Error collecting checkpoint log file     err="lstat /home/test/go/src/github.com/lxc/lxd/test/tmp.TBb/7Wt/containers/migratee/state/restore.log: no such file or directory"
Error: Migrate: Failed to run: /home/test/go/bin/lxd forkmigrate migratee /home/test/go/src/github.com/lxc/lxd/test/tmp.TBb/7Wt/containers /home/test/go/src/github.com/lxc/lxd/test/tmp.TBb/7Wt/logs/migratee/lxc.conf /home/test/go/src/github.com/lxc/lxd/test/tmp.TBb/7Wt/containers/migratee/state false: 
Try `lxc info --show-log l1:migratee` for more info
==> Cleaning up
==> Killing LXD at /home/test/go/src/github.com/lxc/lxd/test/tmp.TBb/7Wt
==> Deleting all containers
==> Deleting all images
==> Deleting all networks
==> Deleting all profiles
Error: The 'default' profile cannot be deleted
==> Clearing config of default profile
==> Deleting all storage pools
Storage pool lxdtest-7Wt deleted
==> Checking for locked DB tables
==> Checking for leftover files
==> Checking for leftover DB entries
==> Tearing down directory backend in /home/test/go/src/github.com/lxc/lxd/test/tmp.TBb/7Wt
==> Killing LXD at /home/test/go/src/github.com/lxc/lxd/test/tmp.TBb/nkJ
==> Deleting all containers
==> Deleting all images
==> Deleting all networks
==> Deleting all profiles
Error: The 'default' profile cannot be deleted
==> Clearing config of default profile
==> Deleting all storage pools
Storage pool lxdtest-nkJ deleted
==> Checking for locked DB tables
==> Checking for leftover files
==> Checking for leftover DB entries
==> Tearing down directory backend in /home/test/go/src/github.com/lxc/lxd/test/tmp.TBb/nkJ

==> TEST DONE: migration
==> Test result: failure
attaching storage volumes
==> Checking for dependencies
==> Available storage backends: dir btrfs lvm zfs
==> Using storage backend dir
==> Setting up directory backend in /home/test/go/src/github.com/lxc/lxd/test/tmp.KYx/2Kz
==> Spawning lxd in /home/test/go/src/github.com/lxc/lxd/test/tmp.KYx/2Kz
==> Spawned LXD (PID is 30715)
==> Confirming lxd is responsive
WARN[11-18|08:44:54] CGroup memory swap accounting is disabled, swap limits will be ignored. 
==> Binding to network
If this is your first time running LXD on this machine, you should also run: lxd init
To start your first container, try: lxc launch ubuntu:18.04

==> Bound to 127.0.0.1:60265
==> Setting trust password
==> Setting up networking
Device eth0 added to default
==> Configuring storage backend
==> Configuring directory backend in /home/test/go/src/github.com/lxc/lxd/test/tmp.KYx/2Kz
Storage pool lxdtest-2Kz created
Device root added to default
==> TEST BEGIN: attaching storage volumes
2019/11/18 08:44:59 auth - running at http://127.0.0.1:47579
Image imported as: d3245ce429109d5a5fb7d3119f4fd7216bdfe890aeb3b478be9533b7300d78a7
Setup alias: testimage
Storage volume testvolume created
Creating c1
Starting c1                               
Creating c2
Starting c2                               
Error: Failed to get ID map: Not enough uid/gid available for the container
==> Cleaning up
==> Killing LXD at /home/test/go/src/github.com/lxc/lxd/test/tmp.KYx/2Kz
==> Deleting all containers
==> Deleting all images
==> Deleting all networks
==> Deleting all profiles
Error: The 'default' profile cannot be deleted
==> Clearing config of default profile
==> Deleting all storage pools
Error: storage pool "lxdtest-2Kz" has volumes attached to it
==> Checking for locked DB tables
WARN[11-18|08:45:14] Failed to delete operation 5dc367d0-85f3-4565-8184-274dadaf64ea: failed to begin transaction: sql: database is closed 
==> Checking for leftover files
==> Checking for leftover DB entries
DB table storage_pools is not empty, content:
1|lxdtest-2Kz|dir||1
DB table storage_pools_nodes is not empty, content:
1|1|1
DB table storage_pools_config is not empty, content:
1|1|1|source|/home/test/go/src/github.com/lxc/lxd/test/tmp.KYx/2Kz/storage-pools/lxdtest-2Kz
DB table storage_volumes is not empty, content:
1|testvolume|1|1|2||0|1
DB table storage_volumes_config is not empty, content:
3|1|volatile.idmap.last|[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":65536}]
4|1|volatile.idmap.next|[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":65536}]
==> Tearing down directory backend in /home/test/go/src/github.com/lxc/lxd/test/tmp.KYx/2Kz

==> TEST DONE: attaching storage volumes
==> Test result: failure
I'm not sure whether being unable to run those tests is a deal-breaker.

I'd just like to say thank you so much for your help and responsiveness - my machine has been causing no end of problems.

@stgraber stgraber commented Nov 18, 2019

idmap and attaching storage volumes are both caused by subuid/subgid issues. You can fix that by just deleting /etc/subuid and /etc/subgid.

The migration tests are failing because of CRIU being its usual unreliable self; we usually do not have CRIU installed on our test systems, so I'd say just apt-get remove criu and that should fix it.
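
Concretely, per the above, something like this (as root) should do it:

rm /etc/subuid /etc/subgid   # drop the conflicting uid/gid maps so LXD falls back to its defaults
apt-get remove criu          # with CRIU gone, the live-migration part of the tests is skipped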

@DBaum1 DBaum1 commented Nov 19, 2019

I finally got all the tests to work! Your advice, plus changing the entries for root and lxd in /etc/sub{g,u}id to:

root:1000000:1000000000
lxd:1000000:1000000000

fixed everything! Thanks!
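
In case it helps anyone else, the resulting maps can be double-checked with:

grep -E '^(root|lxd):' /etc/sub{u,g}id
# expected output on this setup:
# /etc/subuid:root:1000000:1000000000
# /etc/subuid:lxd:1000000:1000000000
# /etc/subgid:root:1000000:1000000000
# /etc/subgid:lxd:1000000:1000000000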

@stgraber stgraber commented Nov 19, 2019

Excellent!
