
multipass fails to launch on raspberry pi 4 with 18.04 because of missing qemu parameter #1376

Closed
NickZ opened this issue Feb 20, 2020 · 48 comments
Labels
enhancement, medium importance

Comments

@NickZ

NickZ commented Feb 20, 2020

Launching a VM on a raspberry pi 4 with ubuntu server 18.04 fails:

$ multipass launch
launch failed: The following errors occurred:                                   
deep-boa: shutdown called while starting

It seems that when launching an image on arm64 hosts, multipass does not add the necessary -machine virt parameter.

#snap logs multipass
2020-02-20T21:08:30Z multipassd[570]: process working dir '/snap/multipass/1599/qemu'
2020-02-20T21:08:30Z multipassd[570]: process program 'qemu-system-aarch64'
2020-02-20T21:08:30Z multipassd[570]: process arguments '--enable-kvm, -device, virtio-scsi-pci,id=scsi0, -drive, file=/var/snap/multipass/common/data/multipassd/vault/instances/deep-boa/ubuntu-18.04-server-cloudimg-arm64.img,if=none,format=qcow2,discard=unmap,id=hda, -device, scsi-hd,drive=hda,bus=scsi0.0, -smp, 1, -m, 1024M, -device, virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:75:0d:8a, -netdev, tap,id=hostnet0,ifname=tap-408e17d5db5,script=no,downscript=no, -qmp, stdio, -cpu, host, -chardev, null,id=char0, -serial, chardev:char0, -nographic, -cdrom, /var/snap/multipass/common/data/multipassd/vault/instances/deep-boa/cloud-init-config.iso'
2020-02-20T21:08:30Z multipassd[8864]: Applying AppArmor policy: multipass.deep-boa.qemu-system-aarch64
2020-02-20T21:08:30Z multipassd[570]: process started
2020-02-20T21:08:30Z multipassd[570]: qemu-system-aarch64: --enable-kvm: No machine specified, and there is no default
Use -machine help to list supported machines
2020-02-20T21:08:30Z multipassd[570]: 
2020-02-20T21:08:30Z multipassd[570]: process finished with exit code 1

Launching a virtual machine with qemu using those same parameters plus -machine virt added, it launches and runs fine. It looks like multipass just needs to add that parameter when launching on arm64 hosts?
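A minimal sketch of that idea in Python (illustrative only; Multipass itself is written in C++, and the function name here is hypothetical): append -machine virt to the argument list whenever the host architecture is arm64.

```python
import platform

def qemu_args(base_args, host_arch=None):
    """Build the qemu argument list, adding a machine type on arm64 hosts,
    where qemu-system-aarch64 has no default machine (sketch of the proposed fix)."""
    arch = host_arch or platform.machine()
    args = list(base_args)
    if arch in ("aarch64", "arm64"):
        args += ["-machine", "virt"]
    return args
```

On an x86_64 host the list is returned unchanged; on an aarch64 host it gains -machine virt.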

system info:

$ uname -a                                                       
Linux ubuntu 4.19.97-v8-27 #27 SMP PREEMPT Mon Jan 20 10:37:34 PST 2020 aarch64 aarch64 aarch64 GNU/Linux

 $ snap list multipass --all
Name       Version  Rev   Tracking  Publisher   Notes
multipass  1.0.2    1599  beta      canonical✓  classic

 $ snap version
snap    2.43.3
snapd   2.43.3
series  16
ubuntu  18.04
kernel  4.19.97-v8-27

$ kvm-ok 
INFO: /dev/kvm exists
KVM acceleration can be used
@townsend2010
Collaborator

Yes, we need to be better about detecting the host architecture and then setting any specific qemu options.

@townsend2010 townsend2010 added the enhancement and medium importance labels Feb 26, 2020
@Saviq
Collaborator

Saviq commented Feb 26, 2020

Huh, I could've sworn I replied here… I wonder why -machine is required, and whether it can be made the default.

@Chanakan5591

Chanakan5591 commented May 4, 2020

QEMU on ARM and ARM64 has no default machine type set, and one can only be set by recompiling QEMU. So maybe try adding the -machine flag when running on ARM? I think the reason for not setting a default -machine flag is this (from Documentation/Platforms/ARM):

Because ARM systems differ so much and in fundamental ways, typically operating system or firmware images intended to run on one machine will not run at all on any other. This is often surprising for new users who are used to the x86 world where every system looks like a standard PC. (Once the kernel has booted, most userspace software cares much less about the detail of the hardware.)

The documentation says we can use -machine virt for a generic virtual machine.

@anonymouse64

What's the status of this bug? Is it within the scope of multipass to just add this argument when running on e.g. a Raspberry Pi?

@Saviq
Collaborator

Saviq commented May 5, 2020

Hi @anonymouse64, we're working hard on an LXD backend (it will be an option within a couple of weeks), at which point LXD, not Multipass, will handle qemu directly.

You should already be able to try this out with LXD directly for the moment.

@Samuel-Berger

It does not work when building tmux on a Raspberry Pi 3 either:

user@rbpi3:~/git/snap-tmux$ snapcraft --debug
Launching a VM.
launch failed: Internal error: qemu-system-x86_64 failed getting vmstate (Process returned exit code: 1) with output:
qemu-system-aarch64: -nographic: No machine specified, and there is no default
Use -machine help to list supported machines

An error occurred with the instance when trying to launch with 'multipass': returned exit code 2.
Ensure that 'multipass' is setup correctly and try again.

Output from log:

sudoer@rbpi3:~$ sudo snap logs multipass
2020-06-23T20:59:56Z multipassd[1242]: process arguments '--enable-kvm, -device, virtio-scsi-pci,id=scsi0, -drive, file=/var/snap/multipass/common/data/multipassd/vault/instances/test18/ubuntu-18.04-server-cloudimg-arm64.img,if=none,format=qcow2,discard=unmap,id=hda, -device, scsi-hd,drive=hda,bus=scsi0.0, -smp, 1, -m, 1024M, -device, virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:f1:5c:b6, -netdev, tap,id=hostnet0,ifname=tap-dd34acb92a4,script=no,downscript=no, -qmp, stdio, -cpu, host, -chardev, null,id=char0, -serial, chardev:char0, -nographic, -cdrom, /var/snap/multipass/common/data/multipassd/vault/instances/test18/cloud-init-config.iso'
2020-06-23T20:59:57Z multipassd[1242]: qemu-system-aarch64:
2020-06-23T20:59:57Z multipassd[1242]: qemu-system-aarch64: -nographic: No machine specified, and there is no default
Use -machine help to list supported machines
2020-06-23T20:59:57Z multipassd[1242]: attempting to release non-existant addr: 52:54:00:76:0b:b2
2020-06-23T20:59:57Z dnsmasq[1466]: reading /etc/resolv.conf
2020-06-23T20:59:57Z dnsmasq[1466]: using local addresses only for domain multipass
2020-06-23T20:59:57Z dnsmasq[1466]: using nameserver 127.0.0.53#53
2020-06-23T20:59:57Z dnsmasq[1466]: reading /etc/resolv.conf
2020-06-23T20:59:57Z dnsmasq[1466]: using local addresses only for domain multipass
2020-06-23T20:59:57Z dnsmasq[1466]: using nameserver 127.0.0.53#53

@anonymouse64

The error message here is now a little confusing; I can't tell whether it's trying to use qemu-system-x86_64 or not...

$ sudo multipass launch focal
launch failed: Internal error: qemu-system-x86_64 failed getting vmstate (Process returned exit code: 1) with output:
qemu-system-aarch64: -nographic: No machine specified, and there is no default
Use -machine help to list supported machines

@townsend2010
Collaborator

Hey @anonymouse64,

The qemu-system-x86_64 string there is hardcoded and was added recently. It should be smarter about which qemu executable is being used, so I'll file a new bug for that. That said, it's really using qemu-system-aarch64, which has no default machine type.
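A sketch of what "smarter" could look like (a hypothetical helper, not Multipass's actual code): derive the emulator name from the host architecture instead of hardcoding qemu-system-x86_64 in the error message.

```python
import platform

def qemu_binary(host_arch=None):
    """Pick the qemu system emulator name matching the host architecture
    (the alias table here is an assumption for illustration)."""
    arch = host_arch or platform.machine()
    # Common alternate spellings reported by some kernels/tools
    aliases = {"arm64": "aarch64", "amd64": "x86_64"}
    return "qemu-system-" + aliases.get(arch, arch)
```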

@townsend2010
Collaborator

townsend2010 commented Jul 17, 2020

Actually, it wasn't added that recently, and I'm the one who did it, lol

@tbalthazar

Hi @townsend2010 👋
In the meantime, is there a known workaround to start a VM?

ubuntu@ubuntu:~$ uname -a
Linux ubuntu 5.4.0-1015-raspi #15-Ubuntu SMP Fri Jul 10 05:34:24 UTC 2020 aarch64 aarch64 aarch64 GNU/Linux

ubuntu@ubuntu:~$ snap list multipass --all
Name       Version                  Rev   Tracking     Publisher   Notes
multipass  1.5.0-dev.213+gd6af4d6a  2404  latest/edge  canonical✓  -

ubuntu@ubuntu:~$ snap version
snap    2.45.2
snapd   2.45.2
series  16
ubuntu  20.04
kernel  5.4.0-1015-raspi
ubuntu@ubuntu:~$ multipass launch
launch failed: Internal error: qemu-system-x86_64 failed getting vmstate (Process returned exit code: 1) with output:
qemu-system-aarch64: -nographic: No machine specified, and there is no default
Use -machine help to list supported machines

@townsend2010
Collaborator

Hi @tbalthazar,

At this time, the only thing I can think of is to install the LXD snap and switch to using the LXD driver in multipass via:

$ sudo multipass set local.driver=lxd

Hope this helps!

@tbalthazar

Thanks @townsend2010, it did help!
For the record, I did:

$ sudo snap install lxd
$ sudo lxd init --auto
$ sudo snap connect multipass:lxd lxd
$ sudo multipass set local.driver=lxd
$ multipass launch
$ 🎉 

@tbalthazar

Something doesn't work: creating a VM times out, and the instances don't get an IP:

$ multipass list
Name                    State             IPv4             Image
doting-sole             Running           UNKNOWN          Ubuntu 18.04 LTS
opulent-mosquito        Running           UNKNOWN          Ubuntu 18.04 LTS

I cannot delete/stop them:

ubuntu@ubuntu:~$ multipass stop doting-sole
stop failed: unix:///var/snap/lxd/common/lxd/unix.socket@1.0/operations/cb6da465-0fb3-4f80-9c56-f22452353f11/wait?project=multipass: Operation canceled

I'll investigate...

@anonymouse64

I had the same experience as @tbalthazar, so I tried just using lxd directly and didn't get much further:

$ sudo lxc launch --vm images:ubuntu/20.04/cloud
Creating the instance
Instance name is: logical-hog               
Starting logical-hog
$ sudo lxc shell logical-hog
Error: Failed to connect to lxd-agent

Probably worth an upstream LXD issue/bug report.

@townsend2010
Collaborator

Maybe LXD's dnsmasq is not running correctly on this architecture? Without a machine to test on, or logs, it's hard to know for sure.

@tbalthazar

Here are the Multipass logs (sudo snap logs -f multipass) when I try to launch a new VM (which times out):

2020-08-01T07:33:03Z multipassd[3036]: Did not find any supported products in "appliance"
2020-08-01T07:33:03Z multipassd[3036]: QFileInfo::absolutePath: Constructed with empty filename
2020-08-01T07:33:03Z multipassd[3036]: Creating container with stream: https://cloud-images.ubuntu.com/releases/, id: 126d5fd8ab8fd00a293e531714404d71ce362b4083bff0291a9b7046f965ef2b

and when I try to stop a VM that's running without an IP:

2020-08-01T07:29:34Z multipassd[3036]: Cannot open ssh session on "doting-sole" shutdown: failed to determine IP address
2020-08-01T07:29:34Z multipassd[3036]: No mounts to stop for instance "doting-sole"
2020-08-01T07:30:04Z multipassd[3036]: Request timed out: GET unix:///var/snap/lxd/common/lxd/unix.socket@1.0/operations/a71cb86b-a2bc-44b2-9c10-5bd784a6c3c1/wait?project=multipass

There are no logs in sudo snap logs -f lxd.

@townsend2010
Collaborator

Hi @tbalthazar,

I wonder what LXD thinks about these instances. After launching an instance, could you please run:

$ lxc list --project=multipass
and
$ lxc show <instance_name> --project=multipass

and post the output?

At least the image hash you posted above tells me that it is indeed downloading the arm64 image.

Thanks!

@tbalthazar

Hey @townsend2010,

Thanks for getting back to me.
I removed the multipass and lxc snaps and started from scratch:

ubuntu@ubuntu:~$ sudo snap install lxd
lxd 4.4 from Canonical✓ installed
ubuntu@ubuntu:~$ sudo snap install multipass --candidate
multipass (candidate) 1.3.0 from Canonical✓ installed
ubuntu@ubuntu:~$ sudo lxd init --auto
ubuntu@ubuntu:~$ sudo snap connect multipass:lxd lxd
ubuntu@ubuntu:~$ sudo multipass set local.driver=lxd
ubuntu@ubuntu:~$ multipass launch
launch failed: The following errors occurred:
evocative-gar: timed out waiting for response
ubuntu@ubuntu:~$ multipass list
Name                    State             IPv4             Image
evocative-gar           Running           UNKNOWN          Ubuntu 18.04 LTS
ubuntu@ubuntu:~$ lxc list --project=multipass
To start your first instance, try: lxc launch ubuntu:18.04

+---------------+---------+------+------+-----------------+-----------+
|     NAME      |  STATE  | IPV4 | IPV6 |      TYPE       | SNAPSHOTS |
+---------------+---------+------+------+-----------------+-----------+
| evocative-gar | RUNNING |      |      | VIRTUAL-MACHINE | 0         |
+---------------+---------+------+------+-----------------+-----------+
ubuntu@ubuntu:~$ lxc show evocative-gar --project=multipass
Error: unknown command "show" for "lxc"

Did you mean this?
        stop

ubuntu@ubuntu:~$ lxc info evocative-gar --project=multipass
Name: evocative-gar
Location: none
Remote: unix://
Architecture: aarch64
Created: 2020/08/03 13:39 UTC
Status: Running
Type: virtual-machine
Profiles: default
Pid: 4109122
Ips:
  eth0: inet6   fe80::216:3eff:febb:576d        tapd2c8ae55
Resources:
  Processes: -1
  Network usage:
    eth0:
      Bytes received: 0B
      Bytes sent: 0B
      Packets received: 0
      Packets sent: 0

Let me know if I can provide more info.

@townsend2010
Collaborator

Hey @tbalthazar,

Thanks! Sorry about the wrong lxc show command, but you figured it out:)

As suspected, there is something wrong with the networking. Could you please provide the following:

$ lxc network list

If you see a mpbr0 name in the output, please output the following:

$ lxc network show mpbr0

and

$ lxc network info mpbr0

Thanks!

@tbalthazar

here you are @townsend2010 😉

ubuntu@ubuntu:~$ lxc network list
+---------+----------+---------+----------------+---------------------------+------------------------------+---------+
|  NAME   |   TYPE   | MANAGED |      IPV4      |           IPV6            |         DESCRIPTION          | USED BY |
+---------+----------+---------+----------------+---------------------------+------------------------------+---------+
| cni0    | bridge   | NO      |                |                           |                              | 0       |
+---------+----------+---------+----------------+---------------------------+------------------------------+---------+
| docker0 | bridge   | NO      |                |                           |                              | 0       |
+---------+----------+---------+----------------+---------------------------+------------------------------+---------+
| eth0    | physical | NO      |                |                           |                              | 0       |
+---------+----------+---------+----------------+---------------------------+------------------------------+---------+
| lxdbr0  | bridge   | YES     | 10.189.31.1/24 | fd42:7e7a:704c:6cc0::1/64 |                              | 1       |
+---------+----------+---------+----------------+---------------------------+------------------------------+---------+
| mpbr0   | bridge   | YES     | 10.5.101.1/24  | fd42:9ef7:1148:4f27::1/64 | Network bridge for Multipass | 2       |
+---------+----------+---------+----------------+---------------------------+------------------------------+---------+
| wlan0   | physical | NO      |                |                           |                              | 0       |
+---------+----------+---------+----------------+---------------------------+------------------------------+---------+
ubuntu@ubuntu:~$ lxc network show mpbr0
config:
  ipv4.address: 10.5.101.1/24
  ipv4.nat: "true"
  ipv6.address: fd42:9ef7:1148:4f27::1/64
  ipv6.nat: "true"
  volatile.bridge.hwaddr: 00:16:3e:68:06:6c
description: Network bridge for Multipass
name: mpbr0
type: bridge
used_by:
- /1.0/instances/evocative-gar?project=multipass
- /1.0/profiles/default?project=multipass
managed: true
status: Created
locations:
- none
ubuntu@ubuntu:~$ lxc network info mpbr0
Name: mpbr0
MAC address: 00:16:3e:68:06:6c
MTU: 1500
State: up

Ips:
  inet  10.5.101.1
  inet6 fd42:9ef7:1148:4f27::1
  inet6 fe80::216:3eff:fe68:66c

Network usage:
  Bytes received: 5.01kB
  Bytes sent: 6.05kB
  Packets received: 27
  Packets sent: 34

@townsend2010
Collaborator

Hey @tbalthazar,

So I looked through LXD issues, and it seems we might be hitting https://github.com/lxc/lxd/issues/7191, particularly https://github.com/lxc/lxd/issues/7191#issuecomment-613775164.

@tbalthazar

Thanks for looking into it @townsend2010 🤔
Any idea how I could test this security.secureboot=false option mentioned in the comment to see if that helps in our context?

@townsend2010
Collaborator

Any idea how I could test this security.secureboot=false option mentioned in the comment to see if that helps in our context?

Maybe lxc config set security.secureboot=false will do it?

@tbalthazar

No luck:

ubuntu@ubuntu:~$ lxc list --project=multipass
+---------------+---------+------+------+-----------------+-----------+
|     NAME      |  STATE  | IPV4 | IPV6 |      TYPE       | SNAPSHOTS |
+---------------+---------+------+------+-----------------+-----------+
| evocative-gar | RUNNING |      |      | VIRTUAL-MACHINE | 0         |
+---------------+---------+------+------+-----------------+-----------+
ubuntu@ubuntu:~$ sudo lxc config set evocative-gar security.secureboot=false
Error: not found
ubuntu@ubuntu:~$ sudo lxc config set security.secureboot=false
Error: cannot set 'security.secureboot' to 'false': unknown key
ubuntu@ubuntu:~$ sudo lxc config set security.secureboot=0
Error: cannot set 'security.secureboot' to '0': unknown key

@Saviq
Collaborator

Saviq commented Aug 3, 2020

ubuntu@ubuntu:~$ sudo lxc config set evocative-gar security.secureboot=false
Error: not found

You need --project=multipass here, too!

@townsend2010
Collaborator

Right, so try

$ lxc config set evocative-gar security.secureboot=false --project=multipass
$ lxc stop evocative-gar --project=multipass --force
$ multipass start evocative-gar

If we're lucky, that should do it!

@tbalthazar

Thanks for bearing with me!
So, I did:

ubuntu@ubuntu:~$ sudo lxc config set evocative-gar security.secureboot=false --project=multipass
Error: Only user.* keys can be updated on running VMs
ubuntu@ubuntu:~$ sudo lxc stop evocative-gar --project=multipass --force
ubuntu@ubuntu:~$ lxc config set evocative-gar security.secureboot=false --project=multipass
ubuntu@ubuntu:~$ multipass start evocative-gar
ubuntu@ubuntu:~$ multipass ls
Name                    State             IPv4             Image
evocative-gar           Running           10.5.101.9       Ubuntu 18.04 LTS
ubuntu@ubuntu:~$ multipass shell evocative-gar
Welcome to Ubuntu 18.04.4 LTS (GNU/Linux 4.15.0-112-generic aarch64)

Thanks a lot for your help, @townsend2010, it worked. 🎉

Any rough idea about when this could be fixed upstream so we have the same seamless experience of snap install multipass && multipass launch on ARM (the PI 4 with 8GB of RAM is a great device to have around and play with VMs)?

Anyway, thanks again for your help and all your work on multipass! 🙏

@townsend2010
Collaborator

@tbalthazar,

Awesome that it works now! Too bad about having to jump through so many hoops to get it going.

Here is the upstream bug for UEFI secure boot on arm64: https://bugs.launchpad.net/ubuntu/+source/shim/+bug/1862279

I've no idea if the signed shim on arm64 will be backported to 18.04.

@zfeng8

zfeng8 commented Aug 7, 2020

@tbalthazar @townsend2010 I think the commands only work on Ubuntu 18.04. When I tried the first command on Ubuntu 20.04, it returned Error: not found

@guyluz11

@zfeng8 correct: on 18.04 it returns Error: not found, and on 20.04 the commands do get executed.
Tested on Armbian, on a NanoPi K1 device

@rathboma

rathboma commented Sep 4, 2020

👋 Hello amazing people. I just wanted to report the same issue on Raspberry Pi 4, running Ubuntu 20.04.

$ multipass launch
launch failed: Internal error: qemu-system-x86_64 failed getting vmstate (Process returned exit code: 1) with output:
qemu-system-aarch64: -nographic: No machine specified, and there is no default
Use -machine help to list supported machines

Doesn't look like there's a resolution yet, but if there's any way I can help, please let me know

@panlinux

panlinux commented Sep 7, 2020

multipass 1.5.0-dev.92+g7bb6af5b 2541 latest/edge canonical✓

Same error for me. Fresh Ubuntu focal arm64 host (8 CPUs, 64 GB of RAM, this ain't no Pi 4):

$ multipass launch
launch failed: Internal error: qemu-system-x86_64 failed getting vmstate (Process returned exit code: 1) with output:
qemu-system-aarch64: -nographic: No machine specified, and there is no default
Use -machine help to list supported machines

snap logs shows:

2020-09-07T18:32:32Z multipassd[3601]: process working dir '/snap/multipass/2541/qemu'
2020-09-07T18:32:32Z multipassd[3601]: process program 'qemu-system-aarch64'
2020-09-07T18:32:32Z multipassd[3601]: process arguments '--enable-kvm, -device, virtio-scsi-pci,id=scsi0, -drive, file=/var/snap/multipass/common/data/multipassd/vault/instances/amazed-quetzal/ubuntu-20.04-server-cloudimg-arm64.img,if=none,format=qcow2,discard=unmap,id=hda, -device, scsi-hd,drive=hda,bus=scsi0.0, -smp, 1, -m, 1024M, -device, virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:0a:0f:95, -netdev, tap,id=hostnet0,ifname=tap-c24f44b1ad6,script=no,downscript=no, -qmp, stdio, -cpu, host, -chardev, null,id=char0, -serial, chardev:char0, -nographic, -cdrom, /var/snap/multipass/common/data/multipassd/vault/instances/amazed-quetzal/cloud-init-config.iso'
2020-09-07T18:32:32Z multipassd[3601]: starting: qemu-system-aarch64 -nographic -dump-vmstate /tmp/multipassd.th3601
2020-09-07T18:32:32Z multipassd[3601]: qemu-system-aarch64: -nographic:
2020-09-07T18:32:32Z multipassd[3601]: qemu-system-aarch64: -nographic: No machine specified, and there is no default
Use -machine help to list supported machines
2020-09-07T18:32:32Z multipassd[3601]: attempting to release non-existant addr: 52:54:00:0a:0f:95

I hit this when I was trying to build a snap on arm64 using snapcraft. The default experience will hit this bug, as multipass is the default build environment.

@Saviq
Collaborator

Saviq commented Sep 8, 2020

Hi @panlinux, we're moving to LXD as our default KVM backend very soon, could you please verify that you can lxc launch --vm ubuntu: on this host?

@panlinux

panlinux commented Sep 8, 2020

Nope, that did not work:

$ lxc launch --vm ubuntu:
Creating the instance
Instance name is: accurate-gorilla            
Starting accurate-gorilla
Error: Failed to run: /snap/lxd/current/bin/lxd forklimits limit=memlock:unlimited:unlimited -- /snap/lxd/16945/bin/qemu-system-aarch64 -S -name accurate-gorilla -uuid c19b5c1c-7c2e-4c54-87e7-dce72b589e9f -daemonize -cpu host -nographic -serial chardev:console -nodefaults -no-reboot -no-user-config -sandbox on,obsolete=deny,elevateprivileges=allow,spawn=deny,resourcecontrol=deny -readconfig /var/snap/lxd/common/lxd/logs/accurate-gorilla/qemu.conf -pidfile /var/snap/lxd/common/lxd/logs/accurate-gorilla/qemu.pid -D /var/snap/lxd/common/lxd/logs/accurate-gorilla/qemu.log -chroot /var/snap/lxd/common/lxd/virtual-machines/accurate-gorilla -smbios type=2,manufacturer=Canonical Ltd.,product=LXD -runas lxd: : exit status 1
Try `lxc info --show-log local:accurate-gorilla` for more info

Log has:

$ lxc info --show-log local:accurate-gorilla
Name: accurate-gorilla
Location: none
Remote: unix://
Architecture: aarch64
Created: 2020/09/08 12:33 UTC
Status: Stopped
Type: virtual-machine
Profiles: default

Log:

qemu-system-aarch64: warning: gic-version=host not relevant with kernel-irqchip=off as only userspace GICv2 is supported. Using v2 ...
qemu-system-aarch64:/var/snap/lxd/common/lxd/logs/accurate-gorilla/qemu.conf:63: MSI-X is not supported by interrupt controller

@panlinux

panlinux commented Sep 8, 2020

I filed an lxd bug about that: https://github.com/lxc/lxd/issues/7846

@Saviq
Collaborator

Saviq commented Sep 10, 2020

@panlinux now that the fix got merged, you should be able to:

$ snap refresh lxd --channel edge
$ snap refresh multipass --channel edge  # stable should work for the most part, too
$ snap connect multipass:lxd lxd:
$ multipass set local.driver=lxd
$ multipass launch

@tbalthazar

@Saviq thanks for the heads-up. It didn't work for me:

ubuntu@ubuntu:~$ uname -a
Linux ubuntu 5.4.0-1015-raspi #15-Ubuntu SMP Fri Jul 10 05:34:24 UTC 2020 aarch64 aarch64 aarch64 GNU/Linux
ubuntu@ubuntu:~$ sudo snap install lxd --channel edge
lxd (edge) git-ec14e2a from Canonical✓ installed
ubuntu@ubuntu:~$ sudo snap install multipass --channel edge
multipass (edge) 1.5.0-dev.94+g793ee25c from Canonical✓ installed
ubuntu@ubuntu:~$ sudo snap connect multipass:lxd lxd
ubuntu@ubuntu:~$ sudo multipass set local.driver=lxd
ubuntu@ubuntu:~$ multipass launch
launch failed: multipass socket access denied
Please check that you have read/write permissions to '/var/snap/multipass/common/multipass_socket'
ubuntu@ubuntu:~$ sudo multipass launch
launch failed: unix://multipass/var/snap/lxd/common/lxd/unix.socket@1.0/virtual-machines?project=multipass: Bad Request

@townsend2010
Collaborator

Hey @tbalthazar,

The original launch failed: multipass socket access denied error is due to a long delay while multipassd starts up when using the LXD driver. Just keep retrying without sudo and it will eventually work. A fix for this will come in the future, but due to the nature of the multipass/lxd interaction, fixing it isn't trivial.

@tbalthazar

@townsend2010 I tried again and it still doesn't work.
I rebooted the Pi and ran multipass launch; it re-downloaded the image (which surprised me, since I had already downloaded it before rebooting), but the error is still there:

ubuntu@ubuntu:~$ multipass launch
launch failed: unix://multipass/var/snap/lxd/common/lxd/unix.socket@1.0/virtual-machines?project=multipass: Bad Request
ubuntu@ubuntu:~$ multipass launch
launch failed: unix://multipass/var/snap/lxd/common/lxd/unix.socket@1.0/virtual-machines?project=multipass: Bad Request
ubuntu@ubuntu:~$ multipass launch
launch failed: unix://multipass/var/snap/lxd/common/lxd/unix.socket@1.0/virtual-machines?project=multipass: Bad Request
ubuntu@ubuntu:~$ multipass launch
launch failed: unix://multipass/var/snap/lxd/common/lxd/unix.socket@1.0/virtual-machines?project=multipass: Bad Request
ubuntu@ubuntu:~$ multipass launch
launch failed: unix://multipass/var/snap/lxd/common/lxd/unix.socket@1.0/virtual-machines?project=multipass: Bad Request
ubuntu@ubuntu:~$ multipass launch
launch failed: unix://multipass/var/snap/lxd/common/lxd/unix.socket@1.0/virtual-machines?project=multipass: Bad Request

@townsend2010
Collaborator

Hi @tbalthazar,

Hmm, LXD is not happy with something we are sending it. Could you run lxc monitor --debug in a separate terminal and then run the launch? Perhaps it will give us some idea of what it doesn't like.

@tbalthazar

@townsend2010 that's what I get:

location: none
metadata:
  context:
    ip: '@'
    method: GET
    url: /1.0?project=multipass
    user: ""
  level: dbug
  message: Handling
timestamp: "2020-09-10T20:52:54.773394427Z"
type: logging


location: none
metadata:
  context:
    ip: '@'
    method: GET
    url: /1.0/projects/multipass?project=multipass
    user: ""
  level: dbug
  message: Handling
timestamp: "2020-09-10T20:52:54.778581129Z"
type: logging


location: none
metadata:
  context:
    ip: '@'
    method: GET
    url: /1.0/networks/mpbr0?project=multipass
    user: ""
  level: dbug
  message: Handling
timestamp: "2020-09-10T20:52:54.782224139Z"
type: logging


location: none
metadata:
  context:
    ip: '@'
    method: GET
    url: /1.0/virtual-machines/incredible-whale?project=multipass
    user: ""
  level: dbug
  message: Handling
timestamp: "2020-09-10T20:52:54.793861125Z"
type: logging


location: none
metadata:
  context: {}
  level: dbug
  message: 'Database error: &errors.errorString{s:"No such object"}'
timestamp: "2020-09-10T20:52:54.795369262Z"
type: logging


location: none
metadata:
  context:
    ip: '@'
    method: GET
    url: /1.0/images/27a0e12a5b9aab193a987863e4c8ee8d308b1b0e20cb987759a35d50aa187360?project=multipass
    user: ""
  level: dbug
  message: Handling
timestamp: "2020-09-10T20:52:54.79818337Z"
type: logging


location: none
metadata:
  context:
    ip: '@'
    method: GET
    url: /1.0?project=multipass
    user: ""
  level: dbug
  message: Handling
timestamp: "2020-09-10T20:52:54.805488186Z"
type: logging


location: none
metadata:
  context:
    ip: '@'
    method: GET
    url: /1.0/virtual-machines/incredible-whale/state?project=multipass
    user: ""
  level: dbug
  message: Handling
timestamp: "2020-09-10T20:52:54.813788234Z"
type: logging


location: none
metadata:
  context: {}
  level: dbug
  message: 'Database error: &errors.errorString{s:"No such object"}'
timestamp: "2020-09-10T20:52:54.815304056Z"
type: logging


location: none
metadata:
  context:
    ip: '@'
    method: POST
    url: /1.0/virtual-machines?project=multipass
    user: ""
  level: dbug
  message: Handling
timestamp: "2020-09-10T20:52:54.819567339Z"
type: logging


location: none
metadata:
  context: {}
  level: dbug
  message: Responding to instance create
timestamp: "2020-09-10T20:52:54.819624654Z"
type: logging


location: none
metadata:
  context:
    ip: '@'
    method: DELETE
    url: /1.0/virtual-machines/incredible-whale?project=multipass
    user: ""
  level: dbug
  message: Handling
timestamp: "2020-09-10T20:52:54.826908302Z"
type: logging


location: none
metadata:
  context: {}
  level: dbug
  message: 'Database error: &errors.errorString{s:"No such object"}'
timestamp: "2020-09-10T20:52:54.827601056Z"
type: logging

@townsend2010
Collaborator

Hey @tbalthazar,

Well, lxc monitor sure didn't provide anything helpful. The next thing to do is to enable trace logging and then get the output from journalctl.

First thing to do is:
$ sudo systemctl edit snap.multipass.multipassd.service
and then put in the following:

[Service]
ExecStart=
ExecStart=/usr/bin/snap run multipass.multipassd --verbosity trace

then do:
$ sudo snap restart multipass

Then from journalctl, copy the relevant lines from when you try to launch the instance. Hopefully that may shed some light as to why LXD is unhappy with the request.

@tbalthazar

here you are @townsend2010 :

ubuntu@ubuntu:~$ cat /etc/systemd/system/snap.multipass.multipassd.service
[Unit]
# Auto-generated, DO NOT EDIT
Description=Service for snap application multipass.multipassd
Requires=snap-multipass-2552.mount
Wants=network.target
After=snap-multipass-2552.mount network.target snapd.apparmor.service
X-Snappy=yes

[Service]
EnvironmentFile=-/etc/environment
ExecStart=/usr/bin/snap run multipass.multipassd --verbosity trace
SyslogIdentifier=multipass.multipassd
Restart=on-failure
WorkingDirectory=/var/snap/multipass/2552
TimeoutStopSec=30
Type=simple

[Install]
WantedBy=multi-user.target
ubuntu@ubuntu:~$ sudo snap restart multipass
ubuntu@ubuntu:~$ multipass launch
launch failed: unix://multipass/var/snap/lxd/common/lxd/unix.socket@1.0/virtual-machines?project=multipass: Bad Request
$ journalctl -f
Sep 12 09:43:02 ubuntu multipassd[1892388]: QFileInfo::absolutePath: Constructed with empty filename
Sep 12 09:43:02 ubuntu multipassd[1892388]: Creating instance with image id: 27a0e12a5b9aab193a987863e4c8ee8d308b1b0e20cb987759a35d50aa187360
Sep 12 09:43:02 ubuntu multipassd[1892388]: Instance 'exalted-chigger' does not exist: not removing

@harryqt

harryqt commented Mar 21, 2021

So, there's no fix for this? I'm using Ubuntu 20.04 on a Pi 4 and getting the same error.

@Saviq
Collaborator

Saviq commented Mar 21, 2021

@Dibbyo456 did you try with the LXD driver? sudo snap connect multipass:lxd lxd; sudo multipass set local.driver=lxd and it should be good again.

@harryqt

harryqt commented Mar 21, 2021

@Saviq Thank you so much for getting back to me. I'm kind of a Linux noob. Can you please give me all the commands step by step?

@nikitalita

For the benefit of those playing along at home, here are the commands to install and set up multipass with lxd as the driver on a Raspberry Pi 4 (tested on aarch64 Ubuntu 21.04):

sudo snap install lxd
sudo lxd init --auto
sudo snap install multipass --candidate
sudo snap connect multipass:lxd lxd
sudo multipass set local.driver=lxd

# wait ~5 minutes for multipass to connect to lxd; this is a one-time thing
# then launch a VM:
multipass launch 

@Saviq
Collaborator

Saviq commented Oct 27, 2021

Multipass 1.8.0 is now in the candidate channel, with LXD as the default backend.

@Saviq Saviq closed this as completed Oct 27, 2021