
Raspbian: Error response from daemon: unable to find "net_prio" in controller set: unknown #545

Closed
2 of 3 tasks
Pea13 opened this issue Jan 7, 2019 · 54 comments · Fixed by moby/moby#38873

@Pea13

Pea13 commented Jan 7, 2019

  • This is a bug report
  • This is a feature request
  • I searched existing issues before opening this one

Expected behavior

Create and run containers.

Actual behavior

# docker run --rm -it alpine:3.8 /bin/sh
docker: Error response from daemon: unable to find "net_prio" in controller set: unknown.

Steps to reproduce the behavior

Output of docker version:

Client:
 Version:           18.09.0
 API version:       1.39
 Go version:        go1.10.4
 Git commit:        4d60db4
 Built:             Wed Nov  7 00:57:21 2018
 OS/Arch:           linux/arm
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.0
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.4
  Git commit:       4d60db4
  Built:            Wed Nov  7 00:17:57 2018
  OS/Arch:          linux/arm
  Experimental:     true

Output of docker info:

Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 5
Server Version: 18.09.0
Storage Driver: btrfs
 Build Version: Btrfs v4.7.3
 Library Version: 101
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host ipvlan macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: active
 NodeID: jxgj5lhwlq7mep5e4jqx64frm
 Is Manager: true
 ClusterID: m10pfu1k75j8mvx5mqxjbb67n
 Managers: 3
 Nodes: 5
 Default Address Pool: 10.0.0.0/8
 SubnetSize: 24
 Orchestration:
  Task History Retention Limit: 5
 Raft:
  Snapshot Interval: 10000
  Number of Old Snapshots to Retain: 0
  Heartbeat Tick: 1
  Election Tick: 3
 Dispatcher:
  Heartbeat Period: 5 seconds
 CA Configuration:
  Expiry Duration: 3 months
  Force Rotate: 0
 Autolock Managers: false
 Root Rotation In Progress: false
 Node Address: 192.168.1.37
 Manager Addresses:
  192.168.1.33:2377
  192.168.1.34:2377
  192.168.1.37:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: c4446665cb9c30056f4998ed953e6d4ff22c7c39
runc version: 4fc53a81fb7c994640722ac585fa9ca548971871
init version: fec3683
Security Options:
 seccomp
  Profile: default
Kernel Version: 4.19.13-v7+
Operating System: Raspbian GNU/Linux buster/sid
OSType: linux
Architecture: armv7l
CPUs: 4
Total Memory: 926.1MiB
Name: docker02
ID: KCJI:V4QV:M3ZZ:R5IU:R444:GNAX:3ORW:CEDK:V367:6X27:OHXC:VCVB
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: true
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
Product License: Community Engine

WARNING: No swap limit support
WARNING: No cpu cfs quota support
WARNING: No cpu cfs period support

Additional environment details (AWS, VirtualBox, physical, etc.)

I use Raspbian buster with the latest kernel (branch next) : Linux docker02 4.19.13-v7+ #1186 SMP Tue Jan 1 11:32:58 GMT 2019 armv7l GNU/Linux.

Everything was working fine but since the last upgrade (of Raspbian), it seems to be broken.

# ls -hl /sys/fs/cgroup/
total 0
dr-xr-xr-x 5 root root  0 Dec 21 19:53 blkio
lrwxrwxrwx 1 root root 11 Dec 21 19:53 cpu -> cpu,cpuacct
lrwxrwxrwx 1 root root 11 Dec 21 19:53 cpuacct -> cpu,cpuacct
dr-xr-xr-x 5 root root  0 Dec 21 19:53 cpu,cpuacct
dr-xr-xr-x 3 root root  0 Dec 21 19:53 cpuset
dr-xr-xr-x 5 root root  0 Dec 21 19:53 devices
dr-xr-xr-x 3 root root  0 Dec 21 19:53 freezer
dr-xr-xr-x 5 root root  0 Dec 21 19:53 memory
dr-xr-xr-x 3 root root  0 Dec 21 19:53 net_cls
lrwxrwxrwx 1 root root  7 Dec 21 19:53 net_prio -> net_cls
dr-xr-xr-x 6 root root  0 Dec 21 19:53 systemd
dr-xr-xr-x 5 root root  0 Dec 21 19:53 unified
# mount | grep cgroup
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup2 on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/net_cls type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
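As a cross-check, the controllers the kernel itself implements (independent of what systemd mounts or symlinks) can be listed from /proc/cgroups; a minimal check, assuming a cgroup v1 kernel:

```shell
# List every controller compiled into the running kernel; a name missing
# here (e.g. net_prio) cannot be mounted regardless of what systemd or
# Docker do.
cat /proc/cgroups

# Check one controller specifically: prints "absent" when the kernel was
# built without CONFIG_CGROUP_NET_PRIO.
grep -q '^net_prio' /proc/cgroups && echo present || echo absent
```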

Do you have any ideas or tests I can do?

Thanks,

@magpielover

I have the same issue on Kali

@Pea13
Author

Pea13 commented Jan 14, 2019

Hello,

This could be related to systemd. There is a similar issue here: #552.
On my system, I have systemd 240-2+rpi1.
@magpielover, could you please give us your systemd version?

@TheAifam5

TheAifam5 commented Jan 14, 2019

@Pea13 it's related to systemd. Please downgrade systemd to work around the problem for now.

The issue is present on version 240 and above; older versions are fully functional.

@Pea13
Author

Pea13 commented Jan 14, 2019

It seems complicated. I only have the latest version available:

# apt-cache showpkg systemd
Package: systemd
Versions: 
240-2+rpi1 (/var/lib/apt/lists/raspbian.raspberrypi.org_raspbian_dists_buster_main_binary-armhf_Packages) (/var/lib/dpkg/status)
 Description Language: 
                 File: /var/lib/apt/lists/raspbian.raspberrypi.org_raspbian_dists_buster_main_binary-armhf_Packages
                  MD5: 00afa0c6fd35cc93a91e4654874648cb

Maybe I can do some tests with this new version? Which changes could affect Docker?

@sthibaul

To fix it on my system, I just needed to rebuild my kernel with CONFIG_CGROUP_NET_PRIO enabled.
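Before rebuilding, it's worth confirming the option really is missing from the running kernel; a sketch of the check (paths vary by distro, and using the `configs` module to expose /proc/config.gz is an assumption for Raspbian):

```shell
# Expose the running kernel's config if the distro ships it as a module.
sudo modprobe configs 2>/dev/null || true

# Look for the option in the usual places; "kernel config not found"
# means the config simply isn't exposed, not that the option is off.
zgrep CONFIG_CGROUP_NET_PRIO /proc/config.gz 2>/dev/null \
  || grep CONFIG_CGROUP_NET_PRIO "/boot/config-$(uname -r)" 2>/dev/null \
  || echo "kernel config not found"
```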

@magpielover

systemd's version
sudo systemd --version
systemd 240

@Pea13
Author

Pea13 commented Jan 21, 2019

That's really annoying. The only solutions are:

  • build a new kernel with CONFIG_CGROUP_NET_PRIO enabled, or
  • downgrade systemd.

Any news about fixing this to work with the latest systemd version?
Regards,

@TheAifam5

@Pea13 I fixed it just by downgrading systemd. I'm watching multiple similar issues, so I'll wait until the problems with systemd are fixed. :)

@Pea13
Author

Pea13 commented Jan 21, 2019

Yes, I know, but for Raspbian stretch older packages are no longer available. :(
I fixed the problem by rebuilding the kernel for all my Raspberry Pi 3s.

@lategoodbye

Did anyone report this to systemd?

@TheAifam5

@lategoodbye Nope, because they already know about it. Just read the changelog.

@lategoodbye

I read the NEWS file for version 240 and didn't find anything about CONFIG_CGROUP_NET_PRIO.

Could you please point me to the relevant part?

@damentz

damentz commented Jan 28, 2019

@lategoodbye in case you didn't find it, @TheAifam5 posted which parts of the systemd 240 changelog seem related on a linked GitHub issue:

#552 (comment)

It seems systemd changed the way it exposes cgroups, and that fools Docker into thinking controllers are available when they're not (or vice versa). Adding the cgroup to the kernel config papers over the issue by actually implementing the controller, but on some kernels such as CK/MuQSS, some controllers (cpuacct) are unavailable entirely, and everything worked fine up until systemd 240.

@Pea13
Author

Pea13 commented Jan 28, 2019

I agree. On Raspbian's kernel, the option CONFIG_CGROUP_NET_PRIO has never been set to y (IIRC), but everything worked fine until systemd 240.

@lategoodbye

@damentz Thanks. I read this section, but it doesn't clearly explain why CONFIG_CGROUP_NET_PRIO must be enabled. I think this issue should be solved in userspace. Sorry, but I'm pretty annoyed that systemd depends on specific kernel config options.

Has anyone tried adding systemd.unified_cgroup_hierarchy=0 to the kernel cmdline, as mentioned in the changelog?
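For anyone wanting to try that on Raspbian: the kernel command line is the single line in /boot/cmdline.txt (path assumed for Raspbian; back it up first). A sketch that edits a copy rather than the live file:

```shell
# Work on a copy; move it over /boot/cmdline.txt with sudo once verified.
# The fallback content below is only a placeholder for illustration.
cp /boot/cmdline.txt /tmp/cmdline.txt 2>/dev/null \
  || printf 'console=tty1 root=/dev/mmcblk0p2 rootwait\n' > /tmp/cmdline.txt

# cmdline.txt must stay a single line; append the parameter to it.
sed -i '1 s/$/ systemd.unified_cgroup_hierarchy=0/' /tmp/cmdline.txt
cat /tmp/cmdline.txt
```

After copying the edited file back, a reboot is needed for the parameter to take effect.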

@damentz

damentz commented Jan 29, 2019

I just tried it now and got the same result as without it, on a MuQSS-based kernel. I can't confirm the same result with Raspbian, but I suspect it will be the same.

@lategoodbye

Thanks for testing this.

@cpuguy83
Collaborator

Are (or were) you using JoinControllers in your systemd system.conf? JoinControllers is not supported as of 240.
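A quick way to check for that setting (the paths are the standard systemd ones; the drop-in directory is an assumption):

```shell
# Search the places systemd reads system.conf from for a JoinControllers
# line; prints the matches, or a note when none exist.
grep -Rns '^ *JoinControllers' /etc/systemd/system.conf /etc/systemd/system.conf.d 2>/dev/null \
  && echo 'JoinControllers is set' \
  || echo 'no JoinControllers setting found'
```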

@andrewhsu
Contributor

FYI when Fedora 30 is released, it may likely have systemd 240. This may have an effect on Fedora x86_64 if the issue is found to be caused by systemd 240 behaviour.

@Pea13
Author

Pea13 commented Jan 31, 2019

Are (or were) you using JoinControllers in your systemd system.conf? JoinControllers is not supported as of 240.

No. There is no JoinControllers in /etc/systemd/system.conf.

@tonistiigi
Member

Looking at where the actual error is coming from, it seems to be from comparing the list of cgroups with the mounts. Can you post cat /proc/$(pgrep dockerd)/cgroup as well to be sure (and the commands in the initial comment if their output isn't exactly the same).

@damentz

damentz commented Feb 1, 2019

$ cat /proc/$(pgrep dockerd)/cgroup
10:net_cls,net_prio:/
9:blkio:/system.slice/docker.service
8:cpu:/system.slice/docker.service
7:hugetlb:/
6:freezer:/
5:devices:/system.slice/docker.service
4:perf_event:/
3:cpuset:/
2:memory:/system.slice/docker.service
1:name=systemd:/system.slice/docker.service
0::/system.slice/docker.service
$ mount | grep cgroup
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup2 on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/cpu type cgroup (rw,nosuid,nodev,noexec,relatime,cpu)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)

And for reference since I'm coming from a linked github issue (#552), my particular message when running a container is unable to find "cpuacct" in controller set: unknown since upgrading from systemd 239 to 240.
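The comparison tonistiigi mentions can be eyeballed with a rough diff of what systemd exposes under /sys/fs/cgroup against what the kernel implements in /proc/cgroups (a sketch of the idea, not the actual containerd code):

```shell
# Names systemd exposes under /sys/fs/cgroup: directories, symlinks, and
# joint mounts like net_cls,net_prio split into individual names.
ls /sys/fs/cgroup | tr ',' '\n' | sort -u > /tmp/exposed

# Controllers the kernel actually implements (skip the header line).
awk 'NR > 1 {print $1}' /proc/cgroups | sort -u > /tmp/kernel

# Names exposed but not implemented: candidates for the "unable to find
# ... in controller set" error. Note that "systemd" and "unified" are
# expected extras, since they are systemd mounts, not kernel controllers.
comm -23 /tmp/exposed /tmp/kernel
```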

@Pea13
Author

Pea13 commented Feb 2, 2019

On Raspbian buster
kernel: Linux docker04 4.19.17-v7+ #1196 SMP Thu Jan 24 14:59:34 GMT 2019 armv7l GNU/Linux

# cat /proc/$(pgrep dockerd)/cgroup
8:net_cls:/
7:memory:/system.slice/docker.service
6:freezer:/
5:blkio:/system.slice/docker.service
4:devices:/system.slice/docker.service
3:cpuset:/
2:cpu,cpuacct:/system.slice/docker.service
1:name=systemd:/system.slice/docker.service
0::/system.slice/docker.service
# mount | grep cgroup
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup2 on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/net_cls type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls)
# ls -hl /sys/fs/cgroup/
total 0
dr-xr-xr-x 5 root root  0 Dec 21 19:53 blkio
lrwxrwxrwx 1 root root 11 Dec 21 19:53 cpu -> cpu,cpuacct
lrwxrwxrwx 1 root root 11 Dec 21 19:53 cpuacct -> cpu,cpuacct
dr-xr-xr-x 5 root root  0 Dec 21 19:53 cpu,cpuacct
dr-xr-xr-x 3 root root  0 Dec 21 19:53 cpuset
dr-xr-xr-x 5 root root  0 Dec 21 19:53 devices
dr-xr-xr-x 3 root root  0 Dec 21 19:53 freezer
dr-xr-xr-x 5 root root  0 Dec 21 19:53 memory
dr-xr-xr-x 3 root root  0 Dec 21 19:53 net_cls
lrwxrwxrwx 1 root root  7 Dec 21 19:53 net_prio -> net_cls
dr-xr-xr-x 6 root root  0 Dec 21 19:53 systemd
dr-xr-xr-x 5 root root  0 Dec 21 19:53 unified

@fedya

fedya commented Feb 7, 2019

any fix for this?

@Niemi

Niemi commented Feb 13, 2019

same question.

@sanpoChew

Is this PR that was just merged going to fix this?

containerd/cgroups#77

@sanpoChew

Looks like it will be fixed in the next release of containerd:

containerd/containerd#3048

kiku-jw pushed a commit to kiku-jw/moby that referenced this issue May 16, 2019
Relevant changes:

- containerd/cgroups#51 Fix empty device type
- containerd/cgroups#52 Remove call to unitName
  - Calling unitName incorrectly appends -slice onto the end of the slice cgroup we are looking for
  - addresses containerd/cgroups#47 cgroups: cgroup deleted
- containerd/cgroups#53 systemd-239+ no longer allows delegate slice
- containerd/cgroups#54 Bugfix: can't write to cpuset cgroup
- containerd/cgroups#63 Makes Load function more lenient on subsystems' checking
  - addresses containerd/cgroups#58 Very strict checking of subsystems' existence while loading cgroup
- containerd/cgroups#67 Add functionality for retrieving all tasks of a cgroup
- containerd/cgroups#68 Fix net_prio typo
- containerd/cgroups#69 Blkio weight/leafWeight pointer value
- containerd/cgroups#77 Check for non-active/supported cgroups
  - addresses containerd/cgroups#76 unable to find * in controller set: unknown
  - addresses docker/for-linux#545 Raspbian: Error response from daemon: unable to find "net_prio" in controller set: unknown
  - addresses docker/for-linux#552 Error response from daemon: unable to find "cpuacct" in controller set: unknown

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
robertgzr added a commit to balena-os/balena-engine that referenced this issue Jun 3, 2019
Change-type: patch
Connects-to: docker/for-linux#545
Signed-off-by: Robert Günzler <robertg@balena.io>
robertgzr added a commit to balena-os/balena-engine that referenced this issue Jun 3, 2019
Fixes issues with systemd version >=240 and non-existent cgroups.

Change-type: patch
Connects-to: containerd/cgroups#76
Connects-to: docker/for-linux#545
Signed-off-by: Robert Günzler <robertg@balena.io>
robertgzr added a commit to balena-os/balena-engine that referenced this issue Jun 3, 2019
Fixes issues with systemd version >=240 and non-existent cgroups.

Change-type: patch
Connects-to: containerd/cgroups#76
Connects-to: docker/for-linux#545
Signed-off-by: Robert Günzler <robertg@balena.io>
@sbrudenell

This doesn't seem to be fixed in current Raspbian.

On a raspberry pi 3, I installed raspbian buster lite (from 2019-06-20-raspbian-buster-lite.zip), ran apt-get update && apt-get install docker.io, and got:

pi@raspberrypi:~ $ docker run --rm -it alpine:3.8 /bin/sh
docker: Error response from daemon: unable to find "net_prio" in controller set: unknown.

I have:

pi@raspberrypi:~ $ docker info | grep containerd
containerd version: 9754871865f7fe2f4e74d43e2fc7ccd237edcbce
WARNING: No swap limit support
WARNING: No cpu cfs quota support
WARNING: No cpu cfs period support

And:

pi@raspberrypi:~ $ docker-containerd --version
containerd github.com/containerd/containerd 18.09.1 9754871865f7fe2f4e74d43e2fc7ccd237edcbce

I don't understand where to look for "Containerd 1.2.5-1", which is quoted above as working, but my overall Docker version is later than the one quoted above.

Is this a regression, or some problem with Raspbian versioning? If so, how do I identify it?

@socmag

socmag commented Jun 27, 2019

@sbrudenell

It seems you can check with kubectl

# kubectl get node -o wide
NAME       STATUS   ROLES    AGE     VERSION         INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION   CONTAINER-RUNTIME
minisub1   Ready    master   99m     v1.14.3-k3s.1   192.168.2.34   <none>        Raspbian GNU/Linux 10 (buster)   4.19.50-v7l+     containerd://1.2.5+unknown
minisub2   Ready    worker   6m41s   v1.14.3-k3s.1   192.168.2.36   <none>        Raspbian GNU/Linux 10 (buster)   4.19.50-v7l+     containerd://1.2.5+unknown

I have 1.2.5+unknown; not sure what the "unknown" part is, but in any case we still get the same error as you.

The versions of everything else appear to be correct

@Floppy

Floppy commented Jun 27, 2019

@sbrudenell Assuming the hash in your (and my) containerd version is a commit hash, it corresponds to containerd version 1.2.2: containerd/containerd@9754871 - so perhaps we need a newer containerd in the docker.io package.

Although, given some other reports of the same error, I think the real problem is that Docker hasn't done an official buster release for Raspbian yet; we might have to wait for that and change over to docker-ce. See #709 🤷‍♂

@amerinoo

Could you reopen the issue? As @sbrudenell says, this doesn't seem to be fixed in current Raspbian.

@conradKE

docker: Error response from daemon: unable to find "net_prio" in controller set: unknown

The issue is still there on Raspbian buster.

@yun14u

yun14u commented Jul 16, 2019

Agree with @Azim254. The issue is still there on Raspbian buster:
$ sudo docker-containerd --version
containerd github.com/containerd/containerd 18.09.1 9754871865f7fe2f4e74d43e2fc7ccd237edcbce

@CountParadox

Just did a fresh install of buster on a Pi 3B+ and hit the same issue; it seems it's indeed still there:
pi@raspberrypi:~ $ sudo docker-containerd --version
containerd github.com/containerd/containerd 18.09.1 9754871865f7fe2f4e74d43e2fc7ccd237edcbce

@amurchikus

Hi,
I have the same issue:
Running in 319bfa614698
unable to find "net_prio" in controller set: unknown

root@raspberrypi:/opt/dockerprojets/homeassistant# docker info | grep containerd
WARNING: No swap limit support
WARNING: No cpu cfs quota support
WARNING: No cpu cfs period support
containerd version: 9754871865f7fe2f4e74d43e2fc7ccd237edcbce
root@raspberrypi:/opt/dockerprojets/homeassistant# docker-containerd --version
containerd github.com/containerd/containerd 18.09.1 9754871865f7fe2f4e74d43e2fc7ccd237edcbce

@Jeremy-Marghem

Hi
Same problem here:

jma@raspberrypi:~ $ sudo docker run hello-world
docker: Error response from daemon: unable to find "net_prio" in controller set: unknown.
ERRO[0001] error waiting for container: context canceled
jma@raspberrypi:~ $ sudo docker info | grep containerd
containerd version: 9754871865f7fe2f4e74d43e2fc7ccd237edcbce
WARNING: No swap limit support
WARNING: No cpu cfs quota support
WARNING: No cpu cfs period support
jma@raspberrypi:~ $ sudo docker-containerd --version
containerd github.com/containerd/containerd 18.09.1 9754871865f7fe2f4e74d43e2fc7ccd237edcbce

Linux raspberrypi 4.19.57-v7+ #1244 SMP Thu Jul 4 18:45:25 BST 2019 armv7l GNU/Linux

@jgourmelen

Same problem with Raspbian lite and a Pi 3B+.

@jgourmelen

jgourmelen commented Jul 17, 2019

root@raspberrypi:/var/log# sudo docker run hello-world
Jul 17 19:00:42 raspberrypi systemd[902]: var-lib-docker-overlay2-44cb540ead7ca89f0a3b7f7733b3bd3a61a807ff031d03ed625853ca85571651\x2dinit-merged.mount: Succeeded.
Jul 17 19:00:42 raspberrypi systemd[1]: var-lib-docker-overlay2-44cb540ead7ca89f0a3b7f7733b3bd3a61a807ff031d03ed625853ca85571651\x2dinit-merged.mount: Succeeded.
Jul 17 19:00:42 raspberrypi kernel: [ 828.272044] docker0: port 1(vethb0c506a) entered blocking state
Jul 17 19:00:42 raspberrypi kernel: [ 828.272064] docker0: port 1(vethb0c506a) entered disabled state
Jul 17 19:00:42 raspberrypi kernel: [ 828.272379] device vethb0c506a entered promiscuous mode
Jul 17 19:00:42 raspberrypi kernel: [ 828.272878] IPv6: ADDRCONF(NETDEV_UP): vethb0c506a: link is not ready
Jul 17 19:00:42 raspberrypi kernel: [ 828.374550] IPv6: ADDRCONF(NETDEV_UP): veth7d82fed: link is not ready
Jul 17 19:00:42 raspberrypi kernel: [ 828.374593] IPv6: ADDRCONF(NETDEV_CHANGE): veth7d82fed: link becomes ready
Jul 17 19:00:42 raspberrypi kernel: [ 828.374737] IPv6: ADDRCONF(NETDEV_CHANGE): vethb0c506a: link becomes ready
Jul 17 19:00:42 raspberrypi kernel: [ 828.374829] docker0: port 1(vethb0c506a) entered blocking state
Jul 17 19:00:42 raspberrypi dhcpcd[534]: veth7d82fed: waiting for carrier
Jul 17 19:00:42 raspberrypi kernel: [ 828.374837] docker0: port 1(vethb0c506a) entered forwarding state
Jul 17 19:00:42 raspberrypi dhcpcd[534]: vethb0c506a: IAID ab:71:2c:52
Jul 17 19:00:42 raspberrypi dhcpcd[534]: vethb0c506a: adding address fe80::23d3:5ca6:1ff8:eef6
Jul 17 19:00:42 raspberrypi avahi-daemon[380]: Joining mDNS multicast group on interface vethb0c506a.IPv6 with address fe80::23d3:5ca6:1ff8:eef6.
Jul 17 19:00:42 raspberrypi avahi-daemon[380]: New relevant interface vethb0c506a.IPv6 for mDNS.
Jul 17 19:00:42 raspberrypi avahi-daemon[380]: Registering new address record for fe80::23d3:5ca6:1ff8:eef6 on vethb0c506a..
Jul 17 19:00:42 raspberrypi dhcpcd[534]: veth7d82fed: carrier acquired
Jul 17 19:00:42 raspberrypi dhcpcd[534]: veth7d82fed: IAID c0:3a:91:85
Jul 17 19:00:42 raspberrypi dhcpcd[534]: veth7d82fed: adding address fe80::98b3:238b:9999:21f4
Jul 17 19:00:42 raspberrypi avahi-daemon[380]: Joining mDNS multicast group on interface veth7d82fed.IPv6 with address fe80::98b3:238b:9999:21f4.
Jul 17 19:00:42 raspberrypi avahi-daemon[380]: New relevant interface veth7d82fed.IPv6 for mDNS.
Jul 17 19:00:42 raspberrypi avahi-daemon[380]: Registering new address record for fe80::98b3:238b:9999:21f4 on veth7d82fed.
.
Jul 17 19:00:42 raspberrypi dhcpcd[534]: docker0: carrier acquired
Jul 17 19:00:42 raspberrypi dhcpcd[534]: docker0: IAID 17:02:7e:8b
Jul 17 19:00:42 raspberrypi dhcpcd[534]: docker0: adding address fe80::bf7c:67f6:3678:caa7
Jul 17 19:00:42 raspberrypi avahi-daemon[380]: Joining mDNS multicast group on interface docker0.IPv6 with address fe80::bf7c:67f6:3678:caa7.
Jul 17 19:00:42 raspberrypi avahi-daemon[380]: New relevant interface docker0.IPv6 for mDNS.
Jul 17 19:00:42 raspberrypi avahi-daemon[380]: Registering new address record for fe80::bf7c:67f6:3678:caa7 on docker0..
Jul 17 19:00:42 raspberrypi dockerd[1763]: time="2019-07-17T19:00:42.813616128+01:00" level=info msg="shim docker-containerd-shim started" address=/containerd-shim/moby/8f592a47524e73ba3e0f0d7e03c7a8338901f858c00c2fc2080580834008613f/shim.sock debug=false pid=1944
Jul 17 19:00:42 raspberrypi dhcpcd[534]: veth7d82fed: soliciting an IPv6 router
Jul 17 19:00:42 raspberrypi dhcpcd[534]: docker0: soliciting an IPv6 router
Jul 17 19:00:43 raspberrypi dhcpcd[534]: vethb0c506a: soliciting an IPv6 router
Jul 17 19:00:43 raspberrypi avahi-daemon[380]: Interface veth7d82fed.IPv6 no longer relevant for mDNS.
Jul 17 19:00:43 raspberrypi avahi-daemon[380]: Leaving mDNS multicast group on interface veth7d82fed.IPv6 with address fe80::98b3:238b:9999:21f4.
Jul 17 19:00:43 raspberrypi dhcpcd[534]: veth7d82fed: soliciting a DHCP lease
Jul 17 19:00:43 raspberrypi dhcpcd[534]: veth7d82fed: if_getmtu: No such device
Jul 17 19:00:43 raspberrypi dhcpcd[534]: veth7d82fed: carrier lost
Jul 17 19:00:43 raspberrypi avahi-daemon[380]: Withdrawing address record for fe80::98b3:238b:9999:21f4 on veth7d82fed.
Jul 17 19:00:43 raspberrypi kernel: [ 829.139808] docker0: port 1(vethb0c506a) entered disabled state
Jul 17 19:00:43 raspberrypi kernel: [ 829.140793] eth0: renamed from veth7d82fed
Jul 17 19:00:43 raspberrypi dhcpcd[534]: veth7d82fed: deleting address fe80::98b3:238b:9999:21f4
Jul 17 19:00:43 raspberrypi kernel: [ 829.198334] docker0: port 1(vethb0c506a) entered blocking state
Jul 17 19:00:43 raspberrypi kernel: [ 829.198347] docker0: port 1(vethb0c506a) entered forwarding state
Jul 17 19:00:43 raspberrypi dhcpcd[534]: docker0: soliciting a DHCP lease
Jul 17 19:00:43 raspberrypi dhcpcd[534]: veth7d82fed: removing interface
Jul 17 19:00:43 raspberrypi dhcpcd[534]: vethb0c506a: soliciting a DHCP lease
Jul 17 19:00:43 raspberrypi dhcpcd[534]: vethb0c506a: carrier lost
Jul 17 19:00:43 raspberrypi dockerd[1763]: time="2019-07-17T19:00:43.828536395+01:00" level=info msg="shim reaped" id=8f592a47524e73ba3e0f0d7e03c7a8338901f858c00c2fc2080580834008613f
Jul 17 19:00:43 raspberrypi dhcpcd[534]: vethb0c506a: deleting address fe80::23d3:5ca6:1ff8:eef6
Jul 17 19:00:43 raspberrypi avahi-daemon[380]: Withdrawing address record for fe80::23d3:5ca6:1ff8:eef6 on vethb0c506a.
Jul 17 19:00:43 raspberrypi avahi-daemon[380]: Leaving mDNS multicast group on interface vethb0c506a.IPv6 with address fe80::23d3:5ca6:1ff8:eef6.
Jul 17 19:00:43 raspberrypi avahi-daemon[380]: Interface vethb0c506a.IPv6 no longer relevant for mDNS.
Jul 17 19:00:43 raspberrypi dhcpcd[534]: vethb0c506a: carrier acquired
Jul 17 19:00:43 raspberrypi dhcpcd[534]: vethb0c506a: IAID ab:71:2c:52
Jul 17 19:00:43 raspberrypi dhcpcd[534]: vethb0c506a: adding address fe80::23d3:5ca6:1ff8:eef6
Jul 17 19:00:43 raspberrypi avahi-daemon[380]: Joining mDNS multicast group on interface vethb0c506a.IPv6 with address fe80::23d3:5ca6:1ff8:eef6.
Jul 17 19:00:43 raspberrypi avahi-daemon[380]: New relevant interface vethb0c506a.IPv6 for mDNS.
Jul 17 19:00:43 raspberrypi avahi-daemon[380]: Registering new address record for fe80::23d3:5ca6:1ff8:eef6 on vethb0c506a.
.
Jul 17 19:00:43 raspberrypi dhcpcd[534]: vethb0c506a: carrier lost
Jul 17 19:00:43 raspberrypi kernel: [ 829.535548] docker0: port 1(vethb0c506a) entered disabled state
Jul 17 19:00:43 raspberrypi kernel: [ 829.535792] veth7d82fed: renamed from eth0
Jul 17 19:00:44 raspberrypi avahi-daemon[380]: Interface vethb0c506a.IPv6 no longer relevant for mDNS.
Jul 17 19:00:44 raspberrypi avahi-daemon[380]: Leaving mDNS multicast group on interface vethb0c506a.IPv6 with address fe80::23d3:5ca6:1ff8:eef6.
Jul 17 19:00:44 raspberrypi kernel: [ 829.683666] docker0: port 1(vethb0c506a) entered disabled state
Jul 17 19:00:44 raspberrypi avahi-daemon[380]: Withdrawing address record for fe80::23d3:5ca6:1ff8:eef6 on vethb0c506a.
Jul 17 19:00:44 raspberrypi kernel: [ 829.691518] device vethb0c506a left promiscuous mode
Jul 17 19:00:44 raspberrypi kernel: [ 829.691530] docker0: port 1(vethb0c506a) entered disabled state
Jul 17 19:00:44 raspberrypi dhcpcd[534]: vethb0c506a: removing interface
Jul 17 19:00:44 raspberrypi systemd[902]: run-docker-netns-6838320d3ec8.mount: Succeeded.
Jul 17 19:00:44 raspberrypi systemd[1]: run-docker-netns-6838320d3ec8.mount: Succeeded.
Jul 17 19:00:44 raspberrypi systemd[902]: var-lib-docker-containers-8f592a47524e73ba3e0f0d7e03c7a8338901f858c00c2fc2080580834008613f-mounts-shm.mount: Succeeded.
Jul 17 19:00:44 raspberrypi systemd[1]: var-lib-docker-containers-8f592a47524e73ba3e0f0d7e03c7a8338901f858c00c2fc2080580834008613f-mounts-shm.mount: Succeeded.
Jul 17 19:00:44 raspberrypi systemd[902]: var-lib-docker-overlay2-44cb540ead7ca89f0a3b7f7733b3bd3a61a807ff031d03ed625853ca85571651-merged.mount: Succeeded.
Jul 17 19:00:44 raspberrypi systemd[1]: var-lib-docker-overlay2-44cb540ead7ca89f0a3b7f7733b3bd3a61a807ff031d03ed625853ca85571651-merged.mount: Succeeded.
Jul 17 19:00:44 raspberrypi dockerd[1763]: time="2019-07-17T19:00:44.272040096+01:00" level=error msg="8f592a47524e73ba3e0f0d7e03c7a8338901f858c00c2fc2080580834008613f cleanup: failed to delete container from containerd: no such container"
docker: Error response from daemon: unable to find "net_prio" in controller set: unknown.
ERRO[0001] error waiting for container: context canceled
Jul 17 19:00:44 raspberrypi dockerd[1763]: time="2019-07-17T19:00:44.272276292+01:00" level=error msg="Handler for POST /v1.39/containers/8f592a47524e73ba3e0f0d7e03c7a8338901f858c00c2fc2080580834008613f/start returned error: unable to find "net_prio" in controller set: unknown"
root@raspberrypi:/var/log# Jul 17 19:00:44 raspberrypi dhcpcd[534]: docker0: carrier lost
Jul 17 19:00:44 raspberrypi dhcpcd[534]: docker0: deleting address fe80::bf7c:67f6:3678:caa7
Jul 17 19:00:44 raspberrypi avahi-daemon[380]: Withdrawing address record for fe80::bf7c:67f6:3678:caa7 on docker0.
Jul 17 19:00:44 raspberrypi avahi-daemon[380]: Leaving mDNS multicast group on interface docker0.IPv6 with address fe80::bf7c:67f6:3678:caa7.
Jul 17 19:00:44 raspberrypi avahi-daemon[380]: Interface docker0.IPv6 no longer relevant for mDNS.

@jgourmelen

Jul 17 19:03:54 raspberrypi dockerd[1763]: time="2019-07-17T19:03:54.482258725+01:00" level=error msg="8f6339687216a9d27258e171646bfc7740721559af3f4a7eca3587a2e8ee4d8a cleanup: failed to delete container from containerd: no such container"
Jul 17 19:03:54 raspberrypi dockerd[1763]: time="2019-07-17T19:03:54.484270639+01:00" level=error msg="Handler for POST /v1.39/containers/8f6339687216a9d27258e171646bfc7740721559af3f4a7eca3587a2e8ee4d8a/start returned error: unable to find "net_prio" in controller set: unknown"

@yun14u

yun14u commented Jul 17, 2019

This GitHub issue is closed. I already opened another one: #729

@tonistiigi
Member

@crosbymichael ^

@jgourmelen

containerd.io packages have been uploaded to the nightly channel.

You can install the latest nightly release using:

curl -fsSL get.docker.com | CHANNEL=nightly sh

This works for me!

@Jeremy-Marghem

@jugou28 This works for me too!

@tlilianas

@jugou28 works for me too.
Thx !

@machineska

containerd.io packages have been uploaded to the nightly channel.

You can install the latest nightly release using:

curl -fsSL get.docker.com | CHANNEL=nightly sh

This works for me!

It works for me too, RPi buster.

@kaihendry

For a working install of Docker on the Rpi4, is one expected to use get.docker.com and not official packages (i.e. uninstall them)?

Or are these fixes being pushed into Raspbian somewhere?

@Niemi

Niemi commented Aug 13, 2019

For a working install of Docker on the Rpi4, is one expected to use get.docker.com and not official packages (i.e. uninstall them)?

Or are these fixes being pushed into Raspbian somewhere?

@kaihendry, are you sure Docker builds it for the ARM CPU? Check the RPi4's CPU, then check the Docker repository.
