
Docker not working at all - exit code 139 #3258

Closed
SaturnusDJ opened this issue Dec 8, 2019 · 10 comments

SaturnusDJ commented Dec 8, 2019

Creating a bug report/issue

Required Information

  • DietPi version | cat /DietPi/dietpi/.version
    G_DIETPI_VERSION_CORE=6
    G_DIETPI_VERSION_SUB=26
    G_DIETPI_VERSION_RC=3
    G_GITBRANCH='master'
    G_GITOWNER='MichaIng'

  • Distro version | echo $G_DISTRO_NAME or cat /etc/debian_version
    9.11

  • Kernel version | uname -a
    Linux RPi_1B 4.19.66+ #1253 Thu Aug 15 11:37:30 BST 2019 armv6l GNU/Linux

  • SBC device | echo $G_HW_MODEL_DESCRIPTION or (EG: RPi3)
    RPi B (armv6l)

  • Power supply used | (EG: 5V 1A RAVpower)
    5V thing that always worked.

  • SDcard used | (EG: SanDisk ultra)
    2GB thing that always worked.

Additional Information (if applicable)

  • Software title | (EG: Nextcloud)
    Docker
docker version
Client: Docker Engine - Community
 Version:           19.03.5
 API version:       1.40
 Go version:        go1.12.12
 Git commit:        633a0ea
 Built:             Wed Nov 13 07:36:04 2019
 OS/Arch:           linux/arm
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.5
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.12
  Git commit:       633a0ea
  Built:            Wed Nov 13 07:30:06 2019
  OS/Arch:          linux/arm
  Experimental:     false
 containerd:
  Version:          1.2.10
  GitCommit:        b34a5c8af56e510852c35414db4c1f4fa6172339
 runc:
  Version:          1.0.0-rc8+dev
  GitCommit:        3e425f80a8c931f88e6d94a8c831b9d5aa481657
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

  • Was the software title installed freshly or updated/migrated?
    Both. Not sure what the version of the old install was.

  • Can this issue be replicated on a fresh installation of DietPi?
    Yes. I never had this problem before. Then, after not using this specific RPi for some months, I used it again today: problem. I reinstalled DietPi, still the same problem.

Problem
Most Docker containers exit with code 139. It is very difficult to find any errors or logs. Portainer Agent is the only one so far that works.
I tried:

  • images that work on other systems;
  • building those images locally;
  • other very basic images from the internet, such as hello-world.

journalctl -xe output during hello-world:

Dec 08 22:30:39 RPi_1B kernel: docker0: port 1(veth74c498a) entered blocking state
Dec 08 22:30:39 RPi_1B kernel: docker0: port 1(veth74c498a) entered disabled state
Dec 08 22:30:39 RPi_1B kernel: device veth74c498a entered promiscuous mode
Dec 08 22:30:41 RPi_1B containerd[441]: time="2019-12-08T22:30:41.350661838Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/bdb178ca834a0d2d35856f21a17c972c7b7
0e2e68ef87e1853c2683609967e65/shim.sock" debug=false pid=3782
Dec 08 22:30:43 RPi_1B kernel: eth0: renamed from veth5c866db
Dec 08 22:30:43 RPi_1B kernel: docker0: port 1(veth74c498a) entered blocking state
Dec 08 22:30:43 RPi_1B kernel: docker0: port 1(veth74c498a) entered forwarding state
Dec 08 22:30:45 RPi_1B containerd[441]: time="2019-12-08T22:30:45.450144838Z" level=info msg="shim reaped" id=bdb178ca834a0d2d35856f21a17c972c7b70e2e68ef87e1853c2683609967e65
Dec 08 22:30:45 RPi_1B dockerd[668]: time="2019-12-08T22:30:45.461778838Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Dec 08 22:30:45 RPi_1B kernel: docker0: port 1(veth74c498a) entered disabled state
Dec 08 22:30:45 RPi_1B kernel: veth5c866db: renamed from eth0
Dec 08 22:30:45 RPi_1B kernel: docker0: port 1(veth74c498a) entered disabled state
Dec 08 22:30:45 RPi_1B kernel: device veth74c498a left promiscuous mode
Dec 08 22:30:45 RPi_1B kernel: docker0: port 1(veth74c498a) entered disabled state
Dec 08 22:30:45 RPi_1B dockerd[668]: time="2019-12-08T22:30:45.987541838Z" level=warning msg="bdb178ca834a0d2d35856f21a17c972c7b70e2e68ef87e1853c2683609967e65 cleanup: failed to unmount IPC:
 umount /mnt/dietpi_userdata/docker-data/containers/bdb178ca834a0d2d35856f21a17c972c7b70e2e68ef87e1853c2683609967e65/mounts/shm, flags: 0x2: no such file or directory"

No fix:

Similar (?):

Edit: I did a third install on a different 2GB card (I don't have a bigger one available); this time I only installed Docker and ran docker run hello-world. The outcome is the same: nothing is printed as output, and docker container ls -a shows exit code 139 again.


MichaIng commented Dec 9, 2019

@SaturnusDJ
Many thanks for your report. Can you please paste the following:

journalctl -u docker
journalctl -u containerd

There should also be per-container log files in /mnt/dietpi_userdata/docker-data/containers/.
Those should show more detail than journalctl -xe does.

There was an external issue with Docker on ARMv6: #3128
It was fixed in the meantime, but I thought that before as well, and then some update broke it again. However, that was an issue at the binary/arch level, so the binary call should fail immediately and not produce any output, as in your case 🤔. Let's see if the logs give some more hints.


SaturnusDJ commented Dec 9, 2019

Reboot: 14:47.
Then date command.
Then ran docker run hello-world.
Then date command.

Output:

root@RPi_1B:~# date
Mon  9 Dec 14:48:13 GMT 2019
root@RPi_1B:~# docker run hello-world
dateroot@RPi_1B:~# date
Mon  9 Dec 14:48:30 GMT 2019

journalctl -u docker

-- Logs begin at Mon 2019-12-09 14:47:19 GMT, end at Mon 2019-12-09 14:48:28 GMT. --
Dec 09 14:47:29 RPi_1B systemd[1]: Starting Docker Application Container Engine...
Dec 09 14:47:33 RPi_1B dockerd[426]: time="2019-12-09T14:47:33.147083000Z" level=info msg="libcontainerd: started new docker-containerd process" pid=480
Dec 09 14:47:33 RPi_1B dockerd[426]: time="2019-12-09T14:47:33.152566000Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Dec 09 14:47:33 RPi_1B dockerd[426]: time="2019-12-09T14:47:33.156186000Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Dec 09 14:47:33 RPi_1B dockerd[426]: time="2019-12-09T14:47:33.233516000Z" level=info msg="ccResolverWrapper: sending new addresses to cc: [{unix:///var/run/docker/containerd/docker-containerd.sock 0  <nil>}]" module=grpc
Dec 09 14:47:33 RPi_1B dockerd[426]: time="2019-12-09T14:47:33.236665000Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Dec 09 14:47:33 RPi_1B dockerd[426]: time="2019-12-09T14:47:33.237365000Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0x14933ea0, CONNECTING" module=grpc
Dec 09 14:47:35 RPi_1B dockerd[426]: time="2019-12-09T14:47:35Z" level=info msg="starting containerd" revision=468a545b9edcd5932818eb9de8e72413e616e86e version=v1.1.2
Dec 09 14:47:35 RPi_1B dockerd[426]: time="2019-12-09T14:47:35Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1
Dec 09 14:47:35 RPi_1B dockerd[426]: time="2019-12-09T14:47:35Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." type=io.containerd.snapshotter.v1
Dec 09 14:47:35 RPi_1B dockerd[426]: time="2019-12-09T14:47:35Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /mnt/dietpi_userdata/docker-data/containerd/daemon/io.containerd.snapshotter.v1.bt
rfs must be a btrfs filesystem to be used with the btrfs snapshotter"
Dec 09 14:47:35 RPi_1B dockerd[426]: time="2019-12-09T14:47:35Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.aufs"..." type=io.containerd.snapshotter.v1
Dec 09 14:47:35 RPi_1B dockerd[426]: time="2019-12-09T14:47:35Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.aufs" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/
4.19.66+\n": exit status 1"
Dec 09 14:47:35 RPi_1B dockerd[426]: time="2019-12-09T14:47:35Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
Dec 09 14:47:35 RPi_1B dockerd[426]: time="2019-12-09T14:47:35Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
Dec 09 14:47:35 RPi_1B dockerd[426]: time="2019-12-09T14:47:35Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
Dec 09 14:47:35 RPi_1B dockerd[426]: time="2019-12-09T14:47:35Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.zfs" error="path /mnt/dietpi_userdata/docker-data/containerd/daemon/io.containerd.snapshotter.v1.zfs 
must be a zfs filesystem to be used with the zfs snapshotter"
Dec 09 14:47:35 RPi_1B dockerd[426]: time="2019-12-09T14:47:35Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
Dec 09 14:47:35 RPi_1B dockerd[426]: time="2019-12-09T14:47:35Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /mnt/dietpi_userdata/docker-data/containerd/daemon/io.containerd.snapshotter.v1.zfs must b
e a zfs filesystem to be used with the zfs snapshotter"
Dec 09 14:47:35 RPi_1B dockerd[426]: time="2019-12-09T14:47:35Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /mnt/dietpi_userdata/docker-data/containerd/daemon/io.containerd.snapshotter.v1.btrfs mu
st be a btrfs filesystem to be used with the btrfs snapshotter"
Dec 09 14:47:35 RPi_1B dockerd[426]: time="2019-12-09T14:47:35Z" level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.6
6+\n": exit status 1"
Dec 09 14:47:35 RPi_1B dockerd[426]: time="2019-12-09T14:47:35Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
Dec 09 14:47:35 RPi_1B dockerd[426]: time="2019-12-09T14:47:35Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
Dec 09 14:47:35 RPi_1B dockerd[426]: time="2019-12-09T14:47:35Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
Dec 09 14:47:35 RPi_1B dockerd[426]: time="2019-12-09T14:47:35Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
Dec 09 14:47:35 RPi_1B dockerd[426]: time="2019-12-09T14:47:35Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
Dec 09 14:47:35 RPi_1B dockerd[426]: time="2019-12-09T14:47:35Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
Dec 09 14:47:35 RPi_1B dockerd[426]: time="2019-12-09T14:47:35Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
Dec 09 14:47:35 RPi_1B dockerd[426]: time="2019-12-09T14:47:35Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
Dec 09 14:47:35 RPi_1B dockerd[426]: time="2019-12-09T14:47:35Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
Dec 09 14:47:35 RPi_1B dockerd[426]: time="2019-12-09T14:47:35Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
Dec 09 14:47:35 RPi_1B dockerd[426]: time="2019-12-09T14:47:35Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
Dec 09 14:47:35 RPi_1B dockerd[426]: time="2019-12-09T14:47:35Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
Dec 09 14:47:35 RPi_1B dockerd[426]: time="2019-12-09T14:47:35Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
Dec 09 14:47:35 RPi_1B dockerd[426]: time="2019-12-09T14:47:35Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
Dec 09 14:47:35 RPi_1B dockerd[426]: time="2019-12-09T14:47:35Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
Dec 09 14:47:35 RPi_1B dockerd[426]: time="2019-12-09T14:47:35Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
Dec 09 14:47:35 RPi_1B dockerd[426]: time="2019-12-09T14:47:35Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
Dec 09 14:47:35 RPi_1B dockerd[426]: time="2019-12-09T14:47:35Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
Dec 09 14:47:35 RPi_1B dockerd[426]: time="2019-12-09T14:47:35Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
Dec 09 14:47:35 RPi_1B dockerd[426]: time="2019-12-09T14:47:35Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
Dec 09 14:47:35 RPi_1B dockerd[426]: time="2019-12-09T14:47:35Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
Dec 09 14:47:35 RPi_1B dockerd[426]: time="2019-12-09T14:47:35Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
Dec 09 14:47:35 RPi_1B dockerd[426]: time="2019-12-09T14:47:35Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
Dec 09 14:47:35 RPi_1B dockerd[426]: time="2019-12-09T14:47:35Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
Dec 09 14:47:35 RPi_1B dockerd[426]: time="2019-12-09T14:47:35Z" level=info msg=serving... address="/var/run/docker/containerd/docker-containerd-debug.sock"
Dec 09 14:47:35 RPi_1B dockerd[426]: time="2019-12-09T14:47:35Z" level=info msg=serving... address="/var/run/docker/containerd/docker-containerd.sock"
Dec 09 14:47:35 RPi_1B dockerd[426]: time="2019-12-09T14:47:35Z" level=info msg="containerd successfully booted in 0.225519s"
Dec 09 14:47:35 RPi_1B dockerd[426]: time="2019-12-09T14:47:35.498423000Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0x14933ea0, READY" module=grpc
Dec 09 14:47:35 RPi_1B dockerd[426]: time="2019-12-09T14:47:35.721809000Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Dec 09 14:47:35 RPi_1B dockerd[426]: time="2019-12-09T14:47:35.725214000Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Dec 09 14:47:35 RPi_1B dockerd[426]: time="2019-12-09T14:47:35.729770000Z" level=info msg="ccResolverWrapper: sending new addresses to cc: [{unix:///var/run/docker/containerd/docker-containerd.sock 0  <nil>}]" module=grpc
Dec 09 14:47:35 RPi_1B dockerd[426]: time="2019-12-09T14:47:35.730558000Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Dec 09 14:47:35 RPi_1B dockerd[426]: time="2019-12-09T14:47:35.734792000Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0x149bdca0, CONNECTING" module=grpc
Dec 09 14:47:35 RPi_1B dockerd[426]: time="2019-12-09T14:47:35.745380000Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0x149bdca0, READY" module=grpc
Dec 09 14:47:36 RPi_1B dockerd[426]: time="2019-12-09T14:47:36.028719000Z" level=info msg="Graph migration to content-addressability took 0.00 seconds"
Dec 09 14:47:36 RPi_1B dockerd[426]: time="2019-12-09T14:47:36.065253000Z" level=warning msg="Your kernel does not support swap memory limit"
Dec 09 14:47:36 RPi_1B dockerd[426]: time="2019-12-09T14:47:36.069355000Z" level=warning msg="Your kernel does not support cgroup cfs period"
Dec 09 14:47:36 RPi_1B dockerd[426]: time="2019-12-09T14:47:36.072626000Z" level=warning msg="Your kernel does not support cgroup cfs quotas"
Dec 09 14:47:36 RPi_1B dockerd[426]: time="2019-12-09T14:47:36.073183000Z" level=warning msg="Your kernel does not support cgroup rt period"
Dec 09 14:47:36 RPi_1B dockerd[426]: time="2019-12-09T14:47:36.073589000Z" level=warning msg="Your kernel does not support cgroup rt runtime"
Dec 09 14:47:36 RPi_1B dockerd[426]: time="2019-12-09T14:47:36.074453000Z" level=warning msg="Unable to find cpuset cgroup in mounts"
Dec 09 14:47:36 RPi_1B dockerd[426]: time="2019-12-09T14:47:36.075554000Z" level=warning msg="mountpoint for pids not found"
Dec 09 14:47:36 RPi_1B dockerd[426]: time="2019-12-09T14:47:36.090214000Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Dec 09 14:47:36 RPi_1B dockerd[426]: time="2019-12-09T14:47:36.090619000Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Dec 09 14:47:36 RPi_1B dockerd[426]: time="2019-12-09T14:47:36.105548000Z" level=info msg="ccResolverWrapper: sending new addresses to cc: [{unix:///var/run/docker/containerd/docker-containerd.sock 0  <nil>}]" module=grpc
Dec 09 14:47:36 RPi_1B dockerd[426]: time="2019-12-09T14:47:36.106196000Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Dec 09 14:47:36 RPi_1B dockerd[426]: time="2019-12-09T14:47:36.107220000Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0x14a78a80, CONNECTING" module=grpc
Dec 09 14:47:36 RPi_1B dockerd[426]: time="2019-12-09T14:47:36.128736000Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0x14a78a80, READY" module=grpc
Dec 09 14:47:36 RPi_1B dockerd[426]: time="2019-12-09T14:47:36.129408000Z" level=info msg="Loading containers: start."
Dec 09 14:47:52 RPi_1B dockerd[426]: time="2019-12-09T14:47:52.855781874Z" level=info msg="Loading containers: done."
Dec 09 14:47:55 RPi_1B dockerd[426]: time="2019-12-09T14:47:55.326364874Z" level=info msg="Docker daemon" commit=d7080c1 graphdriver(s)=overlay2 version=18.06.3-ce
Dec 09 14:47:55 RPi_1B dockerd[426]: time="2019-12-09T14:47:55.343756874Z" level=info msg="Daemon has completed initialization"
Dec 09 14:47:55 RPi_1B dockerd[426]: time="2019-12-09T14:47:55.729716874Z" level=warning msg="Could not register builder git source: failed to find git binary: exec: \"git\": executable file not found in $PATH"
Dec 09 14:47:56 RPi_1B systemd[1]: Started Docker Application Container Engine.
Dec 09 14:47:56 RPi_1B dockerd[426]: time="2019-12-09T14:47:56.556577874Z" level=info msg="API listen on /var/run/docker.sock"
Dec 09 14:48:25 RPi_1B dockerd[426]: time="2019-12-09T14:48:25Z" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/38b9f812d326edceb04835fe3a6abeac9a613bc546624bef5bfd9e1d71ae3694/shim.sock" debug=false
 pid=986
Dec 09 14:48:28 RPi_1B dockerd[426]: time="2019-12-09T14:48:28Z" level=info msg="shim reaped" id=38b9f812d326edceb04835fe3a6abeac9a613bc546624bef5bfd9e1d71ae3694
Dec 09 14:48:28 RPi_1B dockerd[426]: time="2019-12-09T14:48:28.342277874Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"

journalctl -u containerd

-- No entries --

Docker inspect for the container:

[
    {
        "Id": "38b9f812d326edceb04835fe3a6abeac9a613bc546624bef5bfd9e1d71ae3694",
        "Created": "2019-12-09T14:48:20.791477874Z",
        "Path": "/hello",
        "Args": [],
        "State": {
            "Status": "exited",
            "Running": false,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 0,
            "ExitCode": 139,
            "Error": "",
            "StartedAt": "2019-12-09T14:48:27.537259874Z",
            "FinishedAt": "2019-12-09T14:48:27.666775874Z"
        },
        "Image": "sha256:618e43431df9635eee9cf7224aa92c8d6f74aa36cd3b2359604389ca36e79380",
        "ResolvConfPath": "/mnt/dietpi_userdata/docker-data/containers/38b9f812d326edceb04835fe3a6abeac9a613bc546624bef5bfd9e1d71ae3694/resolv.conf",
        "HostnamePath": "/mnt/dietpi_userdata/docker-data/containers/38b9f812d326edceb04835fe3a6abeac9a613bc546624bef5bfd9e1d71ae3694/hostname",
        "HostsPath": "/mnt/dietpi_userdata/docker-data/containers/38b9f812d326edceb04835fe3a6abeac9a613bc546624bef5bfd9e1d71ae3694/hosts",
        "LogPath": "",
        "Name": "/agitated_hamilton",
        "RestartCount": 0,
        "Driver": "overlay2",
        "Platform": "linux",
        "MountLabel": "",
        "ProcessLabel": "",
        "AppArmorProfile": "",
        "ExecIDs": null,
        "HostConfig": {
            "Binds": null,
            "ContainerIDFile": "",
            "LogConfig": {
                "Type": "journald",
                "Config": {}
            },
            "NetworkMode": "default",
            "PortBindings": {},
            "RestartPolicy": {
                "Name": "no",
                "MaximumRetryCount": 0
            },
            "AutoRemove": false,
            "VolumeDriver": "",
            "VolumesFrom": null,
            "CapAdd": null,
            "CapDrop": null,
            "Dns": [],
            "DnsOptions": [],
            "DnsSearch": [],
            "ExtraHosts": null,
            "GroupAdd": null,
            "IpcMode": "shareable",
            "Cgroup": "",
            "Links": null,
            "OomScoreAdj": 0,
            "PidMode": "",
            "Privileged": false,
            "PublishAllPorts": false,
            "ReadonlyRootfs": false,
            "SecurityOpt": null,
            "UTSMode": "",
            "UsernsMode": "",
            "ShmSize": 67108864,
            "Runtime": "runc",
            "ConsoleSize": [
                0,
                0
            ],
            "Isolation": "",
            "CpuShares": 0,
            "Memory": 0,
            "NanoCpus": 0,
            "CgroupParent": "",
            "BlkioWeight": 0,
            "BlkioWeightDevice": [],
            "BlkioDeviceReadBps": null,
            "BlkioDeviceWriteBps": null,
            "BlkioDeviceReadIOps": null,
            "BlkioDeviceWriteIOps": null,
            "CpuPeriod": 0,
            "CpuQuota": 0,
            "CpuRealtimePeriod": 0,
            "CpuRealtimeRuntime": 0,
            "CpusetCpus": "",
            "CpusetMems": "",
            "Devices": [],
            "DeviceCgroupRules": null,
            "DiskQuota": 0,
            "KernelMemory": 0,
            "MemoryReservation": 0,
            "MemorySwap": 0,
            "MemorySwappiness": null,
            "OomKillDisable": false,
            "PidsLimit": 0,
            "Ulimits": null,
            "CpuCount": 0,
            "CpuPercent": 0,
            "IOMaximumIOps": 0,
            "IOMaximumBandwidth": 0,
            "MaskedPaths": [
                "/proc/acpi",
                "/proc/kcore",
                "/proc/keys",
                "/proc/latency_stats",
                "/proc/timer_list",
                "/proc/timer_stats",
                "/proc/sched_debug",
                "/proc/scsi",
                "/sys/firmware"
            ],
            "ReadonlyPaths": [
                "/proc/asound",
                "/proc/bus",
                "/proc/fs",
                "/proc/irq",
                "/proc/sys",
                "/proc/sysrq-trigger"
            ]
        },
        "GraphDriver": {
            "Data": {
                "LowerDir": "/mnt/dietpi_userdata/docker-data/overlay2/8b0b69c4c3023deb1eb24b5bbd89854841567bdc8c702ad1c8ec0ce964c7d563-init/diff:/mnt/dietpi_userdata/docker-data/overlay2/29a3682c020324316168841a42f7fbc55138852ccfc59bdf596aaed05fdc3cee/diff",
                "MergedDir": "/mnt/dietpi_userdata/docker-data/overlay2/8b0b69c4c3023deb1eb24b5bbd89854841567bdc8c702ad1c8ec0ce964c7d563/merged",
                "UpperDir": "/mnt/dietpi_userdata/docker-data/overlay2/8b0b69c4c3023deb1eb24b5bbd89854841567bdc8c702ad1c8ec0ce964c7d563/diff",
                "WorkDir": "/mnt/dietpi_userdata/docker-data/overlay2/8b0b69c4c3023deb1eb24b5bbd89854841567bdc8c702ad1c8ec0ce964c7d563/work"
            },
            "Name": "overlay2"
        },
        "Mounts": [],
        "Config": {
            "Hostname": "38b9f812d326",
            "Domainname": "",
            "User": "",
            "AttachStdin": false,
            "AttachStdout": true,
            "AttachStderr": true,
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
            ],
            "Cmd": [
                "/hello"
            ],
            "ArgsEscaped": true,
            "Image": "hello-world",
            "Volumes": null,
            "WorkingDir": "",
            "Entrypoint": null,
            "OnBuild": null,
            "Labels": {}
        },
        "NetworkSettings": {
            "Bridge": "",
            "SandboxID": "ca23610c3ffa96afacfbba1ef9ac6cdc099a732bd79034bb7b774a57da78b182",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": {},
            "SandboxKey": "/var/run/docker/netns/ca23610c3ffa",
            "SecondaryIPAddresses": null,
            "SecondaryIPv6Addresses": null,
            "EndpointID": "",
            "Gateway": "",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "",
            "IPPrefixLen": 0,
            "IPv6Gateway": "",
            "MacAddress": "",
            "Networks": {
                "bridge": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": null,
                    "NetworkID": "4a0cef660cf8cac5ea56569c055605a714f34b6dfacaafe478271746b23fd558",
                    "EndpointID": "",
                    "Gateway": "",
                    "IPAddress": "",
                    "IPPrefixLen": 0,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "",
                    "DriverOpts": null
                }
            }
        }
    }
]

No logs found in /mnt/dietpi_userdata/docker-data/containers/.

Note: this was with the other Docker version I installed to test:

Client:
 Version:           18.06.3-ce
 API version:       1.38
 Go version:        go1.10.3
 Git commit:        d7080c1
 Built:             Wed Feb 20 02:42:54 2019
 OS/Arch:           linux/arm
 Experimental:      false

Server:
 Engine:
  Version:          18.06.3-ce
  API version:      1.38 (minimum version 1.12)
  Go version:       go1.10.3
  Git commit:       d7080c1
  Built:            Wed Feb 20 02:38:25 2019
  OS/Arch:          linux/arm
  Experimental:     false

No fix:


MichaIng commented Dec 9, 2019

Hmm, actually I can't see any error, just some warnings about expectedly skipped modules. Also, the status shows an exit code but no error. I am not sure what output is expected from hello-world, most likely a console print, which is missing in your case?

But did you try an actual Docker image/container that is not just for testing purposes?

EDIT: Ah sorry, you mentioned in the OP that exit code 139 is the result of the other containers as well. So we need to check what this code means.


SaturnusDJ commented Dec 9, 2019

@MichaIng
Yes. I found several explanations that differ a lot. Some link it to out-of-memory, which sounds plausible on an RPi 1B, but then the 'OOMKilled' flag should have been true, unless that is processed wrongly. Another explanation is that the program finished and exited, but should that not be exit code 0 then? And indeed, for hello-world I expect console output: https://hub.docker.com/_/hello-world/

What about the level=warning msg="mountpoint for pids not found" message?

What I saw in a test yesterday is that memory starts to fill up until free -m shows only 30 MB left. And above all, I remember with full certainty that containers similar to the one I was initially trying to run did run before on this exact Pi with this exact install. I just removed the containers and put the device in storage before taking it out a few days ago. I did not even force a manual update of anything, so it is extra strange that it does not work anymore. Does DietPi do auto-updating? If so, I suspect an auto-update did something. And that commit obviously is also in a fresh install, which does not work either.


MichaIng commented Dec 9, 2019

Docker exit code 139 seems to mean a container segmentation fault. It could be memory related, caused by a wrong architecture, or have many other reasons...

Did you take care to pull RPi or armv6hf containers/images? A common issue is that RPi1/Zero users pull regular armhf images, which are not compatible with the special armv6hf architecture of the RPi1 and Zero. Also, 30 MB of memory left is much too little, although hello-world should never require that much. However, try increasing the swapfile size in that case.
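As background for the segfault reading: shells report a process killed by signal N with exit status 128 + N, and SIGSEGV is signal 11, hence 139. A quick demonstration outside Docker:

```shell
# 128 + 11 (SIGSEGV) = 139: the status `docker container ls -a` shows
# when the container's main process crashed with a segmentation fault.
status=0
sh -c 'kill -SEGV $$' || status=$?   # child kills itself with SIGSEGV
echo "exit status: $status"          # prints: exit status: 139
```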


SaturnusDJ commented Dec 9, 2019

For the arch I let it auto-select. According to the Docker pages it should download the right arch automatically. But... hello-world does not seem to have a v6 variant.
My own images do have one, and they don't work either (the original case).
And even building from source goes wrong (it does not even complete), but for that I am using alpine:3.10, so I assume Docker is smart enough to download the right arch.

So I am trying to verify whether I got the right ARM arch, but Docker does not seem to show that, or I need a more advanced command. Comparing SHA256 hashes also seems useless?

EDIT
Okay, getting somewhere... I pulled my image by the SHA256 digest of its v6 variant and it seems to run. How come the auto-select does not work... but it did work for Portainer Agent... hmm, I am clueless about that now.

Is https://dietpi.com/phpbb/viewtopic.php?t=5681 related?


MichaIng commented Dec 9, 2019

@SaturnusDJ
Yeah, I know from other reports, as well as from the Docker repo, that Docker indeed does not detect/select the correct arch well for ARMv6 RPis. The problem is the non-standard architecture. Depending on what Docker checks:

  • uname -m returns armv6l. On Debian this would mean armel packages, since armhf packages are built for armv7l and hence are not compatible with ARMv6. But armel is wrong as well; the Debian repo is therefore simply incompatible with ARMv6 RPi models, which was the initial reason for Raspbian: basically Debian compiled for this special armv6hf architecture.
  • apt-config dump | grep 'APT::Architecture' prints armhf, which, when taken as ARMv7 (correct on Debian), is likewise wrong for the RPi1/Zero.

So the RPi1/Zero always needs special builds, which have neither a clear uname arch print nor a clear APT architecture. "armv6hf" is only wording I use; the Docker images likewise use their own naming to clarify that they are for those RPi models.

Yep, the issue you linked fits, only my statement there that "armhf always fits" is wrong 😅.
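The naming situation above can be sketched in shell. This is only an illustration of the arch-specific repository naming on Docker Hub (arm32v5/arm32v6/arm32v7/arm64v8), not an official Docker mechanism; `docker_arch_prefix` is a made-up helper:

```shell
#!/bin/sh
# Illustrative mapping from `uname -m` output to the arch-specific
# Docker Hub repository prefixes discussed in this thread.
docker_arch_prefix() {
    case "$1" in
        armv6l)  echo 'arm32v6' ;;  # RPi1/Zero: needs the special ARMv6hf builds
        armv7l)  echo 'arm32v7' ;;  # regular armhf boards
        aarch64) echo 'arm64v8' ;;
        x86_64)  echo 'amd64'   ;;
        *)       echo "no known prefix for: $1" >&2; return 1 ;;
    esac
}

docker_arch_prefix armv6l   # prints: arm32v6
docker_arch_prefix armv7l   # prints: arm32v7
```

On a live system one would call it as `docker_arch_prefix "$(uname -m)"` before deciding which image repository to pull from.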

@SaturnusDJ

Thank you very much for the explanation!
So in the end it had nothing to do with DietPi; it was a lack of knowledge on my side. Sorry for that.

To get Alpine on an RPi 1B, you could run docker pull arm32v6/alpine:3.10.
In the case of Debian it would be a v5: docker pull arm32v5/debian.

I guess in the case of Portainer they made an ARM image that runs on all versions. https://hub.docker.com/r/portainer/agent/tags

And I just checked the software I used months ago on that RPi 1B, and... no surprise... it is general ARM. That's why it ran.

So there are arm32v5, arm32v6, arm32v7, arm64v8 and just 'arm'.

I opened this topic about not being able to pull by arch + username: https://forums.docker.com/t/how-to-docker-pull-architecture-and-username/85629.

Off-topic:
Do you have any idea how to build (with Docker) a 'general arm' image that apparently works on all (at least 32-bit) ARM?
I did multi-arch builds before (https://digilution.io/posts/multiarch-docker-builds/), but that results in a separate image for each ARM version.
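For background on such "general arm" tags: a multi-arch tag is backed by a manifest list in which each entry records an os/architecture pair and, for 32-bit ARM, a variant (v5/v6/v7); the client is supposed to pull the entry matching its platform. A sketch of the shape, with placeholder digests (not real ones):

```shell
# Abridged sketch of a Docker manifest list / OCI index; the digests are
# placeholders. The "variant" field is what distinguishes ARMv6 from ARMv7.
cat > manifest-list.json <<'EOF'
{"manifests": [
  {"digest": "sha256:PLACEHOLDER_V6", "platform": {"os": "linux", "architecture": "arm", "variant": "v6"}},
  {"digest": "sha256:PLACEHOLDER_V7", "platform": {"os": "linux", "architecture": "arm", "variant": "v7"}}
]}
EOF
# Crude extraction of the v6 digest (a real script would use jq):
grep '"variant": "v6"' manifest-list.json | grep -o 'sha256:[^"]*'   # prints: sha256:PLACEHOLDER_V6
```

Pulling by such a digest (docker pull image@sha256:...) is exactly the workaround used above when auto-selection picked the wrong variant.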

@MichaIng

@SaturnusDJ
The "general arm" image, if it shall work on all RPi models and on all other ARMv7 SBCs as well, should be arm32v6, since ARMv7 is backwards compatible. But I would not recommend building or pulling those everywhere, since you would lose the ARMv7 benefits on those devices. So it makes sense to use arm32v6 only on RPi models where uname -m prints armv6l, else arm32v7.

Aside from that, I do not use Docker myself, hence I am no expert in anything that goes beyond general architecture binary compatibility, especially the Docker-specific naming of those.

@SaturnusDJ

Thanks @MichaIng, I will stick to pulling and building specifics then. 👌
