Docker hangs after docker stop on a host with high load #32809

Closed
eugene-dounar opened this issue Apr 25, 2017 · 14 comments

@eugene-dounar

Description

The Docker daemon gets stuck after Docker Enforcer stops a container. It seems to be a race condition occurring under heavy load (load average ~100 on 64 CPUs).

Steps to reproduce the issue:
Not easily reproducible, but the general steps are (a rough sketch follows the list):

  1. Run a few hundred containers and make sure the load average is much higher than the number of CPUs.
  2. Run docker kill <cid>
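
A rough reproduction sketch in Go (hypothetical: it assumes the docker CLI is on PATH and a busybox image is available; the container count and workload need tuning so the load average far exceeds the CPU count):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Step 1: start a few hundred CPU-burning containers so the load
	// average climbs well above the number of CPUs.
	var ids []string
	for i := 0; i < 300; i++ {
		out, err := exec.Command("docker", "run", "-d", "busybox",
			"sh", "-c", "while true; do :; done").Output()
		if err != nil {
			fmt.Println("run failed:", err)
			continue
		}
		ids = append(ids, strings.TrimSpace(string(out)))
	}

	// Step 2: kill one container, then check whether `docker ps` still responds.
	if len(ids) > 0 {
		_ = exec.Command("docker", "kill", ids[0]).Run()
	}
	if err := exec.Command("docker", "ps").Run(); err != nil {
		fmt.Println("docker ps failed:", err)
	}
}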

Describe the results you received:
docker ps hangs

Describe the results you expected:
docker ps should show running containers

Additional information you deem important (e.g. issue happens only occasionally):

$ sudo cat /var/log/syslog | grep 'b8a3b4b411e30c6b1559236c51ad0e1a5b073bf3e9237838c8edc89dedd8d14f' | grep '^Apr 25 09:5'
Apr 25 09:50:56 localhost dockerd[20755]: time="2017-04-25T09:50:56.531568496Z" level=debug msg="Calling GET /containers/b8a3b4b411e30c6b1559236c51ad0e1a5b073bf3e9237838c8edc89dedd8d14f/json"
Apr 25 09:51:21 localhost dockerd[20755]: time="2017-04-25T09:51:21.812724967Z" level=debug msg="Calling GET /v1.24/containers/b8a3b4b411e30c6b1559236c51ad0e1a5b073bf3e9237838c8edc89dedd8d14f/json"
Apr 25 09:51:21 localhost dockerd[20755]: time="2017-04-25T09:51:21.820214164Z" level=debug msg="Calling POST /v1.24/containers/b8a3b4b411e30c6b1559236c51ad0e1a5b073bf3e9237838c8edc89dedd8d14f/stop?t=10"
Apr 25 09:51:21 localhost dockerd[20755]: time="2017-04-25T09:51:21.820370378Z" level=debug msg="Sending kill signal 15 to container b8a3b4b411e30c6b1559236c51ad0e1a5b073bf3e9237838c8edc89dedd8d14f"
Apr 25 09:51:25 localhost dockerd[20755]: time="2017-04-25T09:51:25.191458544Z" level=debug msg="containerd: process exited" id=b8a3b4b411e30c6b1559236c51ad0e1a5b073bf3e9237838c8edc89dedd8d14f pid=init status=0 systemPid=82493
Apr 25 09:51:25 localhost dockerd[20755]: time="2017-04-25T09:51:25.224382229Z" level=debug msg="libcontainerd: received containerd event: &types.Event{Type:\"exit\", Id:\"b8a3b4b411e30c6b1559236c51ad0e1a5b073bf3e9237838c8edc89dedd8d14f\", Status:0x0, Pid:\"init\", Timestamp:(*timestamp.Timestamp)(0xc4264d4b10)}"
Apr 25 09:51:31 localhost dockerd[20755]: time="2017-04-25T09:51:31.821848679Z" level=info msg="Container b8a3b4b411e30c6b1559236c51ad0e1a5b073bf3e9237838c8edc89dedd8d14f failed to exit within 10 seconds of signal 15 - using the force"
Apr 25 09:58:41 localhost dockerd[130013]: time="2017-04-25T09:58:41.310847513Z" level=debug msg="Loaded container b8a3b4b411e30c6b1559236c51ad0e1a5b073bf3e9237838c8edc89dedd8d14f"
Apr 25 09:58:47 localhost dockerd[130013]: time="2017-04-25T09:58:47.321166168Z" level=warning msg="libcontainerd: client is out of sync, restore was called on a fully synced container (b8a3b4b411e30c6b1559236c51ad0e1a5b073bf3e9237838c8edc89dedd8d14f)."
Apr 25 09:58:47 localhost dockerd[130013]: time="2017-04-25T09:58:47.321831432Z" level=debug msg="libcontainerd: received past event &types.Event{Type:\"exit\", Id:\"b8a3b4b411e30c6b1559236c51ad0e1a5b073bf3e9237838c8edc89dedd8d14f\", Status:0x0, Pid:\"init\", Timestamp:(*timestamp.Timestamp)(0xc4205e42f0)}"
Apr 25 09:58:47 localhost dockerd[130013]: time="2017-04-25T09:58:47.321869014Z" level=warning msg="libcontainerd: failed to retrieve container b8a3b4b411e30c6b1559236c51ad0e1a5b073bf3e9237838c8edc89dedd8d14f state: rpc error: code = 2 desc = containerd: container not found"
Apr 25 09:58:56 localhost dockerd[12166]: time="2017-04-25T09:58:56.980536101Z" level=debug msg="Loaded container b8a3b4b411e30c6b1559236c51ad0e1a5b073bf3e9237838c8edc89dedd8d14f"
Apr 25 09:59:15 localhost dockerd[27129]: time="2017-04-25T09:59:15.433257714Z" level=debug msg="Loaded container b8a3b4b411e30c6b1559236c51ad0e1a5b073bf3e9237838c8edc89dedd8d14f"
Apr 25 09:59:31 localhost dockerd[42687]: time="2017-04-25T09:59:31.730505609Z" level=debug msg="Loaded container b8a3b4b411e30c6b1559236c51ad0e1a5b073bf3e9237838c8edc89dedd8d14f"
Apr 25 09:59:50 localhost dockerd[61500]: time="2017-04-25T09:59:50.702414056Z" level=debug msg="Loaded container b8a3b4b411e30c6b1559236c51ad0e1a5b073bf3e9237838c8edc89dedd8d14f"

Note that containerd reports the process exited at 09:51:25, but dockerd reports six seconds later, at 09:51:31, that the container failed to exit. After that, Docker becomes unresponsive.

Output of docker version:

$ sudo docker version
Client:
 Version:      17.03.1-ee-3
 API version:  1.27
 Go version:   go1.7.5
 Git commit:   3fcee33
 Built:        Thu Mar 30 20:06:11 2017
 OS/Arch:      linux/amd64

Server:
 Version:      17.03.1-ee-3
 API version:  1.27 (minimum version 1.12)
 Go version:   go1.7.5
 Git commit:   3fcee33
 Built:        Thu Mar 30 20:06:11 2017
 OS/Arch:      linux/amd64
 Experimental: false

Output of docker info:

$ sudo docker info
Containers: 405
 Running: 329
 Paused: 0
 Stopped: 76
Images: 1886
Server Version: 17.03.1-ee-3
Storage Driver: aufs
 Root Dir: /opt/xxx/docker/aufs
 Backing Filesystem: extfs
 Dirs: 4530
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins: 
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 4ab9917febca54791c5f071a9d1f404867857fcc
runc version: 54296cf40ad8143b62dbcaa1d90e520a2136ddfe
init version: 949e6fa
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.4.0-72-generic
Operating System: Ubuntu 16.04.2 LTS
OSType: linux
Architecture: x86_64
CPUs: 64
Total Memory: 960.7 GiB
Name: ip-xx-xx-xx-xx
ID: UGZS:UFD3:GB4C:W5MX:JU2L:K7PH:6ZWS:4GPM:27Q5:UNNN:X3DC:YDT7
Docker Root Dir: /opt/io1/docker
Debug Mode (client): false
Debug Mode (server): true
 File Descriptors: 2970
 Goroutines: 1953
 System Time: 2017-04-25T11:03:47.542873414Z
 EventsListeners: 1
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: true

Additional environment details (AWS, VirtualBox, physical, etc.):

$ uptime 
 <skipped>,  load average: 126.67, 127.43, 107.61

AWS: x1.16xlarge (dedicated host)

@cpuguy83
Member

Can you send SIGUSR1 to the hung daemon and collect the goroutine stack dump from the daemon logs?

@eugene-dounar
Author

We've collected a number of stack dumps. There seems to be a pattern in all of them:

  • Goroutines serving docker ps are stuck on the lock of one particular container. Those docker ps calls come from our monitoring system and from us.
  • A single goroutine serving docker kill is stuck on the same container's lock.

stacks.tar.gz
daemon-data-*.log files are also available

@cpuguy83
Member

cpuguy83 commented May 1, 2017

In one dump, it looks like the daemon is stuck syncing container state to disk:

goroutine 73750 [runnable]:
syscall.Syscall(0x4a, 0x6, 0x0, 0x0, 0x0, 0x0, 0x0)
	/usr/local/go/src/syscall/asm_linux_amd64.s:18 +0x5
syscall.Fsync(0x6, 0xc42ba11b58, 0xc42ba11b60)
	/usr/local/go/src/syscall/zsyscall_linux_amd64.go:492 +0x4a
os.(*File).Sync(0xc424f4d420, 0x1a15940, 0xc42ba11bb0)
	/usr/local/go/src/os/file_posix.go:121 +0x3e
github.com/docker/docker/pkg/ioutils.(*atomicFileWriter).Close(0xc429ccb290, 0x0, 0x0)
	/usr/src/docker/.gopath/src/github.com/docker/docker/pkg/ioutils/fswriters.go:68 +0x81
github.com/docker/docker/container.(*Container).ToDisk(0xc4201ca400, 0x0, 0x0)
	/usr/src/docker/.gopath/src/github.com/docker/docker/container/container.go:168 +0x1dc
github.com/docker/docker/daemon.(*Daemon).StateChanged(0xc4201ca200, 0xc42ca34a40, 0x40, 0xc426948c20, 0x4, 0x100000000, 0x0, 0x0, 0x0, 0x0, ...)
	/usr/src/docker/.gopath/src/github.com/docker/docker/daemon/monitor.go:80 +0x5c8
github.com/docker/docker/libcontainerd.(*container).handleEvent.func1()
	/usr/src/docker/.gopath/src/github.com/docker/docker/libcontainerd/container_unix.go:217 +0x82
github.com/docker/docker/libcontainerd.(*queue).append.func1(0xc420213d01, 0xc42c56c5a0, 0xc42a2654a0, 0xc42c4a1320)
	/usr/src/docker/.gopath/src/github.com/docker/docker/libcontainerd/queue_unix.go:28 +0x30
created by github.com/docker/docker/libcontainerd.(*queue).append
	/usr/src/docker/.gopath/src/github.com/docker/docker/libcontainerd/queue_unix.go:30 +0x170
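
For context, the ToDisk path above appears to persist container state with a write-to-temp-file, fsync, then rename pattern (per the atomicFileWriter frames), and the Sync call is where it stalls under heavy I/O load. A minimal sketch of that general pattern, with illustrative names only (not Docker's actual ioutils code):

package main

import (
	"os"
	"path/filepath"
)

// atomicWriteFile writes data to a temporary file, fsyncs it, and then
// renames it over the target path. The Sync call corresponds to the step
// where the goroutine in the dump above is blocked.
func atomicWriteFile(path string, data []byte, perm os.FileMode) error {
	tmp, err := os.CreateTemp(filepath.Dir(path), ".tmp-")
	if err != nil {
		return err
	}
	defer os.Remove(tmp.Name()) // best-effort cleanup; harmless after a successful rename

	if _, err := tmp.Write(data); err != nil {
		tmp.Close()
		return err
	}
	// Sync blocks until the kernel has flushed the file to disk; on a
	// heavily loaded host this can take a very long time.
	if err := tmp.Sync(); err != nil {
		tmp.Close()
		return err
	}
	if err := tmp.Close(); err != nil {
		return err
	}
	if err := os.Chmod(tmp.Name(), perm); err != nil {
		return err
	}
	return os.Rename(tmp.Name(), path)
}

func main() {
	// Illustrative usage: persist a small JSON blob the way a container
	// state file might be persisted.
	if err := atomicWriteFile("state.json", []byte(`{"running":false}`), 0o600); err != nil {
		panic(err)
	}
}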

On several others it's waiting on IO streams to exit (could be a bug):

goroutine 68248 [semacquire, 2 minutes]:
sync.runtime_Semacquire(0xc4212dda0c)
	/usr/local/go/src/runtime/sema.go:47 +0x30
sync.(*WaitGroup).Wait(0xc4212dda00)
	/usr/local/go/src/sync/waitgroup.go:131 +0x97
github.com/docker/docker/daemon.(*Daemon).StateChanged(0xc420455600, 0xc429287e00, 0x40, 0xc426706388, 0x4, 0x8900000000, 0x0, 0x0, 0x0, 0x0, ...)
	/usr/src/docker/.gopath/src/github.com/docker/docker/daemon/monitor.go:42 +0x2c6
github.com/docker/docker/libcontainerd.(*container).handleEvent.func1()
	/usr/src/docker/.gopath/src/github.com/docker/docker/libcontainerd/container_unix.go:217 +0x82
github.com/docker/docker/libcontainerd.(*queue).append.func1(0xc420fcfd00, 0x0, 0xc42d43ba40, 0xc42277f380)
	/usr/src/docker/.gopath/src/github.com/docker/docker/libcontainerd/queue_unix.go:28 +0x30
created by github.com/docker/docker/libcontainerd.(*queue).append
	/usr/src/docker/.gopath/src/github.com/docker/docker/libcontainerd/queue_unix.go:30 +0x170

So it seems like we aren't getting an EOF from the FIFO from containerd.
Ping @mlaventure, does this look familiar for 17.03.x?
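
To illustrate why that wait never finishes: a copier reading from a pipe or FIFO only returns once it sees EOF, which only happens after every write end has been closed. A minimal, self-contained sketch of that behaviour (using an anonymous pipe rather than containerd's FIFOs):

package main

import (
	"fmt"
	"io"
	"os"
	"sync"
	"time"
)

func main() {
	r, w, _ := os.Pipe()

	var wg sync.WaitGroup
	wg.Add(1)
	go func() {
		defer wg.Done()
		// io.Copy only returns once the read side sees EOF, i.e. once
		// every write end of the pipe/FIFO has been closed.
		n, err := io.Copy(io.Discard, r)
		fmt.Println("copy finished:", n, err)
	}()

	// Simulate the container's stdio: as long as the write end stays
	// open, the copier above never exits and wg.Wait() blocks.
	done := make(chan struct{})
	go func() {
		wg.Wait()
		close(done)
	}()

	select {
	case <-done:
		fmt.Println("streams drained")
	case <-time.After(2 * time.Second):
		fmt.Println("still waiting: no EOF because a writer is still open")
	}

	w.Close() // closing the last writer delivers EOF and unblocks the copier
	wg.Wait()
}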


@eugene-dounar
Author

Hi @cpuguy83, @mlaventure!

Here's the containerd stack for the last dockerd stack dump (goroutine-stacks-2017-04-29T035321Z.log):
containerd_201704290353.txt

Should I report this as a separate issue in the containerd repo?

@mlaventure
Contributor

@eugene-dounar it looks like you have several dockerd daemons running at the same time:

Apr 25 09:58:47 localhost dockerd[130013]: time="2017-04-25T09:58:47.321869014Z" level=warning msg="libcontainerd: failed to retrieve container b8a3b4b411e30c6b1559236c51ad0e1a5b073bf3e9237838c8edc89dedd8d14f state: rpc error: code = 2 desc = containerd: container not found"
Apr 25 09:58:56 localhost dockerd[12166]: time="2017-04-25T09:58:56.980536101Z" level=debug msg="Loaded container b8a3b4b411e30c6b1559236c51ad0e1a5b073bf3e9237838c8edc89dedd8d14f"
Apr 25 09:59:15 localhost dockerd[27129]: time="2017-04-25T09:59:15.433257714Z" level=debug msg="Loaded container b8a3b4b411e30c6b1559236c51ad0e1a5b073bf3e9237838c8edc89dedd8d14f"
Apr 25 09:59:31 localhost dockerd[42687]: time="2017-04-25T09:59:31.730505609Z" level=debug msg="Loaded container b8a3b4b411e30c6b1559236c51ad0e1a5b073bf3e9237838c8edc89dedd8d14f"
Apr 25 09:59:50 localhost dockerd[61500]: time="2017-04-25T09:59:50.702414056Z" level=debug msg="Loaded container b8a3b4b411e30c6b1559236c51ad0e1a5b073bf3e9237838c8edc89dedd8d14f"

Or the dockerd binary keeps crashing and being restarted, but that should show up somewhere in the logs (maybe in the ones from systemd directly?).

@eugene-dounar
Author

eugene-dounar commented May 17, 2017

@mlaventure I checked the logs once again — the daemon failed to start multiple times due to #32808.

$ zgrep '^Apr 25 09:5' syslog.gz | grep -E '(Starting Docker|panic)'

Apr 25 09:58:38 localhost systemd[1]: Starting Docker Socket for the API.
Apr 25 09:58:38 localhost systemd[1]: Starting Docker Application Container Engine...
Apr 25 09:58:49 localhost dockerd[130013]: panic: runtime error: invalid memory address or nil pointer dereference
Apr 25 09:58:49 localhost dockerd[130013]: panic(0x16dd280, 0xc42000c070)
Apr 25 09:58:49 localhost dockerd[130013]: #011/usr/local/go/src/runtime/panic.go:500 +0x1a1
Apr 25 09:58:54 localhost systemd[1]: Starting Docker Socket for the API.
Apr 25 09:58:54 localhost systemd[1]: Starting Docker Application Container Engine...
Apr 25 09:59:07 localhost dockerd[12166]: panic: runtime error: invalid memory address or nil pointer dereference
Apr 25 09:59:07 localhost dockerd[12166]: panic(0x16dd280, 0xc42000c080)
Apr 25 09:59:07 localhost dockerd[12166]: #011/usr/local/go/src/runtime/panic.go:500 +0x1a1
Apr 25 09:59:12 localhost systemd[1]: Starting Docker Socket for the API.
Apr 25 09:59:12 localhost systemd[1]: Starting Docker Application Container Engine...
Apr 25 09:59:23 localhost dockerd[27129]: panic: runtime error: invalid memory address or nil pointer dereference
Apr 25 09:59:23 localhost dockerd[27129]: panic(0x16dd280, 0xc42000c080)
Apr 25 09:59:23 localhost dockerd[27129]: #011/usr/local/go/src/runtime/panic.go:500 +0x1a1
Apr 25 09:59:28 localhost systemd[1]: Starting Docker Socket for the API.
Apr 25 09:59:29 localhost systemd[1]: Starting Docker Application Container Engine...
Apr 25 09:59:42 localhost dockerd[42687]: panic: runtime error: invalid memory address or nil pointer dereference
Apr 25 09:59:42 localhost dockerd[42687]: panic(0x16dd280, 0xc42000c080)
Apr 25 09:59:42 localhost dockerd[42687]: #011/usr/local/go/src/runtime/panic.go:500 +0x1a1
Apr 25 09:59:47 localhost systemd[1]: Starting Docker Socket for the API.
Apr 25 09:59:48 localhost systemd[1]: Starting Docker Application Container Engine...

So there is no indication of multiple Docker daemons running at the same time.

@mlaventure
Contributor

mlaventure commented May 25, 2017

@eugene-dounar your log indicates that your daemon is panicking, but unfortunately the rest of the stack trace is missing. Is it possible to get the rest of it?

@thaJeztah
Member

ping @eugene-dounar ^^

@eugene-dounar
Author

@mlaventure sorry, I didn't quite get it. Which stack trace is missing?

The panic was reported as a separate bug, #32808; I don't think it's directly related.
What happened is as follows: this bug hung the daemon, and the subsequent restarts then panicked because of #32808. But this particular bug itself does not make the Docker daemon panic.

@eugene-dounar
Author

eugene-dounar commented Sep 4, 2017

Hi all,
We are still seeing the Docker daemon get stuck when removing containers, and I have some additional related information. Our current version is 17.03.2-ee-5.

Here is the goroutine dump from the latest incident: goroutine-stacks-2017-09-04T091905Z.log.gz
A large number of goroutines are stuck in reducePsContainers; those are all of the stuck docker ps calls. There's also a single goroutine stuck in ContainerRm->cleanupContainer->Kill->killProcess->GetPID. All of those stacks have the 0xc420360600 pointer somewhere in their arguments. According to the daemon-data.log file, this pointer refers to the Container struct for the container with ID "07b52a28065413455ed5dd9819971d39d364d1e25bec7a71ca688b29e557f150".

Then I grepped the syslog entries for this container. Note that Docker stopped responding to docker ps right after 08:50 UTC.

Sep  4 08:50:45 docker-linux-5-dh dockerd[93817]: time="2017-09-04T08:50:45.242878083Z" level=debug msg="Calling DELETE /v1.27/containers/XXXXX?force=1"
Sep  4 08:50:45 docker-linux-5-dh dockerd[93817]: time="2017-09-04T08:50:45.242945800Z" level=debug msg="Sending kill signal 9 to container 07b52a28065413455ed5dd9819971d39d364d1e25bec7a71ca688b29e557f150"
Sep  4 08:50:45 docker-linux-5-dh dockerd[93817]: time="2017-09-04T08:50:45.245758768Z" level=debug msg="containerd: process exited" id=07b52a28065413455ed5dd9819971d39d364d1e25bec7a71ca688b29e557f150 pid=490fc11823d7d2723162429900bb92674c78dd1b9c44713f9811891466993575 status=137 systemPid=17744
Sep  4 08:50:45 docker-linux-5-dh dockerd[93817]: time="2017-09-04T08:50:45.247538156Z" level=debug msg="libcontainerd: received containerd event: &types.Event{Type:\"exit\", Id:\"07b52a28065413455ed5dd9819971d39d364d1e25bec7a71ca688b29e557f150\", Status:0x89, Pid:\"490fc11823d7d2723162429900bb92674c78dd1b9c44713f9811891466993575\", Timestamp:(*timestamp.Timestamp)(0xc423342bf0)}"
Sep  4 08:50:45 docker-linux-5-dh dockerd[93817]: time="2017-09-04T08:50:45.3376167Z" level=debug msg="containerd: process exited" id=07b52a28065413455ed5dd9819971d39d364d1e25bec7a71ca688b29e557f150 pid=init status=137 systemPid=112097
Sep  4 08:50:45 docker-linux-5-dh dockerd[93817]: time="2017-09-04T08:50:45.363652028Z" level=debug msg="libcontainerd: received containerd event: &types.Event{Type:\"exit\", Id:\"07b52a28065413455ed5dd9819971d39d364d1e25bec7a71ca688b29e557f150\", Status:0x89, Pid:\"init\", Timestamp:(*timestamp.Timestamp)(0xc4271da060)}"

It appears that reducePsContainer gets stuck waiting for the lock on this particular container, which is being removed. The removal itself is also waiting for the lock, in the GetPID call.

Then I tried to find the goroutine that actually holds the lock. After categorizing all of them, I ended up with this one:

goroutine 22640669 [semacquire, 28 minutes]:
sync.runtime_Semacquire(0xc42061710c)
        /usr/local/go/src/runtime/sema.go:47 +0x30
sync.(*WaitGroup).Wait(0xc420617100)
        /usr/local/go/src/sync/waitgroup.go:131 +0x97
github.com/docker/docker/daemon.(*Daemon).StateChanged(0xc42041ce00, 0xc426092400, 0x40, 0xc4271da000, 0x4, 0x8900000000, 0x0, 0x0, 0x0, 0x0, ...)
        /usr/src/docker/.gopath/src/github.com/docker/docker/daemon/monitor.go:42 +0x2c6
github.com/docker/docker/libcontainerd.(*container).handleEvent.func1()
        /usr/src/docker/.gopath/src/github.com/docker/docker/libcontainerd/container_unix.go:217 +0x82
github.com/docker/docker/libcontainerd.(*queue).append.func1(0xc423d8bd01, 0xc4224d4e40, 0xc4247ec500, 0xc425b1eae0)
        /usr/src/docker/.gopath/src/github.com/docker/docker/libcontainerd/queue_unix.go:28 +0x30
created by github.com/docker/docker/libcontainerd.(*queue).append
        /usr/src/docker/.gopath/src/github.com/docker/docker/libcontainerd/queue_unix.go:30 +0x170

Judging by monitor.go lines 41-43, here we're waiting for the streams to close while holding the container lock:

		c.Lock()
		c.StreamConfig.Wait()
		c.Reset(false)
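
A minimal sketch of why this blocks everything else (illustrative types, not Docker's actual ones): the state-change handler takes the container lock and then waits on the stream WaitGroup, while any docker ps or docker kill style operation needs that same lock:

package main

import (
	"fmt"
	"sync"
	"time"
)

// container mimics the relevant parts of the daemon's container type:
// a mutex guarding its state and a WaitGroup tracking stdio copiers.
type container struct {
	mu      sync.Mutex
	streams sync.WaitGroup
}

func main() {
	c := &container{}
	c.streams.Add(1) // a stdio copier that never finishes (no EOF from the FIFO)

	// The StateChanged path: lock the container, then wait for the streams.
	go func() {
		c.mu.Lock()
		defer c.mu.Unlock()
		c.streams.Wait() // blocks forever while holding the lock
	}()

	time.Sleep(100 * time.Millisecond)

	// Any "docker ps" / "docker kill" style operation now hangs here.
	done := make(chan struct{})
	go func() {
		c.mu.Lock()
		c.mu.Unlock()
		close(done)
	}()

	select {
	case <-done:
		fmt.Println("got the lock")
	case <-time.After(time.Second):
		fmt.Println("stuck: lock is held while waiting on streams")
	}
}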

PS: I guess this is the same thing that @cpuguy83 mentioned in a comment above.

@eugene-dounar
Author

We had another similar incident, and I collected lsof -p <dockerd_pid>:

dockerd 86836 root   37u     FIFO               0,19         0t0       3202 /run/docker/libcontainerd/9eaeed2925278415764391f5655caf5d465a1c62762ced3f989e4ec193cb0f71/init-stderr (deleted)
dockerd 86836 root   39w      REG              252,1         181   43263209 /srv/docker_root/containers/9eaeed2925278415764391f5655caf5d465a1c62762ced3f989e4ec193cb0f71/9eaeed2925278415764391f5655caf5d465a1c62762ced3f989e4ec193cb0f71-json.log 

By that time, /run/docker/libcontainerd/<CID> and /run/docker/libcontainerd/containerd/<CID> had already been removed.

@genuss

genuss commented Sep 18, 2017

We had the same issue. It reproduced very reliably on one of our servers running Debian stretch, kernel 4.9.0-3-amd64, Docker 17.06.2-ce. Downgrading to Debian jessie with kernel 3.16.0-4-amd64 solved the issue completely.
I suppose there might be something related to kernel changes there.

@mswain

mswain commented Jan 9, 2018

We're seeing a similar issue. We've tested Docker 1.13.1, 17.03.1, 17.06.0, and 17.09.0, and the symptoms are the same. Here is the info from 17.03.1:

Stack trace:
https://gist.github.com/mswain/ccdb1dc24b11164310f43a14e820376e

Kernel:
Linux ip-X-X-X-X.ec2.internal 4.4.0-1044-aws #53-Ubuntu SMP Mon Dec 11 13:49:57 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

Docker Info:

Containers: 71
 Running: 62
 Paused: 0
 Stopped: 9
Images: 321
Server Version: 17.03.1-ce
Storage Driver: overlay
 Backing Filesystem: extfs
 Supports d_type: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 4ab9917febca54791c5f071a9d1f404867857fcc
runc version: 54296cf40ad8143b62dbcaa1d90e520a2136ddfe
init version: 949e6fa
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.4.0-1044-aws
Operating System: Ubuntu 16.04.3 LTS
OSType: linux
Architecture: x86_64
CPUs: 32
Total Memory: 240.1 GiB
Name: ip-X-X-X-X.ec2.internal
ID: A77P:EZSJ:WB5L:ZOGO:4CMB:4MPH:HZVA:GR72:BA25:4ZLI:UDLY:YFO5
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: true

Docker logs around issue:
https://gist.github.com/mswain/1f58a17d317e8ad100bb46c225132086

@sam-thibault
Contributor

This is an old issue. I will close as stale. If you see this error on 23.0 or newer, please open a new issue.

sam-thibault closed this as not planned (won't fix, can't repro, duplicate, stale) on Apr 21, 2023