Docker is incredibly slow to start many containers in parallel even on a many-core machine #42096

Open
GunshipPenguin opened this issue Feb 28, 2021 · 20 comments

@GunshipPenguin

Description

On a many-core machine (Ryzen 9 3950X in my case), starting 32 docker containers simultaneously results in the time needed to start any one of them increasing 10-20 fold.

Steps to reproduce the issue:

1. On a multi-core machine, start a bunch of docker containers running /bin/true (which does nothing but exit with status 0, so the time needed to run the command should be negligible) in parallel. This can be done with GNU Parallel as follows (an equivalent plain shell loop is sketched after these steps):

   parallel -j32 -N0 docker run --rm alpine /bin/true ::: {1..100}

   In this case I'm starting 32 at a time with -j32, as I'm on a 32-thread machine.

2. Immediately thereafter, in another terminal, run /bin/true in another docker container and measure the time it takes to complete:

   time docker run --rm alpine /bin/true

3. After all containers from step 1 have exited, run the command from step 2 again and compare the times.
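
If GNU Parallel isn't available, a plain shell loop gives roughly the same effect (a sketch, not part of the original report):

# Launch 32 containers in the background and wait for them all to exit.
# (Unlike the Parallel invocation above, this starts one batch of 32 rather
# than keeping 32 in flight until 100 have run.)
for i in $(seq 1 32); do
  docker run --rm alpine /bin/true &
done
wait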

In my case, when running /bin/true with 32 other containers starting at the same time, I got:

real    0m7.399s
user    0m0.024s
sys     0m0.019s

When running without the other containers starting at the same time, I got:

real    0m0.603s
user    0m0.030s
sys     0m0.015s

Describe the results you received:

Time needed in step 2 was vastly greater than time needed in step 3.

Describe the results you expected:

Times in steps 2 and 3 are comparable, given that the container creation processes should be scheduled on different cores.

Additional information you deem important (e.g. issue happens only occasionally):

Thinking dockerd may be serializing around a lock or something of the sort, I generated two off-CPU flame graphs of the dockerd process using tools from Brendan Gregg. They can be found zipped here: flamegraphs.zip. offcpu-dockerd-parallel.svg covers docker being used to start 32 containers in parallel, and offcpu-dockerd-sequential.svg covers docker being used to start 32 containers sequentially. As can be seen in the parallel case, dockerd spends a very large amount of time blocked in the open syscall (see the right-hand side of the graph), specifically while waiting for the other end of a FIFO to be opened. This is not present in the sequential case.
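
For anyone wanting to reproduce the profiling, off-CPU flame graphs like these can be generated roughly as follows; this is a sketch assuming the bcc tools and Brendan Gregg's FlameGraph repository are installed (the exact commands used above aren't stated):

# Sample dockerd's off-CPU (blocked) stacks for 30 seconds in folded format,
# then render them as an SVG flame graph. On Debian the bcc tool ships as
# offcputime-bpfcc.
sudo offcputime-bpfcc -df -p "$(pgrep -nx dockerd)" 30 > dockerd.offcpu.stacks
./FlameGraph/flamegraph.pl --color=io --countname=us \
    < dockerd.offcpu.stacks > offcpu-dockerd.svg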

Running lsof on dockerd during the parallel test, I consistently got output resembling the following:

$ sudo lsof -p `pgrep -nx dockerd` | grep FIFO
dockerd 10755 root   26u     FIFO               0,23       0t0      77979 /run/docker/containerd/59335fc9195f9e4a23ae8ad6ac3b5539ba4f784053bc5dc2305731f405ad18b2/init-stdout                                      
dockerd 10755 root   31u     FIFO               0,23       0t0      78015 /run/docker/containerd/e4869e1ea909edf9e4fe29a148014ac0d24b437dfe8e64b0b0aac74665f64bb4/init-stdout                                      
dockerd 10755 root   49u     FIFO               0,23       0t0      77980 /run/docker/containerd/59335fc9195f9e4a23ae8ad6ac3b5539ba4f784053bc5dc2305731f405ad18b2/init-stderr                                      
dockerd 10755 root  114u     FIFO               0,23       0t0      78003 /run/docker/containerd/807fc9cef8a4834e3a6f671837e54c3bb3afad8ccbd0b491716c44ca08dba2fe/init-stdout                                      
dockerd 10755 root  117u     FIFO               0,23       0t0      78004 /run/docker/containerd/807fc9cef8a4834e3a6f671837e54c3bb3afad8ccbd0b491716c44ca08dba2fe/init-stderr                                      
dockerd 10755 root  132u     FIFO               0,23       0t0      78016 /run/docker/containerd/e4869e1ea909edf9e4fe29a148014ac0d24b437dfe8e64b0b0aac74665f64bb4/init-stderr 

i.e., the FIFOs being waited on seem to be /run/docker/containerd/<container_id>/init-{stdout,stderr}.

Some googling led me to #29369, which seems to describe a very similar situation. This comment is of particular interest, as it revealed that containerd only allows 10 containers to be started in parallel at once, putting any further requests in a queue. This might have provided an answer to this issue; however, repeating the reproduction steps above with only 9 containers started in parallel resulted in a similar (albeit slightly less extreme) performance hit (~1.8s vs ~0.7s). Additionally, given that the comment is 5 years old, I'm not sure if it's still accurate regarding containerd's limits.
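
One way to probe for such a limit is to sweep the parallelism level and watch where the single-container latency starts to degrade; a rough sketch, assuming bash and GNU Parallel as above:

# For each parallelism level, start a batch of containers in the background,
# then time one extra container while the batch is still starting up.
for j in 1 4 8 16 32; do
  parallel -j"$j" -N0 docker run --rm alpine /bin/true ::: $(seq 1 "$j") &
  sleep 0.5                                     # give the batch a moment to start
  printf 'j=%s: ' "$j"
  { time docker run --rm alpine /bin/true ; } 2>&1 | grep real
  wait                                          # let the background batch drain
done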

I'm not entirely sure if this is expected behaviour or not. It would of course be expected behaviour if there was a contended resource shared between all 32 container startups that needed to be locked around for sequential access. I'm however not familiar enough with the docker/containerd codebases to know if such a resource exists. If it does, I'd be very grateful if someone could enlighten me to its existence 😛

Output of docker version:

Client: Docker Engine - Community
 Version:           20.10.3
 API version:       1.41
 Go version:        go1.13.15
 Git commit:        48d30b5
 Built:             Fri Jan 29 14:33:25 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.3
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       46229ca
  Built:            Fri Jan 29 14:31:38 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.3
  GitCommit:        269548fa27e0089a8b8278fc4fc781d7f65a939b
 runc:
  Version:          1.0.0-rc92
  GitCommit:        ff819c7e9184c13b7c2607fe6c30ae19403a7aff
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

Output of docker info:

Client:
 Context:    default
 Debug Mode: false
 Plugins:
  app: Docker App (Docker Inc., v0.9.1-beta3)
  buildx: Build with BuildKit (Docker Inc., v0.5.1-docker)

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 28
 Server Version: 20.10.3
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 269548fa27e0089a8b8278fc4fc781d7f65a939b
 runc version: ff819c7e9184c13b7c2607fe6c30ae19403a7aff
 init version: de40ad0
 Security Options:
  apparmor
  seccomp
   Profile: default
 Kernel Version: 5.10.0-0.bpo.3-amd64
 Operating System: Debian GNU/Linux 10 (buster)
 OSType: linux
 Architecture: x86_64
 CPUs: 32
 Total Memory: 31.33GiB
 Name: desktop
 ID: MJDC:OLOZ:H7N2:S5L2:LO2Z:MOSP:6OUT:NVQZ:EI4N:6ZF7:T2V3:HDDH
 Docker Root Dir: /var/lib/docker
 Debug Mode: true
  File Descriptors: 22
  Goroutines: 34
  System Time: 2021-02-27T20:48:48.951671583-08:00
  EventsListeners: 0
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: No blkio weight support
WARNING: No blkio weight_device support

Additional environment details (AWS, VirtualBox, physical, etc.):

Running on a Debian 10 workstation. Output of uname -a:

Linux desktop 5.10.0-0.bpo.3-amd64 #1 SMP Debian 5.10.13-1~bpo10+1 (2021-02-11) x86_64 GNU/Linux
@vfoehn

vfoehn commented Jun 21, 2021

I believe I might be experiencing the same issue when launching many containers in parallel in a VM on Google Compute Engine (GCE). I have tried to reproduce the issue on multiple bare-metal and non-GCE virtual machines without success; so far I can only reproduce it on GCE, where it shows up consistently across multiple different VMs.

Inspired by @GunshipPenguin, I ran a very similar experiment:

for i in {1..100}
do
   time docker run --rm alpine /bin/true &
   sleep 1
done

The reason I pause for 1 second between invocations is to keep the container schedule "sustainable". That is, since a single docker run --rm alpine /bin/true takes only about 0.5 seconds, it should be possible for consecutive containers to run with very little contention, especially considering that the machine has many cores and more than enough memory. The GCE VM this experiment runs on is an e2-highcpu-32 (32 vCPUs, 32 GB memory), so as the "highcpu" type suggests, the machine is sufficiently powerful. During the experiments CPU usage never exceeded 25%.

In the following diagram I have plotted the response times (i.e., the elapsed time between calling docker run and the container finishing) of each docker run command. At the beginning, the response times are very consistent and low. However, after the 42nd container they increase drastically.
[Plot omitted: gce_docker_containers_sequence — response time of each docker run command, by container index]

If I reduce the sleep time in the aforementioned experiment, the drastic increase in response time sets in earlier. E.g., if I set the sleep time to 0.5 sec, the 28th container is already affected by the high response time.
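
For reference, the per-container response times can be captured for plotting with something like the following (a sketch assuming bash, not the exact script used above; the sleep interval is the knob being varied):

#!/bin/bash
# Log each container's index and response time (in seconds) to a CSV.
SLEEP=${1:-1}                      # pacing between launches, e.g. 1 or 0.5
for i in {1..100}; do
  {
    start=$(date +%s.%N)
    docker run --rm alpine /bin/true
    end=$(date +%s.%N)
    awk -v i="$i" -v s="$start" -v e="$end" \
        'BEGIN { printf "%d,%.3f\n", i, e - s }' >> response_times.csv
  } &
  sleep "$SLEEP"
done
wait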

Output of docker version:

Client: Docker Engine - Community
 Version:           20.10.7
 API version:       1.41
 Go version:        go1.13.15
 Git commit:        f0df350
 Built:             Wed Jun  2 11:56:38 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.7
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       b0f5bc3
  Built:            Wed Jun  2 11:54:50 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.6
  GitCommit:        d71fcd7d8303cbf684402823e425e9dd2e99285d
 runc:
  Version:          1.0.0-rc95
  GitCommit:        b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

Output of docker info:

Client:
 Context:    default
 Debug Mode: false
 Plugins:
  app: Docker App (Docker Inc., v0.9.1-beta3)
  buildx: Build with BuildKit (Docker Inc., v0.5.1-docker)
  scan: Docker Scan (Docker Inc., v0.8.0)

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 9
 Server Version: 20.10.7
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: d71fcd7d8303cbf684402823e425e9dd2e99285d
 runc version: b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7
 init version: de40ad0
 Security Options:
  apparmor
  seccomp
   Profile: default
 Kernel Version: 5.8.0-1032-gcp
 Operating System: Ubuntu 20.04.2 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 32
 Total Memory: 31.36GiB
 Name: totoro
 ID: V66F:SFDV:FFFT:KJWC:VLT7:E5H2:HFUG:QG3F:HFZA:UYGZ:NFX2:TYGG
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

Additional environment details (AWS, VirtualBox, physical, etc.):

Running on GCE VM e2-highcpu-32 (32 vCPUs, 32 GB memory). Output of uname -a:
Linux totoro 5.8.0-1032-gcp #34~20.04.1-Ubuntu SMP Wed May 19 18:19:35 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

@thaJeztah
Member

Do you see a big difference if you start the container with --network=host ?
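
For reference, the suggested comparison boils down to timing the same container with and without the host network namespace:

time docker run --rm alpine /bin/true                   # default bridge network (veth + bridge setup)
time docker run --rm --network=host alpine /bin/true    # host network namespace, no per-container veth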

@vfoehn

vfoehn commented Jun 22, 2021

The response time of each container is roughly halved when I use --network=host. If I run the same experiment I mentioned above with --network=host, I no longer see the strange behavior. That is, all containers launch as quickly as they ought to.

However, if I reduce the sleep time between consecutive container launches from 1 second to 0.5 seconds, I see the sudden increase in response time again. As before, some of the response times easily exceed 30 seconds. It seems to me that --network=host mitigates the issue but does not really solve it.

Upon investigating further, I found that when the response times increase drastically, there is often not a single container running. So docker must be stuck somewhere else. As @GunshipPenguin suggested, docker might be waiting for some lock to be released.
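
One quick way to observe this while the experiment is running is to watch the live container count (a simple one-liner, not from the investigation above); if response times spike while this stays near zero, the time is going into container setup/teardown rather than into the containers themselves:

watch -n 0.5 'docker ps -q | wc -l'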

I would be happy to investigate further if you have any suggestions.

@iaindooley

I am experiencing the same problem on GCP. It happened in June as well, but at that time I reverted to a previous snapshot of my GCP image and it seemed to solve the problem. This time around, I reverted to the same image I snapshotted after that episode but am still seeing this behaviour. When I try to start with --network=host I see the error:

docker: Error response from daemon: OCI runtime start failed: starting container: setting up network: creating interfaces from net namespace "/proc/7577/ns/net": cannot run with network enabled in root network namespace: unknown.

Docker version:

Client: Docker Engine - Community
 Version:           19.03.12
 API version:       1.40
 Go version:        go1.13.10
 Git commit:        48a66213fe
 Built:             Mon Jun 22 15:45:49 2020
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.12
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.13.10
  Git commit:       48a66213fe
  Built:            Mon Jun 22 15:44:20 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.13
  GitCommit:        7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc:
  Version:          1.0.0-rc10
  GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

Docker info:

Client:
 Debug Mode: false

Server:
 Containers: 2
  Running: 2
  Paused: 0
  Stopped: 0
 Images: 23
 Server Version: 19.03.12
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc runsc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc version: dc9208a3303feef5b3839f4323d9beb36df0a9dd
 init version: fec3683
 Security Options:
  apparmor
  seccomp
   Profile: default
 Kernel Version: 4.19.0-9-cloud-amd64
 Operating System: Debian GNU/Linux 10 (buster)
 OSType: linux
 Architecture: x86_64
 CPUs: 8
 Total Memory: 11.73GiB
 Name: instancename
 ID: N6SZ:ZXZL:VT6K:SWFP:HR23:LETN:B5DD:EKN5:DB4S:O4TH:I7G6:JKG3
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: No swap limit support

@jespada-bc

jespada-bc commented Dec 7, 2021

@iaindooley I've described a similar issue in #42817 but I'm only able to reproduce it in GCP, not on any other provider.

@iaindooley

@jespada-bc In the end I just split up my monolithic instance into multiple GCP instances. Through trial and error, I found that docker starts to bottleneck at about 15 containers, so in my case each GCP instance is 2 cores/2 GB, which based on my application is enough to run Linux + 15 containers. Then I scale the service up and down by spinning up new GCP instances, which adds chunks of 15 workers to process the queue; when the queue goes back down I shut those instances down.

It was a real hassle to change over but in the end it's actually a nicer system than the more monolithic scaling I was pursuing when I ran into the problem :)

@hrichardlee

I'm having the same problem and am also happy to help investigate.

Steps to reproduce:

  • Start a c5a.16xlarge instance in AWS (64 CPU, 128GB of RAM) with the Ubuntu 20.04.4 image, install Docker and python3.9
  • Run:
    for i in {1..64}
    do
       docker run --network host python:3.9.8-slim-buster python -c "import datetime; print(datetime.datetime.now())" &
    done
    

The difference between the first timestamp and last timestamp is 2-2.5s. If you remove the --network host, the difference between the first and last timestamp is anywhere from 5-15s.
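
A rough way to measure that first-to-last spread directly (a sketch, not the exact commands used above):

# Collect each container's printed timestamp, then show the earliest and latest.
: > timestamps.txt
for i in $(seq 1 64); do
  docker run --rm --network host python:3.9.8-slim-buster \
    python -c "import datetime; print(datetime.datetime.now())" >> timestamps.txt &
done
wait
sort timestamps.txt | sed -n '1p;$p'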

In contrast, running the same thing without docker, the difference between the first and last timestamp is about 0.02s.

for i in {1..64}
do
   python -c "import datetime; print(datetime.datetime.now())" &
done

It makes sense that Docker is doing a lot more than starting a single process, but 100x seems like a big difference.

Version info:

> docker version
Client: Docker Engine - Community
 Version:           20.10.21
 API version:       1.41
 Go version:        go1.18.7
 Git commit:        baeda1f
 Built:             Tue Oct 25 18:02:21 2022
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.21
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.18.7
  Git commit:       3056208
  Built:            Tue Oct 25 18:00:04 2022
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.6.10
  GitCommit:        770bd0108c32f3fb5c73ae1264f7e503fe7b2661
 runc:
  Version:          1.1.4
  GitCommit:        v1.1.4-0-g5fd4c4d
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

> docker info
Client:
 Context:    default
 Debug Mode: false
 Plugins:
  app: Docker App (Docker Inc., v0.9.1-beta3)
  buildx: Docker Buildx (Docker Inc., v0.9.1-docker)
  compose: Docker Compose (Docker Inc., v2.12.2)
  scan: Docker Scan (Docker Inc., v0.17.0)

Server:
 Containers: 128
  Running: 0
  Paused: 0
  Stopped: 128
 Images: 677
 Server Version: 20.10.21
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 770bd0108c32f3fb5c73ae1264f7e503fe7b2661
 runc version: v1.1.4-0-g5fd4c4d
 init version: de40ad0
 Security Options:
  apparmor
  seccomp
   Profile: default
 Kernel Version: 5.13.0-1029-aws
 Operating System: Ubuntu 20.04.4 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 64
 Total Memory: 124.5GiB
 Name: ip-172-31-25-56
 ID: 43X6:UUWP:J46E:LMNW:BJJA:XBHO:S2G6:DAUV:SLHM:VGOS:QWSK:TZOH
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

@ningmingxiao
Contributor

ningmingxiao commented Jan 9, 2023

I am hitting the same problem, but after updating the kernel to 5.15-rc1 or later, containers on a physical machine start quickly.
@GunshipPenguin @vfoehn @thaJeztah @hrichardlee @jespada-bc @AkihiroSuda

With kernel commit d195d7aac09bddabc2c8326fb02fcec2b0a2de02 containers start quickly; with commit faa6a1f9de51bc56a9384864ced067f5fa4f9bf7 they start slowly.

The history between d195d7aac09b and faa6a1f9de51 contains many commits, so I don't know which commit influences docker start-up.
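
For what it's worth, the standard way to narrow that range down is a kernel bisect between the two points, marking the kernel that starts containers quickly as good and the slow one as bad (sketch only; each step needs a kernel build, reboot, and a rerun of the docker test):

git bisect start
git bisect bad  faa6a1f9de51    # containers start slowly on this kernel
git bisect good d195d7aac09b    # containers start quickly on this kernel
# build and boot the kernel that bisect checks out, rerun the docker test, then:
git bisect good                 # or: git bisect bad, depending on the result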

Here are the two kernel commits:

commit d195d7aac09bddabc2c8326fb02fcec2b0a2de02
Author: Joseph Gates <jgates@squareup.com>
Date:   Wed Aug 18 13:31:43 2021 +0200

    wcn36xx: Ensure finish scan is not requested before start scan
    
    If the operating channel is the first in the scan list, it was seen that
    a finish scan request would be sent before a start scan request was
    sent, causing the firmware to fail all future scans. Track the current
    channel being scanned to avoid requesting the scan finish before it
    starts.
    
    Cc: <stable@vger.kernel.org>
    Fixes: 5973a2947430 ("wcn36xx: Fix software-driven scan")
    Signed-off-by: Joseph Gates <jgates@squareup.com>
    Signed-off-by: Loic Poulain <loic.poulain@linaro.org>
    Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
    Link: https://lore.kernel.org/r/1629286303-13179-1-git-send-email-loic.poulain@linaro.org

commit faa6a1f9de51bc56a9384864ced067f5fa4f9bf7
Author: Krzysztof Kozlowski <krzysztof.kozlowski@canonical.com>
Date:   Wed Aug 25 15:42:51 2021 +0200

    MAINTAINERS: clock: include S3C and S5P in Samsung SoC clock entry
    
    Cover the S3C and S5Pv210 clock controller binding headers by Samsung
    SoC clock controller drivers maintainer entry.
    
    Signed-off-by: Krzysztof Kozlowski <krzysztof.kozlowski@canonical.com>
    Reviewed-by: Sam Protsenko <semen.protsenko@linaro.org>
    Link: https://lore.kernel.org/r/20210825134251.220098-3-krzysztof.kozlowski@canonical.com
    Reviewed-by: Rob Herring <robh@kernel.org>
    Signed-off-by: Stephen Boyd <sboyd@kernel.org>

On this page, https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/log/?h=linux-5.15.y&ofs=17000, search for "Ensure finish scan is not requested before start scan".

@ningmingxiao
Contributor

Containers start quickly after kernel commit torvalds/linux@9d3684c24a52.

@ningmingxiao
Contributor

ningmingxiao commented Jan 18, 2023

Kernel commit torvalds/linux@9d3684c24a52 fixes this issue, but I want to know why docker starts containers quickly after this commit.
ping @thaJeztah
@GunshipPenguin
@vfoehn
@davem330
@corhere
@AkihiroSuda

run.sh

#!/bin/sh
MAX=100
for I in `seq 1 $MAX`; do
  ip link add name v$I type veth peer name pv$I
done
for I in `seq 1 $MAX`; do
  ip link del dev v$I
done

after this commit

[root@localhost nmx]# time sh run.sh

real    0m3.184s
user    0m0.117s
sys     0m0.948s

before this commit

[root@localhost nmx]# time sh run.sh

real    0m1.900s
user    0m0.130s
sys     0m0.207s

So even though raw veth creation is slower than before, docker starts containers more quickly.

@ningmingxiao
Contributor

ningmingxiao commented Jan 18, 2023


[root@localhost nmx]# cat test.sh
for i in {1..400}
do
   time docker run --rm busybox /bin/true &
   sleep 1
done

Before this commit, roughly one in every ten runs is slow. Output of sh test.sh:

real 0m0.315s
user 0m0.015s
sys 0m0.022s

real 0m0.329s
user 0m0.023s
sys 0m0.012s

real 0m0.318s
user 0m0.017s
sys 0m0.018s

real 0m0.302s
user 0m0.014s
sys 0m0.021s

real 0m0.314s
user 0m0.016s
sys 0m0.017s

real 0m0.298s
user 0m0.021s
sys 0m0.016s

real 0m0.306s
user 0m0.022s
sys 0m0.010s

real 0m0.331s
user 0m0.017s
sys 0m0.019s

real 0m0.308s
user 0m0.029s
sys 0m0.010s

real 0m0.713s
user 0m0.025s
sys 0m0.009s

real 0m0.314s
user 0m0.020s
sys 0m0.014s

real 0m0.320s
user 0m0.022s
sys 0m0.012s

real 0m0.288s
user 0m0.017s
sys 0m0.020s

real 0m0.341s
user 0m0.018s
sys 0m0.017s

real 0m0.311s
user 0m0.016s
sys 0m0.010s

real 0m0.303s
user 0m0.019s
sys 0m0.013s

real 0m0.324s
user 0m0.023s
sys 0m0.013s

real 0m0.283s
user 0m0.021s
sys 0m0.012s

real 0m0.309s
user 0m0.018s
sys 0m0.015s

real 0m0.694s
user 0m0.022s
sys 0m0.013s

@abudawud

abudawud commented Jan 21, 2023

Hi,
I'm facing the same issue. I currently have 52 containers running on my VM.
It has already happened twice that my docker daemon got stuck, and to fix it I had to restart the daemon, which is not a good solution. Almost every docker command is very slow to respond; when I try to create a container, the response is slow and it ends with:

docker: Error response from daemon: cannot start a stopped process: unknown.
ERRO[1098] error waiting for container: context canceled

Before creating the container above, I also ran the journalctl -af command, which produced this log:

Jan 21 11:16:45 vmapps kernel: docker0: port 2(veth7796ce7) entered blocking state
Jan 21 11:16:45 vmapps kernel: docker0: port 2(veth7796ce7) entered disabled state
Jan 21 11:16:45 vmapps kernel: device veth7796ce7 entered promiscuous mode
Jan 21 11:16:45 vmapps kernel: docker0: port 2(veth7796ce7) entered blocking state
Jan 21 11:16:45 vmapps kernel: docker0: port 2(veth7796ce7) entered forwarding state
Jan 21 11:16:45 vmapps kernel: docker0: port 2(veth7796ce7) entered disabled state
Jan 21 11:16:45 vmapps systemd-networkd[155]: veth7796ce7: Link UP
Jan 21 11:16:45 vmapps networkd-dispatcher[314]: WARNING:Unknown index 4771 seen, reloading interface list
Jan 21 11:16:45 vmapps systemd-udevd[114209]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Jan 21 11:16:45 vmapps systemd-udevd[114209]: Using default interface naming scheme 'v245'.
Jan 21 11:16:45 vmapps systemd-udevd[114209]: veth06b401a: Could not generate persistent MAC: No data available
Jan 21 11:16:45 vmapps systemd-udevd[114210]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Jan 21 11:16:45 vmapps systemd-udevd[114210]: Using default interface naming scheme 'v245'.
Jan 21 11:16:45 vmapps systemd-udevd[114210]: veth7796ce7: Could not generate persistent MAC: No data available

Hope this helps and that this issue can be solved soon.
Thanks

@ningmingxiao
Contributor

This commit may fix this issue:
moby/locker#3

@mickaelperrin

We are facing this issue during startup/shutdown on the bare-metal servers we use for shared hosting for our customers.

The server performs really well once it's up, but starting and shutting down the docker service is hell.

We are currently running around 600-700 containers, and a restart of the docker service literally takes hours.

I am thinking of dropping the restart policies and using a custom startup/shutdown script for the docker service that gracefully stops each docker-compose project sequentially, instead of letting it all happen in parallel automatically.
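
A sketch of what such a sequential shutdown wrapper could look like (hypothetical layout: one docker-compose project per customer directory under /srv/customers):

#!/bin/sh
# Stop each compose project one at a time instead of letting dockerd tear down
# hundreds of containers in parallel when the docker service shuts down.
for compose_file in /srv/customers/*/docker-compose.yml; do
  project_dir=$(dirname "$compose_file")
  echo "Stopping $project_dir"
  (cd "$project_dir" && docker-compose stop)
done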

@ningmingxiao
Contributor

You can try this PR: #44887

@asteinlein

Came across this after encountering the same issue, where starting hundreds of containers basically trashes the system. Any reason #44887, which was supposed to solve this, has stalled?

@ningmingxiao
Contributor

@corhere

@corhere
Contributor

corhere commented Jan 20, 2024

#44887 stalled because its various iterations either did not fix this issue or fixed the issue by breaking something else.

@asteinlein

I see, thanks for the response! But is spawning tens of containers at the same time really so uncommon that this isn't prioritized? AFAICS, this isn't an edge case apart from the number of containers being started...?

@ningmingxiao
Contributor

ningmingxiao commented Jan 20, 2024

I think my commit doesn't carry too much risk; can you check whether my patch solves your problem?
You can also update your kernel to one that includes torvalds/linux@9d3684c24a52 and see whether that solves your problem.
That kernel commit makes creating a bridge network slower than before, which reduces lock contention. @asteinlein
