container failed to start due to network issue #14788

Closed
quick-sort opened this issue Jul 21, 2015 · 28 comments

@quick-sort

docker version

Client version: 1.7.0
Client API version: 1.19
Go version (client): go1.4.2
Git commit (client): 0baf609
OS/Arch (client): linux/amd64
Server version: 1.7.0
Server API version: 1.19
Go version (server): go1.4.2
Git commit (server): 0baf609
OS/Arch (server): linux/amd64

docker info

Containers: 5
Images: 71
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 81
 Dirperm1 Supported: false
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.13.0-32-generic
Operating System: Ubuntu 14.04.2 LTS
CPUs: 2
Total Memory: 3.859 GiB

uname -a

Linux node001.d.nexttao.com 3.13.0-32-generic #57-Ubuntu SMP Tue Jul 15 03:51:08 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

I am not sure how to reproduce this issue, because it doesn't occur on another Docker host. I used docker-py to perform operations like running and removing containers.

I could start a new container as long as the number of running containers was less than 5. However, I could not start any new container when there were five containers running.
docker run -d --name nginx nginx
It gives "Cannot start container xxxxx: no available ip addresses on network".

After a service docker restart, it can start new containers successfully.
I found this in the syslog:

Jul 21 14:30:48 node001 kernel: [1346023.941627] device veth180e67d entered promiscuous mode
Jul 21 14:30:48 node001 kernel: [1346023.941702] docker0: port 6(veth180e67d) entered forwarding state
Jul 21 14:30:48 node001 kernel: [1346023.941709] docker0: port 6(veth180e67d) entered forwarding state
Jul 21 14:30:48 node001 kernel: [1346023.942526] docker0: port 6(veth180e67d) entered disabled state
Jul 21 14:30:48 node001 kernel: [1346023.944388] device veth180e67d left promiscuous mode
Jul 21 14:30:48 node001 kernel: [1346023.944397] docker0: port 6(veth180e67d) entered disabled state

It seems that a new veth failing to start properly caused this issue. What else should I check on this Docker host?
----------END REPORT ---------

@mrjana
Contributor

mrjana commented Jul 21, 2015

@quick-sort can you post your daemon options? Most likely you started the daemon with an existing bridge that has a smaller subnet range, or you used the --fixed-cidr option with a smaller subnet range.
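A quick way to rule the subnet size in or out (a minimal sketch; it assumes the default docker0 bridge and a stock Ubuntu install where DOCKER_OPTS lives in /etc/default/docker):

ip -4 addr show docker0    # subnet currently assigned to the bridge
cat /etc/default/docker    # daemon options, e.g. -b, --bip or --fixed-cidr overrides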

@quick-sort
Author

@mrjana I didn't use the --fixed-cidr option; I used the default Docker network. But I did remove and create containers again and again, hundreds of times, and it worked fine after I restarted the Docker daemon.
DOCKER_OPTS="$DOCKER_OPTS --insecure-registry d.nexttao.com:5000 --registry-mirror=http://e72d0657.m.daocloud.io -H tcp://10.169.14.176:4243 -H unix:///var/run/docker.sock"

@thaJeztah
Member

@quick-sort do I understand correctly that you're no longer able to reproduce this (after the daemon was restarted)?

@quick-sort
Author

@thaJeztah yes, after the daemon was restarted, I can no longer reproduce it.

@thaJeztah
Member

@quick-sort Thanks! I'll close this issue for now, but please comment here / ping me, if you're able to reproduce and have more info 👍

@Bregor

Bregor commented Aug 10, 2015

Same here.

# docker version
Client version: 1.7.1
Client API version: 1.19
Go version (client): go1.4.2
Git commit (client): 786b29d
OS/Arch (client): linux/amd64
Server version: 1.7.1
Server API version: 1.19
Go version (server): go1.4.2
Git commit (server): 786b29d
OS/Arch (server): linux/amd64

# docker info
Containers: 20
Images: 102
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 187
 Dirperm1 Supported: false
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.13.0-43-generic
Operating System: Ubuntu 14.04.2 LTS
CPUs: 8
Total Memory: 15.67 GiB
Name: web01.do.infra.ebaysocial.ru

# ps aux|grep 'docker -d'
root     14993  2.2  0.1 1056184 21612 ?       Ssl  21:08   0:50 /usr/bin/docker -d

Disappeared after daemon restart.

@Bregor

Bregor commented Aug 10, 2015

Error response from daemon: Cannot start container 2436fcdca906e31f714cb75dbc59fc545b3c3be5b27763dc5b0e4c588284c317: no available ip addresses on network

@apatil
Contributor

apatil commented Sep 25, 2015

@thaJeztah, I can't reproduce this on demand, but we see this state routinely. Restarting the Docker daemon does work, but causes downtime.

What information can I get for you next time we see it?

@thaJeztah
Member

@apatil are you on the latest Docker version (1.8.2)?

Ping @mrjana: any information that would be useful to add for you? ^^
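One thing that may help the next time a host gets into this state, without restarting the daemon: the daemon writes a goroutine stack dump to its log when it receives SIGUSR1 (a rough sketch; the pidfile and log path assume a default upstart-managed Ubuntu 14.04 install):

kill -USR1 $(cat /var/run/docker.pid)     # ask the daemon to dump its goroutine stacks
tail -n 500 /var/log/upstart/docker.log   # grab the daemon log from around that time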

@apatil
Contributor

apatil commented Sep 27, 2015

Yep, 1.8.2.

@apatil
Contributor

apatil commented Oct 1, 2015

@thaJeztah @mrjana ping. I'd love to help get this resolved.

@thaJeztah
Member

OK, let me reopen this.

ping @mrjana #14788 (comment)

@thaJeztah thaJeztah reopened this Oct 1, 2015
@mrjana
Contributor

mrjana commented Oct 2, 2015

@apatil what is your bridge configuration? I.e., what kind of subnet are you using?

@apatil
Contributor

apatil commented Oct 4, 2015

@mrjana it's a Flannel bridge with a /24 subnet. There were no IP address exhaustion issues with the same setup on Docker 1.6.

The problem is occurring now on one of our machines. There are 29 containers running, and 14 of them share the network namespace of another container, so there are plenty of IP addresses available in the subnet.
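For reference, containers that join another container's network namespace (--net=container:<id>) don't get their own bridge IP, which is why they shouldn't count toward exhaustion. A minimal sketch with illustrative names:

docker run -d --name parent nginx                                  # gets an IP from the bridge as usual
docker run -d --name sidecar --net=container:parent busybox top    # shares parent's namespace, allocates no new IP
docker inspect -f '{{ .NetworkSettings.IPAddress }}' sidecar       # prints an empty address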

@osterman

osterman commented Dec 3, 2015

+1, happening to me on both Docker 1.7.1 and 1.8.3. ~25 containers running on a /24 with flannel on 835.8.0. We had 80 days of uptime until a few days ago, when etcd started acting up and all of a sudden this started happening. Rebooted, no improvement. Upgraded, no improvement. etcdctl cluster-health shows healthy. Restarting Docker fixes it, but the problem resurfaces after a few hours to days.

@dpanelli

Any updates on this?

@2opremio

This is happening in my kubernetes cluster (docker 1.7.1). Any news?

@2opremio

Here's a systematic repro (from kubernetes/kubernetes#19477 (comment)):

# docker run -d -p 80:80 gcr.io/google_containers/nginx
# while true; do docker run -d -p 80:80 gcr.io/google_containers/nginx; done
# while true; do docker run -it busybox sh -c 'ip addr | grep eth0'; done
14765: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc noqueue 
    inet 10.245.1.104/24 scope global eth0
14771: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc noqueue 
    inet 10.245.1.225/24 scope global eth0
Error response from daemon: Cannot start container 3415f31b1ac17487c304599a2926af2cabcd7f2544738c7e4d77acf5cebb1850: no available ip addresses on network

This apparently doesn't happen with Docker 1.9.
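When it triggers, it may help to show that the exhaustion is spurious by comparing the addresses the daemon still has assigned against the subnet size (a sketch assuming the default bridge and a /24):

docker ps -aq | xargs docker inspect -f '{{ .Name }} {{ .NetworkSettings.IPAddress }}'        # addresses still assigned, including stopped containers
docker ps -aq | xargs docker inspect -f '{{ .NetworkSettings.IPAddress }}' | grep -c '[0-9]'  # count them; far below ~254 suggests leaked allocations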

@thaJeztah
Member

Ping @mrjana, can you have another look at this? The new information mentioned above may be useful for reproducing.

@mrjana
Contributor

mrjana commented Jan 19, 2016

@2opremio Is it possible for you to upgrade to 1.9 (or at least to 1.8)? We fixed a fundamental namespace-related issue in 1.8 which can sometimes show up in different manifestations (such as yours), so I would suggest upgrading to a newer release.

@2opremio

@mrjana The problem is reproducible with 1.8 as stated above, so upgrading to 1.8 won't help. Also, we cannot upgrade to 1.9 due to #17720, which is a blocker for Kubernetes due to its aggressive polling of /containers (see kubernetes/kubernetes#12540 (comment)).

@mrjana
Contributor

mrjana commented Jan 19, 2016

@2opremio Did you test master or 1.10-rc1 to see if #17720 has been resolved (or at least whether there is no appreciable slowness)? We completely rewrote the IP allocator code in 1.9, so it won't be worthwhile to fix code (the IP allocator in 1.7 and 1.8) which is now obsolete. If you don't see #17720 in 1.10-rc1, then upgrading to 1.10 might be an option since it is just around the corner.

@aboch
Contributor

aboch commented Sep 14, 2016

Given that the above comments state this specific issue was fixed in 1.9.0, and that we have not heard of this specific problem being reproduced on the past two Docker versions (1.12.1 and 1.11.2), I suggest closing this issue.

@LK4D4
Contributor

LK4D4 commented Sep 14, 2016

Closing this based on the comments above. Feel free to open a new issue if the problem still persists.

@LK4D4 LK4D4 closed this as completed Sep 14, 2016
@kbroughton

I'm seeing this on 1.11.2 and 1.13.0.

root@myhost1# docker info
Containers: 11
Running: 8
Paused: 0
Stopped: 3
Images: 34
Server Version: 1.11.2
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 36761
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: null host bridge
Kernel Version: 3.19.0-79-generic
Operating System: Ubuntu 14.04.5 LTS
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 15.67 GiB
Name: dctalcmsrv2.mdanderson.edu
ID: 6SW6:OYDJ:CVGL:HWG6:UYCG:OK6Z:J32P:Z7IA:WNHK:MM4G:BC4F:42X5
Docker Root Dir: /var/lib/docker
Debug mode (client): false
Debug mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No swap limit support

root@myhost2# docker info
Containers: 22
Running: 12
Paused: 0
Stopped: 10
Images: 17
Server Version: 1.13.0
Storage Driver: btrfs
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 03e5862ec0d8d3b3f750e19fca3ee367e13c090e
runc version: 2f7393a47307a16f8cee44a37b262e8b81021e3e
init version: 949e6fa
Security Options:
apparmor
Kernel Version: 3.19.0-80-generic
Operating System: Ubuntu 14.04.5 LTS
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 15.67 GiB
Name: d1palscmserv2.mdanderson.edu
ID: JVNS:MFV5:ZAKY:WIUG:SWSI:GKTK:TFNH:YOGA:B2J5:4HY7:UMCL:TQ3Z
Docker Root Dir: /cs/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false

It happens frequently during a crash situation, but it also occurs at a background level, unassociated with any noticeable degradation, at a rate of about 6 events over a 5-minute period every few days. Then, when there is a serious event, like docker ps hanging, there are thousands of them.

Our environment is VMware running Rancher. We also see it in AWS running Rancher.
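If it helps anyone else monitoring for this, we count occurrences straight from the daemon log (a rough sketch; the log location depends on whether the host uses systemd or upstart):

journalctl -u docker --since "1 hour ago" | grep -c "no available ip addresses on network"   # systemd hosts
grep -c "no available ip addresses on network" /var/log/upstart/docker.log                   # upstart hosts (Ubuntu 14.04)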

@davidbellem

@kbroughton Did you find a fix for this with Rancher?

@ferhimedamine

@kbroughton any successful workaround so far?

@AlexShu88

AlexShu88 commented May 20, 2022

I encountered the same issue on VMware:

May 12 07:51:47 kaas-node-ef2c7b8b-e441-49ce-93e3-47d6a42941bc kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
May 12 07:51:47 kaas-node-ef2c7b8b-e441-49ce-93e3-47d6a42941bc kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
May 12 07:51:47 kaas-node-ef2c7b8b-e441-49ce-93e3-47d6a42941bc kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth2ae8e3e: link becomes ready
May 12 07:51:47 kaas-node-ef2c7b8b-e441-49ce-93e3-47d6a42941bc kernel: docker0: port 3(veth2ae8e3e) entered blocking state
May 12 07:51:47 kaas-node-ef2c7b8b-e441-49ce-93e3-47d6a42941bc kernel: docker0: port 3(veth2ae8e3e) entered forwarding state
May 12 07:52:17 kaas-node-ef2c7b8b-e441-49ce-93e3-47d6a42941bc kernel: docker0: port 3(veth2ae8e3e) entered disabled state
May 12 07:52:17 kaas-node-ef2c7b8b-e441-49ce-93e3-47d6a42941bc kernel: docker0: port 3(veth2ae8e3e) entered disabled state
May 12 07:52:17 kaas-node-ef2c7b8b-e441-49ce-93e3-47d6a42941bc kernel: device veth2ae8e3e left promiscuous mode
May 12 07:52:17 kaas-node-ef2c7b8b-e441-49ce-93e3-47d6a42941bc kernel: docker0: port 3(veth2ae8e3e) entered disabled state

which causes:

time="2022-05-12T02:24:51.889775820-04:00" level=info msg="Attempting next endpoint for pull after error: Get \"https://docker-dev-kaas-virtual.artifactory-eu.mcp.mirantis.net/v2/\": dial tcp 172.19.117.87:443: connect: connection refused"
time="2022-05-12T02:24:51.891364267-04:00" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://docker-dev-kaas-virtual.artifactory-eu.mcp.mirantis.net/v2/\": dial tcp 172.19.117.87:443: connect: connection refused"
time="2022-05-12T02:24:52.301510881-04:00" level=warning msg="Error getting v2 registry: Get \"https://docker-dev-kaas-virtual.artifactory-eu.mcp.mirantis.net/v2/\": dial tcp 172.19.117.87:443: connect: connection refused"
time="2022-05-12T02:24:52.301553332-04:00" level=info msg="Attempting next endpoint for pull after error: Get \"https://docker-dev-kaas-virtual.artifactory-eu.mcp.mirantis.net/v2/\": dial tcp 172.19.117.87:443: connect: connection refused"
time="2022-05-12T02:24:52.302724897-04:00" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://docker-dev-kaas-virtual.artifactory-eu.mcp.mirantis.net/v2/\": dial tcp 172.19.117.87:443: connect: connection refused"
time="2022-05-12T02:24:52.713063188-04:00" level=warning msg="Error getting v2 registry: Get \"https://docker-dev-kaas-virtual.artifactory-eu.mcp.mirantis.net/v2/\": dial tcp 172.19.117.87:443: connect: connection refused"
time="2022-05-12T02:24:52.713112829-04:00" level=info msg="Attempting next endpoint for pull after error: Get \"https://docker-dev-kaas-virtual.artifactory-eu.mcp.mirantis.net/v2/\": dial tcp 172.19.117.87:443: connect: connection refused"
time="2022-05-12T02:24:52.714352316-04:00" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://docker-dev-kaas-virtual.artifactory-eu.mcp.mirantis.net/v2/\": dial tcp 172.19.117.87:443: connect: connection refused"
time="2022-05-12T02:24:53.110757033-04:00" level=warning msg="Error getting v2 registry: Get \"https://docker-dev-kaas-virtual.artifactory-eu.mcp.mirantis.net/v2/\": dial tcp 172.19.117.87:443: connect: connection refused"
time="2022-05-12T02:24:53.110803334-04:00" level=info msg="Attempting next endpoint for pull after error: Get \"https://docker-dev-kaas-virtual.artifactory-eu.mcp.mirantis.net/v2/\": dial tcp 172.19.117.87:443: connect: connection refused"
time="2022-05-12T02:24:53.112205756-04:00" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://docker-dev-kaas-virtual.artifactory-eu.mcp.mirantis.net/v2/\": dial tcp 172.19.117.87:443: connect: connection refused"
time="2022-05-12T02:24:53.531860542-04:00" level=warning msg="Error getting v2 registry: Get \"https://docker-dev-kaas-virtual.artifactory-eu.mcp.mirantis.net/v2/\": dial tcp 172.19.117.87:443: connect: connection refused"
time="2022-05-12T02:24:53.531901784-04:00" level=info msg="Attempting next endpoint for pull after error: Get \"https://docker-dev-kaas-virtual.artifactory-eu.mcp.mirantis.net/v2/\": dial tcp 172.19.117.87:443: connect: connection refused"
time="2022-05-12T02:24:53.533704237-04:00" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://docker-dev-kaas-virtual.artifactory-eu.mcp.mirantis.net/v2/\": dial tcp 172.19.117.87:443: connect: connection refused"
time="2022-05-12T02:24:53.945326635-04:00" level=warning msg="Error getting v2 registry: Get \"https://docker-dev-kaas-virtual.artifactory-eu.mcp.mirantis.net/v2/\": dial tcp 172.19.117.87:443: connect: connection refused"
time="2022-05-12T02:24:53.945393657-04:00" level=info msg="Attempting next endpoint for pull after error: Get \"https://docker-dev-kaas-virtual.artifactory-eu.mcp.mirantis.net/v2/\": dial tcp 172.19.117.87:443: connect: connection refused"
time="2022-05-12T02:24:53.946778808-04:00" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://docker-dev-kaas-virtual.artifactory-eu.mcp.mirantis.net/v2/\": dial tcp 172.19.117.87:443: connect: connection refused"
time="2022-05-12T02:24:54.371565587-04:00" level=warning msg="Error getting v2 registry: Get \"https://docker-dev-kaas-virtual.artifactory-eu.mcp.mirantis.net/v2/\": dial tcp 172.19.117.87:443: connect: connection refused"


time="2022-05-12T03:48:12.352496322-04:00" level=warning msg="Health check for container f574344d1385f89e7883b5cf4112fb093183a0480b074a8ad208a7f26832a5a0 error: context deadline exceeded"
time="2022-05-12T03:48:31.726526699-04:00" level=info msg="ignoring event" container=28ec11c901086dd1204ec76e7639c64801628705885266de8ebeff63ae1f8ad7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
time="2022-05-12T03:48:33.529039566-04:00" level=warning msg="Published ports are discarded when using host network mode"
time="2022-05-12T03:48:33.571052959-04:00" level=warning msg="Published ports are discarded when using host network mode"
time="2022-05-12T03:48:37.976605027-04:00" level=info msg="ignoring event" container=38f96fd12c55bc08849e2bad9566e2b3263347efe6c17dbd99c0accad1af2375 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
time="2022-05-12T03:48:38.061993653-04:00" level=warning msg="Published ports are discarded when using host network mode"
time="2022-05-12T03:48:38.158243520-04:00" level=warning msg="Published ports are discarded when using host network mode"
time="2022-05-12T03:48:38.575691140-04:00" level=info msg="ignoring event" container=638bac9c17cfc69fd27881ab6e5a681a2a4f301118b5e05b772ac178247d19dd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
time="2022-05-12T03:48:38.763624550-04:00" level=info msg="ignoring event" container=3989a78a3411c012859ad8964ac9a574b89395912ee069345a5b7667105adff3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
time="2022-05-12T03:48:40.296636344-04:00" level=info msg="ignoring event" container=954f1d6aaefc55ea0eca2e928a54d464440d1dad77372c054c9c35feec6891ca module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
time="2022-05-12T03:48:42.605742292-04:00" level=info msg="ignoring event" container=a5cb287cfb99f32644264be8e0a5e1a1b83688c10b37d49711a7a22211e1d21c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
time="2022-05-12T03:48:44.465939676-04:00" level=info msg="ignoring event" container=333cad2054dad186158c70ca2e85057765911f65af408b1ea1961d783c0151d9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
time="2022-05-12T03:48:45.795891789-04:00" level=info msg="ignoring event" container=e6f29a2bfa096e01b9f196f5d4c313fe10490782fddaf87cc5f1b6cfe0d2ea60 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
time="2022-05-12T03:48:45.955123400-04:00" level=info msg="Firewalld: interface docker0 already part of docker zone, returning"
time="2022-05-12T03:48:46.128276043-04:00" level=info msg="Firewalld: interface docker0 already part of docker zone, returning"
time="2022-05-12T03:48:50.053701781-04:00" level=warning msg="xtables contention detected while running [-t nat -C DOCKER -i docker_gwbridge -j RETURN]: Waited for 3.69 seconds and received \"\""
time="2022-05-12T03:48:50.064441099-04:00" level=warning msg="memberlist: Refuting a suspect message (from: 88b6ebaf51ca)"
time="2022-05-12T03:48:50.145048522-04:00" level=error msg="error while reading from stream" error="rpc error: code = Canceled desc = context canceled"
time="2022-05-12T03:48:50.469216588-04:00" level=info msg="Firewalld: interface docker_gwbridge already part of docker zone, returning"
time="2022-05-12T03:48:50.591130445-04:00" level=info msg="Firewalld: interface docker_gwbridge already part of docker zone, returning"
time="2022-05-12T03:48:57.479222489-04:00" level=error msg="stream copy error: reading from a closed fifo"
time="2022-05-12T03:48:57.479287021-04:00" level=error msg="stream copy error: reading from a closed fifo"
time="2022-05-12T03:48:57.482235878-04:00" level=error msg="stream copy error: reading from a closed fifo"
time="2022-05-12T03:48:57.594639341-04:00" level=error msg="stream copy error: reading from a closed fifo"
time="2022-05-12T03:48:58.037881776-04:00" level=warning msg="Health check for container 7157170f8e40e871f4174289af5705f3f0c5e38501f7b2b7709de5baf0de8217 error: context deadline exceeded"
time="2022-05-12T03:48:58.051494169-04:00" level=error msg="stream copy error: reading from a closed fifo"
time="2022-05-12T03:48:58.051714775-04:00" level=error msg="stream copy error: reading from a closed fifo"
time="2022-05-12T03:48:58.053564490-04:00" level=warning msg="Health check for container 646d91178eb9c5911be1db099a952433e8e789c3d2fef7239c75276fd839d04e error: context deadline exceeded"
time="2022-05-12T03:48:58.081381643-04:00" level=warning msg="Health check for container f574344d1385f89e7883b5cf4112fb093183a0480b074a8ad208a7f26832a5a0 error: context deadline exceeded: unknown"
time="2022-05-12T03:48:58.230203530-04:00" level=info msg="ignoring event" container=b6db36c45aa0b3d434d7d329050c1faa1caad0c3f1fb9747a5f3e89a8eed0580 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
time="2022-05-12T03:48:58.264047421-04:00" level=info msg="ignoring event" container=c9831c27a70fbc23c4d4241d2168296906851456937d725453e63624f60ea3cb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
time="2022-05-12T03:48:58.303266000-04:00" level=info msg="ignoring event" container=5cf944e75837f1a43746805764380e14b66e73241b0d29d5d8951a5725190d65 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
time="2022-05-12T03:48:59.115481507-04:00" level=info msg="Firewalld: interface br-5882ca5b6584 already part of docker zone, returning"
time="2022-05-12T03:48:59.239403703-04:00" level=info msg="Firewalld: interface br-5882ca5b6584 already part of docker zone, returning"
time="2022-05-12T03:49:00.978743487-04:00" level=info msg="ignoring event" container=e4fb6e77dc132be6db7b280a1f4b4d0458449767b31edcb95d8d536a7e9fc6d5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
time="2022-05-12T03:49:01.068808579-04:00" level=info msg="ignoring event" container=087f206ce81e89e2aaa51bd5b5d4c925762cb5c5191ca5303fcb9cae4823251d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
time="2022-05-12T03:49:01.281900960-04:00" level=info msg="ignoring event" container=6d6072b009236f35be7c11d08419d498f1e3a7f10a908b8d28bc68dbfbefbdf3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
time="2022-05-12T03:49:01.849045708-04:00" level=info msg="ignoring event" container=157dbb8cfc7825700979be7b5498babd6a077df5a99789ee9a829c4d81542b8a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
time="2022-05-12T03:49:05.144111522-04:00" level=warning msg="xtables contention detected while running [-A DOCKER-USER -j RETURN]: Waited for 2.41 seconds and received \"\""
time="2022-05-12T03:49:05.169707220-04:00" level=warning msg="memberlist: Refuting a suspect message (from: 9be73328a2de)"
time="2022-05-12T03:49:08.503186840-04:00" level=info msg="Firewalld: interface docker0 already part of docker zone, returning"
time="2022-05-12T03:49:10.278380609-04:00" level=info msg="Firewalld: interface docker0 already part of docker zone, returning"
time="2022-05-12T03:49:11.849857227-04:00" level=error msg="stream copy error: reading from a closed fifo"
time="2022-05-12T03:49:11.852525476-04:00" level=error msg="stream copy error: reading from a closed fifo"
time="2022-05-12T03:49:11.896924829-04:00" level=warning msg="Health check for container 646d91178eb9c5911be1db099a952433e8e789c3d2fef7239c75276fd839d04e error: context deadline exceeded: unknown"
time="2022-05-12T03:49:12.474093163-04:00" level=error msg="stream copy error: reading from a closed fifo"
time="2022-05-12T03:49:12.475761452-04:00" level=error msg="stream copy error: reading from a closed fifo"
time="2022-05-12T03:49:12.479965677-04:00" level=error msg="stream copy error: reading from a closed fifo"
time="2022-05-12T03:49:12.480009928-04:00" level=error msg="stream copy error: reading from a closed fifo"
time="2022-05-12T03:49:12.542064592-04:00" level=warning msg="Health check for container 7157170f8e40e871f4174289af5705f3f0c5e38501f7b2b7709de5baf0de8217 error: context deadline exceeded: unknown"
time="2022-05-12T03:49:12.552031377-04:00" level=warning msg="Health check for container f574344d1385f89e7883b5cf4112fb093183a0480b074a8ad208a7f26832a5a0 error: context deadline exceeded"
time="2022-05-12T03:49:16.976488823-04:00" level=info msg="ignoring event" container=fd60f7a02e38e96fea78fcdfa8e42bee037e00446b1c5498bfc8ef2b2036f427 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
time="2022-05-12T03:49:18.019514569-04:00" level=info msg="ignoring event" container=1916dfcc670a4d491fb36a30b2e515ea6f8d939e36b07bed42185999d0ee3bff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
time="2022-05-12T03:49:18.288244269-04:00" level=error msg="stream copy error: reading from a closed fifo"
time="2022-05-12T03:49:18.292477075-04:00" level=error msg="stream copy error: reading from a closed fifo"
time="2022-05-12T03:49:18.298389979-04:00" level=warning msg="Health check for container 646d91178eb9c5911be1db099a952433e8e789c3d2fef7239c75276fd839d04e error: context deadline exceeded"
time="2022-05-12T03:49:19.207120132-04:00" level=info msg="ignoring event" container=7d9dd64e44984d9da6ea64b1d0d04918ff303c73b6a7af3e2e32e21b2a42c30e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
time="2022-05-12T03:49:19.444146639-04:00" level=info msg="ignoring event" container=315b43857d0de56b9466b2952485724f20685e9a79b6cf0d0bb2f5c0f9eb2943 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
time="2022-05-12T03:50:01.076124231-04:00" level=info msg="ignoring event" container=8e569d3dca355f38a836549383c684e52e471b16dff1e118ed6fa23202798c08 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
time="2022-05-12T03:53:02.737182862-04:00" level=info msg="NetworkDB stats kaas-node-ef2c7b8b-e441-49ce-93e3-47d6a42941bc(a5e4bfaddbe1) - netID:t7fimes0n9yqe60i8iy8hf7gd leaving:false netPeers:3 entries:6 Queue qLen:0 netMsg/s:0"
time="2022-05-12T03:53:46.346722133-04:00" level=info msg="ignoring event" container=3dd76e442f03c92cf26e0abe3c9e362a3cd062f29144ab5a8c5e1660f03d7675 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
time="2022-05-12T03:53:46.498347759-04:00" level=error msg="fatal task error" error="task: non-zero exit (1)" module=node/agent/taskmanager node.id=tobxp4qblb5jgc9rtpwvz4gns service.id=snk25490omosp8uvxfyd848z2 task.id=58ysw8hc5e3uhnms64r6oj048
time="2022-05-12T03:54:23.484259090-04:00" level=error msg="Error getting service 58ysw8hc5e3u: service 58ysw8hc5e3u not found"
time="2022-05-12T03:58:02.936630278-04:00" level=info msg="NetworkDB stats kaas-node-ef2c7b8b-e441-49ce-93e3-47d6a42941bc(a5e4bfaddbe1) - netID:t7fimes0n9yqe60i8iy8hf7gd leaving:false netPeers:3 entries:6 Queue qLen:0 netMsg/s:0"
time="2022-05-12T03:58:14.663123554-04:00" level=error msg="error receiving response" error="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
time="2022-05-12T03:58:14.804269239-04:00" level=error msg="error while reading from stream" error="rpc error: code = Canceled desc = context canceled"
time="2022-05-12T03:58:14.807525255-04:00" level=error msg="error while reading from stream" error="rpc error: code = Canceled desc = context canceled"
time="2022-05-12T03:58:14.807622838-04:00" level=error msg="error while reading from stream" error="rpc error: code = Canceled desc = context canceled"
time="2022-05-12T03:58:15.319438058-04:00" level=warning msg="memberlist: Refuting a suspect message (from: a5e4bfaddbe1)"
time="2022-05-12T03:58:26.650160318-04:00" level=warning msg="Health check for container 7157170f8e40e871f4174289af5705f3f0c5e38501f7b2b7709de5baf0de8217 error: context deadline exceeded"
time="2022-05-12T03:58:26.650440356-04:00" level=warning msg="Health check for container 646d91178eb9c5911be1db099a952433e8e789c3d2fef7239c75276fd839d04e error: context deadline exceeded"
time="2022-05-12T03:58:27.297728732-04:00" level=info msg="memberlist: Suspect 9be73328a2de has failed, no acks received"
time="2022-05-12T03:58:29.177441660-04:00" level=warning msg="Health check for container f574344d1385f89e7883b5cf4112fb093183a0480b074a8ad208a7f26832a5a0 error: context deadline exceeded"
time="2022-05-12T03:58:29.707881043-04:00" level=warning msg="Health check for container 8979128abd32901c9c071c6e85508e829a93b944b4b3ba3f701a601551bc7966 error: context deadline exceeded"
time="2022-05-12T03:58:29.715268971-04:00" level=warning msg="memberlist: Refuting a suspect message (from: 88b6ebaf51ca)"
time="2022-05-12T03:58:30.079666328-04:00" level=error msg="error while reading from stream" error="rpc error: code = Canceled desc = context canceled"
time="2022-05-12T03:59:11.570863795-04:00" level=info msg="memberlist: Suspect 88b6ebaf51ca has failed, no acks received"
time="2022-05-12T03:59:12.092264647-04:00" level=warning msg="Health check for container f574344d1385f89e7883b5cf4112fb093183a0480b074a8ad208a7f26832a5a0 error: context deadline exceeded"
time="2022-05-12T03:59:12.336780815-04:00" level=warning msg="memberlist: Refuting a suspect message (from: a5e4bfaddbe1)"
time="2022-05-12T03:59:12.452688232-04:00" level=warning msg="Health check for container 646d91178eb9c5911be1db099a952433e8e789c3d2fef7239c75276fd839d04e error: context deadline exceeded"
time="2022-05-12T03:59:12.454733152-04:00" level=warning msg="Health check for container 7157170f8e40e871f4174289af5705f3f0c5e38501f7b2b7709de5baf0de8217 error: context deadline exceeded"
time="2022-05-12T03:59:46.099434732-04:00" level=error msg="stream copy error: reading from a closed fifo"
time="2022-05-12T03:59:46.127875037-04:00" level=error msg="heartbeat to manager {tobxp4qblb5jgc9rtpwvz4gns 172.16.58.203:2377} failed" error="rpc error: code = DeadlineExceeded desc = context deadline exceeded" method="(*session).heartbeat" module=node/agent node.id=tobxp4qblb5jgc9rtpwvz4gns session.id=ss6rwcuaifjdouu2qe4nvtjbx sessionID=ss6rwcuaifjdouu2qe4nvtjbx
time="2022-05-12T03:59:46.968548502-04:00" level=info msg="memberlist: Suspect 88b6ebaf51ca has failed, no acks received"
time="2022-05-12T03:59:47.090449206-04:00" level=error msg="error while reading from stream" error="rpc error: code = Canceled desc = context canceled"
time="2022-05-12T03:59:47.090499367-04:00" level=error msg="error while reading from stream" error="rpc error: code = Canceled desc = context canceled"
time="2022-05-12T03:59:47.090652362-04:00" level=error msg="error while reading from stream" error="rpc error: code = Canceled desc = context canceled"
time="2022-05-12T03:59:47.090828327-04:00" level=error msg="stream copy error: reading from a closed fifo"
time="2022-05-12T03:59:48.727798902-04:00" level=warning msg="memberlist: Refuting a suspect message (from: 9be73328a2de)"
time="2022-05-12T03:59:48.129779185-04:00" level=error msg="stream copy error: reading from a closed fifo"
time="2022-05-12T03:59:48.138500394-04:00" level=error msg="stream copy error: reading from a closed fifo"
time="2022-05-12T03:59:48.157446645-04:00" level=error msg="stream copy error: reading from a closed fifo"
time="2022-05-12T03:59:48.367243710-04:00" level=error msg="error while reading from stream" error="rpc error: code = Canceled desc = context canceled"
time="2022-05-12T03:59:48.157480346-04:00" level=error msg="stream copy error: reading from a closed fifo"
time="2022-05-12T03:59:48.158212258-04:00" level=error msg="error while reading from stream" error="rpc error: code = Canceled desc = context canceled"
time="2022-05-12T03:59:48.169623126-04:00" level=error msg="error while reading from stream" error="rpc error: code = Canceled desc = context canceled"
time="2022-05-12T03:59:49.105114579-04:00" level=warning msg="Health check for container f574344d1385f89e7883b5cf4112fb093183a0480b074a8ad208a7f26832a5a0 error: context deadline exceeded"
time="2022-05-12T03:59:49.182165552-04:00" level=warning msg="Health check for container 646d91178eb9c5911be1db099a952433e8e789c3d2fef7239c75276fd839d04e error: context deadline exceeded"
time="2022-05-12T03:59:49.198542087-04:00" level=error msg="agent: session failed" backoff=100ms error="rpc error: code = DeadlineExceeded desc = context deadline exceeded" module=node/agent node.id=tobxp4qblb5jgc9rtpwvz4gns
time="2022-05-12T03:59:49.198884787-04:00" level=info msg="manager selected by agent for new session: { }" module=node/agent node.id=tobxp4qblb5jgc9rtpwvz4gns
time="2022-05-12T03:59:49.199031542-04:00" level=info msg="waiting 14.365777ms before registering session" module=node/agent node.id=tobxp4qblb5jgc9rtpwvz4gns
time="2022-05-12T03:59:49.203002139-04:00" level=warning msg="Health check for container 7157170f8e40e871f4174289af5705f3f0c5e38501f7b2b7709de5baf0de8217 error: context deadline exceeded"
time="2022-05-12T03:59:49.653544426-04:00" level=warning msg="failed to deactivate service binding for container ucp-auth-worker.tobxp4qblb5jgc9rtpwvz4gns.uze53hek9gadpijbuscc9maab" error="No such container: ucp-auth-worker.tobxp4qblb5jgc9rtpwvz4gns.uze53hek9gadpijbuscc9maab" module=node/agent node.id=tobxp4qblb5jgc9rtpwvz4gns
time="2022-05-12T03:59:49.654002730-04:00" level=warning msg="failed to deactivate service binding for container ucp-auth-api.tobxp4qblb5jgc9rtpwvz4gns.5jnxclkwh7rdzoyje6kgnapg4" error="No such container: ucp-auth-api.tobxp4qblb5jgc9rtpwvz4gns.5jnxclkwh7rdzoyj
