
Cannot remove network due to task #31068

Open
thinkhard-j-park opened this issue Feb 16, 2017 · 49 comments
Labels
area/networking kind/bug Bugs are bugs. The cause may or may not be known at triage time so debugging may be needed. version/1.13

Comments

@thinkhard-j-park

thinkhard-j-park commented Feb 16, 2017

Description

Steps to reproduce the issue:

  1. Create a network with this command: docker network create --attachable --driver overlay cluster-network
  2. Run a couple of services in swarm mode, then delete all of the services.
  3. Try to delete the network: docker network rm cluster-network (see the sketch below)
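A minimal end-to-end sketch of these steps (the service name and image below are placeholders, not taken from this report):

docker network create --attachable --driver overlay cluster-network
docker service create --name web --network cluster-network nginx:alpine
docker service rm web
docker network rm cluster-network   # intermittently fails with "network ... is in use by task ..."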

Describe the results you received:
docker network rm cluster-network
Error response from daemon: rpc error: code = 9 desc = network qytxrqgp7pw1915tqhdnkd4si is in use by task 8ruj7pjh65g9du0m1y7ce476i

Describe the results you expected:
I want to be able to delete the network, or at least get a proper description of what is blocking it. What is that task?

Additional information you deem important (e.g. issue happens only occasionally):

docker network inspect cluster-network
[
    {
        "Name": "cluster-network",
        "Id": "qytxrqgp7pw1915tqhdnkd4si",
        "Created": "0001-01-01T00:00:00Z",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": []
        },
        "Internal": false,
        "Attachable": true,
        "Containers": null,
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4097"
        },
        "Labels": null
    }
]

Output of docker version:

Client:
 Version:      1.13.0
 API version:  1.25
 Go version:   go1.7.3
 Git commit:   49bf474
 Built:        Tue Jan 17 09:58:26 2017
 OS/Arch:      linux/amd64

Server:
 Version:      1.13.0
 API version:  1.25 (minimum version 1.12)
 Go version:   go1.7.3
 Git commit:   49bf474
 Built:        Tue Jan 17 09:58:26 2017
 OS/Arch:      linux/amd64
 Experimental: false

Output of docker info:

Containers: 4
 Running: 0
 Paused: 0
 Stopped: 4
Images: 129
Server Version: 1.13.0
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 106
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: active
 NodeID: 7tri47t51271szj46y1sysjcf
 Is Manager: true
 ClusterID: 8jnswr0kvdlavkn0puuuhljxd
 Managers: 3
 Nodes: 6
 Orchestration:
  Task History Retention Limit: 5
 Raft:
  Snapshot Interval: 10000
  Number of Old Snapshots to Retain: 0
  Heartbeat Tick: 1
  Election Tick: 3
 Dispatcher:
  Heartbeat Period: 5 seconds
 CA Configuration:
  Expiry Duration: 3 months
 Node Address: 192.168.0.27
 Manager Addresses:
  192.168.0.27:2377
  192.168.0.32:2377
  192.168.0.33:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 03e5862ec0d8d3b3f750e19fca3ee367e13c090e
runc version: 2f7393a47307a16f8cee44a37b262e8b81021e3e
init version: 949e6fa
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.4.0-62-generic
Operating System: Ubuntu 16.04.1 LTS
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 1.953 GiB
Name: mngr01
ID: YNJP:5BEI:W4UN:NPUK:EJ3R:CYBC:RBMW:GO2Q:ASJA:PDTT:TZBK:CYWQ
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

Additional environment details (AWS, VirtualBox, physical, etc.):
VMware vSphere, Ubuntu 16.04 host

@silveraid

We are experiencing the same issue, with the following error message:

Error response from daemon: rpc error: code = 9 desc = network k8bwtqv93w2zssjyjle9ipyql is in use by task pk76tpyscjlwgb69j26y3snsf

Grepping /var/log/messages, we found a reference to the task, which says the following:

dockerd: time="2017-02-15T16:19:14.233444929-05:00" level=error msg="fatal task error" error="Pool overlaps with other one on this address space" module="node/agent/taskmanager" task.id=pk76tpyscjlwgb69j26y3snsf
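For anyone else hunting down the task named in the error, a sketch of that log search (the task ID is the one from this report; substitute your own):

grep pk76tpyscjlwgb69j26y3snsf /var/log/messages
journalctl -u docker.service | grep pk76tpyscjlwgb69j26y3snsf   # on systemd-based hosts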

docker version

Client:
 Version:      1.13.1
 API version:  1.26
 Go version:   go1.7.5
 Git commit:   092cba3
 Built:        Wed Feb  8 06:38:28 2017
 OS/Arch:      linux/amd64

Server:
 Version:      1.13.1
 API version:  1.26 (minimum version 1.12)
 Go version:   go1.7.5
 Git commit:   092cba3
 Built:        Wed Feb  8 06:38:28 2017
 OS/Arch:      linux/amd64
 Experimental: true

docker info

Containers: 6
 Running: 0
 Paused: 0
 Stopped: 6
Images: 11
Server Version: 1.13.1
Storage Driver: overlay
 Backing Filesystem: extfs
 Supports d_type: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins: 
 Volume: local
 Network: bridge host ipvlan macvlan null overlay
Swarm: active
 NodeID: m3f6y40zr5mdd18qkn9dlqlzn
 Is Manager: true
 ClusterID: cub8ocln3cqr9tv92y0swou7a
 Managers: 3
 Nodes: 3
 Orchestration:
  Task History Retention Limit: 5
 Raft:
  Snapshot Interval: 10000
  Number of Old Snapshots to Retain: 0
  Heartbeat Tick: 1
  Election Tick: 3
 Dispatcher:
  Heartbeat Period: 5 seconds
 CA Configuration:
  Expiry Duration: 3 months
 Node Address: 10.113.0.121
 Manager Addresses:
  10.113.0.121:2377
  10.113.0.122:2377
  10.113.0.123:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1
runc version: 9df8b306d01f59d3a8029be411de015b7304dd8f
init version: 949e6fa
Security Options:
 seccomp
  Profile: default
Kernel Version: 3.10.0-327.36.3.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 3.703 GiB
Name: ymr-vme-intdkr1
ID: BLTI:KGET:RDEL:MORF:P3FS:5CIZ:XXZE:PO5C:OCOY:PC3Y:3EFM:KSA4
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
 zone=int
 zonemember=int1
Experimental: true
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

docker network inspect

[
    {
        "Name": "XXX",
        "Id": "k8bwtqv93w2zssjyjle9ipyql",
        "Created": "0001-01-01T00:00:00Z",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "xx.xx.xx.xx/20",
                    "Gateway": "xx.xx.xx.xx"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Containers": null,
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4097"
        },
        "Labels": null
    }
]

@vdemeester
Member

I think #31073 fixes it
/cc @aboch
/cc @vieux for 1.13.2 maybe ? 👼

@aboch
Contributor

aboch commented Feb 16, 2017

Yes, @thinkhard-j-park's report looks like a duplicate of #31066. I am assuming a manual docker run was attempted on that attachable network and the allocation failed.

In @silveraid's report, on the other hand, I am not sure why a task allocation would fail with an error that should only happen during the network allocation phase (the subnet chosen for the network overlaps with an existing one). I need to double-check the code.

@ahmedsajid

I'm experiencing exactly the same issue as @silveraid.
Any help would be appreciated.

@thaJeztah
Member

@ahmedsajid are you still experiencing this issue on the current (17.03) release?

@sitamet

sitamet commented Apr 18, 2017

I'm experiencing the same issue running 17.03.0-ce

root@dk1w:~# docker info
Containers: 14
 Running: 10
 Paused: 0
 Stopped: 4
Images: 21
Server Version: 17.03.0-ce
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 255
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins: 
 Volume: local
 Network: bridge host ipvlan macvlan null overlay
Swarm: active
 NodeID: l5ug7tsu7wyjd2n1qeersvo0u
 Is Manager: false
 Node Address: 192.168.100.211
 Manager Addresses:
  192.168.100.201:2377
  192.168.100.202:2377
  192.168.100.203:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 977c511eda0925a723debdc94d09459af49d082a
runc version: a01dafd48bc1c7cc12bdb01206f9fea7dd6feb70
init version: 949e6fa
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.4.0-64-generic
Operating System: Ubuntu 16.04.1 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 5.823 GiB
Name: dk1w
ID: ......
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Username: sitamet
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Experimental: true
Insecure Registries:
 registry.......
 127.0.0.0/8
Live Restore Enabled: false

@cirocosta
Contributor

cirocosta commented May 23, 2017

Update: my bad. I actually had a container (not started with service create) running and attached to the network (the network has --attachable set). So in my case the real problem was that the error reported a task attached to the network when it was actually only a regular container.

Experiencing this here as well: Server Version 17.05.0-ce-rc1, on Docker for AWS.

 docker info
Containers: 5
 Running: 5
 Paused: 0
 Stopped: 0
Images: 14
Server Version: 17.05.0-ce-rc1
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins: 
 Volume: local
 Network: bridge host ipvlan macvlan null overlay
Swarm: active
 NodeID: 4ypru6pvfe8mxublqrupl5uyv
 Is Manager: true
 ClusterID: xdahte25v3j6i0cjjewbnzwel
 Managers: 3
 Nodes: 9
 Orchestration:
  Task History Retention Limit: 5
 Raft:
  Snapshot Interval: 10000
  Number of Old Snapshots to Retain: 0
  Heartbeat Tick: 1
  Election Tick: 3
 Dispatcher:
  Heartbeat Period: 5 seconds
 CA Configuration:
  Expiry Duration: 3 months
 Node Address: 172.31.12.245
 Manager Addresses:
  172.31.12.245:2377
  172.31.29.200:2377
  172.31.33.4:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 9048e5e50717ea4497b757314bad98ea3763c145
runc version: 9c2d8d184e5da67c95d601382adf14862e4f2228
init version: 949e6fa
Security Options:
 seccomp
  Profile: default
Kernel Version: 4.9.21-moby
Operating System: Alpine Linux v3.5
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 3.854GiB
Name: ip-172-31-12-245.us-west-2.compute.internal
ID: PWGX:D5OW:MNZC:KTLN:GJ3V:Z43D:7SZC:GWQ3:XPP7:WB5O:OVDY:YCSB
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
 File Descriptors: 179
 Goroutines: 349
 System Time: 2017-05-23T13:49:37.335845818Z
 EventsListeners: 0
Username: wedeployci
Registry: https://index.docker.io/v1/
Labels:
 com.wedeploy.node.type=manager
 os=linux
 region=us-west-2
 availability_zone=us-west-2a
 instance_type=t2.medium
 node_type=manager
Experimental: true
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

Note that I removed all the services, and inspecting the task then gives me:

~ $ docker network rm mynet 
Error response from daemon: rpc error: code = 9 desc = network l58m22zj9z9t8xhu9tgndtdnl is in use by task 8awbbulm89s5tuo08nt82cxih
~ $ docker inspect 8awbbulm89s5tuo08nt82cxih
[
    {
        "ID": "",
        "Version": {},
        "CreatedAt": "0001-01-01T00:00:00Z",
        "UpdatedAt": "0001-01-01T00:00:00Z",
        "Labels": null,
        "Spec": {
            "ContainerSpec": {},
            "ForceUpdate": 0
        },
        "Status": {
            "Timestamp": "0001-01-01T00:00:00Z",
            "ContainerStatus": {},
            "PortStatus": {}
        }
    }
]
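Given the update at the top of this comment (a plain container, not a service task, was still attached), a sketch of how to check for that case; "mynet" is a placeholder network name:

docker ps -a --filter network=mynet                            # containers attached to the network on this node
docker network inspect mynet --format '{{json .Containers}}'   # local view of attached endpoints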

@taiidani

We are seeing the same behavior immediately after an upgrade from 17.03 to 17.06:

ryannixon@vc-docker-m01-live:~$ docker network inspect rates_default
[
    {
        "Name": "rates_default",
        "Id": "uwwkgh48blp549osprprx77n9",
        "Created": "0001-01-01T00:00:00Z",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.62.144.64/27",
                    "Gateway": "10.62.144.65"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": null,
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4100"
        },
        "Labels": null
    }
]

ryannixon@vc-docker-m01-live:~$ docker network rm rates_default
Error response from daemon: rpc error: code = 9 desc = network uwwkgh48blp549osprprx77n9 is in use by task fcouwkvxorn7opwb995gyzskv

ryannixon@vc-docker-m01-live:~$ docker inspect fcouwkvxorn7opwb995gyzskv
[
    {
        "ID": "",
        "Version": {},
        "CreatedAt": "0001-01-01T00:00:00Z",
        "UpdatedAt": "0001-01-01T00:00:00Z",
        "Labels": null,
        "Spec": {
            "ContainerSpec": {},
            "ForceUpdate": 0
        },
        "Status": {
            "Timestamp": "0001-01-01T00:00:00Z",
            "ContainerStatus": {},
            "PortStatus": {}
        }
    }
]

No fixes have been discovered yet. Will comment if we manage to figure out a way.

@thaJeztah
Member

Be sure to check whether the task is running on a different node; I'm not sure if that is shown in the inspect output.
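A sketch of that check, run from a manager node (rates_default is the network from the report above):

docker node ls                                   # list all nodes in the swarm
docker node ps $(docker node ls -q)              # tasks the manager believes are on each node
docker ps -a --filter network=rates_default      # run on each node: containers attached to the network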

@asrgomes

asrgomes commented Aug 22, 2017

I have the same issue on a single-node swarm with no containers running, using 17.06.1-ce. It happened when Docker failed to connect an existing container (which was removed later) to an overlay network. The only way I found to remove the network was to re-initialize the swarm (leave, then init).
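For reference, a sketch of that destructive single-node workaround (this discards all swarm state: services, configs, secrets):

docker swarm leave --force
docker swarm init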

@taiidani

Sorry @thaJeztah, I missed your comment. Yes, the task is not defined on any of our manager nodes; it seems to simply not exist. We can't inspect tasks from worker nodes, but we have confirmed that the rates_default network doesn't extend to any of them.

Right now our only workaround has been to deploy our stack to a second rates2_default network and update all of our references; the rates_default orphan still cannot be removed.

@thaJeztah thaJeztah added the kind/bug Bugs are bugs. The cause may or may not be known at triage time so debugging may be needed. label Sep 18, 2017
@dac388

dac388 commented Oct 16, 2017

Same issue with the dtr-ol network, and it is preventing me from (re)installing DTR. Seeing others have this issue makes it seem like Docker Datacenter is not production-ready.

@kaykay88

@thaJeztah is there any fix expected for this issue, or is the only (painful) workaround to reinitialize the swarm?

@AhmadAbdelghany

I had to restart the docker daemon on the swarm master to get rid of the task
systemctl restart docker
Then
docker network rm <network-id>

@cuericlee

Reproducing the network deletion issue:

docker network rm dtr-ol
Error response from daemon: rpc error: code = 9 desc = network m8arbm84viyrc7izifz8t7tfg is in use by task 2tkeoqvtpluy50om7pwgrooab

Server Version: 17.06.2-ee-6

Expected solution: allow the user to remove the network by force (docker network rm -f) and allow the user to inspect the task.

PR #35246 looks like it makes sense; would you merge it into Docker EE 17.06.2-ee-6?

Workaround

  1. Install the swarmctl tool from https://github.com/docker/swarmkit/tree/master/cmd/swarmctl to clean up the stale task still recorded on the network
# ln -s /var/run/docker/swarm/control.sock /var/run/swarmd.sock
# ./swarmctl task ls | grep 2tkeoqvtpluy50om7pwgrooab
2tkeoqvtpluy50om7pwgrooab  .0                                          RUNNING        RUNNING 1 day ago  iZbp1i817jjyh2vnc0t0scZ
# ./swarmctl task inspect 2tkeoqvtpluy50om7pwgrooab
ID                     : 2tkeoqvtpluy50om7pwgrooab
Slot                   : 0
Service                :
Status
  Desired State        : RUNNING
  Last State           : RUNNING
  Timestamp            : 2018-03-07T01:05:12.829126125Z
  Message              : started
Node                   : iZbp1i817jjyh2vnc0t0scZ
Spec
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x9df5de]

goroutine 1 [running]:
github.com/docker/swarmkit/cmd/swarmctl/task.printTaskSummary(0xc4202d6000, 0xc4201cdcc0)
        /go/src/github.com/docker/swarmkit/cmd/swarmctl/task/inspect.go:67 +0x57e
github.com/docker/swarmkit/cmd/swarmctl/task.glob..func1(0xf154a0, 0xc420045ab0, 0x1, 0x1, 0x0, 0x0)
        /go/src/github.com/docker/swarmkit/cmd/swarmctl/task/inspect.go:137 +0x5e8
github.com/docker/swarmkit/vendor/github.com/spf13/cobra.(*Command).execute(0xf154a0, 0xc420045a50, 0x1, 0x1, 0xf154a0, 0xc420045a50)
        /go/src/github.com/docker/swarmkit/vendor/github.com/spf13/cobra/command.go:565 +0x3c1
github.com/docker/swarmkit/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xf15aa0, 0xb682d9, 0x33, 0xc420014d05)
        /go/src/github.com/docker/swarmkit/vendor/github.com/spf13/cobra/command.go:656 +0x368
main.main()
        /go/src/github.com/docker/swarmkit/cmd/swarmctl/main.go:21 +0x46

# docker inspect --type task 2tkeoqvtpluy50om7pwgrooab
[
    {
        "ID": "",
        "Version": {},
        "CreatedAt": "0001-01-01T00:00:00Z",
        "UpdatedAt": "0001-01-01T00:00:00Z",
        "Labels": null,
        "Spec": {
            "ContainerSpec": {},
            "ForceUpdate": 0
        },
        "Status": {
            "Timestamp": "0001-01-01T00:00:00Z",
            "ContainerStatus": {},
            "PortStatus": {}
        }
    }
]

~ # ./swarmctl task rm 2tkeoqvtpluy50om7pwgrooab
2tkeoqvtpluy50om7pwgrooab

~ # docker inspect --type task 2tkeoqvtpluy50om7pwgrooab
[]
Error: No such task: 2tkeoqvtpluy50om7pwgrooab

~ # ./swarmctl task inspect 2tkeoqvtpluy50om7pwgrooab
Error: task 2tkeoqvtpluy50om7pwgrooab not found

@ashish235

Facing the same issue.

Error response from daemon: rpc error: code = FailedPrecondition desc = network bbosggv6eg8o3342py6w5acsa is in use by task 1qpmy1luiijre5m11nus6m8td
root@fabric001:/var/run/docker/netns# docker inspect 1qpmy1luiijre5m11nus6m8td
[
    {
        "ID": "",
        "Version": {},
        "CreatedAt": "0001-01-01T00:00:00Z",
        "UpdatedAt": "0001-01-01T00:00:00Z",
        "Labels": null,
        "Spec": {
            "ForceUpdate": 0
        },
        "Status": {
            "Timestamp": "0001-01-01T00:00:00Z",
            "ContainerStatus": {},
            "PortStatus": {}
        }
    }
]

@thaJeztah
Member

@ashish235 is the network "attachable", and is a container (that's not part of a swarm service) attached to that network?

@ashish235

@thaJeztah, yes. The network is an attachable one, but no other service was attached to it. I removed all the stacks running on the cluster, so there were 0 containers running.

@sergey-safarov

Hello @thaJeztah,
I have the same issue on version 18.03.1-ce on CentOS.
It is also a swarm "attachable" network, and none of the containers are part of a swarm service (no swarm services exist).

@sergey-safarov

I can inspect the task but cannot delete it. As a workaround, I destroyed the swarm cluster and recreated it.

[root@node13 ~]# docker inspect 0iis4whgw7d5px935oi27dvl3
[
    {
        "ID": "",
        "Version": {},
        "CreatedAt": "0001-01-01T00:00:00Z",
        "UpdatedAt": "0001-01-01T00:00:00Z",
        "Labels": null,
        "Spec": {
            "ForceUpdate": 0
        },
        "Status": {
            "Timestamp": "0001-01-01T00:00:00Z",
            "PortStatus": {}
        }
    }
]

@sergey-safarov

After recreating the swarm, the issue is present again.

[root@node11 ~]# docker network rm kazoo
Error response from daemon: rpc error: code = FailedPrecondition desc = network ijfl4wzlytlflg3bthuy4pmaz is in use by task 1fu9plrcby2bnpxy5uvk9g9nb
[root@node11 ~]# 
[root@node11 ~]# 
[root@node11 ~]# 
[root@node11 ~]# docker inspect 1fu9plrcby2bnpxy5uvk9g9nb
[
    {
        "ID": "",
        "Version": {},
        "CreatedAt": "0001-01-01T00:00:00Z",
        "UpdatedAt": "0001-01-01T00:00:00Z",
        "Labels": null,
        "Spec": {
            "ForceUpdate": 0
        },
        "Status": {
            "Timestamp": "0001-01-01T00:00:00Z",
            "PortStatus": {}
        }
    }
]
[root@node11 ~]#

I will downgrade Docker.

@sergey-safarov

After downgrading to 17.12.1-ce I can delete and create the attachable swarm network, but another error is present:

May 02 21:39:30 node11.docker.nga911.com dockerd[25954]: time="2018-05-02T21:39:30.727321895Z" level=error msg="task allocation failure" error="failed to allocate network IP for task s0s191pud5ozsmyjtqn9r4uhg network urt4hisrgeukn0s54no93yb1l: could not allocate IP from IPAM: Address already in use" module=node node.id=iaarrabaqehouzaqxpujwac6d
May 02 21:39:50 node11.docker.nga911.com dockerd[25954]: time="2018-05-02T21:39:50.754656960Z" level=error msg="bbc353e514a923d23d480dbea24b7e1cedec828df9fbedbb6b3072361b222f4b cleanup: failed to delete container from containerd: no such container"
May 02 21:39:50 node11.docker.nga911.com dockerd[25954]: time="2018-05-02T21:39:50.754705806Z" level=error msg="Handler for POST /v1.35/containers/bbc353e514a923d23d480dbea24b7e1cedec828df9fbedbb6b3072361b222f4b/start returned error: attaching to network failed, make sure your network options are correct and check manager logs: context deadline exceeded"
May 02 21:39:52 node11.docker.nga911.com dockerd[25954]: time="2018-05-02T21:39:52.231154679Z" level=error msg="task allocation failure" error="failed to allocate network IP for task j2orbgyvx7ylt9sbbgmas6buc network urt4hisrgeukn0s54no93yb1l: could not allocate IP from IPAM: Address already in use" module=node node.id=iaarrabaqehouzaqxpujwac6d

May 02 21:40:12 node11.docker.nga911.com dockerd[25954]: time="2018-05-02T21:40:12.255648654Z" level=error msg="b10fa4320b225781ee3b713e9392573a7c296ed8a986933abf354fc24f0911cf cleanup: failed to delete container from containerd: no such container"
May 02 21:40:12 node11.docker.nga911.com dockerd[25954]: time="2018-05-02T21:40:12.255699842Z" level=error msg="Handler for POST /v1.35/containers/b10fa4320b225781ee3b713e9392573a7c296ed8a986933abf354fc24f0911cf/start returned error: attaching to network failed, make sure your network options are correct and check manager logs: context deadline exceeded"
May 02 21:40:13 node11.docker.nga911.com dockerd[25954]: time="2018-05-02T21:40:13.769850421Z" level=error msg="task allocation failure" error="failed to allocate network IP for task z1ncot2kacorl4bwjkz94apwn network urt4hisrgeukn0s54no93yb1l: could not allocate IP from IPAM: Address already in use" module=node node.id=iaarrabaqehouzaqxpujwac6d

I also:

  1. left the swarm on all nodes
  2. then stopped Docker on all nodes
  3. executed "rm -Rf /var/lib/docker/network/files/" on all nodes
  4. started Docker
  5. joined the nodes back to the swarm

And the issue still exists.

@sergey-safarov

The issue is also reproduced after switching to the Weave network.

@sergey-safarov

I resolved the issue after switching the IP network mask from /27 to /23.
It looks as though the first IPs in the subnet are reserved and cannot be assigned to containers.
In my case I could not assign 10.0.9.3 to a container when the network was 10.0.9.0/27.
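A sketch of recreating the network with a wider subnet, as described above (the network name is from this report; the /23 subnet is an illustrative choice covering the same range):

docker network rm kazoo
docker network create --driver overlay --attachable --subnet 10.0.8.0/23 kazoo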

@mefyl

mefyl commented Jul 2, 2018

Just reproduced with 18.03:

Error response from daemon: rpc error: code = FailedPrecondition desc = network qewym9htq0a36kxqq5kd5wh5t is in use by task ye75nfrjlmjmh20u3eyrqiza3
$ docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE               PORTS
$ docker inspect ye75nfrjlmjmh20u3eyrqiza3
[
    {
        "ID": "ye75nfrjlmjmh20u3eyrqiza3",
        ...
    }
]

From the worker itself, the task has been running for some time (the service was removed more than 20 minutes ago):

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
52bdf588f106        alpine:latest       "top"               26 minutes ago      Up 24 minutes                           F1StressService154.3.ye75nfrjlmjmh20u3eyrqiza3

Version:

 Engine:
  Version:      18.03.1-ee-1
  API version:  1.37 (minimum version 1.12)
  Go version:   go1.10.2
  Git commit:   d5375b4
  Built:        Wed Jun 27 01:30:27 2018
  OS/Arch:      linux/amd64
  Experimental: false

@man4j

man4j commented Sep 3, 2018

I have the same issue in Docker 18.06.

@straurob

Experiencing this on 18.09.0 as well. We attached a manually started container to the db-net network, which seems to have caused this to happen.

root@dev-001:~# docker network rm db-net
Error response from daemon: rpc error: code = FailedPrecondition desc = network cti7sq1lkmks9dom9y1v12q3z is in use by task 9zlxcpeuc8122ppmp6xsgx6q4
root@dev-001:~# docker network inspect db-net
[
    {
        "Name": "db-net",
        "Id": "cti7sq1lkmks9dom9y1v12q3z",
        "Created": "2019-02-18T10:04:09.351539305Z",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.1.246.0/26",
                    "Gateway": "10.1.246.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": null,
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4121"
        },
        "Labels": null
    }
]

Is there any fix or workaround to get rid of this?

@thaJeztah
Member

Is that container still attached / running? If so, the error is legitimate, as it won't remove networks if they're still in use.

@straurob

straurob commented Mar 1, 2019

Is that container still attached / running? If so, the error is legitimate, as it won't remove networks if they're still in use.

No, the container is neither running nor attached. In fact, it doesn't even exist anymore. My only way out was killing that task using tasknuke: https://hub.docker.com/r/dperny/tasknuke/ (see the sketch below)
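For others who end up here, a sketch of the tasknuke invocation, assuming the usage described on that Docker Hub page (run on a manager with the swarm control socket mounted, passing the full task ID; the task ID below is the one from my error):

docker run --rm -v /var/run/docker/swarm/control.sock:/var/run/swarmd.sock \
    dperny/tasknuke 9zlxcpeuc8122ppmp6xsgx6q4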

@kkbruce

kkbruce commented Mar 29, 2019

I upgraded Docker Engine from 18.09 to 18.09.3 and have the same issue.

Version

PS C:\> docker version
Client:
 Version:           18.09.3
 API version:       1.39
 Go version:        go1.10.8
 Git commit:        142dfcedca
 Built:             02/28/2019 06:33:17
 OS/Arch:           windows/amd64
 Experimental:      false

Server:
 Engine:
  Version:          18.09.3
  API version:      1.39 (minimum version 1.24)
  Go version:       go1.10.8
  Git commit:       142dfcedca
  Built:            02/28/2019 06:31:15
  OS/Arch:          windows/amd64
  Experimental:     true

Network

PS C:\> docker network ls
NETWORK ID          NAME                      DRIVER              SCOPE
813d10dde3e1        nat                       nat                 local
9672907242d0        none                      null                local
ft9hz8uehgy4        portainer_agent_network   overlay             swarm

Issue

PS C:\> docker network rm ft9hz8uehgy4
Error response from daemon: rpc error: code = FailedPrecondition desc = network ft9hz8uehgy4161rtp0j6v921 is in use by task 3aws65041dl44x8tcn1zys0p2

network inspect

PS C:\> docker network inspect portainer_agent_network
[
    {
        "Name": "portainer_agent_network",
        "Id": "ft9hz8uehgy4161rtp0j6v921",
        "Created": "2019-03-27T14:27:48.328851+08:00",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.0.0/24",
                    "Gateway": "10.0.0.1"
                }
            ]
        },
        "Internal": true,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "lb-portainer_agent_network": {
                "Name": "portainer_agent_network-endpoint",
                "EndpointID": "b626fa986a016da6267425f29bcea2dd834bac514abff382611e1fff478ed724",
                "MacAddress": "00:15:5d:69:97:f2",
                "IPv4Address": "10.0.0.5/24",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4102",
            "com.docker.network.windowsshim.hnsid": "fecc2f29-b8e1-4ec0-bea6-30e8b8a9fd08"
        },
        "Labels": {},
        "Peers": [
            {
                "Name": "5dc59b9a21bc",
                "IP": "172.21.5.199"
            }
        ]
    }
]

docker object info

PS C:\> docker inspect 3aws65041dl44x8tcn1zys0p2
[
    {
        "ID": "3aws65041dl44x8tcn1zys0p2",
        "Version": {
            "Index": 519186
        },
        "CreatedAt": "2019-03-27T06:16:42.7302836Z",
        "UpdatedAt": "2019-03-27T09:42:13.8861598Z",
        "Labels": {},
        "Spec": {
            "NetworkAttachmentSpec": {
                "ContainerID": "6a54881ec55e54ccd87ec86a33faf886b2fda971a704dc5c0db9467af6bdc4fa"
            },
            "Networks": [
                {
                    "Target": "ft9hz8uehgy4161rtp0j6v921"
                }
            ],
            "ForceUpdate": 0,
            "Runtime": "attachment"
        },
        "NodeID": "0nxfvz8vums6x7cy45o860jge",
        "Status": {
            "Timestamp": "2019-03-27T06:27:48.4318985Z",
            "State": "starting",
            "Message": "starting",
            "PortStatus": {}
        },
        "DesiredState": "running",
        "NetworksAttachments": [
            {
                "Network": {
                    "ID": "ft9hz8uehgy4161rtp0j6v921",
                    "Version": {
                        "Index": 519181
                    },
                    "CreatedAt": "2018-12-26T09:17:50.7122426Z",
                    "UpdatedAt": "2019-03-27T09:42:13.2867227Z",
                    "Spec": {
                        "Name": "portainer_agent_network",
                        "Labels": {},
                        "DriverConfiguration": {
                            "Name": "overlay"
                        },
                        "Internal": true,
                        "Attachable": true,
                        "IPAMOptions": {
                            "Driver": {
                                "Name": "default"
                            }
                        },
                        "Scope": "swarm"
                    },
                    "DriverState": {
                        "Name": "overlay",
                        "Options": {
                            "com.docker.network.driver.overlay.vxlanid_list": "4102"
                        }
                    },
                    "IPAMOptions": {
                        "Driver": {
                            "Name": "default"
                        },
                        "Configs": [
                            {
                                "Subnet": "10.0.0.0/24",
                                "Gateway": "10.0.0.1"
                            }
                        ]
                    }
                },
                "Addresses": [
                    "10.0.0.4/24"
                ]
            }
        ]
    }
]

No container is running or attached.

@topikettunen

I've occasionally been having the same issue when working with stacks. I can reproduce it by running a script that starts a swarm stack and then connects a few containers to the stack's network. The issue arises when I Ctrl-C out of this script after the stack has been created and maybe one or two containers have been connected to the stack's network.

After Ctrl-C, I can prune everything related to the stack and the spawned containers, but I just can't delete the network, even though there are no containers or services running that are related to it. I can write a minimal script to help debug this, but since this occurs in a work-related script I need to rewrite something similar.

Necessary info:

$ docker version
Client:
 Version:           18.09.5
 API version:       1.39
 Go version:        go1.10.8
 Git commit:        e8ff056
 Built:             Thu Apr 11 04:43:57 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.2
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.6
  Git commit:       6247962
  Built:            Sun Feb 10 04:13:06 2019
  OS/Arch:          linux/amd64
  Experimental:     false
~ $ docker network rm locklib_default
Error response from daemon: rpc error: code = FailedPrecondition desc = network prpox4dh2qjx2ittlwtr60eno is in use by task oowhiznjv7vszezqll02cr29o
$ docker inspect locklib_default
[
    {
        "Name": "locklib_default",
        "Id": "prpox4dh2qjx2ittlwtr60eno",
        "Created": "2019-05-21T06:01:19.8515193Z",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.0.0/24",
                    "Gateway": "10.0.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": null,
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4097"
        },
        "Labels": {
            "com.docker.stack.namespace": "locklib"
        }
    }
]
$ docker inspect oowhiznjv7vszezqll02cr29o
[
    {
        "ID": "oowhiznjv7vszezqll02cr29o",
        "Version": {
            "Index": 19272
        },
        "CreatedAt": "2019-05-21T06:08:54.0494636Z",
        "UpdatedAt": "2019-05-21T06:08:54.2787945Z",
        "Labels": {},
        "Spec": {
            "NetworkAttachmentSpec": {
                "ContainerID": "c4e0a35fb0fd79fcaadb9d6c34ac31e3385810fc5a961b70726617345b144086"
            },
            "Networks": [
                {
                    "Target": "prpox4dh2qjx2ittlwtr60eno"
                }
            ],
            "ForceUpdate": 0,
            "Runtime": "attachment"
        },
        "NodeID": "wvfx5gq6b8o0taaxj9twogsio",
        "Status": {
            "Timestamp": "2019-05-21T06:08:54.2280174Z",
            "State": "running",
            "Message": "started",
            "PortStatus": {}
        },
        "DesiredState": "running",
        "NetworksAttachments": [
            {
                "Network": {
                    "ID": "prpox4dh2qjx2ittlwtr60eno",
                    "Version": {
                        "Index": 19220
                    },
                    "CreatedAt": "2019-05-21T06:01:19.8515193Z",
                    "UpdatedAt": "2019-05-21T06:06:10.7315856Z",
                    "Spec": {
                        "Name": "locklib_default",
                        "Labels": {
                            "com.docker.stack.namespace": "locklib"
                        },
                        "DriverConfiguration": {
                            "Name": "overlay"
                        },
                        "Attachable": true,
                        "Scope": "swarm"
                    },
                    "DriverState": {
                        "Name": "overlay",
                        "Options": {
                            "com.docker.network.driver.overlay.vxlanid_list": "4097"
                        }
                    },
                    "IPAMOptions": {
                        "Driver": {
                            "Name": "default"
                        },
                        "Configs": [
                            {
                                "Subnet": "10.0.0.0/24",
                                "Gateway": "10.0.0.1"
                            }
                        ]
                    }
                },
                "Addresses": [
                    "10.0.0.9/24"
                ]
            }
        ]
    }
]

@thaJeztah
Member

even though there are no containers or services running related to it

Did you check if there's a stopped container that's attached?

@topikettunen

@thaJeztah Yeah. When this happens and there are still some containers running or shutting down, I usually just run docker container prune, and if that doesn't clean up every container I remove them manually. But I'm still not able to remove the network. To work around this I need to restart the daemon, which then allows me to remove the network (see the sketch below).
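A sketch of that workaround sequence on a single host (the network name is the one from my output above):

docker container prune -f          # remove any stopped containers first
sudo systemctl restart docker      # restart the daemon to drop the stale task
docker network rm locklib_default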

@thaJeztah
Member

I see your daemon version is a few versions behind; might be worth upgrading to the latest version to be sure it's not an issue that was fixed already 🤔

@topikettunen

topikettunen commented May 22, 2019

I decided to tackle this on my home computer, and it seems I can reproduce it with the latest versions and also on a freshly installed Docker. I'll try to write a minimal reproducible script for debugging this, since I'm currently not entirely sure whether something funky is happening in the container itself or whether this is related to something else.

@orbitwebsig

orbitwebsig commented Jan 6, 2020

I had the same issue on a MacBook; I just had to restart Docker Desktop, then:
docker network rm [network]

and recreate whatever was needed thereafter.

@baznikin

baznikin commented Jun 4, 2020

Same on recent docker-ce 19.03.11! Looks like some stale info.

$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
4a1696fd87e1        bridge              bridge              local
ff27a7d91741        docker_gwbridge     bridge              local
xkcu859m7ugh        etcd_etcd           overlay             swarm
53684254f46f        host                host                local
rpgueh357qcl        ingress             overlay             swarm
418210bbdf82        none                null                local
ppd1w6ipq3q0        pg_pg               overlay             swarm
tpoq5ncst7nj        pg_pgdb             overlay             swarm
$ docker network rm pg_pg
Error response from daemon: rpc error: code = FailedPrecondition desc = network ppd1w6ipq3q0isc1hyy1jssiq is in use by task azf9a5bezexrpmmbr3qu4ww3k
$ docker network inspect ppd1w6ipq3q0isc1hyy1jssiq
[
    {
        "Name": "pg_pg",
        "Id": "ppd1w6ipq3q0isc1hyy1jssiq",
        "Created": "2020-06-03T13:48:52.786214384Z",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.9.0/24",
                    "Gateway": "10.0.9.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": null,
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4105"
        },
        "Labels": {
            "com.docker.stack.namespace": "pg"
        }
    }
]
$ docker inspect azf9a5bezexrpmmbr3qu4ww3k
[
    {
        "ID": "azf9a5bezexrpmmbr3qu4ww3k",
        "Version": {
            "Index": 10934
        },
        "CreatedAt": "2020-06-03T14:17:29.485442958Z",
        "UpdatedAt": "2020-06-03T16:19:22.98355388Z",
        "Labels": {},
        "Spec": {
            "NetworkAttachmentSpec": {
                "ContainerID": "3894a33c0999787179d8244d8f555fbf072142d49898d653ca8248e424a96b4a"
            },
            "Networks": [
                {
                    "Target": "ppd1w6ipq3q0isc1hyy1jssiq"
                }
            ],
            "ForceUpdate": 0,
            "Runtime": "attachment"
        },
        "NodeID": "xc7nyfx1lyyunfty4vwvwjje0",
        "Status": {
            "Timestamp": "2020-06-03T14:17:29.643598774Z",
            "State": "running",
            "Message": "started",
            "PortStatus": {}
        },
        "DesiredState": "running",
        "NetworksAttachments": [
            {
                "Network": {
                    "ID": "ppd1w6ipq3q0isc1hyy1jssiq",
                    "Version": {
                        "Index": 10918
                    },
                    "CreatedAt": "2020-06-03T13:48:52.786214384Z",
                    "UpdatedAt": "2020-06-03T16:19:19.969251994Z",
                    "Spec": {
                        "Name": "pg_pg",
                        "Labels": {
                            "com.docker.stack.namespace": "pg"
                        },
                        "DriverConfiguration": {
                            "Name": "overlay"
                        },
                        "Attachable": true,
                        "Scope": "swarm"
                    },
                    "DriverState": {
                        "Name": "overlay",
                        "Options": {
                            "com.docker.network.driver.overlay.vxlanid_list": "4105"
                        }
                    },
                    "IPAMOptions": {
                        "Driver": {
                            "Name": "default"
                        },
                        "Configs": [
                            {
                                "Subnet": "10.0.9.0/24",
                                "Gateway": "10.0.9.1"
                            }
                        ]
                    }
                },
                "Addresses": [
                    "10.0.9.214/24"
                ]
            }
        ]
    }
]
$ docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
4le67f1l9oigutkjcyo47tef3 *   xxxxxx-app          Ready               Active              Leader              19.03.11
xc7nyfx1lyyunfty4vwvwjje0     xxxxxx-db           Ready               Active              Reachable           19.03.11
9a04m4tiqhkh8eft102pf9gfw     xxxxxx-db2          Ready               Active              Reachable           19.03.11
0oi0sg37516yycfoy9oxj4555     xxxxxx-stor         Ready               Active              Reachable           19.03.11

However, there are no traces of a running container on node xc7nyfx1lyyunfty4vwvwjje0, and no files; only these records in the log file:

support@xxxxxx-db:~$ sudo find /var/lib/ -iname '*azf9a5bezexrpmmbr3qu4ww3k*'
support@xxxxxx-db:~$ sudo find /var/lib/ -iname '*3894a33c0999*'
support@xxxxxx-db:~$

support@xxxxxx-db:~$ journalctl | grep 3894a33c0999
Jun 03 21:17:29 xxxxxx-db dockerd[1097]: time="2020-06-03T21:17:29.893926521+07:00" level=error msg="3894a33c0999787179d8244d8f555fbf072142d49898d653ca8248e424a96b4a cleanup: failed to delete container from containerd: no such container"
Jun 03 21:17:29 xxxxxx-db dockerd[1097]: time="2020-06-03T21:17:29.909828191+07:00" level=error msg="Handler for POST /v1.40/containers/3894a33c0999787179d8244d8f555fbf072142d49898d653ca8248e424a96b4a/start returned error: failed to get network during CreateEndpoint: network ppd1w6ipq3q0isc1hyy1jssiq not found"
Jun 04 19:56:56 xxxxxx-db dockerd[1097]: time="2020-06-04T19:56:56.090549761+07:00" level=error msg="Error getting service 3894a33c0999787179d8244d8f555fbf072142d49898d653ca8248e424a96b4a: service 3894a33c0999787179d8244d8f555fbf072142d49898d653ca8248e424a96b4a not found"
Jun 04 19:56:56 xxxxxx-db dockerd[1097]: time="2020-06-04T19:56:56.094526155+07:00" level=error msg="Error getting task 3894a33c0999787179d8244d8f555fbf072142d49898d653ca8248e424a96b4a: task 3894a33c0999787179d8244d8f555fbf072142d49898d653ca8248e424a96b4a not found"
Jun 04 19:56:56 xxxxxx-db dockerd[1097]: time="2020-06-04T19:56:56.099420427+07:00" level=error msg="Error getting node 3894a33c0999787179d8244d8f555fbf072142d49898d653ca8248e424a96b4a: node 3894a33c0999787179d8244d8f555fbf072142d49898d653ca8248e424a96b4a not found"
Jun 04 20:07:30 xxxxxx-db sudo[23616]:  support : TTY=pts/1 ; PWD=/home/support ; USER=root ; COMMAND=/usr/bin/find /var/lib/containerd/ -name *3894a33c0999*
Jun 04 20:07:33 xxxxxx-db sudo[23633]:  support : TTY=pts/1 ; PWD=/home/support ; USER=root ; COMMAND=/usr/bin/find /var/lib/containerd/ -name *3894a33c0999*
Jun 04 20:07:37 xxxxxx-db sudo[23638]:  support : TTY=pts/1 ; PWD=/home/support ; USER=root ; COMMAND=/usr/bin/find /var/lib/ -name *3894a33c0999*

You can't delete this container or task, but you can still inspect it:

support@xxxxxx-db:~$ docker rm azf9a5bezexrpmmbr3qu4ww3k
Error: No such container: azf9a5bezexrpmmbr3qu4ww3k
support@xxxxxx-db:~$
support@xxxxxx-db:~$ docker rm 3894a33c0999787179d8244d8f555fbf072142d49898d653ca8248e424a96b4a
Error: No such container: 3894a33c0999787179d8244d8f555fbf072142d49898d653ca8248e424a96b4a
support@xxxxxx-db:~$ docker inspect azf9a5bezexrpmmbr3qu4ww3k
[
    {
        "ID": "azf9a5bezexrpmmbr3qu4ww3k",

Version:

$ docker version
Client: Docker Engine - Community
 Version:           19.03.11
 API version:       1.40
 Go version:        go1.13.10
 Git commit:        42e35e61f3
 Built:             Mon Jun  1 09:12:22 2020
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.11
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.13.10
  Git commit:       42e35e61f3
  Built:            Mon Jun  1 09:10:54 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.13
  GitCommit:        7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc:
  Version:          1.0.0-rc10
  GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
support@xxxxxx-db:~$ docker info
Client:
 Debug Mode: false

Server:
 Containers: 2
  Running: 2
  Paused: 0
  Stopped: 0
 Images: 5
 Server Version: 19.03.11
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: active
  NodeID: xc7nyfx1lyyunfty4vwvwjje0
  Is Manager: true
  ClusterID: 4l4svb3l2orf3dudnwypog52g
  Managers: 4
  Nodes: 4
  Default Address Pool: 10.0.0.0/8
  SubnetSize: 24
  Data Path Port: 4789
  Orchestration:
   Task History Retention Limit: 5
  Raft:
   Snapshot Interval: 10000
   Number of Old Snapshots to Retain: 0
   Heartbeat Tick: 1
   Election Tick: 10
  Dispatcher:
   Heartbeat Period: 5 seconds
  CA Configuration:
   Expiry Duration: 3 months
   Force Rotate: 0
  Autolock Managers: false
  Root Rotation In Progress: false
  Node Address: 192.168.154.131
  Manager Addresses:
   192.168.154.130:2377
   192.168.154.131:2377
   192.168.154.132:2377
   192.168.154.133:2377
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc version: dc9208a3303feef5b3839f4323d9beb36df0a9dd
 init version: fec3683
 Security Options:
  apparmor
  seccomp
   Profile: default
 Kernel Version: 4.15.0-101-generic
 Operating System: Ubuntu 18.04.4 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 24
 Total Memory: 246GiB
 Name: xxxxxx-db
 ID: YFXE:KSEF:UBH6:5AKJ:AIPO:AQT5:EKW2:XU6R:BCYI:WXEY:REGC:6KTH
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: No swap limit support

@lv10nightmare

lv10nightmare commented Jul 30, 2020

Same on docker-ce 19.03.11 & 19.03.12, no running containers

$docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
ep27n7o2tet0iov37de2ykq0u     vm-kvm6-app     Ready               Active                                  19.03.11
9ry5t46doovjch00c6hr55jgt *   vm-kvm7-app     Ready               Active              Leader              19.03.11
nijew7qb4ayiv0bomb67sx6ku     vm-kvm8-app     Ready               Active                                  19.03.12
$docker network rm hub-network
Error response from daemon: rpc error: code = FailedPrecondition desc = network gvo1cdcball903nh7izvw62zh is in use by task hwhfxvwgp4cq8t9yzynqmwuqc

@lv10nightmare

Same on docker-ce 19.03.11 & 19.03.12, no running containers

$docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
ep27n7o2tet0iov37de2ykq0u     vm-kvm6-app     Ready               Active                                  19.03.11
9ry5t46doovjch00c6hr55jgt *   vm-kvm7-app     Ready               Active              Leader              19.03.11
nijew7qb4ayiv0bomb67sx6ku     vm-kvm8-app     Ready               Active                                  19.03.12

$docker network rm hub-network
Error response from daemon: rpc error: code = FailedPrecondition desc = network gvo1cdcball903nh7izvw62zh is in use by task hwhfxvwgp4cq8t9yzynqmwuqc

I removed a node from the swarm, and then I successfully removed the network (see the sketch below).
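A sketch of that node-removal workaround (the node ID is a placeholder; hub-network is the network from the error above):

docker node update --availability drain <node-id>   # on a manager: drain the node first
docker swarm leave                                   # on the node being removed
docker node rm <node-id>                             # back on a manager, once the node is down
docker network rm hub-network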

@frankli0324

So this bug has existed for four years now, and it's still not fixed?
I'm running the latest 20.10.2, and the Debian-maintained 18.09.1, and I still run into this occasionally.

@jinfosec

jinfosec commented Feb 4, 2021

The problem is not yet solved. I have this problem regularly; I start the containers with fixed IPs. After some time of cleaning up all containers and then deploying again, some IP addresses stay allocated, giving the error:
docker: Error response from daemon: attaching to network failed, make sure your network options are correct and check manager logs: context deadline exceeded.
My workaround is to remove the network and rebuild the docker swarm.
Does anyone have a clue how to prevent this issue?

@Vimtekken

The issue still persists in 20.10.8. Currently we restart the Docker daemons, which allows us to remove the network. That is really only useful in testing and is no real long-term solution.

@jasondalysb

Same issue here

@masip85

masip85 commented Jul 19, 2022

Same issue here with 20.10.14

@ptrxyz

ptrxyz commented Oct 11, 2022

Yep, bringing this up too. It still exists on Ubuntu 22.04.

What's the workaround to fix this? Do I really have to recreate my cluster?

@A-VIKASH-KUMAR

First you have to remove all the services attached to the network (see the sketch below):
docker service rm

Next you have to remove the network you want to delete:
docker network rm
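A minimal sketch of those two steps with placeholder names:

docker service ls                 # find the services still attached to the network
docker service rm my-service      # remove each of them
docker network rm my-network      # then remove the network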

@cobolbaby

cobolbaby commented Jun 25, 2023

Same issue with 20.10.24. Using swarmctl, I can retrieve many stale tasks whose service name shows up as just ".0".

[Screenshot: swarmctl task listing showing the stale tasks]

@laurikimmel

Witnessing the same issue using Docker 24.0.7.

$ docker --version
Docker version 24.0.7, build afdd53b

I was able to remove the network after restarting the Docker daemon (in a single-node swarm):

$ docker network rm --force my-broken-network
Error response from daemon: rpc error: code = FailedPrecondition desc = network wgxjotssiygmljdluuosq5rmj is in use by task xszbmso8yehtwpf9xp16rm58a
$ sudo systemctl restart docker
$ docker network rm my-broken-network
my-broken-network

@HDLP9

HDLP9 commented Dec 30, 2023

Same on docker-ce 19.03.11 & 19.03.12, no running containers

$docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
ep27n7o2tet0iov37de2ykq0u     vm-kvm6-app     Ready               Active                                  19.03.11
9ry5t46doovjch00c6hr55jgt *   vm-kvm7-app     Ready               Active              Leader              19.03.11
nijew7qb4ayiv0bomb67sx6ku     vm-kvm8-app     Ready               Active                                  19.03.12

$docker network rm hub-network
Error response from daemon: rpc error: code = FailedPrecondition desc = network gvo1cdcball903nh7izvw62zh is in use by task hwhfxvwgp4cq8t9yzynqmwuqc

I removed a node from the swarm network, then I successfully removed the network.

The only thing that works...Thanks.
