Unable to remove network "has active endpoints" #17217

Closed
rmb938 opened this issue Oct 20, 2015 · 89 comments · Fixed by #21261

@rmb938
Contributor

rmb938 commented Oct 20, 2015

Not too sure if this belongs in this repo or libnetwork.

docker version: Docker version 1.9.0-rc1, build 9291a0e
docker info:

Containers: 0
Images: 5
Engine Version: 1.9.0-rc1
Storage Driver: devicemapper
 Pool Name: docker-253:0-390879-pool
 Pool Blocksize: 65.54 kB
 Base Device Size: 107.4 GB
 Backing Filesystem: xfs
 Data file: /dev/loop0
 Metadata file: /dev/loop1
 Data Space Used: 2.023 GB
 Data Space Total: 107.4 GB
 Data Space Available: 11.62 GB
 Metadata Space Used: 1.7 MB
 Metadata Space Total: 2.147 GB
 Metadata Space Available: 2.146 GB
 Udev Sync Supported: true
 Deferred Removal Enabled: false
 Deferred Deletion Enabled: false
 Deferred Deleted Device Count: 0
 Data loop file: /var/lib/docker/devicemapper/devicemapper/data
 Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
 Library Version: 1.02.93-RHEL7 (2015-01-28)
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.10.0-229.14.1.el7.x86_64
Operating System: CentOS Linux 7 (Core)
CPUs: 2
Total Memory: 1.797 GiB
Name: carbon1.rmb938.com
ID: IAQS:6E74:7NGG:5JOG:JXFM:26VD:IAQV:FZNU:E23J:QUAA:NI4O:DI3S

uname -a: Linux carbon1.rmb938.com 3.10.0-229.14.1.el7.x86_64 #1 SMP Tue Sep 15 15:05:51 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

List the steps to reproduce the issue:

  1. Create a network with a remote driver
  2. Run a container connected to the network
  3. Kill and Remove the container
  4. Remove the network

Describe the results you received:

If the remote network driver returns an error when processing /NetworkDriver.Leave, Docker still kills and removes the container but does not remove the endpoint. Docker's internal DB therefore still believes the endpoint exists even though the container has been removed.

When you try and remove the network this error is returned

docker network rm net1      
Error response from daemon: network net1 has active endpoints

Describe the results you expected:

Docker should not kill or remove the container if /NetworkDriver.Leave returned an error.

@rmb938
Contributor Author

rmb938 commented Oct 20, 2015

This issue seems to be very intermittent and does not happen very often.

@mavenugo
Contributor

@rmb938 we had a few issues with dangling endpoints; they have been addressed via #17191. RC2 (or the latest master) should have a fix for that. For RC1 testers (huge thanks), we might need an additional workaround to clean up the state before starting RC2. We will update with proper docs.

@rmb938
Contributor Author

rmb938 commented Oct 20, 2015

Awesome. Thanks.

@rmb938 rmb938 closed this as completed Oct 20, 2015
@brendandburns

@mavenugo I just repro'd this in 1.10.0:

seems that #17191 wasn't a complete fix...

Do you have a work around? Even bouncing the docker daemon doesn't seem to resolve things.

(and let me know if I can get you more debug info; it's still repro'ing on my machine)

@keithbentrup

I also just reproduced this in 1.10.3 and landed here via google looking for a work around. I can't force disconnect the active endpoints b/c none of the containers listed via docker network inspect still exist.

I eventually had to recreate my consul container and restart the docker daemon.

@thaJeztah
Member

ping @mavenugo do you want this issue reopened, or prefer a new issue in case it has a different root cause?

@brendandburns
Copy link

Clarification, docker 1.10.1

Client:
 Version:      1.10.1
 API version:  1.22
 Go version:   go1.4.3
 Git commit:   9e83765
 Built:        Fri Feb 12 12:41:05 2016
 OS/Arch:      linux/arm

Server:
 Version:      1.10.1
 API version:  1.22
 Go version:   go1.4.3
 Git commit:   9e83765
 Built:        Fri Feb 12 12:41:05 2016
 OS/Arch:      linux/arm

@thaJeztah
Member

Let me reopen this for investigation

@thaJeztah thaJeztah reopened this Mar 12, 2016
@thaJeztah thaJeztah added this to the 1.11.0 milestone Mar 12, 2016
@thaJeztah
Member

Madhu, assigned you, but feel free to reassign, or point to the related workaround if it's there already 😄

@mavenugo
Contributor

@keithbentrup @brendandburns thanks for raising the issue. Couple of questions

  1. Are you using any multi-host network driver (such as the overlay driver)? Can you please share the docker network ls output?
  2. If you don't use a multi-host driver, can you please share the /var/lib/docker/network/files/local-kv.db file (via some file-sharing website)? Which network are you trying to remove, and how was the network originally created?

FYI. for a multi-host network driver, docker maintains the endpoints for a network across the cluster in the KV-Store. Hence, if any host in that cluster still has an endpoint alive in that network, we will see this error and this is an expected condition.

@mavenugo
Contributor

@thaJeztah PTAL at my comment above; based on the scenario, this need not be a bug. I'm okay to keep this issue open if that helps.

@keithbentrup

@mavenugo Yes, I'm using the overlay driver via docker-compose with a swarm host managing 2 nodes.

When I docker network inspect the network on each individual node, 1 node had 1 container listed that no longer existed and so could not be removed by docker rm -fv using the container name or id.

@mavenugo
Contributor

@keithbentrup This is a stale endpoint case. Do you happen to have the error log from when that container was originally removed (which left the endpoint in this state)?
BTW, if the container is removed but the endpoint is still seen, you can force-disconnect the endpoint using docker network disconnect -f {network} {endpoint-name}. You can get the endpoint name from the docker network inspect {network} command.
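[Editor's note] The force-disconnect sequence described above can be sketched as the script below. The network and endpoint names are placeholders borrowed from examples later in this thread; look up your own with `docker network inspect <network>`. `DOCKER` defaults to a dry-run `echo` so the commands are only printed; set `DOCKER=docker` to run them for real.

```shell
# Force-disconnect a stale endpoint by its *endpoint name* (not the
# container id), then remove the network. Dry-run by default.
DOCKER="${DOCKER:-echo docker}"
NETWORK="docker_gwbridge"              # placeholder network name
ENDPOINT_NAME="gateway_41ebd4fc365a"   # placeholder, from `docker network inspect`

$DOCKER network disconnect -f "$NETWORK" "$ENDPOINT_NAME"
$DOCKER network rm "$NETWORK"
```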

@mavenugo
Contributor

@brendandburns can you please help reply to #17217 (comment) ?

@brendandburns

@mavenugo sorry for the delay. I'm not using Docker multi-host networking afaik. It's a single-node Raspberry Pi and I haven't done anything other than install docker via hypriot.

Here's the output you requested (network is the network I can't delete)

$ docker network ls
NETWORK ID          NAME                DRIVER
d22a34456cb9        bridge              bridge              
ef922c6e861e        network             bridge              
c2859ad8bda4        none                null                
150ed62cfc44        host                host 

The kv file is attached; I had to name it .txt to get around GitHub filters, but it's the binary file.

local-kv.db.txt

I created the network via direct API calls (dockerode)

This has worked (create and delete) numerous times. I think in this instance I ran docker rm -f <container-id>, but I'm not positive; I might have power-cycled the machine...

Hope that helps.
--brendan

@keithbentrup

@mavenugo If by docker network disconnect -f {network} {endpoint-name} you mean docker network disconnect [OPTIONS] NETWORK CONTAINER per docker network disconnect --help, I tried that, but it complained (not surprisingly) with No such container.

If you meant the EndpointID instead of the container name/id, I did not try that (but will next time) because that's not what the --help suggested.

@mavenugo
Contributor

@keithbentrup I meant the -f option, which is available in v1.10.x. The force option also considers endpoint names from other nodes in the cluster. Hence, my earlier instructions will work just fine with the -f option if you are using Docker v1.10.x.

@mavenugo
Contributor

@brendandburns thanks for the info; it is quite useful for narrowing down the issue. There is a stale reference to the endpoint which is causing this problem. The stale reference was most likely caused by the power-cycle while the endpoints were being cleaned up. We will get this inconsistency resolved in 1.11.

@brendandburns

@mavenugo glad it helped. In the meantime, if I blow away that file, will things still work?

thanks
--brendan

@mavenugo
Contributor

@brendandburns yes, please go ahead. It will work just fine for you.
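[Editor's note] For anyone else landing here, the "blow away that file" cleanup approved above can be scripted roughly as below. This is a sketch, not an official procedure: it assumes the default Docker data root, a single-host (non-overlay) setup, and that you accept losing locally stored network state. It is dry-run by default; set `DRY_RUN=0` to execute.

```shell
# Dry-run sketch: stop the daemon, remove the stale network KV store,
# restart. Only for single hosts that do not use multi-host networking.
DRY_RUN="${DRY_RUN:-1}"
KV_DB="/var/lib/docker/network/files/local-kv.db"   # default data root assumed

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run systemctl stop docker    # stop the daemon before touching its state
run rm -f "$KV_DB"           # user-defined networks will need re-creating
run systemctl start docker
```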

@keithbentrup

@mavenugo I think you misunderstood me. I was using the -f option (verified in my shell history) on v1.10.x, but with the container id (not the endpoint id) b/c that's what the help suggests (the container, not the endpoint). If it's meant to work with either the container id or the endpoint id, then it's a bug, b/c it certainly does not disconnect with the container id and the -f option when the container no longer exists.

@keithbentrup

I was able to recreate a condition when trying to remove docker_gwbridge that might alleviate some of the confusion.
When I used the docker client pointing to a swarm manager, I experienced this output:

~/D/e/m/compose (develop) $ docker network inspect docker_gwbridge
[
    {
        "Name": "docker_gwbridge",
        "Id": "83dfeb756951d3d175e9058d0165b6a4997713c3e19b6a44a7210a09cd687d54",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1/16"
                }
            ]
        },
        "Containers": {
            "41ebd4fc365ae07543fd8454263d7c049d8e73036cddb22379ca1ce08a65402f": {
                "Name": "gateway_41ebd4fc365a",
                "EndpointID": "1cb2e4e3431a4c2ce1ed7c0ac9bc8dee67c06982344a75312e20e4a7d6e8972c",
                "MacAddress": "02:42:ac:12:00:02",
                "IPv4Address": "172.18.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.enable_icc": "false",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.name": "docker_gwbridge"
        }
    }
]
~/D/e/m/compose (develop) $ docker network disconnect -f docker_gwbridge 41ebd4fc365ae07543fd8454263d7c049d8e73036cddb22379ca1ce08a65402f
Error response from daemon: No such container: 41ebd4fc365ae07543fd8454263d7c049d8e73036cddb22379ca1ce08a65402f
~/D/e/m/compose (develop) $ docker network disconnect -f docker_gwbridge 1cb2e4e3431a4c2ce1ed7c0ac9bc8dee67c06982344a75312e20e4a7d6e8972c
Error response from daemon: No such container: 1cb2e4e3431a4c2ce1ed7c0ac9bc8dee67c06982344a75312e20e4a7d6e8972c
~/D/e/m/compose (develop) $ docker network rm docker_gwbridge
Error response from daemon: 500 Internal Server Error: network docker_gwbridge has active endpoints

I first tried removing the container by container name (not shown), then by id, then by container endpoint id. None were successful. Then I logged onto the docker host, and used the local docker client to issue commands via the docker unix socket:

root@dv-vm2:~# docker network disconnect -f docker_gwbridge 41ebd4fc365ae07543fd8454263d7c049d8e73036cddb22379ca1ce08a65402f
Error response from daemon: endpoint 41ebd4fc365ae07543fd8454263d7c049d8e73036cddb22379ca1ce08a65402f not found
root@dv-vm2:~# docker network disconnect -f docker_gwbridge 1cb2e4e3431a4c2ce1ed7c0ac9bc8dee67c06982344a75312e20e4a7d6e8972c
Error response from daemon: endpoint 1cb2e4e3431a4c2ce1ed7c0ac9bc8dee67c06982344a75312e20e4a7d6e8972c not found
root@dv-vm2:~# docker network rm docker_gwbridge
Error response from daemon: network docker_gwbridge has active endpoints
root@dv-vm2:~# docker network disconnect -f docker_gwbridge gateway_41ebd4fc365a
root@dv-vm2:~# docker network rm docker_gwbridge
root@dv-vm2:~# docker network inspect docker_gwbridge
[]
Error: No such network: docker_gwbridge
  1. Notice the output from swarm vs direct docker client: swarm refers to containers; docker refers to endpoints. That should probably be made consistent.
  2. The only successful option was providing an endpoint name (not container name or id, or endpoint id). The --help should clear that up or multiple inputs should be made acceptable.
  3. I did not test endpoint name with swarm, so I don't know if that would have worked.

@mavenugo
Contributor

@keithbentrup that's correct; as I suggested earlier, use the endpoint name with docker network disconnect -f {network} {endpoint-name}. We can enhance this to support the endpoint id as well. But I wanted to confirm: by using the force option, were you able to make progress?

@keithbentrup

@mavenugo But what you suggest is not what the help says. Furthermore, it lacks the consistency of most commands, where id and name are interchangeable.

Unless others find this thread, they will repeat this same issue, so before adding support for endpoint ids, fix the --help.

yajo pushed a commit to Tecnativa/doodba-copier-template that referenced this issue Jun 9, 2021
Apply workaround from moby/moby#17217 (comment) to see if it fixes those nasty errors in CI.

In any case, it seems sensible and safe to remove orphans always by default.
@laszukdawid

For a closed issue this ticket is rather active.
Just had a situation where docker-compose -f file down finished with the error in question, but there was no active (or any) endpoint. Restarting the docker service allowed removing the network.
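[Editor's note] Since a daemon restart keeps coming up as the workaround in this thread, here is a minimal sketch of the restart-then-remove sequence. NETWORK is a placeholder name; commands are only printed by default (set `RUN=1` to execute on a real host with systemd).

```shell
# Restart-then-remove workaround, sketched. Dry-run unless RUN=1.
RUN="${RUN:-0}"
NETWORK="myproject_default"   # placeholder compose network name
run() { if [ "$RUN" = "1" ]; then "$@"; else echo "would run: $*"; fi; }

run systemctl restart docker      # drops the daemon's stale endpoint state
run docker network rm "$NETWORK"
```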

@tonyfarney

I just experienced the bug that makes it impossible to remove a network that has no containers attached to it. Restarting the Docker daemon solved the issue. Thank you @davidroeca!

pypt added a commit to mediacloud/backend that referenced this issue Jul 15, 2021
@lkaupp

lkaupp commented Sep 14, 2021

Solution:
The different server versions made it suspicious. I found a working solution for me: https://askubuntu.com/a/1315837/1199816. In my case, our admin installed Docker during the Ubuntu setup, which made it a snap installation. But, as an experienced long-term Ubuntu/Debian user, I installed Docker from the command line as usual. apt list only shows packages installed via apt and does not show snap packages! So two separate versions of the Docker server ran side by side, and the Docker client connected to one or the other of the competing background daemons, with the side effect that only one had the running containers while the other told me via docker ps that there were none. I uninstalled the snap version (we are not using snap by default) and everything is working fine and up to date. I'll leave my post here to help others.

Old post with error behavior:
Still an issue. I can specify it further. The containers seem to run "hidden" and are accessible from the internet, but docker ps shows only one or two containers (out of 7) in my application. Now, if I want to docker-compose down the file, the network cannot be removed because of the containers running "hidden". What I recognized earlier is that if you uninstall Docker completely and reinstall it from scratch, the containers are attached back and you can simply down them and remove the network. Nevertheless, this cannot be the intended behavior. As a side effect, you cannot "up" the compose file because it throws an error that the ports are in use, or:
ERROR: for db Cannot create container for service db: failed to mount local volume: mount /digLab/GG/db:/var/snap/docker/common/var-lib-docker/volumes/dbdata/_data, flags: 0x1000: no such file or directory

I am on Ubuntu 20.04 LTS, with:

Client: Docker Engine - Community
 Version:           20.10.8
 API version:       1.40
 Go version:        go1.16.6
 Git commit:        3967b7d
 Built:             Fri Jul 30 19:54:27 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server:
 Engine:
  Version:          19.03.13
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       bd33bbf
  Built:            Fri Feb  5 15:58:24 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          v1.3.7
  GitCommit:        8fba4e9a7d01810a393d5d25a3621dc101981175
 runc:
  Version:          1.0.0-rc10
  GitCommit:
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

@thaJeztah
Member

@lkaupp from the output of your error, I suspect you have the docker "snap" installed, which are packaged and maintained by Canonical / Ubuntu. I know there's various issues with those; are you seeing the same problem when running the official docker packages (https://docs.docker.com/engine/install/ubuntu/)?

@lkaupp

lkaupp commented Sep 14, 2021

@lkaupp from the output of your error, I suspect you have the docker "snap" installed, which are packaged and maintained by Canonical / Ubuntu. I know there's various issues with those; are you seeing the same problem when running the official docker packages (https://docs.docker.com/engine/install/ubuntu/)?

Yes, you are correct. I just found the solution myself and uninstalled the snap version, and added a description for others who run into the problem. Thank you for the fast response @thaJeztah :)

@sorenwacker

sorenwacker commented Jan 10, 2022

I have this error running airflow.

Running

sudo aa-remove-unknown

fixed it.

@bennycode

Instead, if I include the project name with the down, then the containers are torn down first and then the custom network: docker-compose --file docker-compose.tests.yml --project-name myprojecttests down

Thanks for sharing your fix, @Wallace-Kelly. Adding a --project-name to my docker-compose up and docker-compose down commands fixed the "network has active endpoints" error for me. 👍

@fgm

fgm commented Feb 4, 2022

Still having this issue with docker 20.10.12 on Big Sur on darwin/amd64. Adding the -p <project name> does not fix it.

@timdonovanuk

timdonovanuk commented Mar 23, 2022

Still an issue, had to restart docker service to remove a spurious network that existed with supposed active endpoints to a container that no longer exists.

@SupaMario123

I had this problem after changing my docker-compose file and trying to shut down the "old" setup with my changes, where I removed some containers and a network. I rolled back my code to the "old" setup, and then the docker-compose down worked as expected. After that I re-added my changes and did the compose up.

perhaps someone did the same thing.

@phlax

phlax commented Aug 15, 2022

seeing a similar issue, albeit very very intermittently in CI (for https://github.com/envoyproxy/envoy) for an example that scales backend services

for ref, the related compose file is here https://github.com/envoyproxy/envoy/blob/main/examples/locality-load-balancing/docker-compose.yaml

and the script that is testing it in CI is here https://github.com/envoyproxy/envoy/blob/main/examples/locality-load-balancing/verify.sh

I'm going to add the --remove-orphans flag, although as this happens so infrequently it won't be easy to see if it helps.

phlax added a commit to phlax/envoy that referenced this issue Aug 15, 2022
This might resolve a very infrequent CI bug in which docker doesnt
clean up containers before removing the network.

This relates to a very long-standing docker bug that never seems to
have been fully resolved.

cf moby/moby#17217

Signed-off-by: Ryan Northey <ryan@synca.io>
phlax added a commit to envoyproxy/envoy that referenced this issue Aug 29, 2022
This might resolve a very infrequent CI bug in which docker doesnt
clean up containers before removing the network.

This relates to a very long-standing docker bug that never seems to
have been fully resolved.

cf moby/moby#17217

Signed-off-by: Ryan Northey <ryan@synca.io>
@Jyrno42

Jyrno42 commented Sep 1, 2022

Having similar issues in some of our CI machines. Essentially network removal fails since it tells me it has active endpoints. But docker network inspect shows no containers.

Not sure what the cause is but a restart to docker service fixes the problem.

Docker info below:

Client:
 Context:    default
 Debug Mode: false
 Plugins:
  app: Docker App (Docker Inc., v0.9.1-beta3)
  buildx: Docker Buildx (Docker Inc., v0.8.2-docker)
  compose: Docker Compose (Docker Inc., v2.9.0)
  scan: Docker Scan (Docker Inc., v0.17.0)

Server:
 Containers: 8
  Running: 4
  Paused: 0
  Stopped: 4
 Images: 4733
 Server Version: 20.10.17
 Storage Driver: overlay2
  Backing Filesystem: xfs
  Supports d_type: true
  Native Overlay Diff: false
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6
 runc version: v1.1.4-0-g5fd4c4d
 init version: de40ad0
 Security Options:
  apparmor
  seccomp
   Profile: default
 Kernel Version: 5.13.0-1031-azure
 Operating System: Ubuntu 20.04.5 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 2
 Total Memory: 3.828GiB
 Name: ci-secure
 ID: I2R3:DGJW:MXUM:7XJS:YLGE:IAMB:726Z:V24J:IT7G:VGI6:XJPI:3CZB
 Docker Root Dir: /datadrive/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

docker-compose --version
Docker Compose version v2.9.0

oschaaf pushed a commit to maistra/envoy that referenced this issue Oct 26, 2022
This might resolve a very infrequent CI bug in which docker doesnt
clean up containers before removing the network.

This relates to a very long-standing docker bug that never seems to
have been fully resolved.

cf moby/moby#17217

Signed-off-by: Ryan Northey <ryan@synca.io>
@jac1013

jac1013 commented Nov 17, 2022

Restarting docker didn't do the trick for me. Added --remove-orphans option to docker-compose down and the network was successfully removed.

@kobs30

kobs30 commented Jan 9, 2023

docker-compose down --remove-orphans
Removing network test_default
ERROR: error while removing network: network test_default id 4169314d8c79786bd7ed0a9d8df9a153ee26827f9b00f582c4fd981019bc2137 has active endpoints

@abhandari88

Running into the same issue with Docker client and server v23.0.1 on CentOS.
Not able to delete the network using docker compose down.
docker compose down with --remove-orphans and -p <project name> gives the error:
failed to remove network: Error response from daemon: error while removing network: network id has active endpoints
This seems to be a recurring issue, and the only possible fix is to restart the docker service and then remove the network.
Restarting the docker engine to remove a network is not a feasible solution when running this as part of a CI process. Please let us know how this can be fixed. I am not sure why this issue is closed?

@girishgbgithub

Same issue happening in our CI pipeline. This is happening too often nowadays and failing our pipelines. docker-compose down with --remove-orphans hits the same issue. Restarting Docker and retriggering the jobs is the only solution now.

@Tailslide

Tailslide commented May 1, 2023

I run into this occasionally. Not sure how to resolve it without restarting everything.

sudo docker-compose down --remove-orphans
WARNING: Some networks were defined but are not used by any service: global
Removing network testcontainer_default
ERROR: error while removing network: network testcontainer_default id 1cd15dd40f9fe1cfa32285fb960f8c41ced421e51e6d0a945733c2d178a63ba7 has active endpoints

EDIT: OK, the error totally made sense if I had just read it.
Went into Portainer and clicked on the network interface; there were some other containers outside the stack referencing it.
Stopped them and all is good.

@sniiick

sniiick commented Oct 27, 2023

Similar situation occurs in our CI on CentOS 7/8.

Docker version 24.0.4, build 3713ee1
Docker Compose version v2.19.1

BuildKit
Name: default
Driver: docker

Nodes:
Name: default
Endpoint: default
Status: running
Buildkit: v0.11.7-0.20230525183624-798ad6b0ce9f
Platforms: linux/amd64, linux/amd64/v2, linux/amd64/v3, linux/amd64/v4, linux/386
Labels:
org.mobyproject.buildkit.worker.moby.host-gateway-ip: 172.17.0.1

/etc/docker/daemon.json
{"mtu": 1450, "default-address-pools" : [{"base" : "172.17.0.0/16", "size" : 24}]}

docker network rm 6176dd27
or
docker compose -p 6176dd27 down --remove-orphans


Error response from daemon: error while removing network: network 6176dd27 id 3d6c34a04362e8a69959e897788d6b886563312984f20d2d8f0f054a21528dd9 has active endpoints
docker network inspect 6176dd27
[
    {
        "Name": "6176dd27",
        "Id": "3d6c34a04362e8a69959e897788d6b886563312984f20d2d8f0f054a21528dd9",
        "Created": "2023-10-25T13:24:44.771659981+03:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.120.0/24",
                    "Gateway": "172.17.120.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {
            "com.docker.network.driver.mtu": "1450"
        },
        "Labels": {
            "com.docker.compose.network": "default",
            "com.docker.compose.project": "6176dd27",
            "com.docker.compose.version": "2.19.1"
        }
    }
]

According to the response, the network does not have active endpoints. A restart of the docker service fixes the problem.

PS. If it could help: we are dynamically creating and adding some containers to the compose network during the run, from inside a container (mounting the docker socket), then removing them.

@mbenhalima

mbenhalima commented Nov 20, 2023

Restarting docker helped clear this issue

# systemctl restart docker

# systemctl status docker

# docker-compose -f docker-compose.yml -f docker-compose-staging.yml --project-name myproj down --remove-orphans
Removing network myproj_default
