
Error response from daemon: could not find an available, non-overlapping IPv4 address pool among the defaults to assign to the network #599

Closed
letmp opened this issue Feb 20, 2019 · 23 comments

Comments

@letmp

letmp commented Feb 20, 2019

I have a fresh Docker installation running on Ubuntu 18.04 LTS.

When I try to create a new network it fails:

$ docker network create test
Error response from daemon: could not find an available, non-overlapping IPv4 address pool among the defaults to assign to the network
$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
fed3902b248e        bridge              bridge              local
979af7e91a80        host                host                local
5b17cb218199        none                null                local

ifconfig says:

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:88:98:67:9c  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens160: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.21.196.46  netmask 255.255.255.0  broadcast 172.21.196.255
        inet6 fe80::250:56ff:fea4:2e24  prefixlen 64  scopeid 0x20<link>
        ether 00:50:56:a4:2e:24  txqueuelen 1000  (Ethernet)
        RX packets 3949  bytes 460088 (460.0 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3004  bytes 529203 (529.2 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens192: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.21.136.46  netmask 255.255.255.0  broadcast 172.21.136.255
        inet6 fe80::250:56ff:fea4:1267  prefixlen 64  scopeid 0x20<link>
        ether 00:50:56:a4:12:67  txqueuelen 1000  (Ethernet)
        RX packets 702  bytes 71397 (71.3 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 121  bytes 11141 (11.1 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 1795  bytes 297877 (297.8 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1795  bytes 297877 (297.8 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

routes:

0.0.0.0         172.21.136.254  0.0.0.0         UG    20101  0        0 ens192
10.0.0.0        172.21.196.254  255.0.0.0       UG    200    0        0 ens160
129.247.0.0     172.21.196.254  255.255.0.0     UG    200    0        0 ens160
169.254.0.0     0.0.0.0         255.255.0.0     U     1000   0        0 ens160
172.16.0.0      172.21.196.254  255.240.0.0     UG    200    0        0 ens160
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
172.21.136.0    0.0.0.0         255.255.255.0   U     101    0        0 ens192
172.21.196.0    0.0.0.0         255.255.255.0   U     100    0        0 ens160
192.108.54.0    172.21.196.254  255.255.255.0   UG    200    0        0 ens160
192.168.0.0     172.21.196.254  255.255.0.0     UG    200    0        0 ens160

iptables:

Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain FORWARD (policy DROP)
target     prot opt source               destination         
DOCKER-USER  all  --  anywhere             anywhere            
DOCKER-ISOLATION-STAGE-1  all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
DOCKER     all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere            

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         

Chain DOCKER (1 references)
target     prot opt source               destination         

Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target     prot opt source               destination         
DOCKER-ISOLATION-STAGE-2  all  --  anywhere             anywhere            
RETURN     all  --  anywhere             anywhere            

Chain DOCKER-ISOLATION-STAGE-2 (1 references)
target     prot opt source               destination         
DROP       all  --  anywhere             anywhere            
RETURN     all  --  anywhere             anywhere            

Chain DOCKER-USER (1 references)
target     prot opt source               destination         
RETURN     all  --  anywhere             anywhere 

Any ideas what could be wrong?

@letmp
Author

letmp commented Feb 22, 2019

Small update:

When I run
docker network create "test" --subnet="172.17.0.0/16"
everything works fine.

But what is wrong with my Docker setup such that
docker network create "test"
fails every time?

@VariableDeclared

VariableDeclared commented Mar 28, 2019

Same issue here,

Running on Debian-Buster.
--------------------EDIT

Found out this was being caused by the expressvpn client.

@HLeithner

I have the same issue; the cause is having an IP in the 172.16.0.0/12 range on a network device.

I think there is a problem in this code path: https://github.com/docker/libnetwork/blob/a79d3687931697244b8e03485bf7b2042f8ec6b6/ipam/allocator.go#L429-L444

If I take my local interface down before starting Docker, creating a network works. It doesn't matter which networks I add to the daemon.json config:

ip link set down dev eth0
systemctl restart docker
ip link set up dev eth0
docker network create test

Docker is also unable to create a bridge if eth0 has an IP in 172.16.0.0/12; it only works if I create the docker0 bridge myself.

@lgandras

This also happens with NordVPN. I suspect the VPN device and its routes conflict with Docker's expectations about the network configuration:

$ ip route
default via 192.168.42.129 dev enp0s20f0u1 proto dhcp metric 20100 
default via 192.168.102.1 dev wlp4s0 proto dhcp metric 20600 
10.8.8.0/24 dev tun0 proto kernel scope link src 10.8.8.5 
***.***.***.*** via 192.168.42.129 dev enp0s20f0u1 
128.0.0.0/1 via 10.8.8.1 dev tun0 
169.254.0.0/16 dev docker_gwbridge scope link metric 1000 
172.19.0.0/16 dev br-ad809486928f proto kernel scope link src 172.19.0.1 linkdown 
192.168.42.0/24 dev enp0s20f0u1 proto kernel scope link src 192.168.42.214 metric 100 
192.168.102.0/24 dev wlp4s0 proto kernel scope link src 192.168.102.124 metric 600 

$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: enp0s31f6: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN group default qlen 1000
    link/ether **:**:**:**:**:** brd ff:ff:ff:ff:ff:ff
3: wlp4s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether **:**:**:**:**:** brd ff:ff:ff:ff:ff:ff
    inet 192.168.102.124/24 brd 192.168.102.255 scope global dynamic noprefixroute wlp4s0
       valid_lft 27763sec preferred_lft 27763sec
5: docker_gwbridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:8e:81:68:a2 brd ff:ff:ff:ff:ff:ff
6: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:14:e6:fa:d7 brd ff:ff:ff:ff:ff:ff
16: veth88912a7@if15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker_gwbridge state UP group default 
    link/ether 66:9a:30:e1:47:69 brd ff:ff:ff:ff:ff:ff link-netnsid 2
27: veth8f32300@if26: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker_gwbridge state UP group default 
    link/ether e6:24:b6:09:10:63 brd ff:ff:ff:ff:ff:ff link-netnsid 4
33: enp0s20f0u1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 1000
    link/ether a6:be:5a:fa:a1:4a brd ff:ff:ff:ff:ff:ff
    inet 192.168.42.214/24 brd 192.168.42.255 scope global dynamic noprefixroute enp0s20f0u1
       valid_lft 3295sec preferred_lft 3295sec
37: br-ad809486928f: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:38:61:d1:df brd ff:ff:ff:ff:ff:ff
    inet 172.19.0.1/16 brd 172.19.255.255 scope global br-ad809486928f
       valid_lft forever preferred_lft forever
38: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 100
    link/none 
    inet 10.8.8.5/24 brd 10.8.8.255 scope global tun0
       valid_lft forever preferred_lft forever

In my case, I have two devices which provide internet: enp0s20f0u1 (USB tethering) and wlp4s0 (WLAN). The NordVPN client decided to route encrypted traffic through enp0s20f0u1, as you can see in the routing table. After stopping the VPN, it looks like this:

$ ip route
default via 192.168.42.129 dev enp0s20f0u1 proto dhcp metric 20100 
default via 192.168.102.1 dev wlp4s0 proto dhcp metric 20600 
169.254.0.0/16 dev docker_gwbridge scope link metric 1000 
172.19.0.0/16 dev br-ad809486928f proto kernel scope link src 172.19.0.1 linkdown 
192.168.42.0/24 dev enp0s20f0u1 proto kernel scope link src 192.168.42.214 metric 100 
192.168.102.0/24 dev wlp4s0 proto kernel scope link src 192.168.102.124 metric 600 

$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp0s31f6: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN group default qlen 1000
    link/ether **:**:**:**:**:** brd ff:ff:ff:ff:ff:ff
3: wlp4s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether **:**:**:**:**:** brd ff:ff:ff:ff:ff:ff
    inet 192.168.102.124/24 brd 192.168.102.255 scope global dynamic noprefixroute wlp4s0
       valid_lft 27770sec preferred_lft 27770sec
    inet6 fe80::759e:bac9:67f1:b6d6/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
5: docker_gwbridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:8e:81:68:a2 brd ff:ff:ff:ff:ff:ff
6: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:14:e6:fa:d7 brd ff:ff:ff:ff:ff:ff
16: veth88912a7@if15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker_gwbridge state UP group default 
    link/ether 66:9a:30:e1:47:69 brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet6 fe80::649a:30ff:fee1:4769/64 scope link 
       valid_lft forever preferred_lft forever
27: veth8f32300@if26: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker_gwbridge state UP group default 
    link/ether e6:24:b6:09:10:63 brd ff:ff:ff:ff:ff:ff link-netnsid 4
    inet6 fe80::e424:b6ff:fe09:1063/64 scope link 
       valid_lft forever preferred_lft forever
33: enp0s20f0u1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 1000
    link/ether a6:be:5a:fa:a1:4a brd ff:ff:ff:ff:ff:ff
    inet 192.168.42.214/24 brd 192.168.42.255 scope global dynamic noprefixroute enp0s20f0u1
       valid_lft 3303sec preferred_lft 3303sec
37: br-ad809486928f: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:38:61:d1:df brd ff:ff:ff:ff:ff:ff
    inet 172.19.0.1/16 brd 172.19.255.255 scope global br-ad809486928f
       valid_lft forever preferred_lft forever

The diff between both looks like this:

5,7d4
<     10.8.8.0/24 dev tun0 proto kernel scope link src 10.8.8.5 
<     ***.***.***.*** via 192.168.42.129 dev enp0s20f0u1 
<     128.0.0.0/1 via 10.8.8.1 dev tun0 
17a15,16
>         inet6 ::1/128 scope host 
>            valid_lft forever preferred_lft forever
23c22,24
<            valid_lft 27763sec preferred_lft 27763sec
---
>            valid_lft 27770sec preferred_lft 27770sec
>         inet6 fe80::759e:bac9:67f1:b6d6/64 scope link noprefixroute 
>            valid_lft forever preferred_lft forever
29a31,32
>         inet6 fe80::649a:30ff:fee1:4769/64 scope link 
>            valid_lft forever preferred_lft forever
31a35,36
>         inet6 fe80::e424:b6ff:fe09:1063/64 scope link 
>            valid_lft forever preferred_lft forever
35c40
<            valid_lft 3295sec preferred_lft 3295sec
---
>            valid_lft 3303sec preferred_lft 3303sec
39,42d43
<            valid_lft forever preferred_lft forever
<     38: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 100
<         link/none 
<         inet 10.8.8.5/24 brd 10.8.8.255 scope global tun0

With the VPN disabled (right-hand side), I'm able to create the network (through docker stack deploy).

@melakonda

In my case it was an issue with the docker_gwbridge network. This network was missing on the worker node, which resulted in the following errors on that node.

Errors:
network sandbox join failed: subnet sandbox join failed for "14.2.1.0/24": error creating vxlan interface: file exists

starting container failed: error creating external connectivity network: could not find an available, non-overlapping IPv4 address pool among the defaults to assign to the network

To fix the above errors I created the docker_gwbridge network manually using the command below:

docker network create \
  --subnet 172.18.0.0/16 \
  --gateway 172.18.0.1 \
  -o com.docker.network.bridge.enable_icc=false \
  -o com.docker.network.bridge.name=docker_gwbridge \
  docker_gwbridge

@skywli

skywli commented Oct 12, 2019

Small update:

When I run
docker network create "test" --subnet="172.17.0.0/16"
everything works fine.

But what is wrong with my Docker setup such that
docker network create "test"
fails every time?

I also have this problem. Why must a subnet be specified when creating a network?

[root@VM_68_63_centos /home/soft/pkg/envoy/examples/redis]# docker network create "test"
Error response from daemon: could not find an available, non-overlapping IPv4 address pool among the defaults to assign to the network
[root@VM_68_63_centos /home/soft/pkg/envoy/examples/redis]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
9ef75882fd29        bridge              bridge              local
210bfa67ad26        host                host                local
a5566eded8cf        none                null                local
[root@VM_68_63_centos /home/soft/pkg/envoy/examples/redis]# docker network create "test" --subnet="172.12.0.0/16"
3e39117327c28d06bc9e9e5828533973d748a949f188d30ec19dd74335f26fdf
[root@VM_68_63_centos /home/soft/pkg/envoy/examples/redis]# docker network create "test1"
Error response from daemon: could not find an available, non-overlapping IPv4 address pool among the defaults to assign to the network
[root@VM_68_63_centos /home/soft/pkg/envoy/examples/redis]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
9ef75882fd29        bridge              bridge              local
210bfa67ad26        host                host                local
a5566eded8cf        none                null                local
3e39117327c2        test                bridge              local

@tstibbs

tstibbs commented Oct 30, 2019

In case it helps get a bit of traction on this issue, it's trivial to reproduce on CentOS:

docker network create test1 # works

defaultGateway=$(/sbin/ip route | awk '/default/ { print $3 }')
ip route add 192.168.0.0/16 via $defaultGateway dev eth0
ip route add 172.16.0.0/12 via $defaultGateway dev eth0

docker network create test2 # fails
docker network create test1 --subnet 172.18.0.0/16 # works

Note that in my setup, traffic to those ranges would already have been going to the default gateway, so the two routes added above should have no impact. If Docker needs to add further route table entries for a particular (new) subnet, I don't see why existing entries should affect that (the two entries above are far less specific than the entries Docker would add). I'd guess the reason people are seeing this when they have VPN software installed is that it also modifies the routing table.

So, is it the case that docker is by default incompatible with certain route table entries?
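The repro above is consistent with a plain address-overlap check. As an illustration (this is a sketch of how I understand the pool selection behaves, not the actual libnetwork code, and the candidate pools listed are assumptions):

```python
import ipaddress

# Hypothetical sketch: a candidate default pool is rejected if it overlaps
# *any* route on the host, regardless of how specific that route is.
def first_available_pool(candidate_pools, host_routes):
    routes = [ipaddress.ip_network(r) for r in host_routes]
    for pool in candidate_pools:
        net = ipaddress.ip_network(pool)
        if not any(net.overlaps(r) for r in routes):
            return pool
    # -> "could not find an available, non-overlapping IPv4 address pool"
    return None

# The broad RFC 1918 routes added in the repro above.
routes = ["192.168.0.0/16", "172.16.0.0/12", "10.0.0.0/8"]
# Illustrative candidates inside the two ranges routed above.
candidates = ["172.17.0.0/16", "172.18.0.0/16", "192.168.0.0/20"]

print(first_available_pool(candidates, routes))  # → None
```

Every candidate sits inside one of the routed ranges, so the allocator finds nothing, even though no container network actually uses those addresses.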

As an aside, is this still the right place for this issue? Or should it be in the moby project somewhere? (I'm not sure where the split is)

@mmuszynskifln

Had the same issue with openvpn, I had to kill the openvpn process and then docker was able to create the network

@cpuguy83
Collaborator

What's wrong is that the pre-defined bridge ranges are all already in use, so Docker can't use them.
You can set the default address pools at the daemon level, or manually specify the subnet each time you create a network.

Closing since this is not a bug, but feel free to discuss.
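For the daemon-level option, a minimal /etc/docker/daemon.json might look like this (the 10.x base below is purely illustrative; choose a range your routing table doesn't already cover, and restart the daemon afterwards):

```json
{
  "default-address-pools": [
    {"base": "10.210.0.0/16", "size": 24}
  ]
}
```

With something like this in place, docker network create test would carve /24 subnets out of the configured base instead of the built-in defaults.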

@tstibbs

tstibbs commented Jan 2, 2020

@cpuguy83 yes, the default address pools can be changed, but to what?

According to docker/docs#8663 the default pools are two out of three of the private address ranges (172.16.0.0/12 and 192.168.0.0/16 but not 10.0.0.0/8). So if you're hitting this issue then it looks like you've probably got route table entries for both of these ranges. And I'd argue it's therefore likely that you've also got an entry for 10.0.0.0/8.

So, to change the default address pools to something which you don't have an existing route table entry for will mean changing to an address range outside of the private network space - i.e. letting docker occupy public address space, which seems like a bad idea.

So what do you recommend they're changed to?

Note that I tried setting the default pools to be subsets of the ranges noted above, but that prevents the daemon from even starting up, with the error Error initializing network controller: list bridge addresses failed: PredefinedLocalScopeDefaultNetworks List: [ every address in range listed here... ]: no available network.

What's particularly frustrating and confusing is that if you set the subnet explicitly, e.g. docker network create abc --subnet 172.16.0.0/24, Docker will happily add a route table entry for that range, even though it's within the pool it previously said it couldn't use! So it seems that if you're not explicit, it tries to save you from a mistake by checking for an overlap that may not actually cause a problem. Some way of overriding/disabling this check would be very useful for people in my situation. I don't think setting the subnet for every network manually is an option, because I'd have to go through every script and compose file trying to allocate unique subnets across everything, which sounds like a maintenance nightmare.

@cpuguy83
Collaborator

cpuguy83 commented Jan 2, 2020

Docker will happily do the same when you specify an address pool. The error is from the automatic pool selector when it can't find an available range. The error is also specifically for the default bridge network.

For the default network you can use --bip <bridge ip>/<bridge subnet> to set the bridge IP/subnet.
You can also set up a bridge yourself and tell Docker to use it with --bridge=<interface>.
Additionally, you can set --fixed-cidr to use a subset of the configured subnet.

@tstibbs

tstibbs commented Jan 3, 2020

@cpuguy83 Even when specifying bip it still fails in the same way, i.e. running:

dockerd -D --bip 111.1.1.1/24 --fixed-cidr 111.1.1.1/25 --default-address-pool base=172.16.0.0/22,size=24

fails with this error:

failed to start daemon: Error initializing network controller: list bridge addresses failed: PredefinedLocalScopeDefaultNetworks List: [172.16.0.0/24 172.16.1.0/24 172.16.2.0/24 172.16.3.0/24]: no available network

Note that the addresses I've selected for bip and fixed-cidr are just random addresses that don't appear to conflict with anything on my system - but the key thing is that the list of networks in the error message is the list from the default-address-pool param, not from bip.

So I'm not sure if I've misunderstood what you were suggesting or if there's a bug where bip is ignored when default-address-pool is also specified?

(btw I'm happy to take the conversation out of this ticket and then post back once there's some consensus, if there's someone more appropriate)

@chadfurman

Had the same issue with openvpn, I had to kill the openvpn process and then docker was able to create the network

Killing the VPN is not needed. Create a network with:

docker network create your-network --subnet 172.24.24.0/24

Then, at the bottom of docker-compose.yaml, put this:

networks:
  default:
    external: 
      name: your-network

@ghost

ghost commented May 21, 2020

Same issue here,

Running on Debian-Buster.
--------------------EDIT

Found out this was being caused by the expressvpn client.

Thank you! Saved me hours of searching for the solution

@brunocampos01

I experienced this issue while running an OpenVPN client which was also my default route. Stopping the OpenVPN client worked around the issue.

@poriam

poriam commented Dec 21, 2020

@chadfurman's solution worked for me (though I used 10.0.1.0/24). But the problem is that now I need to access my PostgreSQL container on 10.0.1.2:5432 instead of 127.0.0.1:5432, although I've defined this in my docker-compose.yml:

ports:
      - '127.0.0.1:5432:5432'

Can I map it to 127.0.0.1?

@ubaranouski

NordVPN may also cause this problem.

@mattjamesaus

Had this happen to me when using the AWS Client VPN and trying to use docker-compose. Disconnecting from the client VPN fixed the issue (Ubuntu 21).

@mueller-fr

Yes, this person has the same solution as @mattjamesaus
https://stackoverflow.com/questions/43720339/docker-error-could-not-find-an-available-non-overlapping-ipv4-address-pool-am/45377351#45377351

Also there is a link to a solution for running both vpn and docker-compose within the linked stackoverflow post.

@kiranparajuli589

OpenVPN is also one of the causes.

@KES777

KES777 commented May 20, 2022

@cpuguy83
I think this is a Docker bug. It thinks the network is already in use when I have the following route on my system:

192.168.0.0     0.0.0.0         255.255.0.0     U         0 0          0 tun0

But the network Docker requested is not in use and is free. I can easily add a route for 192.168.129.0/28.

my /etc/docker/daemon.json

{
  "bip": "192.168.128.1/24",
  "default-address-pools":
  [
    {"base":"192.168.128.0/17","size":28}
  ]
}

my compose file:

version: '3.7'

services:
  rabbitmq:
    image: rabbitmq:latest
    networks:
      net:
        aliases:
          - 'rabbitmq.local'

networks:
  net:
    driver: bridge

The new route that would be assigned to the new Docker network, 192.168.129.0/28, is more specific, it is free (not used on my host), and it does not conflict with the 192.168.0.0/16 route added by the VPN.

It is ugly to have to stop the VPN connection every time just to recreate a Docker container.
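The rejection is consistent with a pure address-overlap check that ignores route specificity. A quick check with Python's ipaddress module (illustrative only, not Docker's actual code) shows why the allocator flags this pool even though the more-specific route would win at lookup time:

```python
import ipaddress

vpn_route = ipaddress.ip_network("192.168.0.0/16")      # broad route added by the VPN
docker_pool = ipaddress.ip_network("192.168.129.0/28")  # pool Docker would allocate next

# The /28 pool lies entirely inside the VPN's /16 route, so a plain
# overlap test flags it, even though the more-specific /28 route
# would take precedence in the kernel's routing decision.
print(docker_pool.subnet_of(vpn_route))  # → True
print(docker_pool.overlaps(vpn_route))   # → True
```

So any pool carved out of 192.168.128.0/17 will always "overlap" the VPN's 192.168.0.0/16 route by this test, regardless of whether it is actually in use.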

@mrtargaryen

Had the same issue with openvpn, I had to kill the openvpn process and then docker was able to create the network

Killing the VPN is not needed. Create a network with:

docker network create your-network --subnet 172.24.24.0/24

Then, at the bottom of docker-compose.yaml, put this:

networks:
  default:
    external: 
      name: your-network

What is “your-network” and how do I find it? Is it just 192.168.50.1?

@mueller-fr

mueller-fr commented Feb 9, 2024

@mrtargaryen I just stumbled upon your question.

Remark
This issue is closed. I think opening a new one would be better style and would have gotten you an answer earlier.

Answer
Reading your quoted text, it looks like "your-network" is a name you choose when creating the network (any name will do). In docker-compose.yaml you then have to use that same name.

Here is documentation on docker network create:

Usage: docker network create [OPTIONS] NETWORK

Create a network

Options:
--attachable Enable manual container attachment
...

This is the documentation from the shell, viewed by typing docker network create --help.

More info on the command can be found in the official docker documentation: CLI reference, docker network create.

The names in their examples are "my-bridge-network" or "my-multihost-network".

Happy learning! :)
