External Network Gateway #20758
Comments
+1 Like to have too
+1 Like to have also
+1 It would be a nice feature
This is definitely a must-have.
Hi brousselle, the ability to assign a specific IP address to a container is already there in 1.10: you can manage the IP of a container using the `--ip` flag with `docker run`. What I'm suggesting is being able to tell Docker not to manage the default gateway on the bridge interface when I create a Docker network (`docker network create`). In other words, I would like Docker to let the bridge just act as a bridge and not act as a Layer 3 gateway. Docker would then assume the IP address of the default gateway exists and is handled somewhere else (by a physical router, for instance). To have this feature working in a multi-host deployment, you would need to bridge/extend the same VLAN on multiple hosts, just like in the "traditional VM world". Thanks for your comments.
@Yesbut also consider the following: for a user-defined network, the equivalent is achieved by passing the container's default gateway via the auxiliary addresses option:
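A sketch of that invocation (the network name and addresses are illustrative; the `DefaultGatewayIPv4` auxiliary-address key is the one quoted elsewhere in this thread):

```shell
# Docker still configures --gateway on the bridge itself, so point it at an
# address you can spare; the aux-address reserves the real gateway address
# in IPAM and hands it to containers as their default route.
docker network create \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.254 \
  --aux-address "DefaultGatewayIPv4=192.168.1.1" \
  mynet
```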
@aboch You made my day! What you suggested worked for me, and I can now remove my custom scripts. Many thanks! There is still a quirk with the Layer 3 configuration on the bridge: when I create the network without specifying the gateway, Docker still tries to configure the .1 address on the bridge.
In my case, the workaround will be to set a gateway address that I won't actually use.
That L3 gateway is configured on the bridge interface, which is something I don't really want.
But at least my containers get the right gateway IP and I don't have any IP conflicts anymore!
Thanks again for sharing!
Looks like this issue is resolved, so I'll close this, but feel free to comment if you think there's something that needs to be done.
Hi @thaJeztah, could this be kept open? The underlying feature request hasn't been addressed yet. Thanks in advance,
@Yesbut alright, I'll keep it open for now
Thank you @thaJeztah! I appreciate it.
I agree with @Yesbut, we do need a parameter to prevent Docker from assigning an IP address to the bridge. To sum up:

**What do we want?**
We want containers with IPs in the 192.168.1.0/24 network (just like regular computers), without any NAT/PAT/translation/port-forwarding/etc.

**Problem**
When doing this, we are able to give containers the IPs we want, but the bridge created by Docker also takes an address in the subnet.

**Solution**

1. **Set up the Docker network.** Use `docker network create` with the subnet and gateway. By default, Docker will give the gateway address to the bridge interface. We can specify the bridge name by adding the `com.docker.network.bridge.name` option.
2. **Bridge the bridge!** Now we have a bridge; attach the physical interface to it.
3. **Delete the bridge IP.** We can now delete the IP address from the bridge, since we don't need one.
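The steps above can be sketched as follows (a minimal example assuming a physical interface `eth0`, a bridge named `br0`, and a physical router at 192.168.1.1; all values are illustrative):

```shell
# 1. Create the network. Docker puts the gateway IP on the bridge for now;
#    the com.docker.network.bridge.name option fixes the bridge's name.
docker network create \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o com.docker.network.bridge.name=br0 \
  mynet

# 2. Attach the physical interface so containers reach the real LAN.
brctl addif br0 eth0

# 3. Remove the gateway IP Docker configured on the bridge; the physical
#    router at 192.168.1.1 now serves the gateway role.
ip address delete 192.168.1.1/24 dev br0
```

Containers started with `--net mynet --ip 192.168.1.x` then sit directly on the LAN and route via the physical 192.168.1.1.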
It seems like you are asking for a pure L2 bridge. Since this is a driver-specific configuration, I am thinking it could be passed down during network creation with something like a mode option.
If mode == L2, the bridge driver would not configure the gateway address and the other L3 settings. Feel free to open a feature request in libnetwork for this. Thanks.
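The example in the comment was not preserved, but the proposal presumably looked something like this (the `mode` option here is hypothetical, sketching the suggested design; it is not an existing bridge-driver option):

```shell
# Hypothetical syntax only: a driver option asking the bridge driver to
# behave as a pure L2 bridge (no gateway address configured on it).
docker network create -d bridge \
  -o com.docker.network.bridge.mode=l2 \
  --subnet=192.168.1.0/24 \
  mynet
```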
Thanks @jeromepin for posting your solution on stackoverflow.
Ran across this today. I'm trying to dockerize a DLNA server, which works with bridging only due to the nature of the protocol. The problem is that the bridge I need to create a Docker network for is already configured by the OS at boot and attached to a bonded link. When I start the Docker engine, it proceeds to reconfigure the bridge IP and network connectivity for the host gets hosed. I really liked the suggestion put forward by @aboch; I think it solves this issue nicely.
@devsr I am trying to debug an issue. Can you please do me a favor and check whether your containers have external connectivity? It can be tested easily with a simple `docker build`. For example:

```
id@emachines-e520:~/docker-images$ mkdir t
id@emachines-e520:~/docker-images$ cd t
id@emachines-e520:~/docker-images/t$ echo FROM busybox >> Dockerfile
id@emachines-e520:~/docker-images/t$ echo RUN ping -c1 8.8.8.8 >> Dockerfile
id@emachines-e520:~/docker-images/t$ docker build .
Sending build context to Docker daemon 2.048 kB
Step 1 : FROM busybox
latest: Pulling from library/busybox
385e281300cc: Pull complete
a3ed95caeb02: Pull complete
Digest: sha256:4a887a2326ec9e0fa90cce7b4764b0e627b5d6afcb81a3f73c85dc29cea00048
Status: Downloaded newer image for busybox:latest
 ---> 47bcc53f74dc
Step 2 : RUN ping -c1 8.8.8.8
 ---> Running in a75814e9f094
PING 8.8.8.8 (8.8.8.8): 56 data bytes
--- 8.8.8.8 ping statistics ---
1 packets transmitted, 0 packets received, 100% packet loss
The command '/bin/sh -c ping -c1 8.8.8.8' returned a non-zero code: 1
id@emachines-e520:~/docker-images/t$ echo $?
1
```
Wondering if the new experimental macvlan and ipvlan network drivers resolve this?
I'm running into a similar issue. What we want is pure L2 without a default gateway. We also want to give a container the first address in the subnet, e.g. when I have a custom network created with a given subnet. We can tell Docker not to use the first address for its bridge interface by specifying a different gateway address, and I tried removing the automatically assigned first IP address from the generated br-xxx interface by hand. I believe the new macvlan or ipvlan drivers will solve the original issue.
I tried the experimental macvlan driver but still no luck :(
@s2ugimot
That is expected. The IP address assignment is done by the IPAM driver independently of the network driver (this is by design, see moby/libnetwork#489), even before the network driver is asked to create the network. If you want the first container to get the first IP of the subnet, your only option is to specify the gateway with another address from the subnet.
No, address management is independent from network drivers and initiated by libnetwork core.
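Concretely, the workaround described above might look like this (subnet, names, and image are illustrative):

```shell
# Point --gateway at an address other than the first one, so the bridge
# takes .254 and the first address of the subnet stays free for a container.
docker network create \
  --subnet=10.0.0.0/24 \
  --gateway=10.0.0.254 \
  firstip_net

# The first address can now be assigned to a container explicitly.
docker run -d --network firstip_net --ip 10.0.0.1 alpine sleep infinity
```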
@aboch Thanks for the update.
Sorry, by "the original issue" I was referring to the original request (a bridge whose default gateway lives on an external router), and that can be resolved by using macvlan L2 mode, I believe.
Just to add, @s2ugimot: we tried macvlan and were very happy until our Ubuntu 14.04.5 with the LTS 4.4 kernel panicked, so I found regular bridging much more stable than macvlan.
@mustafaakin have you reported the kernel panic with Ubuntu? Kernel panics are a bug in the kernel, so not something that can be resolved in the Docker codebase.
@s2ugimot Just curious, what command are you using to create the actual network with macvlan?
Hey @aboch, so I tried your solution, but one of the steps locks my VM out right after I type the command.
Luckily it's back up after a restart, however. Any idea? I'm only running one NIC (eth0). EDIT: In order to fix this I simply added a NIC to my VM (via libvirt) and applied the setup, the proper way per the tutorial, to eth1. If you apply it to the same interface you're connected through, it'll eventually lock you out after a short period of time.
+1 This feature is a must-have for our network setup.
@jeromepin, I did as you said, but I have a problem: my container cannot be reached from any machine other than the host it is running on. What can I do?
Docker version: 1.11.2
My earlier problem was fixed, but my container still cannot be reached from any machine other than the host it is running on. What can I do, or what have I not configured?
I solved this problem by using the ipvlan driver in experimental.
The L3 information on the mybridge interface is unchanged, and I can run a container accessing the network connected to mybridge.
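A hypothetical reconstruction of that ipvlan setup (the parent interface name is taken from the comment; the subnet, addresses, and image are assumptions):

```shell
# Create an ipvlan network on top of the existing mybridge interface;
# the driver leaves mybridge's L3 configuration untouched.
docker network create -d ipvlan \
  -o parent=mybridge \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  ipvlan_net

# Run a container with direct access to the network behind mybridge.
docker run --rm -it --network ipvlan_net alpine sh
```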
+1: it is non-intuitive that the --gateway IP address is forced onto the bridge device. Maybe a bridge driver option could disable that. The `--gateway=<unused_ip_on_subnet> --aux-address "DefaultGatewayIPv4=<real_gateway>"` combination is a lucky workaround, so this is not crucial to fix, but it would be really, really nice to have!
This is ridiculous. I just want an L2 network for interconnecting a set of containers with point-to-point links. First, I'm forced to make the IPv4 network /31 or shorter (#20758). Then I can't tell Docker to just get out of the way and not provision a gateway IP in the network. Is it really sensible to require the user to create a network with a fake gateway, edit the network to remove the gateway, and then plug hosts into it? What if I want to provision the networks with Compose rather than pre-provisioning them statically?
Is there any actual solution to this that doesn't involve a lot of hacking? This problem was raised more than 1.5 years ago and it doesn't seem to have been addressed yet, even though the issue at hand is quite reasonable. In my case I am trying to replicate a physical topology where I am connecting two containers with a /30, and it's failing because Docker decided to take one of the only available IPs for itself.
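The /30 failure mode described above can be reproduced with something like the following (names are illustrative). A /30 has only two usable addresses, and Docker's gateway claims one of them:

```shell
docker network create --subnet=10.0.0.0/30 p2p_net
docker run -d --name r1 --network p2p_net alpine sleep infinity
# This second run should fail with a "no available IPv4 addresses" style
# error: the gateway took one usable address and r1 took the other.
docker run -d --name r2 --network p2p_net alpine sleep infinity
```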
Just ran into this. The fact that the configured Docker gateway is forced as an address on the bridge device is really counter-intuitive. I'd like to attach a container to a pre-existing bridge not managed by Docker.
@dbarrosop The introduction of the macvlan driver (back in 1.10, I think) to Docker networking has provided a reasonable solution to this issue. As I mentioned higher up in the thread, my use case was dockerizing an application that needed layer 2 networking (a DLNA server). I wanted to attach it to a preexisting bridge (over a bonded link) on the host server, but Docker kept trying to reconfigure the bridge and hosing up connectivity. The solution now is that I can create a macvlan network attached to the bonded link directly (no bridge) and then add containers to it; those containers now have layer 2 network access to the host network. A minor caveat is that the host interface is invisible to the containers and vice versa; that was not a problem in my particular deployment.
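That setup can be sketched as follows, assuming the bonded link is `bond0` and the LAN is 192.168.1.0/24 (interface names, addresses, and the image name are illustrative):

```shell
# Create a macvlan network attached directly to the bonded link; Docker
# creates and reconfigures no bridge for this network.
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=bond0 \
  dlna_net

# Containers on this network get L2 access to the physical LAN.
docker run -d --network dlna_net --ip 192.168.1.50 some-dlna-image
```

As noted above, the host and its macvlan containers cannot talk to each other directly over the parent interface.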
Ok, I will test, but that feels like a very cumbersome solution for something that should be as simple as just creating a bridge with no IP :(
This offloads the issue onto the host operating system and doesn't address the root issue. For complex topologies that require multiple inter-container links (my scenario is simulating network devices, e.g. switches, routers, load balancers), I would say this is the wrong approach. I have no need to expose these inter-container L2 links to the outside world or the host operating system, as they are relevant only to the container pairs being interconnected; they should be entirely logical constructs that exist only within the Docker networking realm, not the "real world".
@devsr just tested the macvlan driver and it doesn't solve my case.
Or am I doing something wrong? The only thing I need is for Docker to stay out of the way so I can bridge containers without any routing whatsoever.
Not sure where this should be exactly, but +1 for @hslabbert's request for simple inter-container L2 links which exist only within the Docker networking realm. docker/for-linux#568 (comment)
Hello guys! Look out the window, it's 2022 out there! We still want, but can't use, a simple L2 bridge without an IP address on the host! Come on :(
Hi everyone,
Not sure if someone has raised this issue yet.
I would like to be able to use an existing bridge (VLAN) with the default gateway IP configured on an external router. This way, every container would have its own IP address, directly accessible from the external network, without any NAT or port address translation.
Container ---> Docker network ---> Bridge --> Physical switch ---> Physical Router (Gateway IP).
When I create the network with the --gateway flag, it configures the gateway's IP address on the bridge interface. This causes an IP address conflict with my physical router, which is on the same VLAN.
It would be useful to be able to tell Docker not to configure the IP address on the bridge while still managing the IPAM. It could be a keyword like "--external-gateway=true" or "--external-gateway=192.168.1.1" instead of "--gateway=192.168.1.1".
Even if the gateway address is external, I still want to use the IPAM provided by Docker to assign IP addresses to my containers with the --ip flag. Moreover, I still want Docker to assign dynamic IPs with the right parameters (default route or DNS, for instance) to my containers.
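The proposed invocation would look something like this (`--external-gateway` is the hypothetical flag suggested above; it does not exist in Docker, and the values are taken from the setup shown later in this issue):

```shell
# Hypothetical: Docker keeps doing IPAM for the subnet (including --ip and
# dynamic assignment) but never configures 10.199.45.1 on the bridge.
docker network create \
  --subnet=10.199.45.0/24 \
  --external-gateway=10.199.45.1 \
  -o com.docker.network.bridge.name=br-vlan555 \
  net555
```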
The workaround I'm using right now:
I'm running one script before the Docker daemon starts to shut down the VLAN interface and avoid the gateway IP conflict. Then I'm running a script after the Docker daemon has started to remove the IP address from the bridge and bring the external interface back up.
The scripts:

```
cat /lib/systemd/system/docker.service
[Service]
Type=notify
ExecStartPre=/opt/IT/docker-startpre.sh
ExecStartPost=/opt/IT/docker-startpost.sh
ExecStart=/usr/bin/docker daemon -H fd://
```

```
cat /opt/IT/docker-startpre.sh
#!/bin/bash
logger "DOCKER PRE SCRIPT"
logger "Disabling interfaces"
ifdown ens224.555
```

```
cat /opt/IT/docker-startpost.sh
#!/bin/bash
logger "DOCKER POST SCRIPT"
logger "Remove IP address from the bridge"
ip address delete 10.199.45.1/24 dev br-vlan555
logger "Bring up interfaces"
ifup ens224.555
```
The setup:

External VLAN interface (tagged):

```
[root@mouimet177 ~]# ifconfig ens224.555
ens224.555: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::250:56ff:fe84:a755  prefixlen 64  scopeid 0x20<link>
```

Bridge interface attached to the VLAN interface:

```
[root@mouimet177 ~]# ifconfig br-vlan555
br-vlan555: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::250:56ff:fe84:a755  prefixlen 64  scopeid 0x20<link>
```

Bridge:

```
[root@mouimet177 ~]# brctl show
bridge name     bridge id               STP enabled     interfaces
br-vlan555      8000.00505684a755       no              ens224.555
```
```
[root@mouimet177 ~]# docker network inspect net555
[
    {
        "Name": "net555",
        "Id": "064395b1573a5ee08c61411dabe750b0b06b0d619fcb5a91c9fbaadde968e039",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.199.45.0/24",
                    "Gateway": "10.199.45.1"
                }
            ]
        },
        "Containers": {
            "66cc3e0e67343b6c1acc6efcfb1a8d28b9bf8344671fe83247d77aa15e43e15a": {
                "Name": "web1",
                "EndpointID": "075535a4ae0c0a65406ab108a3a1e2c7663c80e746e448b538ec57c6e9bba77a",
                "MacAddress": "02:42:0a:c7:2d:0b",
                "IPv4Address": "10.199.45.11/24",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.name": "br-vlan555"
        }
    }
]
```
In this particular case, "Gateway": "10.199.45.1" is an external router.
Thank you in advance for your support,