
Create docker bridge on existing system bridge. #2310

Open

Grimeton opened this issue Dec 6, 2018 · 13 comments
@Grimeton commented Dec 6, 2018

Hello,

situation on the Linux box:

# create the bridge and move eth0 into it
ip link add dev br0 type bridge
ip link set dev eth0 master br0
ip link set dev eth0 up
# the host's address lives on the bridge, not on eth0
ip address add 10.1.1.254/24 dev br0
ip link set dev br0 up
ip route add default via 10.1.1.1

Now you tell the daemon to use br0 as the default bridge. So far so good. Then you create a container with --ip 10.1.1.249, which also works. But when you start the container:

Error response from daemon: user specified IP address is supported on user defined networks only

Not nice.
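For reference, telling the daemon to use the existing bridge as its default bridge is just the "bridge" key in /etc/docker/daemon.json (a documented daemon option; restart the daemon after changing it):

```json
{
  "bridge": "br0"
}
```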

When you create a different network with the bridge driver, the bridge is created, but it does NOT contain a host interface that would allow real bridging to work. Based on the information provided here: https://docs.docker.com/engine/reference/commandline/network_create/#usage there doesn't seem to be a way to add one.

If you create a different network with a bridge and also name it "br0", then docker starts to mess with the system's network configuration, which is a no-go.

What I want:

The docker daemon should use the existing bridge and allow containers to be assigned an IP address from the existing network via the --ip option that is available in the create context.

That would allow simple configurations with REAL bridging, WITHOUT iptables and other trickery, and would let containers run on the local network with a gateway and internet access. That is the way bridging is intended to work and what bridges were designed for in the first place.

I googled and searched on GitHub, and the issue seems to be as old as docker itself, while the problem is usually "solved" by just closing the ticket.

I wonder...

Cu

@nrdvana commented Nov 21, 2019

I'd like to add a use case to the discussion, in case it helps emphasize why this should be supported by docker.

I have a pre-existing bridge named "vlan1". This is configured in the host system with a VPN that links it with other bridges on other hosts: 10.8.2.0/24 exists on the local host and can ping to 10.8.3.0/24 on a remote host through an IPSec connection. It is also bridged with a physical ethernet port on a protected LAN, so there is already a dnsmasq running to be able to serve IP addresses to virtual machines on this network.

The only "normal" way to integrate Docker into this network is to run the container on a different network and then expose ports onto 10.8.2.1:xxx as if they were normal programs running on the host. This makes that service available to both the VPN and the local LAN.

However, I have two use cases for which there is not a good solution:

  • I want the ability to take a real physical host connected on the 10.8.2.X LAN and port those services into docker containers without having to change the IP address and update everything else referring to them.
  • I want to be able to open ports on the fly within a container and communicate with the rest of the VPN. For example, docker exec -it myapp run_myapp --mode=debug --port 5000 and then connect my web browser to port 5000 on the internal IP address to get some quick debugging without having to re-create the container. I can't point my browser to the docker internal IP because it isn't one of the nets routed through the VPN.

What I want to do is:

docker network create --driver=bridge \
  --ip-range=10.8.2.0/24 --subnet=10.8.2.0/24 \
  -o "com.docker.network.bridge.name=vlan1" vlan1

and then connect my docker containers directly to this network either specifying a literal IP address, or getting one auto-assigned by the dnsmasq that was already running on this bridge.

There is currently no way to do this (that I've found) without docker trying to take control of IP address assignment on that bridge.

I think docker has left a path open for my use case by writing a custom --driver and --ipam-driver, but I don't see a list of already-written drivers to choose from.

Here is what I would want from my ideal driver:

  • When the network is created, check to make sure the bridge exists and do nothing else at all.
  • When a container is created on this bridge, create a virtual ethernet interface like normal.
    • If an IP address is given, just assume the IP address is valid and use it.
    • If no IP address is given, run a DHCP request on the bridge and assign that to the container. If the DHCP server can't assign an IP, abort creating the container.
  • Do not change routing rules for the bridge.
  • Do not change firewall rules for the bridge.

It doesn't seem like this should be too hard. The only non-trivial code in this driver would be a dhcp client, and it could just call out to an existing tool.
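For what it's worth, the per-container steps such a driver would perform can already be done by hand, pipework-style. A rough sketch, assuming a container named myapp, the existing bridge vlan1, udhcpc available in the container image, and root on the host — the names are illustrative, not an existing driver:

```shell
# locate the container's network namespace via its init PID
PID=$(docker inspect -f '{{.State.Pid}}' myapp)

# create a veth pair and attach the host end to the existing bridge
ip link add veth-host type veth peer name veth-cont
ip link set veth-host master vlan1
ip link set veth-host up

# move the container end into the container's namespace and bring it up
ip link set veth-cont netns "$PID"
nsenter -t "$PID" -n ip link set veth-cont name eth1
nsenter -t "$PID" -n ip link set eth1 up

# let the dnsmasq already serving this bridge hand out the address
nsenter -t "$PID" -n udhcpc -i eth1
```

These commands require root and an existing container, so treat them as a recipe rather than a script; the point is that nothing beyond a veth pair and a DHCP client is needed.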

@metajiji

In most cases all you need is macvlan https://docs.docker.com/network/macvlan/

@wenerme commented Mar 17, 2020

macvlan is different from bridge: the host and the containers cannot reach each other by IP, and you cannot reuse existing interfaces. Because of that limitation, macvlan is fine only if you need egress connectivity.

@Grimeton (Author)

You can get the job done with brouting via proxy ARP. That's just one of those dirty tricks that don't work too well, because docker still maintains authority over a piece of your subnet, which isn't an option at all.

I mean, look at the iptables fiasco...

@nrdvana commented Mar 24, 2020

@Grimeton I'm confused whether you are saying that bridge routing will or won't work. If it will, I'd be interested in a link to read about how to try that.

@Grimeton (Author) commented Mar 24, 2020

@nrdvana "Bridged routing" is something you can easily create via proxy_arp on IPv4 and proxy_ndp on IPv6.

Simple example:

Subnet: 10.1.1.0/24
Docker-Host-IP on eth0: 10.1.1.10/24
Docker-Host-IP on the bridge: 10.1.1.10/24 (if you even want to use this)
Subnet for Docker on the bridge br-docker: 10.1.1.0/24 (br-docker)
IP-Range for Docker: 10.1.1.192 to 10.1.1.254 (basically a /26, yes we start at .192, remember that subnetting and routing aren't the same so 10.1.1.192/26 in routing isn't the same as 10.1.1.192/26 in subnetting)

eth0 is NOT part of br-docker !

Now you enable proxy_arp on eth0 and create a route for 10.1.1.192/26 on dev br-docker.

ip route add 10.1.1.192/26 dev br-docker

From this moment on, the host will respond to all ARP requests for anything in 10.1.1.192/26 and will forward packets between the br-docker bridge and the main eth0 interface.
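Collected in one place, with the interface names from the example (run as root; sysctl keys are the standard kernel ones):

```shell
# answer ARP on eth0 on behalf of the 10.1.1.192/26 range
sysctl -w net.ipv4.conf.eth0.proxy_arp=1
# forwarding between eth0 and br-docker must be enabled
sysctl -w net.ipv4.ip_forward=1
# route the docker range onto the bridge
ip route add 10.1.1.192/26 dev br-docker
```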

Unless docker messes with your iptables rules even when it's disabled. Just in case:

iptables -P FORWARD ACCEPT

On a side note: people that DROP network packets really do NOT understand how networking works.

Cu

@mcronce commented Apr 29, 2021

Adding another use case - I've got an existing bridge, br0, which I use to provide qemu VMs with network access - giving them an IP on the physical LAN.

With this configuration and "bridge": "br0" in daemon.json, containers are not able to route all the way out to the internet at all without --network=host. Without "bridge": "br0", they can't even reach other hosts on the LAN, including the gateway.

@crazybert

Those of you who land on this page looking for answers - google "com.docker.network.bridge.inhibit_ipv4=true"

@nazar-pc commented Jul 4, 2023

Interestingly, com.docker.network.bridge.inhibit_ipv4 seems to be missing in official documentation: https://docs.docker.com/network/drivers/bridge/#options

@nazar-pc commented Jul 4, 2023

I found that I can define a custom bridge network in Docker, disconnect the veth from that bridge, and reconnect it to the bridge that I want. It seems to work, but it is also ugly. Specifying an existing bridge in com.docker.network.bridge.name breaks that bridge, so that is not helpful.

@albert-a commented Jan 20, 2024

@metajiji,

In most cases all you need is macvlan https://docs.docker.com/network/macvlan/

@crazybert,

Those of you who land on this page looking for answers - google "com.docker.network.bridge.inhibit_ipv4=true"

Thank you both very much, brilliant! Both of your solutions work! (Actually they don't, see the update below.) The only exception I found so far is that in a macvlan network containers cannot ping the docker host, while in a bridge network they can. Though it is not significant.

Other than that, the containers are exposed to the network: they can reach hosts on the network and each other, and can be reached from the network.
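As an aside, the macvlan host-unreachability mentioned above is a kernel property (a parent interface cannot talk to its own macvlan children), not something docker does. A common workaround is a macvlan shim interface on the host; a sketch using the addresses from this example, where the .249/32 shim address is an assumption:

```shell
# host-side macvlan in bridge mode on the same parent as the docker network
ip link add macvlan-shim link eth0 type macvlan mode bridge
# give the host an address on the shim
ip addr add 192.168.51.249/32 dev macvlan-shim
ip link set macvlan-shim up
# send traffic for the container's IP through the shim instead of eth0
ip route add 192.168.51.10/32 dev macvlan-shim
```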

@nazar-pc, I have no idea why it didn't work for you. I added a network named lan like this:

docker network create --subnet=192.168.51.0/24 --gateway=192.168.51.250  -o com.docker.network.bridge.name=br0 -o com.docker.network.bridge.inhibit_ipv4=true lan

Then created the container:

docker run -it --rm --network lan --ip 192.168.51.10 jonlabelle/network-tools bash

Try pinging, everything should work!

You can do the same with macvlan (assuming you removed the bridge on the docker host):

docker network create -d macvlan --subnet=192.168.51.0/24 --gateway=192.168.51.250 -o parent=eth0 lan

Creating a test container with docker compose:

version: '3.4'

services:
  test:
    image: jonlabelle/network-tools
    command: sleep infinity
    networks:
      lan:
        ipv4_address: 192.168.51.10

networks:
  lan:
    external: true

Update:

Unfortunately, I found out that the former approach (i.e. creating a bridge on the docker host and creating a bridge network) actually breaks internet access from the containers that use the standard network configuration. It happens because docker isolates the bridge network from other networks: while the containers connected to the bridge network work as expected and, as described above, are exposed to the network, the other containers, which use the default per-container networks, lose connectivity to the LAN/internet. I don't know how to permanently prevent docker from inserting these iptables rules, but when I insert an ACCEPT rule manually into the DOCKER-ISOLATION-STAGE-2 chain:

iptables -I DOCKER-ISOLATION-STAGE-2 1 -j ACCEPT

everything works. Of course it's not a solution, just an explanation of how docker blocks the traffic.

Luckily, the latter (macvlan) approach works. It exposes the required containers to the LAN and does not break connectivity of the containers with the default network configuration.

@jzvikart commented Feb 4, 2024

+1. There needs to be an official and documented way of using a bridge that exists outside of docker itself. From docker's perspective this means skipping creating/configuring/destroying the bridge and just using the existing bridge that was provided by the user.

@albert-a commented Apr 4, 2024

Unfortunately, it turned out that the bridge approach breaks the connectivity of the other containers that use the default network configuration. But the macvlan approach still works as expected.
I updated my previous comment.
