Allow user to choose the IP address for the container #19001

Merged
2 commits merged on Jan 8, 2016

Conversation

@aboch
Contributor

aboch commented Dec 30, 2015

Fixes #6743
Fixes #18297 (from libnetwork vendoring)
Fixes #18910 (from libnetwork vendoring)

This PR allows the user to choose the IP address(es) for the container during docker run and docker network connect. The configuration is persisted across container restarts.

Example:

$ docker run -d --net=n0 --name=c0 --ip=172.20.55.66 --ip6=2001:db8::5566 busybox top
b2f39b19c58ace10196d92da404b217cad38802ae196c67cfe2a8dec89a5fa46
$
$ docker network connect --ip=172.21.55.66 n1 c0
$
$ docker exec c0 ip addr show eth0 | grep inet
    inet 172.20.55.66/16 scope global eth0
    inet6 2001:db8::5566/64 scope global 
    inet6 fe80::42:acff:fe14:3742/64 scope link
$
$ docker exec c0 ip addr show eth1 | grep inet
    inet 172.21.55.66/16 scope global eth1
    inet6 fe80::42:acff:fe15:3742/64 scope link
$
$ docker stop c0
c0
$ docker start c0
c0
$ docker exec c0 ip addr show eth0 | grep inet
    inet 172.20.55.66/16 scope global eth0
    inet6 2001:db8::5566/64 scope global 
    inet6 fe80::42:acff:fe14:3742/64 scope link
$ docker exec c0 ip addr show eth1 | grep inet
    inet 172.21.55.66/16 scope global eth1
    inet6 fe80::42:acff:fe15:3742/64 scope link 
$
$ docker run -d --name=c1 --ip=172.17.55.66 busybox top
docker: Error response from daemon: User specified IP address is supported on user defined networks only.
$
$ docker network connect --ip 172.13.44.55 n4 c0
Error response from daemon: User specified IP address is supported only when connecting to networks with user configured subnets
$
$ docker run -d  --ip 172.13.44.55 --net n4 busybox
e7fe4a7e8f4e0a6a61c0f6507b84fc6e01190f77d8b8b45d242c67623720db02
docker: Error response from daemon: User specified IP address is supported only when connecting to networks with user configured subnets.
$
@thaJeztah

Member

thaJeztah commented Dec 30, 2015

Nice! Is an IP-address reserved for a container, even if it's stopped, or can another container claim it during that period?

@aboch

Contributor

aboch commented Dec 30, 2015

@thaJeztah
When a container is stopped, it is disconnected from its network(s) and its endpoints are deleted, which releases the IP address.

To cover the scenario you describe, the user would define an --ip-range when creating the network and then specify a preferred IP from outside of that range.

This assures that no dynamically addressed container will grab that address while the container is stopped.
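A minimal sketch of that pattern (network name, subnet, and addresses are made up):

$ docker network create --subnet=172.22.0.0/16 --ip-range=172.22.16.0/24 npin
$ docker run -d --net=npin --name=c2 --ip=172.22.200.5 busybox top

Dynamic allocations are taken from 172.22.16.0/24, so 172.22.200.5 remains reserved for c2 even while it is stopped.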

@thaJeztah

Member

thaJeztah commented Dec 30, 2015

@aboch thanks, I was just wondering what people can expect from this feature; all clear!
Don't forget to update the docs and man pages as well.

@aboch

Contributor

aboch commented Dec 30, 2015

@thaJeztah
Yes, I will update the docs/man pages after the design review.

@thaJeztah

Member

thaJeztah commented Dec 30, 2015

A possible design question is whether we want all network options directly as options on docker run, or grouped under a --net-opt flag, e.g. --net-opt ip=xxx.xxx.xxx.xxx. That might be more scalable and would allow different network drivers to have different options.

@aboch

Contributor

aboch commented Dec 30, 2015

@thaJeztah
--ip and --ip6 are treated here as first-class docker CLI options (like --name, for example), not as driver options. Driver-specific options are opaque to docker, and we pass them down as a list of key:value strings.

Regarding driver options (IPAM driver or network driver), we allow the user to specify them only during network create. In other words, as of now we only allow per-network driver-specific options; we do not allow per-container driver-specific options (i.e. during docker run or docker network connect).
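For illustration, a per-network driver option is passed at creation time as an opaque key=value pair (the option shown is one the built-in bridge driver happens to understand; the network name is made up):

$ docker network create -d bridge --opt com.docker.network.bridge.enable_icc=false n2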

@mavenugo

Contributor

mavenugo commented Dec 31, 2015

@aboch from a design standpoint, I think we should allow --ip or --ip6 only if the user created the network with a proper --subnet.

If the user did not specify a subnet when creating the network, there is no guarantee that the network will have the same subnet after a daemon restart, and that will affect any container with a specified --ip.

We also have to decide whether to support this feature for networks whose subnet can potentially change. For example, the default bridge network (docker0) is the one network for which the user can change --bip across a restart, and any container with a specified --ip would be impacted.

WDYT ?

@aboch

Contributor

aboch commented Dec 31, 2015

@mavenugo
Good point. This should be constrained to networks whose subnet cannot change. Unfortunately that rules out the default bridge network, as there is no guarantee that at the next daemon reload the user will not change the bridge or the bridge IP.

Will make the change.

@thaJeztah

Member

thaJeztah commented Dec 31, 2015

@aboch @mavenugo FYI, we were briefly discussing this feature, and although there are lots of +1s for it, browsing through those requests, most are possibly "invalid" use cases, where:

  • stable IP addresses already solve this (i.e., keeping the same IP upon container restart)
  • the new networking / discovery already resolves this (links could not be used if a new container is started to replace an old container, but the new networking does support this)

We're struggling a bit to find valid use cases; I came up with:

  • running a DNS server in a container (--dns expects an IP address, not a hostname/FQDN; see the sketch below)
  • IPv6: having a static, world-reachable address

The last one could probably be resolved with a DHCP-based IPAM driver.
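A sketch of the DNS use case (image name, network name, and addresses are made up):

$ docker network create --subnet=172.25.0.0/16 dnsnet
$ docker run -d --net=dnsnet --ip=172.25.0.53 --name=dns some-dns-image
$ docker run --rm --net=dnsnet --dns=172.25.0.53 busybox nslookup example.com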

Wondering what your thoughts are on this: are there valid use cases, or is this a feature "because we can"?

Just adding this, happy to hear (and best wishes for the new year!)

@aboch

Contributor

aboch commented Dec 31, 2015

@thaJeztah

Not sure I followed; please correct me if I got it wrong. My reply follows:

  • We do not support a stable IP upon container restart, because the same address simply cannot be guaranteed unless the user explicitly asks for it, and that is what this PR brings in.

True, if a user deploys a DHCP-based IPAM plugin, then I guess they can indirectly achieve this by setting the MAC address for the container and having a static, never-expiring MAC-to-IP binding in the DHCP database.
But docker does not ship this as a default battery, and I also feel it is out of docker's scope. This means the large share of users who are fine with the default IPAM driver will not be able to achieve IP stability.

  • IP stability via the built-in DNS will be there, but only for services, not for arbitrary containers. DNS caching issues may also be there, which would not allow the container to be reached for some time.

We simply can't forecast all use cases.
I am thinking of containers which play non-service-level roles:

  • A container playing an L2/L3 network device role, connecting container networks to physical device networks.
  • A container acting as the central bridging point for conveying network/data analytics to a monitoring device.
  • Containers running on embedded devices: I do not think service-level or DHCP-based addressing makes sense there.
  • Using a container as a VM.
  • The user won't be able to define secondary IP addresses (which I was planning to add next). For use cases around this, I will repost what @jc-m posted in another PR:
      Anycast DNS
      Load Balancer Direct Server Return
      Cloudflare-like anycast in a DC (https://blog.cloudflare.com/cloudflares-architecture-eliminating-single-p/)
  • And last but not least, all the IPv6 use cases, which we have delayed for a long time because we have not had time to analyze them yet.

My biggest concern with not letting users directly request IP stability for the containers of their choice is that it limits the composability of the networking feature blocks.

@thaJeztah

Member

thaJeztah commented Dec 31, 2015

@aboch thanks for adding your thoughts, that's useful.

We do not support stable IP upon container restart. Because same address simply cannot be guaranteed

Oh, right, I thought I'd seen a comment recently that we now had more stable IP addresses for containers; I probably misread that.

@vieux

Collaborator

vieux commented Jan 5, 2016

👍 would be super useful for swarm

@mavenugo mavenugo added this to the 1.10.0 milestone Jan 5, 2016

@jc-m

jc-m commented Jan 5, 2016

As @aboch commented earlier, the capability to specify IP addresses is really critical. In addition, because the IPAM plugin is given virtually no context about the container (besides the requested network), it cannot be used to implement more complex IP allocation schemes. In those cases, an entity outside of docker will manage the IPs and potentially influence placement based on networking requirements.
Another use case is container migration (live or not), which requires the migrated container to preserve its IP. I know this is maybe a cloud anti-pattern, but when an application cold start takes 15+ minutes, it is a very appealing option.

@thaJeztah

Member

thaJeztah commented Jan 5, 2016

Thanks for adding your use-case @jc-m 👍

@tonistiigi

Member

tonistiigi commented Jan 5, 2016

I played around with this a bit. It seemed to work fine, but one thing I noticed is that the IsUserDefined() call that checks whether this feature is supported only reflects the container-creation state. That means I can't use network connect --ip if I started the container on the bridge network, and if I started the container on a user-defined network I can later disconnect it and connect it to the bridge network with a custom IP. Not critical at all if this is hard to solve; it could be addressed later.

if container.HostConfig.NetworkMode.IsContainer() {
    return runconfig.ErrConflictSharedNetwork
}
// Note: this checks the mode the container was created with, not the network being connected to.
if !container.HostConfig.NetworkMode.IsUserDefined() && networktypes.HasUserDefinedIPAddress(endpointConfig) {

@mavenugo

mavenugo Jan 5, 2016

Contributor

Adding to @tonistiigi's comment, I think it is incorrect to check HostConfig.NetworkMode here.
It must be the network that is being connected to, using containertypes.NetworkMode(idOrName).IsUserDefined().

If we are going down this path, then I think it is proper to also check whether the network has a user-configured subnet.
The user should not depend on an auto-generated subnet when choosing a preferred IP; it can cause the same container-restart issues.
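A minimal sketch of that check (identifiers borrowed from the snippet above; the error value is assumed for illustration):

if !containertypes.NetworkMode(idOrName).IsUserDefined() && networktypes.HasUserDefinedIPAddress(endpointConfig) {
    // check the network being connected to, not the container's original NetworkMode
    return runconfig.ErrUnsupportedNetworkAndIP // error name assumed for illustration
}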

@aboch

aboch Jan 5, 2016

Contributor

Thanks @tonistiigi @mavenugo. Yes, that check is wrong; it needs to check the network we are connecting to.

@mavenugo I am already checking that the subnet is configured: https://github.com/docker/docker/pull/19001/files#diff-0f20873a38571444bac38770160648a5R715

@aboch

Contributor

aboch commented Jan 6, 2016

@tonistiigi Thanks for finding the issue.
I fixed the code and added that specific use-case to the test code at the end of TestDockerNetworkConnectPreferredIP()

@tonistiigi

Member

tonistiigi commented Jan 6, 2016

LGTM

@jessfraz

Contributor

jessfraz commented Jan 6, 2016

Oh my gosh, I have wanted this feature. Actually, I think I remember @vishh implementing this too; maybe he can take a look as well :)

@jessfraz

Contributor

jessfraz commented Feb 3, 2016

You pass the subnet and gateway for your public CIDR when you create a new network; the bridge is then made from those values.
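For illustration (a made-up range; 203.0.113.0/24 is a documentation prefix standing in for a real routable block):

$ docker network create --subnet=203.0.113.0/24 --gateway=203.0.113.1 pubnet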

On Wed, Feb 3, 2016 at 10:23 AM, Avi Deitcher notifications@github.com
wrote:

I don't get something. If I create a network with --subnet a.b.c.d, it still is creating a bridge, unless I specify some other driver, right? And when I assign an IP, it is connecting to that bridge, and NAT (--ports) still is used?

Then how does @jfrazelle https://github.com/jfrazelle 's public IPs case work? If it still is NATing and proxying, a public or private IP won't be visible.

How does it work with public (or at least native fabric network-visible) IPs?



@aboch

Contributor

aboch commented Feb 3, 2016

@deitch
Also, why would you use --ports if your container is accessible from anywhere?
Clients will connect directly to your container's IP + TCP port.

@deitch

deitch commented Feb 3, 2016

@jfrazelle bridge to what? That is what I don't get. The bridge just passes L2 traffic along to its various ports; Linux host NAT acts as the L3 router. Or are you saying that the bridge with --subnet will connect directly, as is, to the underlying fabric and bypass the NAT? Then how does the host route the traffic?

@aboch that is exactly what I don't get. If it is a public, fabric-addressable IP, then --ports is meaningless. But if it is a bridge, how is the container accessible from everywhere?

In one client of mine, we use fixed IPs and do pipework magic (@jpetazzo rocks) to wire up a macvlan link on the interface, so each container has its own IP directly on the fabric. Then I don't need --ports, because the IP is reachable from my physical network. (It is also great for low-latency apps, but that is a different story.)

I am missing something here.

Let me try it this way. Let's say the underlying fabric is 10.0.0.0/16 and the host is at 10.0.100.10/16. The default docker bridge means container A gets 172.16.25.50/24 and all packets route via the host, which uses ip_forward to forward the packets and NATs them out. Inbound packets to, say, port 80 come to 10.0.100.10:80, hit an iptables NAT rule that translates to 172.16.25.50:80, and the host then uses ip_forward to hand them to the docker0 bridge, where the container receives them.
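Roughly, that inbound hop is a DNAT rule of this shape (a sketch using the addresses above, not the exact rule docker installs):

$ iptables -t nat -A PREROUTING -p tcp -d 10.0.100.10 --dport 80 -j DNAT --to-destination 172.16.25.50:80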

What happens with the new option? I create a network with --subnet 10.0.200.0/24. Won't the outbound packets still follow the same route? Isn't there still a bridge that goes through host forwarding and NAT? Won't inbound packets still come to 10.0.100.10:80 and need to be NATed and then forwarded?

What am I missing?

NOTE: I used --subnet 10.0.200.0/24 in the second example, because it is a range that in theory is addressable directly from the underlying 10.0.0.0/16 fabric.

@withinboredom

withinboredom commented Feb 3, 2016

I'm curious as well, is this basically 1:1 NAT?

@jessfraz

Contributor

jessfraz commented Feb 3, 2016

You should get a public CIDR range and try it out; I have no idea how to explain this better haha.


@deitch

deitch commented Feb 3, 2016

@jfrazelle yeah, I guess we could, but I'd much rather just know how it works. :-)

@deitch

deitch commented Feb 3, 2016

OK, I did install it. I spun up a DigitalOcean Ubuntu instance and installed docker 1.10.0-rc3.

I don't see any difference at all. There is another bridge named iptest (which is what I called it in network create), but packets behave on it just like on docker0. I don't even understand how I could use publicly accessible IPs; it looks to me like just another private network with its own bridge.

I get how it is good to control the range... but how does it help me use IPs visible on the fabric or the Internet? @aboch when you said you could use a public IP, what did that mean?

@aboch

Contributor

aboch commented Feb 4, 2016

@deitch As long as the public IP subnet you chose when creating the docker network is advertised outside of the host, you will be able to route to any container on that network.

@withinboredom

withinboredom commented Feb 4, 2016

@deitch Google Compute Engine will allow you to give a host a public range, from what I hear. I have a pretty decent block at my home server rack, but I don't have any spare physical hosts to experiment on, so I can't tell you what happens. I looked over the PR and it looks like it will basically be 1:1 NAT'd, so yes, you will still have a 'network' like before, but that network will be transparent for hosts inside and outside the physical host. It looks like it uses iptables on the host to set up that 1:1 NAT. Can someone tell me if I'm right or wrong?

@deitch

deitch commented Feb 4, 2016

@aboch how does that work? When you say "advertised outside of the host", do you mean that there is a route that points it to that host, so essentially, in my example, 10.0.200.0/24 via 10.0.100.10? Then why would I need the bridge at all? That sounds a lot like what http://projectcalico.org does, but without the bridge, all L3. And how would docker know whether it is advertised, and thus skip the NAT rules in iptables, vs. "normal", which requires the rules?

@deitch

deitch commented Feb 4, 2016

@withinboredom, what do you mean by "1:1 nat"?

@withinboredom

withinboredom commented Feb 4, 2016

A quick Google search of "one to one NAT" can explain it better than I can... but basically:

WEB:        123.456.789.002       123.456.789.003      123.456.789.004
                 |-----------------------|-------------------|
                                         |
                                      ROUTER
                                         |
                 |-----------------------|-------------------|
INTERNAL    123.456.789.002       123.456.789.003      123.456.789.004

As you can see, you still need a network interface to talk to the router. You also need something to do the NATing... but the internal IP matches the external IP, so it seems as though everything is external even though it's internal. This makes DNS super easy; the only caveat is that you had better be running a firewall somewhere in the chain or you may find yourself having a world of fun.
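In Linux terms, one-to-one NAT is usually a DNAT/SNAT pair like this (addresses made up; shown only to illustrate the concept, not necessarily what docker sets up):

$ iptables -t nat -A PREROUTING  -d 203.0.113.10 -j DNAT --to-destination 192.168.1.10
$ iptables -t nat -A POSTROUTING -s 192.168.1.10 -j SNAT --to-source 203.0.113.10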

@jessfraz

Contributor

jessfraz commented Feb 4, 2016

Yeah, so with OVH, when you buy their extra IPs they hook up the router, but all the configuration is on their end, not in /etc/network/interfaces or anything like that. I think that is where your confusion is stemming from.


@deitch

deitch commented Feb 4, 2016

@withinboredom I get that; I meant how does it play in here.

What do you gain from the NAT, then? All you really are doing is routing, which is largely how Calico works. But if you are doing NAT, why even use public IPs? You could just as easily use private ones and put the public ones on the host. Or is that what they mean by this?

@jfrazelle "when you buy extra IPs, they hook them up on the router", but you still need to map the incoming (and maybe the outgoing) traffic to an internal address, n'est-ce pas?

@aboch

Contributor

aboch commented Feb 4, 2016

@deitch @withinboredom

One disclaimer:
This PR is about allowing the user to select the IP address for their containers.
IP address management is separate from the network driver plumbing.
So the fact that you can allocate a public subnet/IP with the IPAM driver in no way dictates how the network plumbing is done.

In other words, the discussion about the different container networking strategies does not belong here. Feel free to open an issue in docker/libnetwork or docker/docker.

@deitch
One way to advertise it is to install, on the router your server is attached to, a static route that says what the next hop is to reach the container's bridge subnet.
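Using the subnets from the example above, such a static route on the upstream router would look roughly like this (a Linux-style sketch; the exact syntax depends on the router):

$ ip route add 10.0.200.0/24 via 10.0.100.10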

@jessfraz

Contributor

jessfraz commented Feb 4, 2016

No, literally all I did was what was in the blog post; no configuration on the server.


@withinboredom

withinboredom commented Feb 4, 2016

@aboch thanks

@deitch

deitch commented Feb 4, 2016

@aboch thanks. So the networking strategies remain the same; the apparent use of directly accessible public IPs was what had thrown me off.

@jfrazelle thanks, got it now. Enjoying the posts, by the way (your excitement and energy are a little impressive).

@aboch so if you want to use public IPs (i.e. IPs that are valid on the underlying fabric, rather than on the bridge or another overlay), that is a different networking strategy. The way it is done now is mostly pipework with macvlan or similar, but as a different strategy it would be a different networking plugin.

Thanks for clarifying. Much appreciated.

@dhlwing

dhlwing commented Feb 26, 2016

great!

@Hostile

Hostile commented Mar 11, 2016

need more instructions please

@gedl

gedl commented May 18, 2016

I am a bit late to the party, but here it goes.

I find the inability to specify the IP on a pre-defined network quite limiting. In my specific case I am pre-configuring a bridge that docker will get to use. That bridge is exposed to the LAN, and my goal is to have all containers at the same level as any other network node (some call this using docker containers as VMs). The fixed IP will then be used for QoS and other IP-centered activities.

The problem is that a set of design decisions prevents users from achieving this:

  • It is not possible to specify an IP on the default bridge network
  • It is not possible to delete the default bridge network
  • It is not possible to not create a default bridge network
  • It is not possible to create a user-defined network using the pre-configured bridge (as the pre-defined network is using it)
  • My last idea was to let docker create its bridge (which I will not use for anything but am happy to let sit there) and then create a user-defined network specifying my pre-configured bridge as an opaque driver option on network creation. Unfortunately this destroys my configuration, including an unrequested change of the bridge IP

Am I left with any option? Perhaps specifying the container IP should be allowed, provided that the docker daemon is launched with a subnet specification (as a matter of fact, it always is when a bridge is specified, since the subnet is inferred from the bridge netmask).

Any chance of accounting for this?

@aboch

Contributor

aboch commented May 18, 2016

@gedl

There might not be a way to satisfy your requirements, but I am posting something I did not see mentioned in your comment.

You can create a user-defined network using an existing Linux bridge (as you mentioned, via driver options). If at network creation you also properly specify the subnet and the network gateway, the IP of the bridge won't be changed:

$ ifconfig mybridge
mybridge  Link encap:Ethernet  HWaddr e2:0a:61:df:e3:7b  
          inet addr:200.200.0.20  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::e00a:61ff:fedf:e37b/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 B)  TX bytes:648 (648.0 B)

$
$ docker network create --subnet 200.200.0.0/16 --gateway 200.200.0.20 --opt com.docker.network.bridge.name=mybridge myuserdefinednw
5d46c16d66332c48e76d7466cfa516854d29337c3e2e42b852a2188296fa4d28
$
$ ifconfig mybridge
mybridge  Link encap:Ethernet  HWaddr e2:0a:61:df:e3:7b  
          inet addr:200.200.0.20  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::e00a:61ff:fedf:e37b/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 B)  TX bytes:648 (648.0 B)
@gedl

gedl commented May 18, 2016

@aboch that is true, but as soon as I stop the docker daemon or delete the user-defined network, my bridge is deleted. Also, I suspect (I haven't tried yet) this will impact the routing table, disregarding the existing rules involving the pre-defined bridge.

Perhaps the ideal solution would be to allow a -b switch on network creation, as suggested by someone else in #20349. This would behave similarly to launching the daemon with -b (don't touch the configuration, don't delete the bridge on shutdown).

I have since found out that specifying -b=none on daemon launch does not create the bridge. However, that fact alone doesn't solve the problem, as I am not able to create an exact copy of it using network create.

It boils down to:

  • being able to specify a fixed address on a pre-defined network
  • being able to create a user-defined bridge network with the exact same configuration of a pre-defined one

mYmNeo pushed a commit to mYmNeo/docker that referenced this pull request Jun 30, 2016

Cherry-pick moby#19001 manually
Signed-off-by: Chun Chen <ramichen@tencent.com>
@aboch

Contributor

aboch commented Sep 15, 2016

@gedl Regarding

[...] but as soon as I stop the docker daemon, or delete the user defined network, my bridge is deleted.

be aware that docker/libnetwork#1301 was merged, and the change will for sure be part of docker 1.13
