
Proposal: Links: Dynamic Links #7468

Closed
erikh opened this Issue Aug 7, 2014 · 36 comments

@erikh
Contributor

erikh commented Aug 7, 2014

This is a two part proposal. Please see #7467 for the other part.

Problem

Links currently do not satisfy the needs of users for a couple of reasons:

  • linking is static: you cannot change the associations links provide without destroying (at least one) container
  • docker’s link system does not provide an elegant external UI for manipulating and querying links

Solution

  • Provide new docker links UI
  • Provide guarantees about what happens when a link is added or removed

New docker links UI

Four new subcommands will be added under a new docker links command:

  • docker links list - list the existing links, and the state of the linked containers
  • docker links add <consumer> <producer> <name> - add a link to existing containers
  • docker links remove <name> - remove a link from existing containers
  • docker links move <name> <producer2> - migrate a set of links from one container to another
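A minimal in-memory sketch of how such a link registry might behave. Everything here (class name, fields, semantics) is illustrative, not Docker's actual implementation:

```python
# Hypothetical in-memory model of the proposed `docker links` registry;
# names, arguments, and semantics mirror the commands above but are purely
# illustrative, not Docker's actual implementation.

class LinkRegistry:
    def __init__(self):
        self.links = {}  # name -> (consumer, producer)

    def add(self, consumer, producer, name):
        if name in self.links:
            raise ValueError("link %r already exists" % name)
        self.links[name] = (consumer, producer)

    def remove(self, name):
        del self.links[name]

    def move(self, name, new_producer):
        consumer, _ = self.links[name]       # consumer side is preserved
        self.links[name] = (consumer, new_producer)

    def list(self):
        return sorted((n, c, p) for n, (c, p) in self.links.items())

reg = LinkRegistry()
reg.add("web", "db1", "db")
reg.move("db", "db2")        # migrate the link to a new producer
assert reg.list() == [("db", "web", "db2")]
```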

Links Guarantees

We should guarantee that:

  • Adding a link should:
    • rewrite the hosts files for all linked resources with updated content
    • expose the ports described in the Dockerfile
  • Removing a link should:
    • rewrite the hosts files for all linked resources with updated content
    • tear down the exposed ports brought by docker links add
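To illustrate the hosts-file guarantee, a sketch of a line-based rewrite on add/remove. The tab-separated "ip<TAB>name" entry format is an assumption for illustration; Docker's real logic lives in pkg/networkfs/etchosts:

```python
# Sketch of the hosts-file guarantee above: entries are rewritten line by
# line. The tab-separated "ip<TAB>name" format is an assumption for
# illustration; Docker's real logic lives in pkg/networkfs/etchosts.

def add_host_entry(content, ip, name):
    # drop any stale entry for this name, then append the fresh one
    lines = [l for l in content.splitlines() if not l.endswith("\t" + name)]
    lines.append(ip + "\t" + name)
    return "\n".join(lines) + "\n"

def remove_host_entry(content, name):
    lines = [l for l in content.splitlines() if not l.endswith("\t" + name)]
    return "\n".join(lines) + "\n"

hosts = "127.0.0.1\tlocalhost\n"
hosts = add_host_entry(hosts, "172.17.0.5", "db")   # link added
assert "172.17.0.5\tdb" in hosts
hosts = remove_host_entry(hosts, "db")              # link removed
assert hosts == "127.0.0.1\tlocalhost\n"
```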

Tickets Resolved:

#5186
#2733
#3285
#3155
#2658
#2588 (I think)

@thaJeztah
Member

thaJeztah commented Aug 7, 2014

+1, this was always my idea: links should be the patch-cables between containers, not something that is fixed once a container is started. This proposal allows me to (for example) migrate my application to another database container without having to stop them, which is awesome.

Some additional ideas:

  • allow specifying which port to link in where not all exposed ports are required (e.g. Establish a link for port 3306, but don't link port 22). This would allow more granular control over what is reachable by the consumer and probably better for security
  • allow specifying ports on both ends, e.g. remap port 3307 on the producer to port 3306 on the consumer-end of the link
  • bi-directional links
@erikh
Contributor

erikh commented Aug 7, 2014

Bi-directional links are something we want to do in a separate proposal. The goal is to keep the scope here as limited as possible for the purposes of review and release inclusion.

@thaJeztah
Member

thaJeztah commented Aug 7, 2014

Fair point, and not my top priority

Would port-mapping be something to include in this proposal? I was thinking something like:

docker link add <consumer>[:port] <producer>[:port] name
@thaJeztah
Member

thaJeztah commented Aug 7, 2014

Thinking about this a bit more. Port mapping could be a challenge if I need to remap multiple ports from the same producer-container. Right now, link-name === host-name in /etc/hosts. If I want to map multiple ports on the same producer, each link would get its own entry in the hosts file (assuming link-names are unique), which may be confusing (multiple host-names for the same producer-container)

Basically, this would require the link-name to be re-used, so that only a single entry in /etc/hosts will be created:

// creates a link from producer:portA to consumer:portB and adds 'name' entry to /etc/hosts
docker links add <consumer>[:portB] <producer>[:portA] name

// adds another port(mapping) to the existing link without adding a new entry in /etc/hosts
docker links add <consumer>[:portD] <producer>[:portC] name

// removes the first port mapping
docker links remove <consumer>[:portB] name

// completely removes link "name"
docker links remove <consumer> name

_note_
In your proposal, docker links remove <name> would remove a link. Shouldn't a <consumer> be included here? Or are link-names globally unique? If so, this would greatly limit their use (e.g. only a single link named db could exist on the docker-host?)
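The name-reuse semantics sketched above can be pictured as a hypothetical data model (nothing here is an actual Docker structure):

```python
# Hypothetical model of re-using a link name so that several port mappings
# share one /etc/hosts entry, per the commands sketched above; not an
# actual Docker data structure.
links = {}  # name -> {"producer": ..., "ports": {consumer_port: producer_port}}

def link_add(name, producer, consumer_port, producer_port):
    entry = links.setdefault(name, {"producer": producer, "ports": {}})
    entry["ports"][consumer_port] = producer_port  # extra mapping, same entry

link_add("db", "mysql1", 3306, 3307)   # creates the link (one hosts entry)
link_add("db", "mysql1", 3307, 3308)   # adds a mapping, no new hosts entry
assert len(links) == 1
assert links["db"]["ports"] == {3306: 3307, 3307: 3308}
```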

@bfirsh
Contributor

bfirsh commented Aug 7, 2014

This only partially resolves #2733 and #3285. You still can't change an environment variable on a container, for example.

@bfirsh
Contributor

bfirsh commented Aug 7, 2014

👍 otherwise

@pkieltyka

pkieltyka commented Aug 12, 2014

👍 👍 👍 👍

@pkieltyka

pkieltyka commented Aug 12, 2014

Links are pretty useless without the ability to be updated when a parent changes. Please add this and release it asap; the current behavior requires a lot of extra work when upgrading a container. Something I came across for the short term: https://github.com/blalor/docker-hosts

@obi12341

obi12341 commented Aug 13, 2014

@kuon
Contributor

kuon commented Aug 13, 2014

Wouldn't it also be useful to allow external host linking? For example, I have a server with two containers, app and db, where app is linked to db. Being able to move db to another host and then just do

docker link move app:db another_host

To have db in the app container point to another_host.

This would centralize relationships between containers without having to rely on ENV vars or CNAMEs.

@pkieltyka

pkieltyka commented Aug 13, 2014

That does sound convenient, but I think it belongs in a higher level of orchestration. Currently there is no concept (that I know of) that connects docker hosts together, which would be necessary in order to do a remote link like that. The best thing I've seen for the discovery part is consul. Then it would be something above that to deploy containers on the various hosts.

@kuon
Contributor

kuon commented Aug 13, 2014

I think the higher-level orchestration tool needs an entry point to manage the /etc/hosts file of the containers.

@pkieltyka

pkieltyka commented Aug 13, 2014

@kuon right. Well this could be through a shared hosts file on the host machine, just like how https://github.com/blalor/docker-hosts works... it has a shared hosts file at /var/lib/docker/hosts. Perhaps this proposal should do something similar.

@nugend

nugend commented Sep 24, 2014

If the links are being worked on, could something be done about the environment variables exposed? Having to already know the interface and port in order to construct the names of the environment variables that tell you the interface and port seems remarkably silly.

Like, if you're going to have to parse strings to find that info out, you may as well just have a single environment variable containing all the host/port/interface information, like this:

DB_NAME=/web2/db
DB_PORTS=tcp://172.17.0.5:5432,udp://172.17.0.5:5432,tcp://172.17.0.5:5433

Personally, I'd like it if the expose flag / Dockerfile EXPOSE instruction let you set some sort of friendly name for ports.

DB_NAME=/web/db
DB_DATA_PORT=tcp://172.17.0.5:5432
DB_ADMIN_PORT=tcp://172.17.0.5:5433

And if that does get added, /etc/services could be modified too!
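The consolidated DB_PORTS variable suggested above would at least be trivial to parse. A sketch, assuming the commenter's proposed comma-separated URI format (not an existing Docker convention):

```python
# Sketch of parsing the proposed consolidated variable. The comma-separated
# URI layout is the commenter's suggestion, not an existing Docker format.
import os
from urllib.parse import urlsplit

os.environ["DB_PORTS"] = (
    "tcp://172.17.0.5:5432,udp://172.17.0.5:5432,tcp://172.17.0.5:5433"
)

def parse_ports(var):
    out = []
    for spec in os.environ[var].split(","):
        u = urlsplit(spec)                     # scheme://host:port
        out.append((u.scheme, u.hostname, u.port))
    return out

assert parse_ports("DB_PORTS")[0] == ("tcp", "172.17.0.5", 5432)
```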

@Mulkave

Mulkave commented Sep 29, 2014

👍

@duglin
Contributor

duglin commented Oct 16, 2014

@erikh how are the list of linked containers and ports for each container discovered? I'm looking for the equivalent of DOCKER_LINKS and <alias>_PORTS from #8515

@ibuildthecloud
Contributor

ibuildthecloud commented Oct 16, 2014

@erikh DNS-based service discovery has its issues. Has there been some lengthy discussion in which everyone agreed to just accept its downfalls (i.e. client caching)?

By dropping ENV vars we are effectively removing the ability to remap ports. Today you could set the ENV var for port 3306 to 1234; it just happens that in the current implementation they are always the same. In the end, with links v2 we are basically dropping the "linked ports" concept and instead going towards linking containers based on straight IP. I can't say I really agree with this. EXPOSE in the Dockerfile will not really be required anymore for links.

@duglin does have a point that if we go 100% DNS there's no way to discover the linked ports. Maybe the introspection/metadata service would have to be a prerequisite.

Links v2 will break the way geard does container linking, as they modify the ENV vars to point to 127.0.0.1. Even if they switched to running their own custom DNS server, they can't point to 127.0.0.1 anymore because they would have no ability to remap ports if ENV vars are deprecated. I guess that's just their problem to solve?
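For context, the remapping being discussed relies on the <alias>_PORT_<port>_TCP_* variables that Docker's current link system injects; a small sketch of how a consumer resolves them, and how a wrapper (geard-style) could remap them:

```python
# Hedged sketch of the remapping discussed above. Docker's link system
# injects <ALIAS>_PORT_<port>_TCP_ADDR/_PORT variables; a wrapper (as geard
# does) can rewrite them to 127.0.0.1 and a different port. With DNS-only
# links this remapping knob goes away.
import os

# what a wrapper might export instead of the defaults:
os.environ["DB_PORT_3306_TCP_ADDR"] = "127.0.0.1"
os.environ["DB_PORT_3306_TCP_PORT"] = "1234"   # remapped from 3306

def resolve_link(alias, port):
    prefix = "%s_PORT_%d_TCP" % (alias, port)
    return os.environ[prefix + "_ADDR"], int(os.environ[prefix + "_PORT"])

assert resolve_link("DB", 3306) == ("127.0.0.1", 1234)
```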

@tianon
Member

tianon commented Oct 16, 2014

A DNS record TTL of zero is valid, and specifies that the record should not be cached.

As for port discovery, SRV records could be used for individual ports, but TXT records could certainly be used for more complex data if necessary.

(My own DNS hobby-project, rawdns [https://github.com/tianon/rawdns], uses TTLs of 0, and I've been using it on all my development machines and several production machines for a while now without any issues related to that.)
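SRV record data is just "priority weight port target" (RFC 2782); a tiny parser showing what DNS-based port discovery would hand back. The record contents below are hypothetical:

```python
# SRV RDATA is "priority weight port target" (RFC 2782); a minimal parser
# showing what DNS-based port discovery would return for a service name.
def parse_srv(rdata):
    priority, weight, port, target = rdata.split()
    return int(priority), int(weight), int(port), target.rstrip(".")

# hypothetical record a links DNS server might serve for a linked db
assert parse_srv("10 5 5432 db.docker.local.") == (10, 5, 5432, "db.docker.local")
```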

@erikh
Contributor

erikh commented Oct 16, 2014

I also have a zero TTL implementation I’m working on, fwiw.


@erikh
Contributor

erikh commented Oct 16, 2014

Hmm. @duglin contributed a patch to clarify this a bit, and maybe that’s the solution here. I don’t remember the ticket offhand.


@thaJeztah
Member

thaJeztah commented Oct 16, 2014

Okay, I'm writing this comment, but may be really making a silly point because I'm a n00b at this:

Would it be possible to use libswarm to construct links between containers?

After reading this blogpost and the readme of libchan, I "distilled" this:

Libchan (thus libswarm?):

  • Channels are bi-directional (Enabling two-way links?)
  • Unix sockets are supported (Linking a socket in Container A inside Container B)
  • ".. libchan services can be used as smart gateways to a sql database, ssh or file transfer service, with unified auth, discovery and tooling and without performance penalty." (Discovery of dynamic links?)
  • "... remote libchan sessions are regular HTTP2 over TLS sessions, they can be used in combination with any standard proxy or authentication middleware. (..) can be safely exposed on the public Internet." (Establishing links between docker hosts, without having to use an Ambassador?)

Again, this may be really silly (and not really possible), I just got carried away reading those readmes.

@duglin
Contributor

duglin commented Oct 16, 2014

Still coming up to speed on this one but it seems that part of this solution is to update the /etc/hosts file in a container if someone links containers after they are created, do I have that right?

If so, I wonder: who actually owns /etc/hosts once the container is created? I was under the impression that once the container is created, the container owner is pretty much free to do whatever they want. However, what if they modify /etc/hosts? And, in particular, what if they update it in such a way that it must adhere to a certain pattern? Or the container has a process that writes to the file from a cache w/o reading it in each time - because they think they own it? Or Docker and the container try to update it at the same time? Will doing a link, post-create, cause them pain if Docker then goes in there and modifies that file? If Docker is free to modify a file within a container then I'm not sure we can honestly say that the container owns that file - and that might be a concern for some folks.

In some issue, and maybe this is what @ibuildthecloud was referring to when they mentioned the "introspection/metadata service", there was a discussion of containers being able to "phone home" to get info about themselves. If containers will need to get dynamic info I'd prefer we got that info w/o modifying a file that the container may be using and modifying. Whether that's via a metadata service or via some new Docker-specific file that exists within the container but is made clear that Docker owns it and can modify it at any time, doesn't matter much to me. But having Docker touch a container-managed file, after it's been handed over to the container owner, feels like we're asking for trouble.

@erikh
Contributor

erikh commented Oct 16, 2014

We modify it by line (see pkg/networkfs/etchosts). We use a mount strategy called MS_SLAVE that allows it to be edited from both the host and container.


@duglin
Contributor

duglin commented Oct 16, 2014

But that doesn't really address my concern about ownership.

@ibuildthecloud
Contributor

ibuildthecloud commented Oct 18, 2014

@tianon TTL doesn't address all DNS caching issues, for example Java by default will cache DNS forever regardless of TTL. Every argument I hear for DNS based service discovery in the end comes to the conclusion that you need well behaved clients. Basically I'd like to see if there was some drawn out discussion on this. Something in IRC or another PR perhaps?

SRV records for port mappings requires programmatic interaction meaning only applications that are programmed to use SRV records will work. "Legacy" applications that just depend on a hostname/port combo from a configuration source won't work. With links as they stand today I can very easily support legacy applications that have no concept of service discovery by just reading the ENV vars from a shell script and writing the configuration into whatever configuration file the application needs. Thus no code change.

@erikh If /etc/hosts is bind-mounted MS_SLAVE, doesn't that mean the files can get out of sync if the container does something to create a new inode, for example vi /etc/hosts? @duglin's comment about ownership is important. If we're dedicated to DNS-based service discovery then maybe we should just go all-in with a DNS server. I don't think sharing ownership of /etc/hosts is the best idea.
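The inode concern can be demonstrated directly: an in-place write keeps the inode (so a bind-mounted file stays coherent), while an editor-style replace-by-rename produces a new inode. A small sketch:

```python
# Demonstrates the inode concern above: a line-based in-place write keeps the
# same inode (a bind-mounted file stays coherent), while replace-by-rename
# (what editors like vi may do) creates a new inode.
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "hosts")
with open(path, "w") as f:
    f.write("127.0.0.1 localhost\n")
ino_before = os.stat(path).st_ino

with open(path, "r+") as f:        # in-place rewrite: inode unchanged
    f.write("172.17.0.5 db\n")
    f.truncate()
assert os.stat(path).st_ino == ino_before

tmp = path + ".tmp"                # rename-replace: new inode
with open(tmp, "w") as f:
    f.write("172.17.0.5 db\n")
os.replace(tmp, path)
assert os.stat(path).st_ino != ino_before
```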

@shykes
Collaborator

shykes commented Oct 18, 2014

Darren, I agree with you, we should not rely on the caching behavior of dns resolvers. At the same time, DNS [1] as a way to resolve name->ip mappings is too ubiquitous to ignore. I think we can have the best of both worlds by offering it, but also guaranteeing that the mapping will not change. That way we get the benefits but we don't depend on client caching behavior. It's more work at the networking layer, but I believe it's a worthy tradeoff.

Of course this should be properly documented.

[1] for convenience I'm lumping /etc/hosts and actual dns in the common term "dns"


@shykes
Collaborator

shykes commented Oct 18, 2014

Sebastiaan, yes I believe libchan is a very powerful means for containers to communicate, and in the future I would like to expose an (optional) interface for containers to take advantage of it. The libchan implementation is getting quite solid and is seeing some real world usage. As we start using it more for Docker's own needs (watch for progress around clustering and plugins for example) this will become easier.

However that is orthogonal to this proposal, which is about IP networking for containers. Over time different applications will rely on different interfaces for inter-connections, and that's okay. So in the present context, libchan is off-topic.


@thaJeztah
Member

thaJeztah commented Oct 18, 2014

@shykes thanks for explaining, I kind of suspected that libswarm/libchan wasn't mature enough (yet) for this purpose, but glad to hear that it hasn't lost the team's attention for application in Docker itself.

Re: this proposal, which is about IP networking for containers.
I was thinking: if libswarm/libchan was able to provide a communication channel as a "back-end" over which IP networking was achieved, this may create a very flexible system (e.g. no longer having to use iptables to open communication between containers, but open/close channels instead). That could be expanded on later (if not only IP-links but also sockets would be implemented as links).

Again, I'm a bit of a n00b on this, so I'll await further development with interest :)

@ibuildthecloud
Contributor

ibuildthecloud commented Oct 19, 2014

@shykes If this proposal goes through, I think what I'd do for links may fall in line with what I think you're talking about. Basically, from a higher level (stampede), I'd manage DNS such that the link name will resolve to a unique link-local IP for that link target, for example 169.254.6.2. Then I'd have an iptables rule that catches outbound traffic to 169.254.6.2 and applies a DNAT rule to the real IP of the target. If the link changes, the iptables rule is changed, but DNS always resolves to the same IP.

I think this is a good compromise.
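A toy model of the scheme above (addresses illustrative): DNS pins the link name to a fixed link-local IP, and only the NAT mapping changes when the target moves, so cached DNS answers never go stale:

```python
# Toy model of the stable-IP scheme described above (addresses illustrative):
# DNS always answers the same link-local IP for the link name; only the
# NAT mapping (standing in for the iptables DNAT rule) changes on a move.
LINK_IP = "169.254.6.2"

dns = {"db": LINK_IP}              # never changes once the link exists
dnat = {LINK_IP: "172.17.0.5"}     # current real target of the link

def connect(name):
    return dnat[dns[name]]         # where traffic actually lands

assert connect("db") == "172.17.0.5"
dnat[LINK_IP] = "172.17.0.9"       # link moved: only the NAT rule changed
assert connect("db") == "172.17.0.9"
assert dns["db"] == LINK_IP        # any cached DNS answer is still valid
```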

@shykes
Collaborator

shykes commented Oct 19, 2014

Doug, you are right. In addition to DNS lookups, we will need to offer a mechanism for exposing and discovering ports at the udp/tcp level. (SRV lookups are a possibility but they are by no means obvious, since they are not nearly as ubiquitous as regular A/CNAME lookups.)

Clearly, with this proposal, links are becoming primarily an IP-level consideration. I think that is a good thing because it will allow separation of concerns. The need for exposing and discovering udp/tcp ports is not going away, but there is an opportunity to break down a big problem into 2 smaller problems, and deal with them in a more loosely coupled way.

Erik, with this in mind, I think your proposal requires a section about the issue of port discovery.


@erikh
Contributor

erikh commented Oct 19, 2014

Ok, I’ll work on it this week.


@erikh
Contributor

erikh commented Oct 20, 2014

I've updated #7467 (the appropriate proposal for this side of links) to cover port discovery (near the bottom of the text). Please review and comment there, thanks.

/cc @duglin

@congdepeng

congdepeng commented Dec 18, 2014

👍

@bbinet
Contributor

bbinet commented Dec 19, 2014

any news on this topic?

@erikh
Contributor

erikh commented Dec 19, 2014

Hello,

Internally we've decided to move forward with a different strategy. There will be a proposal for the public in a few weeks.

Going to close this as it will very likely never exist in this form.

@gdm85
Contributor

gdm85 commented Feb 2, 2015

For those interested, I have implemented very basic stateful two-way linking here:

https://github.com/gdm85/docker-fw#two-ways-linking

The trick is that by using docker-fw to start/stop containers you can have this feature covered.
