
Allow /etc/hosts to reflect all containers on host, regardless of links #7261

Closed
blalor opened this issue Jul 27, 2014 · 11 comments

Comments

@blalor

blalor commented Jul 27, 2014

If /etc/hosts were updated within each container to reflect the current list of running containers, it would be possible for containers to interact via hostnames without having to resort to links.

@thaJeztah
Member

The --link option is also there for safety; in a shared environment (containers belonging to multiple people running on the same host), containers should not be able to communicate freely. After all, you don't want a stranger connecting to your database :)

With --link, it's possible to only EXPOSE a port without publishing it (-p). Other containers will not be able to connect to such a port unless they are linked to that container, which grants them access.

Hope my description is clear :)
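To make the expose-vs-publish distinction above concrete, here is a rough sketch. The image name myapp and the ports are illustrative assumptions, not taken from this thread, and running this requires a Docker daemon:

```shell
# Publish: port 5432 is reachable from the host
# (and from anything that can reach the host).
docker run -d --name db-public -p 5432:5432 postgres

# Expose only: the port is declared in the image (EXPOSE) but not
# published, so it is not reachable from the host.
docker run -d --name db-private postgres

# Link: the app container is given db-private's address (environment
# variables and an /etc/hosts entry for the alias "db"), so only it
# can reach the exposed port.
docker run -d --name app --link db-private:db myapp
```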

@blalor
Author

blalor commented Jul 27, 2014

This could be an alternative to using links. Links are fragile and require restarting the whole chain of linked containers.

@jlmitch5

There is an alternative to using links, and that is publishing the port with -p.

For example, if you were to docker run -p 8080:8080 nginx, you would then be able to access nginx from localhost on the host. Likewise, docker run -p 3306:3306 mysql would let you access mysql from localhost.

Provided each container can reach the host, you can then use the host as a bridge between containers.

@blalor
Author

blalor commented Jul 28, 2014

If, for example, you want to run 5 Redis containers to support 5 different applications, the only thing the applications care about is being able to talk to Redis. There's additional management overhead to expose each of those 5 Redis ports via the host, especially for a service that -- in this case -- should not be publicly exposed. Even if you bind those ports only to the bridge interface, you still have to deal with potential port conflicts.

If instead each Redis container gets a hostname that can be resolved by other containers -- redis-app1, redis-app2, etc. -- you only need to ensure that the containers are running. The app can be configured to talk to its Redis instance by hostname, and it will reconnect -- using a new IP address if necessary -- if the Redis container is restarted. So this solution would offer less management overhead of exposed ports and would not have the fragility of container links.

Is this a replacement for ports or links? No. Is this an alternative that could be provided by Docker for certain use cases? Yes. Can this be implemented outside of the Docker daemon? Probably, as long as there's no weirdness with mounting your own /etc/hosts on top of the one Docker provides.

@jlmitch5

So, if I'm understanding correctly, what you're looking for is essentially DDNS (dynamic DNS) for containers.

This is getting over my head, so I'm going to watch this one from the sidelines from now on.

@blalor
Author

blalor commented Jul 28, 2014

This could effectively replace crosbymichael/skydock. It would accomplish the same thing, but with fewer moving parts.

@jlmitch5

Alright, thanks for posting that; I understand what you're looking for now.

Definitely get what you mean about the fewer moving parts. Skydock seems to be a single point of failure: if it fails, the system fails, because your DNS infrastructure is effectively gone.

I do see an issue with scalability. Let's say you have 10,000 containers running; each time an "event" occurs (like a container being stopped or started), it would need to update every other container's /etc/hosts, right?

@blalor
Author

blalor commented Jul 28, 2014

On Jul 27, 2014, at 11:54 PM, jlmitch5 notifications@github.com wrote:

I do see an issue with scalability. Let's say you have 10,000 containers running; each time an "event" occurs (like a container being stopped or started), it would need to update every other container's /etc/hosts, right?

Or it could update a single hosts file shared by all containers.
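As a rough sketch of that single-shared-hosts-file idea: the daemon could regenerate one file that is bind-mounted into every container, rewriting it atomically on each container start/stop event. The container names and IP addresses below are made up for illustration, and in reality the input would come from the daemon's list of running containers:

```shell
#!/bin/sh
# Regenerate a hosts file shared (e.g. bind-mounted) into every container.
# Reads "name ip" pairs for currently running containers from stdin.
HOSTS_FILE=shared_hosts

regenerate_hosts() {
    # Write to a temp file, then rename it into place, so containers
    # never observe a partially written hosts file.
    tmp=$(mktemp)
    {
        echo "127.0.0.1 localhost"
        while read -r name ip; do
            echo "$ip $name"
        done
    } > "$tmp"
    mv "$tmp" "$HOSTS_FILE"
}

# Example input, as the daemon might supply it:
regenerate_hosts <<EOF
redis-app1 172.17.0.2
redis-app2 172.17.0.3
EOF
```

The atomic rename is the important design choice here: a plain in-place rewrite could leave a reader mid-update with a truncated file.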

@cpuguy83
Member

-1, this is what links are specifically for.
We can and are improving links.

Also, gh#7092 updated /etc/hosts automatically when linked containers were restarted, but it was closed pending some other changes. It will come back.

@kuon
Contributor

kuon commented Jul 31, 2014

A feature that would help greatly in this case is the ability to swap a linked container without breaking the link.

@cpuguy83
Member

Please see proposals here: #7468 and #7467


5 participants