Add dockerhost as an entry in /etc/hosts in all containers #23177
Hi @alunarbeach, are you working on this? I'm planning to start on it (this has been a pain point for me as well). In any case, we should outline the scenarios well in this thread, for example cluster vs. non-cluster environments. Thanks.
#dibs
As a workaround, in 1.11.2 the following seems to work inside the container if you have run it with the
Or
My stupid workaround: `docker run -v /etc/advertise-ip:/etc/host-ip:ro -e SERVICE_REGISTRY_URL=consul://dockerhost:8500/ ......` and change my docker image's entry script to set it up. This way, my container keeps working even if the host IP changes.
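A sketch of what that entry-script piece could look like. The function name is hypothetical, and the hosts-file path is passed as a parameter only so the logic is easy to test; a real entrypoint would append to `/etc/hosts` directly:

```shell
# Illustrative helper for the workaround above: the host writes its
# advertise IP to /etc/advertise-ip, which is mounted read-only into
# the container at /etc/host-ip; the entrypoint maps it to "dockerhost".
add_dockerhost_entry() {
    ip_file="$1"     # e.g. /etc/host-ip (mounted from the host)
    hosts_file="$2"  # normally /etc/hosts
    ip="$(head -n 1 "$ip_file" | tr -d '[:space:]')"
    [ -n "$ip" ] || return 1
    printf '%s dockerhost\n' "$ip" >> "$hosts_file"
}

# In the image's entrypoint, before exec-ing the real command:
#   add_dockerhost_entry /etc/host-ip /etc/hosts
```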
This is an especially big issue when running dev setups. It is particularly detrimental when developing systems that require lots of files to be carried over to the container (e.g. PHP CMS systems), at which point directory mounting through VirtualBox shared folders just dies (even when mounting as few files as possible, syncing can take minutes). That means developing is much faster on the host machine itself, but then you can't really use dockerized services that would contact your system, since you have no reliable way to provide the host machine as a service address.
Some places I would want to check for this:
Binaries in minimal Docker containers like the static Go containers don't use the `/etc/hosts` file. So, I suggest exposing `dockerhost` via DNS instead.
They do use it.
While @Dominik-K was wrong about Go and /etc/hosts, his suggestion is valid: dockerhost should be exposed via the built-in DNS and not the hosts file (the docker0 IP works even with custom bridge networks). While this is useful for convenience (especially in swarm mode), I am of the opinion that you should provide the explicit hosts you want to connect to via environment/configuration. There's just less room for things to break when, for example, you split two services across two different docker hosts: dockerhost would be useless because you'd need the other one, and you'd create a custom network between the hosts, explicitly add them with --add-host, or run your own DNS. I can see why the Docker folks have been ignoring this for so many years.
There can be many uses for dockerhost, and not all of them can be covered by putting it in /etc/hosts. But there are certain basic cases where it is very useful to be able to connect to your host, just like every system has the concept of "localhost = 127.0.0.1". Sure, the host can have many IPs, you don't always want 127.0.0.1, you can define multiple IPs on the lo interface; there are endless possibilities. But that doesn't change the fact that every system has the concept of "localhost" so it can connect to itself. Putting dockerhost in /etc/hosts and assigning it the host's docker0 IP would give some consistency and enable uses which currently require workarounds. It won't solve all problems for all people, but it is a starting point, just like localhost.
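For the Linux default-bridge case described above, a manual version of that mapping can be sketched as follows. `first_inet4` is a hypothetical helper (not a Docker feature), and `docker0` is assumed to be the default bridge interface:

```shell
# Hypothetical helper: read `ip -4 addr show <iface>` output on stdin
# and print the first IPv4 address found.
first_inet4() {
    sed -n 's|.*inet \([0-9.]*\)/.*|\1|p' | head -n 1
}

# On the host (assuming the default docker0 bridge):
#   HOST_IP="$(ip -4 addr show docker0 | first_inet4)"
#   docker run --add-host dockerhost:"$HOST_IP" alpine ping -c 1 dockerhost
```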
Yes, not everybody is using docker to run a million services distributed over thousands of hosts. Especially for development it can be very convenient to connect to some service that is running on the host. Since the introduction of networks it has become much more difficult to reliably predict the host's IP inside the container from outside the container. This matters because in most cases, when using pre-existing images (e.g. from Hub) with docker-compose, it's very cumbersome (or sometimes even impossible) to inject a script to figure out the host IP.
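For the docker-compose case, one workaround today (a sketch, not a built-in `dockerhost` feature) is to hard-wire the alias with `extra_hosts`. The image name here is hypothetical, and `172.17.0.1` is only the usual `docker0` address on Linux, not a guarantee:

```yaml
services:
  app:
    image: myapp            # hypothetical image
    extra_hosts:
      # Usual docker0 address on Linux; verify it on your host first.
      - "dockerhost:172.17.0.1"
```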
@cpuguy83 I'd love to access the Docker host from within a container as "localhost" (because of some cookie restrictions etc.; it's just much easier to set up). Why is
an example of how to run
👍 for this one. I'm having
We have
This is an example of docker-host
This feature is particularly important when running global services (one per host) as part of a Docker stack (in swarm mode). If you want to create a container to, say, monitor some services running on the host outside of docker, you need a way to access those services. Since each host in the swarm has a different IP, you can't hard-code the IP, so there's no way of doing this currently without adding a hack to the container image startup script to determine the host IP. This is very inconvenient in many cases (standard registry images, etc.): you end up having to build a new image for everything just to include the hack, and it may mean dynamically updating/writing config files for whatever the container runs as part of startup. A lot of complexity and fragility vs 'dockerhost'...
I need to access a different container from a container, but the automagical host mapping in /etc/hosts is effectively a loopback. I need a way to turn off the ill-conceived network magic.
Not sure how that's relevant to this issue. But if you need to access one container from another, then they either need to share a network or use "--link".
You can now use
@jbcpollak Does
Looks like it supports Mac and Windows right now: https://www.reddit.com/r/docker/comments/87ln1e/good_news_hostdockerinternal_resolves_to_host_on/
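Based on the link above, a sketch of using that name. The Linux `host-gateway` form assumes a newer engine (20.10+), and the helper function is purely illustrative:

```shell
# On Docker for Mac/Windows (18.03+), host.docker.internal resolves to
# the host out of the box:
#   docker run --rm alpine ping -c 1 host.docker.internal
#
# On Linux (engine 20.10+), the special value "host-gateway" gives the
# same name:
#   docker run --rm --add-host host.docker.internal:host-gateway \
#       alpine ping -c 1 host.docker.internal

# Illustrative helper that builds the --add-host value:
host_internal_arg() {
    printf '%s:%s' "${1:-host.docker.internal}" "${2:-host-gateway}"
}
```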
While having the host IP exposed to applications running inside Docker helps, a more generic solution would be reverse-mapping a specific
The host is always reachable from inside the containers, isn't it?
@AnrDaemon Assuming your machine doesn't expose ports for inbound traffic by default (except SSH and other ports that you defined explicitly), you can run an application on the host that the container can access via the host's private IP. To give more context, RFC 1918 defines the private address ranges 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16.
The IP that your container uses to access the host will be a private one. You just have to make sure that the application running on the host accepts connections that aren't from localhost. For example, you can block external access to mysql (port 3306) with a firewall that blocks all ports by default, while the mysql configuration allows connections from all IPs. This was just an example, but it can be extended to other cases. So I don't see a need (and I don't even know if it's viable) to make the container map
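The mysql part of that example could look like the fragment below (a sketch; the file path and whether you want `0.0.0.0` vary by distro and setup), with the host firewall still blocking 3306 externally:

```ini
# /etc/mysql/mysql.conf.d/mysqld.cnf (path varies by distro)
[mysqld]
# Accept connections from container addresses, not only 127.0.0.1;
# rely on the host firewall to block external access to port 3306.
bind-address = 0.0.0.0
```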
Not necessarily on the host. This is generally useful functionality.
@AnrDaemon You could try to use sockets. I try to use only ports in my apps because it simplifies things, but you should be able to use sockets (assuming the containers/host that need access to the socket can "see" it). When you access through the socket, the connection should be seen as coming from the local machine. For example, you can set up nginx to connect to an upstream socket (instead of
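The nginx-to-upstream-socket setup mentioned above might look like this (illustrative names and socket path):

```nginx
# Proxy to a unix socket instead of host:port, so the upstream app
# sees the connection as local. /var/run/app.sock is an example path.
upstream app {
    server unix:/var/run/app.sock;
}
server {
    listen 80;
    location / {
        proxy_pass http://app;
    }
}
```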
With mysql you can achieve a similar solution:
And in a docker container with a mysql client:
And inside the container:
This way you can connect as you would connect to localhost, but using sockets. But I recommend using
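The mysql-over-socket steps above can be sketched as one command (Debian-default socket path, illustrative image tag); the guard function is hypothetical:

```shell
# Share the host's MySQL unix socket with a container:
#   docker run --rm -it \
#       -v /var/run/mysqld/mysqld.sock:/var/run/mysqld/mysqld.sock \
#       mysql:8 mysql --socket=/var/run/mysqld/mysqld.sock -u root -p

# Illustrative entrypoint guard: fail fast if the socket wasn't mounted.
require_socket() {
    [ -S "$1" ] || { echo "socket $1 not mounted" >&2; return 1; }
}
```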
Assuming the target application understands sockets…
@AnrDaemon Maybe this SO answer my help you (haven't tested tough): https://stackoverflow.com/a/52812977/4850646 That said, you need to have Assuming you are on a Linux machine, you will find that As a workaround, you could, for example, store the docker host ip in an environment variable and use it when running the container with the |
@lucasbasquerotto Sockets won't work for communication between host and container in Docker for Mac! Because there is a (hidden) VM in between, the socket breaks: the hypervisor doesn't support them. See also docker/for-mac#483
Covered in #40007
@AnrDaemon Another solution is this: the file gets mounted as debug, so in the container I can just execute
Is there a way to have
There's a proposal to make it configurable on the client, but it hasn't been implemented yet; see docker/cli#2290
Thank you! I will track it there. :-)
This is a follow up to the issue #1143
As you can see, there are a lot of thumbs up for adding dockerhost as an entry in /etc/hosts.
I would like to connect to dockerhost from all containers on all platforms: mac/win/linux.