
Add dockerhost as an entry in /etc/hosts in all containers #23177

Closed
alunarbeach opened this issue Jun 1, 2016 · 41 comments

Comments

@alunarbeach

This is a follow-up to issue #1143.
As you can see, there are a lot of thumbs up for adding dockerhost as an entry in /etc/hosts.
I would like to be able to connect to dockerhost from all containers on all platforms - mac/win/linux.

@sbose78
Contributor

sbose78 commented Jun 13, 2016

Hi @alunarbeach, are you working on this? I'm planning to start on it (this has been a pain point for me as well).

In any case, we should outline the scenarios well in this thread (e.g. cluster vs. non-cluster environments), and sketch the design, before we proceed.

Thanks.

@sbose78
Contributor

sbose78 commented Jun 13, 2016

#dibs

@GordonTheTurtle

USER POLL

The best way to get notified of updates is to use the Subscribe button on this page.

Please don't use "+1" or "I have this too" comments on issues. We automatically
collect those comments to keep the thread short.

The people listed below have upvoted this issue by leaving a +1 comment:

@eciuca

@jglick

jglick commented Jun 23, 2016

As a workaround, in 1.11.2 the following seems to work inside the container if you have run it with the -v /var/run/docker.sock:/var/run/docker.sock trick and have a Docker client inside:

echo $(docker inspect -f '{{.NetworkSettings.Gateway}}' $HOSTNAME) dockerhost >> /etc/hosts

@cpuguy83
Member

Or --add-host dockerhost:<addr>

@Dieken

Dieken commented Jun 26, 2016

My stupid workaround:

docker run -v /etc/advertise-ip:/etc/host-ip:ro -e SERVICE_REGISTRY_URL=consul://dockerhost:8500/ ......

and change my Docker image's entry script to add "$(cat /etc/host-ip) dockerhost" to /etc/hosts.

This way, my container still works even if the host IP changes.
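The entry-script part of this workaround can be sketched roughly as follows (a sketch only; the paths are the ones from the comment above, and the function name is illustrative):

```shell
#!/bin/sh
# Sketch of the entrypoint idea above: the host writes its advertised IP to a
# file that is bind-mounted read-only into the container, and the entrypoint
# maps that IP to the name "dockerhost" before exec'ing the real command.

add_dockerhost() {
    ip_file="$1"    # e.g. /etc/host-ip, mounted via -v /etc/advertise-ip:/etc/host-ip:ro
    hosts_file="$2" # normally /etc/hosts
    if [ -r "$ip_file" ]; then
        # Append "<host-ip> dockerhost" so the name resolves inside the container.
        printf '%s dockerhost\n' "$(cat "$ip_file")" >> "$hosts_file"
    fi
}

# In the real entrypoint:
#   add_dockerhost /etc/host-ip /etc/hosts
#   exec "$@"
```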

@mikkotikkanen

This is an especially big issue when running dev setups through docker-compose (which doesn't allow bash scripts, especially when running on different OSes) in order to have specific versions of database servers etc. It doesn't make any sense (and is pretty much against the ideology of Docker) to force one to modify the image itself (i.e. write scripts for specific environments) in order to be able to override a service configuration (e.g. instead of contacting that service over there, use this one over here, which happens to run on the host system).

This is especially detrimental when developing systems that require lots of files to be carried over to the container (e.g. PHP CMS systems), at which point directory mounting through VirtualBox shared folders just dies (even when mounting as few files as possible, syncing can take minutes). That means developing is much faster on the host machine itself, but alas, then you can't really use dockerized services that would contact your system, since you have no reliable way to provide the host machine as a service address through docker-compose-override.

@dimitrovs

+1

@MetalArend

Some places I would want to check for this:

  • add the host ip to "docker info"
  • add the host ip to a container inspect in the .HostConfig section
  • add a dockerhost and/or docker.local domain that I can getent hosts (this might also add it to the .HostConfig.ExtraHosts path in a docker inspect?)

@Dominik-K

Binaries in minimal Docker containers, like static Go binaries, don't use the /etc/hosts file. But they can access the Docker host's hostname without the ".local" or ".home" suffix (OS X), as mentioned by Christoph Kluge.

So I suggest adding dockerhost in a more generic place, e.g. in the Docker network provider itself, so that all binaries can resolve dockerhost.

@AlekSi

AlekSi commented Sep 27, 2016

Binaries in minimal Docker containers like the static Go containers don't use the /etc/hosts file.

They do use it.

@titpetric

While @Dominik-K was wrong about Go and /etc/hosts, his suggestion is valid: dockerhost should be exposed via the built-in DNS and not via the hosts file (the docker0 IP works even with custom bridge networks). While this is useful for convenience (especially in swarm mode), I am of the opinion that you should provide the hosts you want to connect to explicitly via environment/configuration. There's just less room for things to break: if you split two services onto two different Docker hosts, for example, dockerhost would be useless because you'd need the other host, and you'd create a custom network between the hosts, or just explicitly add them via --add-host or your own DNS. I see why the Docker folks have been ignoring this for so many years.

@dimitrovs

dimitrovs commented Oct 23, 2016

There can be many uses for dockerhost, and not all of them can be covered by putting it in /etc/hosts. But there are certain basic cases where it is very useful to be able to connect to your host, just like any system has the concept of "localhost = 127.0.0.1". Sure, the host can have many IPs, you don't always want to get 127.0.0.1, you can define multiple IPs on the lo interface; there are endless possibilities, but that doesn't change the fact that every system has the concept of "localhost" so it can connect to itself. Putting dockerhost in /etc/hosts and assigning it the host's docker0 IP would give some consistency and enable uses which currently require workarounds. It won't solve all problems for all people, but it is a starting point, just like localhost.

@ulope

ulope commented Oct 25, 2016

Yes, not everybody is using Docker to run a million services distributed over thousands of hosts.

Especially for development, it can be very convenient to connect to some service that is running on the host.

Since the introduction of networks, it has become much more difficult to reliably predict the host's IP inside the container from outside the container. This matters because in most cases, when using pre-existing images (e.g. from Hub) with docker-compose, it's very cumbersome (or sometimes even impossible) to inject a script to figure out the host IP.

@VojtechVitek
Contributor

@cpuguy83 I'd love to access the Docker host from within a container as "localhost" (because of some cookie restrictions etc.; it's just much easier to set up that way).

Why is --add-host localhost:<addr> forbidden?

@ripper2hl

An example of how to use the --add-host flag?

@hickscorp

👍 for this one. I have DOCKERHOST set as an env var now, which is quite cumbersome:

export DOCKERHOST=$(ifconfig | grep -E "([0-9]{1,3}\.){3}[0-9]{1,3}" | grep -v 127.0.0.1 | awk '{ print $2 }' | cut -f2 -d: | head -n1)
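A minimal sketch of an alternative (assuming the container sits on the default bridge network, where the container's default gateway is the host's docker0 address, as the docker inspect trick earlier in this thread also relies on), reading the routing table instead of parsing ifconfig output; the function name is illustrative:

```shell
gateway_from_route() {
    # Extracts the gateway field from an "ip route show default" line,
    # e.g. "default via 172.17.0.1 dev eth0" -> "172.17.0.1"
    awk '/^default/ {print $3}'
}

# Inside the container:
#   DOCKERHOST=$(ip route show default | gateway_from_route)
```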

@fabiohbarbosa

+1

1 similar comment
@killcity

killcity commented Aug 9, 2017

+1

@AlekSi

AlekSi commented Aug 15, 2017

We have docker.for.mac.localhost now: https://docs.docker.com/docker-for-mac/networking/#per-container-ip-addressing-is-not-possible

@ripper2hl

This is an example of --add-host:

docker run --add-host="www.example.com:192.168.0.1" ubuntu

@tomwidmer

This feature is particularly important when running global services (one per host) as part of a Docker stack (in swarm mode). If you want to create a container to, say, monitor some services running on the host outside of Docker, you need a way to access those services. Since each host in the swarm has a different IP, you can't hard-code the IP, so there's currently no way of doing this without adding a hack to determine the host IP into the container image's startup script. This is very inconvenient in many cases (standard registry images, etc.): you end up having to build a new image for everything just to include the hack, and it may mean having to dynamically update/write config files for whatever the container runs as part of container startup. A lot of complexity and fragility vs 'dockerhost'...

@CH-JoelBondurant

I need to access a different container from a container, but the automagical host mapping in /etc/hosts is effectively a loopback. I need a way to turn off the ill-conceived network magic.

@cpuguy83
Member

Not sure how that's relevant to this issue. But if you need to access one container from another, then they either need to share a network or use "--link".

@jbcpollak

You can now use host.docker.internal to access the host computer.

@tomfotherby
Contributor

@jbcpollak Does host.docker.internal work on Linux or is it only for Docker Desktop?

@jbcpollak

Looks like it supports Mac and Windows right now:

https://www.reddit.com/r/docker/comments/87ln1e/good_news_hostdockerinternal_resolves_to_host_on/

@AnrDaemon

While having the host IP exposed to applications running inside Docker is useful, a more generic solution of reverse-mapping a specific port inside the container to a specific address:port outside the container is still needed.
I.e. I'm running XDebug on the host and want an application running inside a container to reach it. Preferably on localhost:port, not a wide-open public host IP.

@MatthiasKuehneEllerhold

The host is always reachable from inside the containers, isn't it?

@AnrDaemon

Preferable on localhost:port, not a wide-open public host IP.

@lucasbasquerotto

@AnrDaemon Assuming your machine doesn't expose ports for inbound traffic by default (except SSH and other ports you defined explicitly), you can run an application on the host (accessible as localhost:port from the host) and access it from inside the container as host_ip:port without being stopped by the firewall, even if the port is blocked from external access, because the traffic goes over the docker0 network on the host (the IP is reachable only from the host).

To give more context, RFC 1918 defines:

The Internet Assigned Numbers Authority (IANA) has reserved the
following three blocks of the IP address space for private internets:

 10.0.0.0        -   10.255.255.255  (10/8 prefix)
 172.16.0.0      -   172.31.255.255  (172.16/12 prefix)
 192.168.0.0     -   192.168.255.255 (192.168/16 prefix)

The IP that your container uses to access the host will be a private one, like 172.17.42.1.

You just have to make sure that the application running on the host accepts connections that aren't from localhost. For example, you can block external access to mysql (port 3306) with a firewall that blocks all ports by default, but configure mysql to listen on all IPs (0.0.0.0). That way, all applications running on the host, as well as containers inside the host, can access it, but external applications cannot (if you allow only 127.0.0.1, other containers may not be able to access it).

This was just an example, but it can be extended to other cases. So I don't see a need (and I don't even know if it's viable) to make the container map localhost:port to a port on the host. Instead, having host.docker.internal in /etc/hosts in the container, pointing to the host IP, should be enough.
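To make the mysql example above concrete, a minimal config sketch (the file path is illustrative and varies by distribution):

```ini
# /etc/mysql/mysql.conf.d/mysqld.cnf (illustrative path)
[mysqld]
# Listen on all interfaces so connections arriving over docker0
# (e.g. from 172.17.0.0/16) are accepted; the host firewall still
# blocks port 3306 from the outside world.
bind-address = 0.0.0.0
```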

@AnrDaemon

make the container map localhost:port to a port in the host

Not necessarily on the host. This is generally useful functionality.
Yes, use cases often boil down to connecting to the host.
Yes.
But I don't understand why everybody wants to dumb it down to that exclusively?

@lucasbasquerotto

@AnrDaemon You could try to use sockets. I try to use only ports in my apps because it simplifies things, but you should be able to use sockets (assuming the containers/host that need access to the socket can "see" it). When you access through the socket, the connection should be seen as coming from localhost.

For example, you can setup nginx to connect to an upstream socket (instead of host:port), like:

upstream mysite {
    server unix:///tmp/mysite.sock; # for a file socket
    # server 127.0.0.1:8001; # instead of this
    # server mysite:8001; # or this
} 

With mysql you can achieve a similar solution:

[mysql]
socket = /path/to/mysqld.sock

And in a docker container with a mysql client:

docker run -it -v /path/to/mysqld.sock:/path/to/mysqld.sock my_image /bin/sh

And inside the container:

mysql -u root -p -h localhost --socket=/path/to/mysqld.sock

This way you can connect as you would to localhost, but using sockets. Still, I recommend using host:port unless you really need to use sockets directly.

@AnrDaemon

you should be able to use sockets

Assuming the target application understands sockets…
XDebug does not.

@lucasbasquerotto

lucasbasquerotto commented Oct 2, 2019

@AnrDaemon Maybe this SO answer may help you (I haven't tested it, though):

https://stackoverflow.com/a/52812977/4850646

That said, you need to have host.docker.internal defined in the container running XDebug.

Assuming you are on a Linux machine, you will find that host.docker.internal is not defined in the container. That problem is what this issue is about (and also this issue: docker/for-linux#264).

As a workaround, you could, for example, store the Docker host IP in an environment variable and use it when running the container, with --add-host in docker run or an extra_hosts entry in docker-compose.yml.
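For example (a sketch only; the service name, image, and variable name are placeholders), the docker-compose variant of that workaround could look like:

```yaml
# docker-compose.yml: pass the host address in via extra_hosts so that
# host.docker.internal resolves inside the container on Linux.
# DOCKER_HOST_IP must be exported beforehand, e.g. from the docker0 bridge:
#   export DOCKER_HOST_IP=$(ip -4 addr show docker0 | awk '/inet /{sub(/\/.*/,"",$2); print $2}')
services:
  app:
    image: my_image
    extra_hosts:
      - "host.docker.internal:${DOCKER_HOST_IP}"
```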

@MatthiasKuehneEllerhold

MatthiasKuehneEllerhold commented Oct 4, 2019

@lucasbasquerotto Sockets won't work for communication between host and container in Docker for Mac! Because there is a (hidden) VM in between, the socket breaks; the hypervisor doesn't support them.

See also docker/for-mac#483

@AkihiroSuda
Member

Covered in #40007

@tflori

tflori commented Mar 5, 2020

@AnrDaemon Another solution is this:
route -nA inet | egrep '^0.0.0.0' | tr -s ' ' | cut -d' ' -f2
For a concrete example, see https://github.com/tflori/riki-community/blob/permissions/docker/php/debug.sh

The file gets mounted as debug, so in the container I can just execute debug phpunit.

@gaby

gaby commented Apr 4, 2022

Is there a way to have host.docker.internal always be embedded in Linux containers? We have hundreds of containers, and having to run them with --add-host is a lot of places to update.

@thaJeztah
Member

There's a proposal to make it configurable on the client, but it hasn't been implemented yet; see docker/cli#2290

@gaby

gaby commented Apr 4, 2022

There's a proposal to make it configurable on the client, but it hasn't been implemented yet; see docker/cli#2290

Thank you! I will track it there. :-)
