
Detect exposed ports from inside container #3778

Open
drewcrawford opened this issue Jan 27, 2014 · 118 comments
Labels
area/networking kind/feature Functionality or other elements that the project doesn't currently have. Features are new and shiny

Comments

@drewcrawford

What

There should be a simple way for a container to detect, from inside the container, the port mappings assigned to it.

Why

There are a variety of cases where an application needs to know the real external IP address and port at which it can be reached. Some examples:

  • torrent client
  • FTP server (passive mode)
  • TeamCity build agent
  • others

The external IP can be detected reliably through the use of an intermediary. However the port mappings cannot be reliably automatically detected.

Why don't you use non-dynamic ports?

The use of static ports (e.g. the docker run -p 1234:1234 syntax), with the same port mappings hardcoded into the image, allows the container to know its port mappings without dynamic discovery.

However, this solution does not allow you to run the same image in multiple containers on the same host (as the ports would conflict), which is an important use case for some images. It also assumes that the ports baked into the image will never be used by any other docker image that a user is likely to install, which is not a very good assumption.

Why don't you use the REST API?

Allowing a container to have access to the REST API is problematic. For one thing, the REST API is read/write; if all you need is to read your port mappings, that's a dangerous level of permission to grant a container just to find out a few ports.

@newhoggy

newhoggy commented Mar 2, 2014

I'm interested in this as well. Is there a way to do this?

@SvenDowideit
Contributor

Yes :). If you bind-mount the docker client and /var/run/docker.sock into your container, you can inspect yourself. This is an insecure approximation of introspection, which is being worked on (see e.g. #4332).

The long-term plan is to provide a safe way to do this, and I'm presuming that includes only allowing containers to look up their own info, without the writable risks.
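As a rough sketch of the inspect-yourself approach: the host-side mappings live in the `NetworkSettings.Ports` field of the inspect output. Parsing that field might look like the following (the JSON values below are made up, and the `host_port` helper is ours, not part of any Docker tooling):

```python
import json

# Example NetworkSettings.Ports fragment, shaped like `docker inspect` output
# per the Docker Engine API (the port numbers here are invented):
inspect_ports = json.loads("""
{
  "8080/tcp": [{"HostIp": "0.0.0.0", "HostPort": "32768"}],
  "161/udp":  [{"HostIp": "0.0.0.0", "HostPort": "32769"}]
}
""")

def host_port(ports, container_port, proto="tcp"):
    """Return the host port mapped to container_port/proto, or None."""
    bindings = ports.get(f"{container_port}/{proto}") or []
    return int(bindings[0]["HostPort"]) if bindings else None

print(host_port(inspect_ports, 8080))        # -> 32768
print(host_port(inspect_ports, 161, "udp"))  # -> 32769
```

Inside the container, the container's own ID is available as the default hostname, so the same lookup can be fed from `docker inspect "$(hostname)"` when the socket is mounted.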

@eyal-lupu

I am interested in that as well. Here is a scenario: on startup my containers have to register themselves (ip + port) in a service discovery directory. It means that I have to know both the external IPs and ports.

@c1pherx

c1pherx commented Oct 16, 2014

WRT service discovery: I make heavy use of Consul for this purpose. I leave a Consul agent listening on each host that I use. That agent knows what its IP address is. Whatever registers with that host is assumed to be running on that host.

As noted by others, when registering with a service discovery tool, knowing the actual exposed port is critical. I don't much like the idea of needing something on the host (outside the container) to register the service that's in the container, but registering from inside requires that the service in the container knows what port to register with.

I'm guessing that it would be possible to feed the real port into the container via the environment. As noted in this bug and others similar to it (#7421), determining the "proper IP" is not something that is always easily done algorithmically as the host may have more than one interface. With that said, when the container is being created, the entity creating it (be it a person, service, etc) should be able to pass the IP in as an environment variable on its own (since while the correct IP may not be known to Docker, it should be known to whatever is doing the creating). What the creator doesn't know is what random port was chosen by Docker as the exposed one. And hard-coding isn't always desirable since each host may have different containers running and using different ports.

@ghost

ghost commented Nov 14, 2014

I am interested in that, to automatically change the port livereload listens to.

http://feedback.livereload.com/knowledgebase/articles/195869-how-to-change-the-port-number-livereload-listens-o

@flaviostutz

+1
I really need this for registering container services on etcd from inside the container!
For example, if I have a bunch of NodeJS containers serving my application (with elastic resizing) and a bunch of Nginx containers in front balancing requests, I would like the Nginx instances to look up available NodeJS containers (along with IP:port) in etcd, so Nginx could automatically reconfigure itself as NodeJS containers are added or removed during operations.

@ali-mosavian

We require this as well for service discovery. I know there are other ways of solving that, but I would much prefer it if a container could register itself instead of using linked containers etc. That just creates more moving parts; in a microservice architecture, where you already have a lot of moving parts, any way of reducing them is useful.

@mlehner616

+1. Absolutely a requirement for service discovery. Right now there are hacks and workarounds, but the container really needs some way either to directly query the host daemon or to have the information passed into the environment at runtime.

@jessfraz added the kind/feature and Proposal labels Feb 26, 2015
@jessfraz
Contributor

see #8427

@pikeas

pikeas commented Mar 9, 2015

+1!

@selimekizoglu

+1. This is needed for service discovery of apps running on multiple hosts.

@savermyas

+1. We badly need this feature for distributed testing of different devices, where the test framework running in a container must tell the device under test which port number to answer on. For now I have to use a bridged interface as a workaround, but it would be great to know the port mappings.

@thijsterlouw
Contributor

+1 for example if you run Couchbase in a container, you need to know your (dynamically mapped) bucket port, which cannot be set on the client but is discovered via Couchbase-internals.

@danbeaulieu

+1. We have an application that registers itself with another one of our services. We are currently using a small script to calculate a port to explicitly map and injecting it as an env variable. We'd like to start using things like the mesosphere stack and even ECS but current solutions in these environments start to look like ugly hacks really quickly.

@melo

melo commented Mar 28, 2015

+1 hacking around to have this, need to get my containers auto-registered into Consul...

@deardooley

@melo try registrator

@melo

melo commented Mar 28, 2015

@deardooley yeah, I saw that one… Probably the best bet at this moment.

Thanks

@deardooley

If that doesn't work, maybe check out adama/serfnode. It's an in-house project, but it works well for a class of problems. Maybe it will be a fit for you. Praying Docker gets this into trunk.


@ajodock

ajodock commented Mar 31, 2015

@danbeaulieu Mesosphere already has a solution for injecting the port mappings into your containers.

This is because when you launch a Mesos docker task, docker isn't the one that picks the random port. Instead, Mesos picks one out of the range of ports that you have set up as available. They are also nice enough to inject those port mappings into your container as environment variables!

So let's say I have this task specification:

{                                                                                                                                    
  "container": {                                                                                                                     
    "type": "DOCKER",                                                                                                                
    "docker": {                                                                                                                      
      "network": "BRIDGE",                                                                                                           
      "image": "centos:centos7",                                                                                   
      "portMappings": [                                                                                                              
        { "containerPort": 8080, "hostPort": 0, "servicePort": 9000, "protocol": "tcp" },                                            
        { "containerPort": 161, "hostPort": 0, "protocol": "udp"}                                                                    
      ]                                                                                                                              
    }                                                                                                                                
  },                                                                                                                                 
  "id": "centos-test",                                                                                                               
  "instances": 1,                                                                                                                    
  "cpus": 0.25,                                                                                                                      
  "mem": 128,                                                                                                                        
  "cmd": "env"                                                                                                                       
}

Because my only command was env, we can see on stdout in the Mesos interface which environment variables are being passed to the container:

HOSTNAME=64fbcdadd439
HOST=rh-mesos-dev01
PORT1=31002
PORT0=31001
MESOS_TASK_ID=centos-test.4b1fc7df-d7d4-11e4-b243-3af6fdd773cc
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
PORT_8080=31001
PORTS=31001,31002
container_uuid=64fbcdad-d439-2b4a-1ce8-33442781101d
SHLVL=1
HOME=/root
MARATHON_APP_ID=/centos-test
PORT_161=31002
MARATHON_APP_VERSION=2015-03-31T18:32:24.486Z
PORT=31001
MESOS_SANDBOX=/mnt/mesos/sandbox
_=/usr/bin/env

So your application can detect the external port using the PORT_<containerPort> environment variable, and the external hostname from the HOST environment variable. With those two things you should have all of the data you need to give your service discovery framework.

FYI, I have only tested this with a Marathon deployment, but hopefully they implemented this at the framework level so that a docker deployment from any of the tools (e.g. Chronos) would include this info.
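Reading those injected variables from inside the container is then straightforward. A minimal sketch using the variable names from the env dump above (the `mesos_host_port` helper is ours, not part of Marathon):

```python
import os

def mesos_host_port(container_port, env=None):
    """Look up the host port injected as PORT_<containerPort>, or None."""
    env = os.environ if env is None else env
    value = env.get(f"PORT_{container_port}")
    return int(value) if value is not None else None

# Using the values from the env dump above:
sample_env = {"PORT_8080": "31001", "PORT_161": "31002", "HOST": "rh-mesos-dev01"}
print(mesos_host_port(8080, sample_env))  # -> 31001
print(mesos_host_port(161, sample_env))   # -> 31002
```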

@danbeaulieu

@ajodock Ah that is great info I missed in the docs. (Though I do wish there was more documentation on the full implications of using network:BRIDGE)

My use case is a bit more complex and would need further functionality than just port injection but it is outside of the scope of this issue. Thanks for the info!

@vmaatta

vmaatta commented Apr 14, 2015

In case any of you +1 people have cared to subscribe… stop with it already. That meme is so frustrating. If you have nothing to contribute, don't say anything.

@deardooley

@vmaatta dislike

@bfil

bfil commented Apr 14, 2015

@vmaatta just trying to avoid having the issue closed because of "lack of interest".

This feature is very much needed to achieve proper service discovery when running containers across a cluster, for example when a container's resources are specified dynamically (using Swarm, for instance).

Having access to the dynamic port assigned by Docker would allow services to register themselves without having to run agents like Consul or use other tools such as registrator.

@ghost

ghost commented May 19, 2015

👍

@koliyo

koliyo commented May 19, 2015

++

@glootdev

glootdev commented Feb 16, 2018

I think my own solution for CSGO will be:

  1. Start docker instance.
  2. Dig out the exposed port (I know the CSGO port is 27000).
  3. Store the port in a file on the newly started container.
  4. Start-script in my container waits for the file to appear.

ID=$(docker run -d -P myown_csgo:latest) && docker exec $ID bash -c "echo \"$(docker port $ID 27000/udp | sed s/[^:]*://)\" > /tmp/exposed_port"

Test if it worked:
docker exec $ID cat /tmp/exposed_port
32779
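The start script's wait-for-the-file step (step 4 above) might be sketched like this; the path, timeout, and helper name are assumptions, not part of any standard tooling:

```python
import os
import time

def wait_for_port_file(path, timeout=30.0, poll=0.5):
    """Poll until the port file appears and is non-empty, then return the port."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(path) and os.path.getsize(path) > 0:
            with open(path) as f:
                return int(f.read().strip())
        time.sleep(poll)
    raise TimeoutError("no port written to " + path)

# Self-contained demo: simulate the `docker exec ... > /tmp/exposed_port`
# step from the one-liner above, then read the port back.
demo_path = "/tmp/exposed_port_demo"
with open(demo_path, "w") as f:
    f.write("32779\n")
print(wait_for_port_file(demo_path, timeout=2))  # -> 32779
```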

@cpuguy83
Member

Supporting templating in various things, e.g. env and labels, seems reasonable to me.
There is precedent for it in services already.
I'd expect the template to be run on the types.ContainerJSON type.

@MartinKosicky

I think on Linux you are free to look up a free port prior to starting the containers and then pass it in as an environment variable. You should not depend on anything more than that or a simple variable, as I think a good practice is to have docker-unaware applications... (i.e. they could run without docker)

@erichorne

@MartinKosicky: actually, it's not always true that the port can be pre-selected. When using docker-compose, docker-compose chooses the port for you when scaling, and there is no way for the container to know what port was assigned to it. For applications that require the actual exposed port (because the app might expose it in a URL, for example, or self-register with service discovery), this is a big problem. It seems like such a simple thing to just provide the metadata as an environment variable; the app doesn't need to know about the metadata at all, since the container entrypoint can take care of everything. The app only needs a concept of exposed ports vs. listening ports, which many have.

@MartinKosicky

MartinKosicky commented Feb 17, 2018

@erichorne hmm, yes, you're right; in that case you need a component outside your container to tell you what port you are forwarded to... well, in that case: Docker, why not give us this feature? :) The nicest solution would be not to pass that as an environment variable, but, prior to running your main app, to run a script that contacts something outside docker to get the port mappings (using your HOSTNAME and an env-defined loopback), and have that script set the env variables I previously said you should pass. It's not very nice, but I would not sacrifice the ability to have my app run outside docker the same way as inside; environment variables give you this decoupling.

@lakshaykaushik2506

Cannot believe this is not a feature. People here have wasted hours, months, and years developing hacks to solve this, but the docker guys seem to think it's not important.

@cpuguy83
Member

@lakshaykaushik2506 Every feature takes time to design and build. If you want something on an OSS project then you should make a formal proposal and back that up with some engineering. It's not Docker Inc's job to take on every feature request.

It's also possible to do this without having a feature built in to Docker, and despite what many people think, Docker does strive not to make lock-in style features where you write applications that only work in Docker.

I do like the idea of supporting templates for things like env vars, and this was brought up very recently, however there are security implications for such a feature (as in, it's difficult to provide authorization control for these templates) which need to be fully thought through.

@alan-czajkowski

@cpuguy83 "It's not Docker Inc.'s job to take on every feature request." <-- that is not entirely true; it is absolutely Docker Inc.'s job to take on feature requests with strong demand or many votes. Otherwise it risks disenfranchising the Docker community and creating resentment, which hurts Docker Inc.

@junneyang

any solution????

@vincentfree

+1
I would still very much like this feature in docker, even if it's just an ENV var that's always set in the container. Consul service discovery needs the exposed port. I can't use the swarm mode load balancer because the one-to-one mapping to healthy nodes won't work then.

@a-nick-fischer

Another hacky solution: one could mount /var/run/docker.sock read-only and read the container's port mapping with curl and jq (Helpful link to a blog). Then you could pass the port as an argument to your app.

Now your application works inside and outside a container.

@ajodock

ajodock commented Jul 17, 2018

Unfortunately you can still PUT/POST to a sock file mounted read-only, so from a security perspective that would be bad.

#Note that no containers exist
$ docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

#Start up a container and do a POST from inside it to the sock file
$ docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock:ro centos:centos7 curl -XPOST -H "Content-Type:application/json"  --unix-socket /var/run/docker.sock http:/containers/create?name=test  -d '{"Image": "centos:centos7"}'
{"Id":"1ebde038faadb1ef6765b5076df82a0894b8803e9679a74fd0c810c8dea2aaed","Warnings":null}

#Verify that the test container was created from the post
$ docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
1ebde038faad        centos:centos7      "/bin/bash"         3 seconds ago       Created                                 test

Sockets expect two-way communication (you need to send a request in order to read anything). In this case the sock file fronts an API, and to read anything you have to write your GET request to the socket.

All that making it read-only does is prevent you from changing its file permissions or deleting it, but that is about it. If anyone were to break into your container they could easily start up a new privileged container with full access to the host file system, networking, etc.

@brychcy

brychcy commented Jun 18, 2019

general introspection infrastructure is being discussed in #26331 .
Maybe this feature can be added on top of that PR.

Unfortunately 26331 has been closed.

@NullHypothesis

NullHypothesis commented Jun 29, 2019

My "solution" to this issue is to have a shell script that wraps the docker run command. This script first determines random ports and then passes these ports inside the docker container using environment variables:

function get_port {
    # The following snippet comes from:
    # <https://unix.stackexchange.com/questions/55913/whats-the-easiest-way-to-find-an-unused-local-port>
    read LOWERPORT UPPERPORT < /proc/sys/net/ipv4/ip_local_port_range
    while :
    do
        port="$(shuf -i "$LOWERPORT"-"$UPPERPORT" -n 1)"
        # Retry if anything is already listening on the chosen port.
        ss -lpn | grep -q ":$port " || break
    done
    echo "$port"
}

# Determine random ports.
OR_PORT=$(get_port)
PT_PORT=$(get_port)

# Keep getting a new PT port until it's different from our OR port.  This loop
# will only run if we happened to choose the same port for both variables, which
# is unlikely.
while [ "$PT_PORT" -eq "$OR_PORT" ]
do
    PT_PORT=$(get_port)
done

# Pass our two ports and email address to the container using environment
# variables.
docker run -d \
    -e "OR_PORT=$OR_PORT" -e "PT_PORT=$PT_PORT" -e "EMAIL=$EMAIL" \
    -p "$OR_PORT":"$OR_PORT" -p "$PT_PORT":"$PT_PORT" \
    phwinter/obfs4-bridge:0.1

Needless to say, this is just an ugly workaround. I'd rather have users run the docker container directly instead of having to run this wrapper script.
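For what it's worth, the same free-port trick can be done without shuf/ss by letting the kernel pick a port. A minimal Python sketch of the wrapper's port-selection step (like the shell version, it races: another process can grab the port between selection and docker run):

```python
import socket

def get_free_port():
    """Ask the kernel for a currently free TCP port by binding to port 0."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("", 0))
        return s.getsockname()[1]

# Determine two distinct random ports, mirroring the shell loop above.
or_port = get_free_port()
pt_port = get_free_port()
while pt_port == or_port:  # collisions are unlikely but possible
    pt_port = get_free_port()
print(or_port, pt_port)
```

The two values would then be passed to docker run via -e and -p exactly as in the shell wrapper.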

@SvenDowideit
Contributor

I wonder if the right solution these days is to implement #26331 as a volume plugin - that might even simplify some of the UX complexities of the original PR (and allow experimentation).

@kenttanl

kenttanl commented Dec 4, 2020

Is it possible to inject the port into the container as an environment variable?

@alan-czajkowski

alan-czajkowski commented Dec 7, 2020

@kenttanl yes, of course, see @NullHypothesis's solution above #3778 (comment) ... but this is an "ugly" solution and the community is requesting an "elegant" solution

@begginersbyte

begginersbyte commented Feb 19, 2021

+1 This is extremely needed. It's been 7 years now...

@alan-czajkowski

is there a way to make this issue more visible so that it gets implemented? a different issue tracker? some kind of forum post?

@cpuguy83
Member

This is the best place.

I was thinking that if someone wanted to do this via templating, where someone could specify a template as an env var for example, it could work.

@bobaoapae

+1

@snxraven

snxraven commented Jan 28, 2022

Tonight I decided to just query the docker api using a small express server running on the host node from within the container.

const express = require('express')
const unirest = require('unirest')

const app = express()
const port = 3000

app.get('/', (req, res) => {
    const id = req.query.id

    if (!id) {
        res.send('I do not think you belong here.')
    } else {
        unirest.get('http://localhost:2375/containers/' + id + '/json')
            .header('Accept', 'application/json')
            .end(function (response) {
                console.log(response.body.NetworkSettings.Ports)
                res.send(response.body.NetworkSettings.Ports)
            })
    }
})

app.listen(port, () => {
    console.log(`app listening on port ${port}`)
})

Then simply query the webserver using curl from inside your container: curl 172.1.0.1:3000/?id=yourcontainersname

The webserver will then respond with the ports docker exposed for the requested container name.

The code may need tweaking of course for your needs but this worked out for me pretty well and I plan on expanding on this for myself a bit further.

For my purpose, I only need to grab one port number from the list of exposed ports so I can post-configure some software automatically. Here it is working :)

[user@exampleserver]# curl localhost:3000?id=342128351638585344
32902

Side note, to get the single port, I did this:

let responseKeys = Object.keys(response.body.NetworkSettings.Ports);
let selectedPort = responseKeys[3].replace("/tcp", "");
console.log(selectedPort)

I hope this idea helps someone at least

@crystalww

any elegant solutions?
