Detect exposed ports from inside container #3778
Comments
I'm interested in this as well. Is there a way to do this? |
yes :). if you bind mount the docker client and /var/run/docker.sock into your container, you can inspect yourself. This is an insecure approximation of introspection, which is being worked on (e.g. #4332). The long-term plan is to provide a safe way to do this - and I'm presuming that includes only allowing containers to look up their own info, without the writable risks. |
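A rough sketch of that self-inspection approach (assuming the default socket path, a statically linked docker client binary, and the default behaviour where the container's hostname is its short container ID):
# Mount the Docker client and socket so the container can inspect itself
# (only works if the docker binary has no missing library dependencies inside the image)
docker run -d \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v "$(which docker)":/usr/bin/docker \
  myimage:latest
# From inside the container, the hostname defaults to the short container ID,
# so the container can look up its own port mappings:
docker inspect --format '{{ json .NetworkSettings.Ports }}' "$(hostname)"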
I am interested in that as well. Here is a scenario: on startup my containers have to register themselves (ip + port) in a service discovery directory. It means that I have to know both the external IPs and ports. |
WRT service discovery: I make heavy use of Consul for this purpose. I leave a Consul agent listening on each host that I use. That agent knows what its IP address is, and whatever registers with that host is assumed to be running on that host. As noted by others, when registering with a service discovery tool, knowing the actual exposed port is critical. I don't much like the idea of needing to have something on the host (outside the container) register the service that's in the container; I'd rather the container register itself, but that requires that the service inside the container knows what port to register with. I'm guessing that it would be possible to feed the real port into the container via the environment. As noted in this bug and others similar to it (#7421), determining the "proper IP" is not something that is always easily done algorithmically, as the host may have more than one interface. With that said, when the container is being created, the entity creating it (be it a person, service, etc.) should be able to pass the IP in as an environment variable on its own (since while the correct IP may not be known to Docker, it should be known to whatever is doing the creating). What the creator doesn't know is what random port was chosen by Docker as the exposed one. And hard-coding isn't always desirable, since each host may have different containers running and using different ports. |
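As an illustration of that last point, the creator could pass the host IP in itself, leaving only the dynamically assigned port unknown. A minimal sketch, assuming the correct interface is eth0 and using a placeholder image name:
# Pick the host IP from the interface the creator knows to be correct (assumption: eth0)
HOST_IP=$(ip -4 addr show eth0 | awk '/inet / {print $2}' | cut -d/ -f1)
# Pass it into the container; the dynamically chosen host port is what remains unknown
docker run -d -P -e "HOST_IP=$HOST_IP" myservice:latest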
I am interested in that, to automatically change the port livereload listens to. |
+1 |
We require this as well for service discovery. I know there are other ways of solving that, but I would much prefer it if a container could register itself instead of using linked containers etc. That just creates more moving parts; in a microservice architecture where you already have a lot of moving parts, any way of reducing them is useful. |
+1. Absolutely a requirement for service discovery. Right now there are hacks and workarounds, but the container really needs some way to either directly query the host daemon or have the information passed into the environment at runtime. |
see #8427 |
+1! |
+1. This is needed for service discovery of apps running on multiple hosts. |
+1. I badly need this feature for distributed testing of different devices, where a test framework running in a container has to tell the device under test which port number to answer on. For now I use a bridged interface as a workaround, but it would be great to know the port mappings. |
+1 for example if you run Couchbase in a container, you need to know your (dynamically mapped) bucket port, which cannot be set on the client but is discovered via Couchbase-internals. |
+1. We have an application that registers itself with another one of our services. We are currently using a small script to calculate a port to explicitly map and injecting it as an env variable. We'd like to start using things like the mesosphere stack and even ECS but current solutions in these environments start to look like ugly hacks really quickly. |
+1 hacking around to have this, need to get my containers auto-registered into Consul... |
@deardooley yeah, I saw that one… Probably the best bet at this moment. Thanks |
If that doesn't work, maybe check out adama/serfnode. It's an in-house project, but it does work well for a class of problems. Maybe it will be a fit for you. Praying Docker gets this into trunk. Rion |
@danbeaulieu Mesosphere already has a solution for injecting the port mappings into your containers. This is because when you launch a Mesos Docker task, Docker isn't the one that picks the random port. Instead, Mesos picks one out of the range of ports that you have set up as available to use. They are also nice enough to inject those port mappings into your container as environment variables! So let's say I have this task specification:
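(The original task specification is not reproduced above. As a hedged sketch, a Marathon app definition along the following lines exercises the behaviour described; the app id, image, and resource values are placeholders, and the exact JSON schema differs between Marathon versions.)
# POST a minimal Docker app definition to Marathon; hostPort 0 asks Mesos to pick a port from its offered range
curl -X POST http://marathon.example.com:8080/v2/apps \
  -H "Content-Type: application/json" \
  -d '{
        "id": "/port-demo",
        "cmd": "env",
        "cpus": 0.1,
        "mem": 64,
        "instances": 1,
        "container": {
          "type": "DOCKER",
          "docker": {
            "image": "busybox",
            "network": "BRIDGE",
            "portMappings": [ { "containerPort": 8080, "hostPort": 0, "protocol": "tcp" } ]
          }
        }
      }'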
Because my only command was env, we can see on stdout in the Mesos interface what environment variables are being passed to the container:
So you can see your application can detect the external port by using the PORT_ environment variable. You can also get the external hostname with the HOST environment variable. With those two things you should have all of the data that you need to give to your service discovery framework. FYI, I have only tested this with a Marathon deployment, but hopefully they implemented this at the framework level so that a Docker deployment from any of the tools (e.g. Chronos) would include this info. |
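For example, an entrypoint script could hand those injected values straight to a service registry before starting the app. A minimal sketch, assuming a Consul agent is reachable at $HOST:8500, the container port is 8080 (so Marathon injects PORT_8080), and "myservice" is a placeholder for the real binary and its flags:
#!/bin/sh
# Register this instance with Consul using the Mesos/Marathon-injected variables
# (HOST and PORT_8080 are assumed to be present in the environment)
curl -s -X PUT "http://${HOST}:8500/v1/agent/service/register" \
  -d "{\"Name\": \"myservice\", \"Address\": \"${HOST}\", \"Port\": ${PORT_8080}}"
# Then start the real application on the container-internal port
exec myservice --listen 0.0.0.0:8080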
@ajodock Ah that is great info I missed in the docs. (Though I do wish there was more documentation on the full implications of using network:BRIDGE) My use case is a bit more complex and would need further functionality than just port injection but it is outside of the scope of this issue. Thanks for the info! |
In case any of you +1 people have cared to subscribe… stop with it already. That meme is so frustrating. If you have nothing to contribute, don't say anything. |
@vmaatta just trying to avoid having the issue closed because of "lack of interest". This feature is very much needed to achieve proper service discovery when running containers across a cluster, for example when specifying the resources needed by a container in a dynamic way (using Swarm, for example). Having access to the dynamic port assigned by Docker would allow services to register themselves without having to run agents like Consul or use other tools such as registrator. |
👍 |
++ |
I think my own solution for CSGO will be:
ID=$(docker run -d -P myown_csgo:latest) && docker exec …
Test if it worked: |
Supporting templating in various things, e.g. env and labels, seems reasonable to me. |
I think in Linux you are free to look up a free port prior to starting the containers and then pass that in an environment variable. You should not depend on anything more than that, or a simple variable, as I think a good practice is to have Docker-unaware applications... (i.e. they could run without Docker) |
@MartinKosicky ; actually it's not always true that the port can be pre-selected. When using docker-compose, docker-compose chooses the port for you when scaling. There is no way for the container to know what port was assigned to it. For applications that require the actual exposed port (because the app might expose it as a URL, for example; or it might self-register with service discovery), this is a big problem. It seems like such a simple thing to just provide metadata as an environment variable; the app doesn't need to know about the metadata at all, the container entrypoint can take care of everything. The app only needs to have a concept of exposed ports vs. listening ports; which many do. |
@erichorne .. hmm, yes, you're right; in that case you need a component outside your container to tell you what port you are forwarded to... Well, in that case, Docker, why not give us this feature? :) In that case the nicest solution would be not to pass that as an environment variable, but, prior to running your main app, to run a script that contacts something outside Docker to give you the port mappings (using your HOSTNAME and an env-defined loopback)... and this script would set those ENV variables which I previously said you should pass... It's not very nice, but I would not sacrifice the ability to have my app run outside Docker the same way as inside; environment variables give you this decoupling. |
Cannot believe this is not a feature. People here have wasted hours, months and years developing hacks to solve this, but the Docker folks seem to think it's not important. |
@lakshaykaushik2506 Every feature takes time to design and build. If you want something on an OSS project then you should make a formal proposal and back that up with some engineering. It's not Docker Inc's job to take on every feature request. It's also possible to do this without having a feature built in to Docker, and despite what many people think, Docker does strive not to make lock-in style features where you write applications that only work in Docker. I do like the idea of supporting templates for things like env vars, and this was brought up very recently, however there are security implications for such a feature (as in, it's difficult to provide authorization control for these templates) which need to be fully thought through. |
@cpuguy83 "It's not Docker Inc.'s job to take on every feature request." <-- that is not entirely true, it is absolutely Docker Inc.'s job to take on feature requests with a strong demand or with many votes, otherwise, if Docker Inc. does not do this then it risks disenfranchising the Docker community and it creates resentment, hurting Docker Inc. |
any solution???? |
+1 |
Another hacky solution: one could mount /var/run/docker.sock read-only into the container and query the Docker API from inside it. Now your application works inside and outside a container. |
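A sketch of that read-only-mount approach (assuming the image provides a curl new enough to support --unix-socket, and that the container's hostname is its container ID, which is the default):
# Mount the Docker socket read-only into the container
docker run -d -v /var/run/docker.sock:/var/run/docker.sock:ro myimage:latest
# Inside the container, query the container's own metadata over the socket
# and read the port mappings out of .NetworkSettings.Ports in the JSON response
curl -s --unix-socket /var/run/docker.sock "http://localhost/containers/$(hostname)/json"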
Unfortunately you can PUT/POST to a read-only sock file, so from a security perspective that would be bad.
# Note that no containers exist
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
#Start up a container and do a POST from inside it to the sock file
$ docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock:ro centos:centos7 curl -XPOST -H "Content-Type:application/json" --unix-socket /var/run/docker.sock http:/containers/create?name=test -d '{"Image": "centos:centos7"}'
{"Id":"1ebde038faadb1ef6765b5076df82a0894b8803e9679a74fd0c810c8dea2aaed","Warnings":null}
#Verify that the test container was created from the post
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1ebde038faad centos:centos7 "/bin/bash" 3 seconds ago Created test
Sockets expect two-way communication (you have to write a request in order to read anything). In this case the sock file is pointing you at an API, and to read anything you need to write your GET request to the sock. All that making it read-only does is prevent you from changing its file permissions or deleting it, but that is about it. If anyone were to break into your container they could easily start up a new privileged container with full access to the host filesystem/networking/etc. |
Unfortunately #26331 has been closed. |
My "solution" to this issue is to have a shell script that wraps the function get_port {
# Here's where the following code snippet comes from:
# <https://unix.stackexchange.com/questions/55913/whats-the-easiest-way-to-find-an-unused-local-port>
read LOWERPORT UPPERPORT < /proc/sys/net/ipv4/ip_local_port_range
while :
do
port="`shuf -i $LOWERPORT-$UPPERPORT -n 1`"
ss -lpn | grep -q ":$port" || break
done
echo "$port"
}
# Determine random ports.
OR_PORT=$(get_port)
PT_PORT=$(get_port)
# Keep getting a new PT port until it's different from our OR port. This loop
# will only run if we happened to choose the same port for both variables, which
# is unlikely.
while [ "$PT_PORT" -eq "$OR_PORT" ]
do
PT_PORT=$(get_port)
done
# Pass our two ports and email address to the container using environment
# variables.
docker run -d \
-e "OR_PORT=$OR_PORT" -e "PT_PORT=$PT_PORT" -e "EMAIL=$EMAIL" \
-p "$OR_PORT":"$OR_PORT" -p "$PT_PORT":"$PT_PORT" \
phwinter/obfs4-bridge:0.1
Needless to say, this is just an ugly workaround. I'd rather have users run the Docker container directly instead of having to run this wrapper script. |
I wonder if the right solution these days, is to implement #26331 as a volume plugin - that might even simplify some of the ux complexities of the original PR (and allow experimentation) |
Is it possible to inject the port into the container as an environment variable? |
@kenttanl yes, of course, see @NullHypothesis's solution above #3778 (comment) ... but this is an "ugly" solution and the community is requesting an "elegant" solution |
+1 This is extremely needed. It's been 7 years now... |
is there a way to make this issue more visible so that it gets implemented? a different issue tracker? some kind of forum post? |
This is the best place. I was thinking that if someone wanted to do this via templating (where, for example, you could specify a template as an env var), it could work. |
+1 |
Tonight I decided to just query the Docker API from within the container, using a small Express server running on the host node.
const express = require('express')
const app = express()
const port = 3000
var unirest = require('unirest');
app.get('/', (req, res) => {
let id = req.query.id;
if (!req.query.id) {
res.send('I do not think you belong here.')
} else {
var Request = unirest.get('http://localhost:2375/containers/' + id + '/json');
Request
.header('Accept', 'application/json')
.end(function (response) {
console.log(response.body.NetworkSettings.Ports)
res.send(response.body.NetworkSettings.Ports)
})
}
})
app.listen(port, () => {
console.log(`app listening on port ${port}`)
})
Then simply query the webserver using curl from inside your container:
curl 172.1.0.1:3000/?id=yourcontainersname
The webserver will then respond with the ports exposed via Docker for the name requested. The code may need tweaking for your needs, of course, but this worked out for me pretty well and I plan on expanding on it a bit further. For my purpose, I only need to grab one port number from the list of exposed ports so I can post-configure some software automatically; here it is working :)
Side note, to get the single port, I did this:
let responseKeys = Object.keys(response.body.NetworkSettings.Ports);
let selectedPort = responseKeys[3].replace("/tcp", "");
console.log(selectedPort)
I hope this idea helps someone at least |
any elegant solutions? |
What
There should be a simple way for a container to detect, from inside the container, the port mappings assigned to it.
Why
There are a variety of cases where an application needs to know the real external IP address and port at which it can be reached. Some examples:
The external IP can be detected reliably through the use of an intermediary. However, the port mappings cannot be reliably detected automatically.
Why don't you use non-dynamic ports?
The use of static ports (e.g. the docker run -p 1234:1234 syntax), plus hardcoding the same port mappings into the image, allows the container to know what its port mappings are without dynamic discovery. However, this solution does not allow you to run the same image in multiple containers on the same host (as the ports would conflict), which is an important use case for some images. It also assumes that the ports baked into the image will never be used by any other Docker image that a user is likely to install, which is not a very good assumption.
Why don't you use the REST API?
Allowing a container to have access to the REST API is problematic. For one thing, the REST API is read/write, and if all you need is to read your port mappings, that's a dangerous level of permissions to grant a container just to find out a few ports.