
expose port for web server #27

Open
jgallen23 opened this issue Jun 27, 2015 · 11 comments

@jgallen23

I'd like to use envy for some of my web development projects, but I'm not sure how to get access to the port the server is running on. For example, if I'm in an environment and my node server is running on port 8080, how do I map that to the outside world so I can hit it from a browser? Maybe have a command addport [public] [private] or something like that.

@progrium
Owner

Yeah, I'm thinking it would be something only admin users can do. And it would be a command like addport you suggested. It would just wire up some iptables. For implementation simplicity, it might mean we require running envy with --net=host.

I don't know if it should map to the published port of the inner container or to the port the inner container exposes internally. I kind of like the idea of it being the published port, so that the Envy public port is a persistent port mapping for the user. Once an admin sets up addport 80 8000 for that user, the user can stop and start different containers that publish on port 8000.
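For reference, the kind of rule such an addport command would wire up can be sketched as a tiny helper that just builds the DNAT arguments. This is a hypothetical sketch: build_dnat_rule, the rule layout, and placing it on the DOCKER chain are assumptions based on how Docker publishes ports, not Envy's actual implementation.

```shell
# Hypothetical sketch: the NAT rule an `addport <public> <private>` command
# might install. build_dnat_rule only constructs the iptables arguments so
# the shape of the rule is visible; the commented line below would apply it.
build_dnat_rule() {
  local container_ip=$1 public=$2 private=$3
  echo "-t nat -A DOCKER -p tcp --dport $public -j DNAT --to-destination $container_ip:$private"
}

# Applying it for a user's environment container would require root, e.g.:
#   sudo iptables $(build_dnat_rule "$CONTAINER_IP" 80 8000)
build_dnat_rule 172.17.0.9 80 8000
# → -t nat -A DOCKER -p tcp --dport 80 -j DNAT --to-destination 172.17.0.9:8000
```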

@jgallen23
Author

Sounds great. Are there any workarounds I could use now to expose a port? Would it work to get the internal IP of the container and then do some sort of iptables mapping on the host?

@jgallen23
Author

For some reason Docker isn't giving me the IP address of the container. I ran docker inspect on the container, but no IP shows up. Not sure if it's because of the Docker 1.7 networking refactor.

@jgallen23
Author

In case anybody else is having this issue, here is what I changed to get it working:

diff --git a/scripts/enterenv b/scripts/enterenv
index e8acbb2..0b43b1c 100755
--- a/scripts/enterenv
+++ b/scripts/enterenv
@@ -69,7 +69,7 @@ env-session() {
     docker rm -f "$session" &> /dev/null
     docker run -it \
       --name "$session" \
-      --net "container:$USER.$ENV" \
+      --net "host" \
       --env "HOSTNAME=$ENV" \
       --env "ENVY_SESSION=$session" \
       --env "ENVY_RANDOM=$RANDOM" \

and then ran envy with the --net host option.

@progrium let me know if you want this as a pull request or want to handle it differently.

@progrium
Owner

This will break your access to Docker, so it's not a solution; at best it's a workaround, if it happens to work fine for you.

@jgallen23
Author

What do you mean by breaking access to Docker?

@progrium
Owner

Every environment has a Docker-in-Docker instance for any Docker usage in that environment. The socket is mounted and exposed via an environment variable, but access to any published ports works by attaching the session to the network of the Docker-in-Docker container. You just changed that to attach to the host network instead. Besides that, I don't think environments should have access to the host network in this way. Or if they do, it should be the admin's decision, certainly not the default.

@jgallen23
Author

I ended up taking a different route on this. I built a little reverse-proxy container (envy-proxy) that watches for new Envy containers, so that app.name.example.com will proxy to the app environment on port 80. To get this to work, I needed to expose the user and environment name to the dind container. I opened pull request #41.
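The hostname-to-container routing can be sketched roughly like this. A minimal sketch only: parse_target is a hypothetical helper, and the label order is an assumption based on the app.name.example.com scheme described above plus the user.env container naming (e.g. smaccona.test) visible elsewhere in this thread.

```shell
# Hypothetical sketch of the hostname parsing a proxy like envy-proxy might do.
# Assumed scheme: <env>.<user>.example.com routes to the container named
# <user>.<env> (Envy's naming convention, e.g. smaccona.test).
parse_target() {
  local host=$1
  local env=${host%%.*}      # first label: environment name ("app")
  local rest=${host#*.}
  local user=${rest%%.*}     # second label: user name ("name")
  echo "$user.$env"
}

parse_target app.name.example.com
# → name.app
```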

@fgrehm
Contributor

fgrehm commented Jul 11, 2015

I tried digging deeper into this yesterday and came to the conclusion that there are at least two use cases we need to think about. I'm not sure Envy should or needs to support both, but one use case is exposing ports for a specific Envy session, and the other is exposing ports from the inner dind instance.

Exposing session ports is useful if we think of an Envy environment as the complete environment for hacking on a web app: you run the server right from the session, without having to prepare a Docker image and expose it from the inner dind instance.

On the other hand, if (for example) we want to use docker-compose inside an Envy session and hack on our projects using that, the app itself will run inside the nested dind instance, and it will make sense to implement something like @jgallen23 did.

Does that make sense? What are your thoughts around this?

@progrium
Owner

Still planning to use port redirects via iptables to solve this, but good point: sometimes we want to send traffic to the environment container and sometimes to the session container. It turns out, though, that since the session container runs attached to the network of the environment container, we only need to focus on the environment container.

@smaccona

smaccona commented Oct 7, 2015

@jgallen23 I have a couple of Envy instances running in DigitalOcean for trying out new dev, and I frequently need to expose ports externally. As @fgrehm mentioned, there are two typical cases:

  1. I am using Docker inside my Envy session to build an app, and I want to expose a port on the internal Docker container.
  2. I am trying something quick directly in the Envy session, and I want to expose a port on the session itself.

As @progrium mentioned, these are actually the same use case because the session container doesn't have its own network (do a docker inspect on a session container and you'll see it doesn't have an IP address) but is instead attached to the network of the environment container.

I handle these situations by using iptables on the Envy host (the DigitalOcean instance in this case). Since I only use these DigitalOcean instances for working in Envy, I have written a couple of convenience functions in my .bash_profile to avoid typing full iptables commands every time. Here they are; most of this is sanity checking on the arguments and could be omitted, but I didn't want to risk screwing up my iptables.

# list all mappings on the DOCKER chain
alias list="sudo iptables -n -t nat -L DOCKER --line-numbers"

# map a port on a docker container to an external port
map() {
  if [ "$#" -ne 3 ]; then
    echo "Usage: map <container-name> <container-port> <host-port>"
    return
  fi

  CONTAINER_IP=$(sudo docker inspect --format '{{ .NetworkSettings.IPAddress }}' "$1")
  if [ "$CONTAINER_IP" == "" ]; then
    echo "$1 does not appear to be a Docker container with a valid network - are you trying to map a port on an Envy session?"
    return
  fi

  # ports must be non-negative integers
  if ! [[ $2 =~ ^[0-9]+$ && $3 =~ ^[0-9]+$ ]]; then
    echo "Please specify integer port numbers for both <container-port> and <host-port>"
    return
  fi

  echo "Mapping $CONTAINER_IP port $2 from container $1 to host port $3"
  sudo iptables -t nat -A DOCKER -p tcp --dport "$3" -j DNAT --to-destination "$CONTAINER_IP:$2"

  unset CONTAINER_IP
}

# remove all mappings on a specified external port on the DOCKER chain
unmap() {
  if [[ "$1" == "" || ! $1 =~ ^[0-9]+$ ]]; then
    echo "Please specify integer host port number to unmap. Usage: unmap <host-port>"
    return
  fi

  # collect matching rule numbers in reverse order, so deleting them one by
  # one doesn't shift the numbers of rules we haven't removed yet (the
  # trailing space in the grep pattern avoids matching e.g. dpt:8000 when
  # unmapping port 800)
  for line_num in $(sudo iptables -n -t nat -L DOCKER --line-numbers | grep "dpt:$1 " | awk '{print $1}')
  do
    IPTABLES_LINES="$line_num $IPTABLES_LINES"
  done

  if [[ "$IPTABLES_LINES" == "" ]]; then
    echo "Port $1 does not appear to be mapped to a docker container on this host"
    return
  fi

  for line in $IPTABLES_LINES
  do
    echo "Removing port mapping $(sudo iptables -n -t nat -L DOCKER $line)"
    sudo iptables -t nat -D DOCKER "$line"
  done

  unset IPTABLES_LINES
}

Of course you can call these whatever you want - I just picked list, map, and unmap. Here are examples of using them from scratch, both directly from inside a session and from inside a docker container created inside your session.

We start from a blank slate with only Envy installed on our host:

root@dev2:~# docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                                      NAMES
49e6a3f56c65        progrium/envy       "codep '/bin/execd -e"   35 hours ago        Up 17 hours         0.0.0.0:22->22/tcp, 0.0.0.0:443->443/tcp   envy
root@dev2:~#

First, let's create a session - I'll use a new environment called test just to make things clear:

MBP13:~ smaccona$ ssh smaccona+test@45.55.35.248
Entering session...
root@test:/# 

On our host, we now see we have two new docker containers - one for the session (smaccona.15), and one for the docker-in-docker environment (smaccona.test):

root@dev2:~# docker ps -a
CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS              PORTS                                      NAMES
b55fe253d71e        smaccona/test          "/bin/bash"              39 seconds ago      Up 39 seconds                                                  smaccona.15
9005b3d46ed4        progrium/dind:latest   "/bin/dind"              40 seconds ago      Up 39 seconds                                                  smaccona.test
49e6a3f56c65        progrium/envy          "codep '/bin/execd -e"   36 hours ago        Up 17 hours         0.0.0.0:22->22/tcp, 0.0.0.0:443->443/tcp   envy
root@dev2:~# 

Note the session container doesn't have its own network:

root@dev2:~# docker inspect smaccona.15 | grep IPAddress
        "IPAddress": "",
        "SecondaryIPAddresses": null,
root@dev2:~# 

Now back in our Envy session, let's create a simple web server:

root@test:/# while true ; do echo "Hello world from session" | nc -l 80 ; done

On the Envy host, let's look at the existing port mappings and map our new port using the helper functions above (note we are mapping to the smaccona.test container, not the session container):

root@dev2:~# list
Chain DOCKER (2 references)
num  target     prot opt source               destination         
1    DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:443 to:172.17.0.6:443
2    DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:22 to:172.17.0.6:22
root@dev2:~# map smaccona.test 80 8000
Mapping 172.17.0.9 port 80 from container smaccona.test to host port 8000
root@dev2:~# 

From a terminal on my computer:

MBP13:dev smaccona$ curl 45.55.35.248:8000
Hello world from session
MBP13:dev smaccona$ 

So far so good. Now let's stop our tiny web server and run a docker container in our Envy session (I omitted the apt-get output for brevity):

root@test:/# while true ; do echo "Hello world from session" | nc -l 80 ; done
GET / HTTP/1.1
Host: 45.55.35.248:8000
User-Agent: curl/7.43.0
Accept: */*

^C
root@test:/# apt-get update && apt-get install docker.io
...
root@test:/# docker run -d -p 7000:5000 training/webapp
f3454fc0446dbec5c1033364e28f0ca47379804d9515c2aff5836170c007699d
root@test:/#

Note that the training/webapp image runs on port 5000 (in our inner container), but we are publishing it on port 7000 (in our docker-in-docker container). So 7000 is the port we should map on our Envy host (in this case, it will be exposed on the Envy host's port 8001):

root@dev2:~# map smaccona.test 7000 8001
Mapping 172.17.0.9 port 7000 from container smaccona.test to host port 8001
root@dev2:~# 

Testing on my computer:

MBP13:dev smaccona$ curl 45.55.35.248:8001
Hello world!MBP13:dev smaccona$ 

Finally, when I'm done:

root@dev2:~# list
Chain DOCKER (2 references)
num  target     prot opt source               destination         
1    DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:443 to:172.17.0.6:443
2    DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:22 to:172.17.0.6:22
3    DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:8000 to:172.17.0.9:80
4    DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:8001 to:172.17.0.9:7000
root@dev2:~# unmap 8000
Removing port mapping DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:8000 to:172.17.0.9:80
root@dev2:~# unmap 8001
Removing port mapping DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:8001 to:172.17.0.9:7000
root@dev2:~# 

Testing on my computer:

MBP13:dev smaccona$ curl 45.55.35.248:8001
curl: (7) Failed to connect to 45.55.35.248 port 8001: Connection refused
MBP13:dev smaccona$ 

This ended up being far more long-winded than I expected, but hopefully it will help someone! It would be useful to include this functionality directly within Envy, as @progrium is proposing.
