
Support Overlay Network #337

Merged: 17 commits merged into nginx-proxy:master on May 1, 2016

Conversation

baptistedonaux
Contributor

  • Migrate docker-gen from 0.4.2 to 0.5.0 (adds []Network support)
  • The nginx template now uses the container's current IP within its network.
  • Documentation updated. The proxy must be run with the --net=host option to reach all the different networks (overlay network support; tested with docker-compose --x-networking up -d).

@baptistedonaux
Contributor Author

If this PR is merged, the Circle CI container will still need to be created with the --net=host flag to deal with networks other than bridge (the default).

@md5
Contributor

md5 commented Jan 17, 2016 via email

@baptistedonaux
Contributor Author

The --net=host flag isn't required as long as the proxy container is in the same network as the containers it proxies. So if all the proxied containers (those with VIRTUAL_HOST) are in the default network (currently bridge), the flag isn't required.

With the new overlay networks and the --x-networking docker-compose flag (currently beta, in 1.5), every application is isolated in its own network. Containers in the bridge network can't reach containers in other networks. With the --net=host flag, the proxy container is on the host's network, so all virtual networks are reachable.

@md5
Contributor

md5 commented Jan 18, 2016

Based on the Getting started with multi-host networking document, it seems like they're recommending --net=my-net where my-net is the name of the overlay network.

I think this should be documented in its own section of the readme instead of adding --net= flags to every docker run in the existing readme examples. That new section should use --net=my-net instead of --net=host and should link to https://docs.docker.com/engine/userguide/networking/get-started-overlay/ or some other Docker documentation.
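
For example, that new section could show something along these lines (just a sketch: my-net is a placeholder overlay network, which needs the multi-host setup from the linked guide, and the images match the existing README examples):

docker network create -d overlay my-net

docker run -d -p 80:80 --net=my-net \
    -v /var/run/docker.sock:/tmp/docker.sock:ro \
    jwilder/nginx-proxy

docker run -d --net=my-net -e VIRTUAL_HOST=whoami.local jwilder/whoami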

@baptistedonaux
Contributor Author

Okay, the --net=host flag is probably too much and needs some explanation. The host "network" is the all-access solution for a front-end proxy.

@md5 Would you like to propose a README text block to insert?

@md5
Contributor

md5 commented Jan 18, 2016

I haven't really played around with overlay enough to write the section myself and I'm not sure when I'll get a chance to do so.

Regarding --net=host, I'm really reluctant to ever use it, since it means that the container shares all network namespaces with the host (including Unix domain sockets). I've seen weird, unexpected things happen when --net=host is used (e.g. moby/moby#5899), so I consider it an absolute last resort.

@baptistedonaux
Contributor Author

I understand. The best solution might be to aggregate the network namespaces. I'm currently looking for a better approach.

@baptistedonaux
Contributor Author

Currently, we can attach a container to several networks with docker network connect my_network my_container, but only to a single network at container creation time (moby/moby#17796).

That could be a better alternative to the --net=host flag.

@md5
Contributor

md5 commented Jan 18, 2016

I agree that connecting explicitly to all the relevant networks is the best approach. It looks like support for doing this in a docker-compose.yml was added in docker/compose#2564.

@baptistedonaux
Contributor Author

@md5 Please give me feedback on both the English wording and the content.

@md5
Contributor

md5 commented Jan 21, 2016

@baptistedonaux I opened baptistedonaux#1 with my suggestions.

One thing that updating the README made me wonder is whether some attempt should be made in nginx.tmpl to filter the containers to only include the ones in the same networks as the nginx-proxy instance itself. That would require the ability to identify the ID of the container in which docker-gen is currently executing. I think nginx-proxy/docker-gen#146 may be a step in that direction, but I'm not sure that's possible currently.

@md5
Contributor

md5 commented Jan 21, 2016

Looking closer at nginx-proxy/docker-gen#146, I don't think it actually does anything to help us figure out the container ID of the nginx-proxy container in which docker-gen is currently executing.

@baptistedonaux
Contributor Author

@md5 I was thinking about this feature. I'll try to implement it and get back to you.

@md5
Contributor

md5 commented Jan 21, 2016

It looks like docker-gen could determine this from /proc/self/cgroup. Here's an example with awk:

# awk -F/ '/cpu:/ { print $NF }' /proc/self/cgroup 
7f72ffdf3089df475a9cfe459fd0ec3b0d214686e794887ca17bc1b32a6f995b

docker-gen could pretty easily read the /proc/self/cgroup file itself directly, assuming this is consistent across Docker hosts.
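
To sketch what that could look like in Go (illustrative only, not docker-gen's actual code; it assumes the cgroup paths contain the full 64-character container ID, which, as it turns out later in this thread, is not true on every host):

package main

import (
	"fmt"
	"io/ioutil"
	"regexp"
)

// 64 hexadecimal characters, i.e. a full Docker container ID.
var containerIDPattern = regexp.MustCompile(`[0-9a-f]{64}`)

// currentContainerID scans /proc/self/cgroup for the ID of the container
// the current process runs in. Sketch only: it assumes the cgroup paths
// expose the full ID, which is not guaranteed on every Docker host.
func currentContainerID() (string, error) {
	data, err := ioutil.ReadFile("/proc/self/cgroup")
	if err != nil {
		return "", err
	}
	if id := containerIDPattern.Find(data); id != nil {
		return string(id), nil
	}
	return "", fmt.Errorf("no container ID found in /proc/self/cgroup")
}

func main() {
	id, err := currentContainerID()
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("current container ID:", id)
}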

Update README wording for overlay networking
@baptistedonaux
Contributor Author

@md5 To retrieve the current container name, I propose this awk command:

awk -F"-|\." '/1:/ {print $3}' /proc/self/cgroup

Then, getting the current container's networks requires knowing the current container name. So I propose two solutions:

  • Add a new field to the Docker struct with the name of the current container:
type Docker struct {
   Name            string
   NumContainers   int
   NumImages       int
   Version         string
   ApiVersion      string
   GoVersion       string
   OperatingSystem string
   Architecture    string
+  CurrentContainerName string
}
  • Add a new template method in template.go to retrieve the container name (using awk):
func getCurrentContainerName() (string, error);

In both cases, docker-gen needs a new feature implemented (or I need help).

@baptistedonaux
Contributor Author

@md5 What do you think of my proposals?

@md5
Contributor

md5 commented Jan 27, 2016

@baptistedonaux I don't think that using awk will be necessary. The Go code should be able to deal with /proc/self/cgroup directly.

I think adding the current container id (or name) to the Docker struct makes sense. I started looking at the code to do so, but the code for populating $.Docker is spread around a bit and I didn't end up writing any code.

@baptistedonaux
Contributor Author

@md5 I'm also trying it on a git branch, to prepare the next step (matching the proxy container to its networks).

@baptistedonaux
Contributor Author

I have implemented a parser for /proc/self/cgroup. I'm not satisfied with it, because parsing the file with split calls doesn't seem very "conventional" to me. For lack of a better solution, here is my implementation.

baptistedonaux/docker-gen@dc155e1

I'm waiting for your implementation or your feedback.

@md5
Contributor

md5 commented Jan 28, 2016

I commented on the commit. Perhaps @jwilder can take a look when he has a minute.

@jwilder
Collaborator

jwilder commented Jan 28, 2016

Agree with all of @md5's comments. The format of my /proc/self/cgroup matches @md5's as well.

@baptistedonaux
Contributor Author

I propose this PR nginx-proxy/docker-gen#158

@pitkley
Contributor

pitkley commented Mar 24, 2016

@baptistedonaux I've got good news and bad news. I found the issue:

$ docker run nginx cat /proc/self/cgroup
11:name=systemd:/user/1002.user/4.session/lxc/box185
10:hugetlb:/user/1002.user/4.session/lxc/box185
9:perf_event:/user/1002.user/4.session/lxc/box185
8:blkio:/user/1002.user/4.session/lxc/box185
7:freezer:/user/1002.user/4.session/lxc/box185
6:devices:/user/1002.user/4.session/lxc/box185
5:memory:/user/1002.user/4.session/lxc/box185
4:cpuacct:/user/1002.user/4.session/lxc/box185
3:cpu:/user/1002.user/4.session/lxc/box185
2:cpuset:/user/1002.user/4.session/lxc/box185

The container ID doesn't get exposed by Circle-CI through /proc/self/cgroup, thus the generation of the config fails. One alternative to using cgroups is the $HOSTNAME environment variable in conjunction with /proc/self/mountinfo:

# echo $HOSTNAME
991c24d485ec

# cat /proc/self/mountinfo
664 652 0:36 /box1259/rootfs/var/lib/docker/btrfs/subvolumes/c9e1b36a7b2c52348f03888a6f05599db02a301f2f6a59b65fe6afeb189c3a02 / rw,nodev,noatime - btrfs /dev/xvdh rw,nodatasum,nodatacow,ssd,space_cache
665 664 0:271 / /proc rw,nodev,relatime - proc proc rw
666 664 0:272 / /dev rw,nosuid,nodev - tmpfs tmpfs rw,mode=755,uid=231072,gid=231072
667 666 0:273 / /dev/pts rw,relatime - devpts devpts rw,gid=231077,mode=620,ptmxmode=666
668 664 0:274 / /sys/fs/cgroup rw,nosuid,nodev,noexec,relatime - tmpfs tmpfs rw,mode=755,uid=231072,gid=231072
669 664 0:36 /box1259/rootfs/var/lib/docker/containers/991c24d485ec3a13198b101a98152228194305243f10266fd0215d0689e4bb22/resolv.conf /etc/resolv.conf rw,nodev,noatime - btrfs /dev/xvdh rw,nodatasum,nodatacow,ssd,space_cache
670 664 0:36 /box1259/rootfs/var/lib/docker/containers/991c24d485ec3a13198b101a98152228194305243f10266fd0215d0689e4bb22/hostname /etc/hostname rw,nodev,noatime - btrfs /dev/xvdh rw,nodatasum,nodatacow,ssd,space_cache
671 664 0:36 /box1259/rootfs/var/lib/docker/containers/991c24d485ec3a13198b101a98152228194305243f10266fd0215d0689e4bb22/hosts /etc/hosts rw,nodev,noatime - btrfs /dev/xvdh rw,nodatasum,nodatacow,ssd,space_cache
672 666 0:269 / /dev/shm rw,nosuid,nodev,noexec,relatime - tmpfs shm rw,size=65536k,uid=231072,gid=231072
673 666 0:46 / /dev/mqueue rw,nosuid,nodev,noexec,relatime - mqueue mqueue rw
674 666 0:5 /null /dev/null rw,relatime - devtmpfs udev rw,size=30904560k,nr_inodes=7726140,mode=755
675 666 0:5 /zero /dev/zero rw,relatime - devtmpfs udev rw,size=30904560k,nr_inodes=7726140,mode=755
676 666 0:5 /full /dev/full rw,relatime - devtmpfs udev rw,size=30904560k,nr_inodes=7726140,mode=755
677 666 0:5 /tty /dev/tty rw,relatime - devtmpfs udev rw,size=30904560k,nr_inodes=7726140,mode=755
678 666 0:5 /urandom /dev/urandom rw,relatime - devtmpfs udev rw,size=30904560k,nr_inodes=7726140,mode=755
679 666 0:5 /random /dev/random rw,relatime - devtmpfs udev rw,size=30904560k,nr_inodes=7726140,mode=755
681 665 0:5 /null /proc/kcore rw,relatime - devtmpfs udev rw,size=30904560k,nr_inodes=7726140,mode=755
682 665 0:5 /null /proc/latency_stats rw,relatime - devtmpfs udev rw,size=30904560k,nr_inodes=7726140,mode=755
683 665 0:5 /null /proc/timer_stats rw,relatime - devtmpfs udev rw,size=30904560k,nr_inodes=7726140,mode=755

# grep "$HOSTNAME" /proc/self/mountinfo | head -n1 | perl -n -e '/([0-9a-f]{64})/ && print "$1\n"'
991c24d485ec3a13198b101a98152228194305243f10266fd0215d0689e4bb22

I would suggest adapting docker-gen to try to find the ID in cgroup, and if it doesn't find it there, use /proc/self/mountinfo similar to how I did it in pure shell commands above.
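
A rough Go sketch of that fallback half (names are illustrative only, not docker-gen's actual code; it assumes the container hostname is still the short container ID, which, as the next comment points out, can be overridden):

package main

import (
	"fmt"
	"io/ioutil"
	"os"
	"regexp"
	"strings"
)

var idPattern = regexp.MustCompile(`[0-9a-f]{64}`)

// containerIDFromMountinfo looks for a full container ID on a mountinfo line
// that also mentions the container hostname (the short ID by default).
// Sketch only: it fails if the hostname was overridden with -h/--hostname.
func containerIDFromMountinfo(hostname string) (string, error) {
	data, err := ioutil.ReadFile("/proc/self/mountinfo")
	if err != nil {
		return "", err
	}
	for _, line := range strings.Split(string(data), "\n") {
		if hostname == "" || !strings.Contains(line, hostname) {
			continue
		}
		if id := idPattern.FindString(line); id != "" {
			return id, nil
		}
	}
	return "", fmt.Errorf("no container ID found in /proc/self/mountinfo")
}

func main() {
	hostname, _ := os.Hostname() // same value as $HOSTNAME inside the container
	id, err := containerIDFromMountinfo(hostname)
	if err != nil {
		fmt.Println("fallback lookup failed:", err)
		return
	}
	fmt.Println("current container ID:", id)
}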

@baptistedonaux
Contributor Author

@pitkley I don't think $HOSTNAME is a good reference, because it can be overridden.

On my host I have /proc/self/mountinfo too, so I think this file can be a reliable source. I'll implement it.

@pitkley
Contributor

pitkley commented Mar 24, 2016

@baptistedonaux /proc/self/mountinfo can contain more than one 64-character hexadecimal ID, see my example. At least on Circle-CI, you will have to use the $HOSTNAME variable. There might be another alternative, but I haven't found one yet...

@thomasleveil
Contributor

Keep in mind that CircleCI compiles its own custom Docker Engine binary. Make sure the issue isn't specific to CircleCI.

@thomasleveil
Contributor

See thomasleveil@c85839b for switching back to TravisCI (if it helps)

@pitkley
Contributor

pitkley commented Mar 24, 2016

@thomasleveil Thanks for the input on that! I guess that this is rather specific, because I haven't heard of /proc/self/cgroup not providing the ID. I wouldn't "blame" it on the Circle-CI build without further analysis though, because it might have to do with the LXC setup they use rather than with the build itself.

Anyway, this would probably be a "custom-tailored" solution to get CI happy, with the side-effect of supporting the low percentage of other users who might run a special build of Docker that doesn't expose the ID through the cgroups directly.

While switching to Travis is an option (and a quick test shows that the cgroups are available), I'm not sure if @jwilder would be too happy about switching.

I think that this is @jwilder's call: "monkeypatch" docker-gen such that it tries to find the container ID elsewhere (like /proc/self/mountinfo), or switch CI to e.g. Travis?

@baptistedonaux
Contributor Author

@pitkley True, only @jwilder can switch to Travis CI.

Nevertheless, @thomasleveil was right: the last Travis build was successful. https://travis-ci.org/thomasleveil/nginx-proxy/builds/95239874

@pitkley
Contributor

pitkley commented Mar 24, 2016

@baptistedonaux Furthermore, I have tried to build and test this PR, which worked too. So Travis-CI definitely seems like a viable option! 👍

@jwilder
Collaborator

jwilder commented Mar 25, 2016

I'm fine switching to Travis CI. This project used to use it as well, but switched to Circle CI when the test suite was added. See: #246 for why.

@baptistedonaux
Contributor Author

@jwilder Great, the Travis failure seems to be solved.

@thomasleveil @pitkley Would somebody like to propose a PR for the switch from Circle CI to Travis CI, or should I add that change to the current PR?

@baptistedonaux
Contributor Author

@jwilder The Travis tests work.

@@ -73,6 +73,7 @@ load test_helpers
-v /var/run/docker.sock:/tmp/docker.sock:ro \
-v $BATS_TEST_DIRNAME/../nginx.tmpl:/etc/docker-gen/templates/nginx.tmpl:ro \
--volumes-from bats-nginx \
--expose 80 \
Contributor

Forgive me if this is covered in the previous comments, but why is this change needed?

Contributor Author

@pitkley @md5 The unit tests are fixed and Travis runs successfully.

@nikashitsa

@baptistedonaux any updates? Can't wait for it 😊

@baptistedonaux
Contributor Author

I'm waiting for feedback from @jwilder to move this PR forward.

/cc @md5 @pitkley @nikashitsa

@jwilder jwilder merged commit 1c98df2 into nginx-proxy:master May 1, 2016
@jwilder
Collaborator

jwilder commented May 1, 2016

Thanks @baptistedonaux!

@lessless

Does that mean that it's possible to use this with the v2 syntax?

@baptistedonaux
Contributor Author

This feature is only available with the v2 syntax. With v1, it's impossible to manage networks.
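
For illustration, a minimal docker-compose.yml sketch using the v2 syntax could look like the following (service and network names are just placeholders; the point is simply that the proxy shares a network with the proxied containers):

version: "2"

services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
    networks:
      - webproxy

  whoami:
    image: jwilder/whoami
    environment:
      - VIRTUAL_HOST=whoami.local
    networks:
      - webproxy

networks:
  webproxy:
    driver: bridge   # or overlay, given a multi-host setup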
