Feedback for docker 1.7/1.8 experimental networking features #14083
I created a network called test, ran two containers on that network, and tried to ping from one container to the other.
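The steps described would read something like the following with the experimental CLI; the driver, service names, and image here are placeholders, not the original commands:

    docker network create test
    docker run -itd --publish-service=a.test busybox
    docker run -itd --publish-service=b.test busybox
    # from inside the first container, ping the second by its service name
    docker exec -it <first-container-id> ping b.test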
@abdelrahmanhosny A couple of things.
Regardless, we would like to keep this issue for design questions rather than for discussing specific technical issues. If you have specific issues like these, please reach us in the #docker-network IRC channel, or open another issue once you confirm that it is a bug.
Excellent feature, thanks! Since it's possible to publish a service when running a container via the --publish-service flag, it would be nice to be able to optionally deregister the service (in one step) when stopping/removing a container.
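Until then, the cleanup appears to be manual. A sketch, assuming the experimental `docker service unpublish` subcommand and placeholder names:

    docker run -itd --publish-service=web.front busybox
    docker stop <container-id> && docker rm <container-id>
    # the service registration outlives the container and must be removed by hand
    docker service unpublish web.front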
A few issues.
Hi folks, I would like to see: multiple containers on the same host published under the same 'service domain'.
And running
Thanks!
I can't try multi-host with the overlay network; I get the following error:

    # docker -d --kv-store=etcd:127.0.0.1:2379
    WARN[0000] Running experimental build
    INFO[0000] Listening for HTTP on unix (/var/run/docker.sock)
    INFO[0000] [graphdriver] using prior storage driver "aufs"
    WARN[0000] Running modprobe bridge nf_nat failed with message: , error: exit status 1
    FATA[0000] Error starting daemon: Error initializing network controller: error obtaining controller instance: failed to initialize vxlan id manager: failed to initialize bit sequence handler: 100: Key not found (/docker) [4]

What am I doing wrong?
@diegomarangoni
@mavenugo
And I created an overlay network called test:
And I started the two containers on these two nodes:
But there are two problems here. I noticed that there is a warning when I start the second container on the overlay network: "WARN[0924] Failed adding service host entries to the running container: open : no such file or directory". Is it because of that? How did it happen?
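One way to verify whether the service host entries made it in, using a standard docker exec (the container ID is a placeholder):

    # each published service on the network should have an entry here
    docker exec <container-id> cat /etc/hosts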
@diegomarangoni we are working on a blog post to clearly describe all the prerequisites and the configuration details. In addition to the issue with the 3.13 kernel, we also have a PR opened in libkv to solve a specific problem with
@wf2004081522 you are missing the
@mavenugo
Hi, I've created a Vagrant setup to test the Docker multi-host networking; I hope it can help someone test it and give feedback, or at least serve as a setup example. Here is some feedback, I hope it helps:
@eckz My own feedback: I'll continue adding to this post as I think of things...
Hi,
There are some suggestions to upgrade the kernel to 3.16 or 3.19. Can I just upgrade my swarm-0 (master) to 3.19 now, or do I need to upgrade the kernel after creating the machine on AWS but before setting up swarm on the master and the individual nodes?
@wf2004081522 @jgkamat external access via the overlay network is something that we are exploring. One of the ideas that we are trying out now is to add the container to the
@eckz Thanks for the feedback. @jgkamat answered a few of them. Also, the
@rajnemani Yes, we currently have a prerequisite of a 3.16+ kernel in order to try the overlay driver. Your question is specific to Machine, so I would ping @nathanleclaire to give a more authoritative answer.
@mavenugo Upgrading to 3.19 resolved my 2nd issue from my earlier post; thank you for your help. But I still have the 1st issue: I cannot add swarm-1 (an additional node) to the swarm. @nathanleclaire, any thoughts on the following issue? When I try to run the following on swarm-1 (my first swarm node; swarm-0 is my master), I get the following weird error: "Maximum number of retries (60) exceeded". Note that in my post above I gave the wrong docker command; that command is for creating the swarm-1 node using docker-machine on AWS, and it works fine. The problem happens when I run the docker run command that I provided in this post. Thanks in advance for the help.
@mavenugo that's a great solution! It looks like it's working great right now! 😄 My only issue with it (so far, and it's not really an issue) is that the IP of the container shows the overlay IP when running
EDIT: It would also be nice to be able to activate the dual networks with their own flag, so that we don't have to publish ports if we don't need to. I'm not sure whether this behavior should be made the default or not.
@mavenugo Rather than container creation being exposed to the network option, can a container attach to / detach from a service/endpoint, and let service creation/deletion manage the network? Can we avoid specifying the network while creating containers, since we are mentioning the service anyway?
Question 2:
Question 3:
Thanks
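A sketch of the attach/detach flow being asked about, assuming the experimental `docker service publish/attach/detach` subcommands and placeholder names:

    docker service publish db.backend      # service/endpoint created on network 'backend'
    docker run -itd --name mydb redis      # container created without any network option
    docker service attach mydb db.backend  # bind the running container to the service
    docker service detach mydb db.backend  # unbind it again later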
So yesterday I watched the Docker channel video from DockerCon 2015 about networking, and of course I had to try it out. Here are my thoughts on what I've seen. Really glad you all figured out the right abstractions; things are starting to look usable.

At the moment, when you want to connect to an overlay network and also want to connect to the outside world, you need to configure two things, for example bridge and overlay. But you can't specify both on the command line in one command. This is all you get:

    docker run --publish-service=test1.multihost.overlay -i -t alpine /bin/sh
    docker run --publish-service=test1.bridge.bridge -i -t alpine /bin/sh

It seems you can't use two --publish-service flags, or some combined command divided with a comma or something.

Also, if you create the container with a bridge first and attach the overlay later, you don't get the same behaviour as when you create the container with the overlay network first and attach the bridge later. With the overlay network first you get a working /etc/hosts file; the other way around you don't. That is obviously kind of annoying. If you have an application that at startup needs to connect to something on the overlay network and to the outside world, it would probably fail, because one of the two isn't configured yet when the application starts, since you can't specify both on the command line (maybe there is a workaround, but I don't know of one yet).

Also, if you publish a network, you'll always have to unpublish it. The cool thing about the default network is that all the behaviour stays the same when you just deploy a container without any extra options. That means it will automatically publish, and if you are not aware of what you are doing you'll get lots and lots of published networks you don't need anymore. If I'm not mistaken, every single published network generates some traffic.

Sreenivas Makam mentioned --link. I haven't played with --link yet in this context, but I'm not sure how much sense that would make. As I understand it with overlays: every container connected to an overlay can talk to anything else connected to that overlay, and every container you deploy on an overlay will have its published name available to all other containers automatically. You can also choose the name:

    --publish-service=db.mynetwork.overlay

Not sure why I would still need --link for that, right?

"We are exploring the possibility of moving away from the need to have an external consistent store (represented by --kv-store) and have an embedded eventually-consistent layer such as Serf that we currently use in the overlay driver. But it needs a lot of work and we will keep you posted."

Maybe I'm wrong, but I believe Serf is not a reliable message transport. If you want to do that, you might want to look at the Consul code as an example of how to do it, because Consul also uses Serf, but in a reliable fashion. An option is to always use a KV store and register the Docker host IP address in the KV store, so you don't have to specify it on the command line to find other hosts. That way you wouldn't need to specify any IP addresses on the command line when starting Docker, just some way to connect to the KV store. Because right now you have to do two cluster join operations: one for the KV store (like Consul) and one for Docker. Obviously, as mentioned above, if Docker needs Consul at startup and you want to run Consul in a container, that is also kind of a weird situation. The easiest workaround is to use a 'system-docker' just like RancherOS has: basically two Docker daemons running on the same host.

I have not tried to look at performance either. I would think VXLAN, as it ended up in the kernel, would be fast, but sometimes how you use it can really make things slower.

Thanks again,
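As an aside on the dual-network point above: a two-step sequence like the following may work, assuming the experimental `docker service publish/attach` subcommands (the service names are made up), though with the /etc/hosts ordering caveat already noted:

    docker run --publish-service=test1.multihost.overlay -itd alpine /bin/sh
    # attach the same container to a bridge-backed service after creation
    docker service publish test1b.bridge
    docker service attach <container-id> test1b.bridge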
Great job, Docker networking team! Very useful feature. I have been able to successfully trial the basic overlay networking model with the guidance provided. I have a couple of questions regarding IPAM -
Another related question regarding IP addresses -
Will there be a way to assign a virtual IP to a service? For instance, two load-balancer containers would share a virtual IP to avoid a single point of failure. When the first LB container (the primary) goes down, the virtual IP is transferred to the second LB container (the backup).
I have an experimental swarm going but can't seem to create a network. This happens:

    [ec2-user@ip-172-31-47-154 ~]$ docker network create -d overlay intc

I'm guessing the "not found" is from an attempt to access a key/value store that the documents say the overlay driver needs. Since all my nodes are already set up, I don't need Machine to make them for me, but the only documents for wiring in the key/value store go through Machine. Do I have to scrap everything just so Machine can spin it back up and wire the consul store in? Or is there a way I can point the swarm nodes I have to the stand-alone node with the store on it?

Thanks,
@McAlister Swarm does not support docker experimental features.
@cpuguy83 that's rather confusing, then. Is this a red herring? https://github.com/docker/docker/blob/master/experimental/compose_swarm_networking.md
@McAlister You'll notice the example is not using the top-level networking APIs, it's only using
@cpuguy83 I think I found the proper set of instructions: https://github.com/docker/libnetwork/blob/master/docs/overlay.md Of course, now I get to curse at EC2 for being on kernel 3.14 ... /shakes fist in the general direction of the Oregon datacenter.
@McAlister You should be able to upgrade the kernel on your machines by changing the repository lists in /etc/apt/sources.list (to utopic?) and apt-get installing the new kernel. If you use Ubuntu 14.04, as root:
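Presumably something along these lines; the exact command was not given, and the HWE package name is an assumption:

    # as root on Ubuntu 14.04
    apt-get update
    apt-get install linux-generic-lts-utopic   # pulls in a 3.16 kernel
    reboot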
@jgkamat Amazon EC2 boxes are a bit less cooperative than that ... it is possible to change their kernel, but it's less straightforward than a yum update. https://aws.amazon.com/blogs/aws/use-your-own-kernel-with-amazon-ec2/ If 3.16 isn't “EC2 compatible” I'll have to bake cookies for our yum repo guardian to convince him to add the RPM for the 3.16 kernel to the corporate repository where my on-premise boxes can see it ... since for security reasons we aren't allowed to talk to public repos. Glory in the freedom of open source work, you lucky people =).
For reference, the Debian folks have put a free EC2 AMI in the Amazon Marketplace that grubs you up to kernel 3.16.0-4. Image name: Debian GNU/Linux 8 (Jessie). The only others I found were $$$ images from Oracle and Red Hat. When you release, you might add a list of docker-friendly AMIs to a FAQ somewhere for people deploying Docker into AWS.
Hrrmmm ... I'm seeing things in /etc/hosts but the traffic doesn't seem to be getting from point A to point B.

    admin@ip-172-31-33-164:/docker$ docker network ls
    admin@ip-172-31-33-164:/docker$ docker service ls

ec.intc is on host 1 and la.intc is on host 2.

    admin@ip-172-31-33-164:/docker$ consul members

Exec into either one and I see the proper /etc/hosts file.

Host 1:

    admin@ip-172-31-33-164:/docker$ docker ps

Host 2:

    admin@ip-172-31-44-7:/docker$ docker ps

However, they can't ping each other, and if I open a socket on one, the other can't write to it.

Host 1:

    / # nc -l -p 4444

Host 2:

    / # telnet ec.intc 4444

and nothing happens on host 1. If I open another shell on host 1 like so:

    admin@ip-172-31-33-164:~$ docker exec -it 61c6cd06c88c /bin/sh

then I see my listener on host 1 output my asdfasd as expected.

Host 1:

    / # ping ec.intc

Host 2:

    / # ping ec.intc

If I create another container on host 1, things work as expected.

Host 1:

    admin@ip-172-31-33-164:/docker$ docker run -itd --name mock-aa --publish-service=aa.intc.overlay busybox top

So I'm listening on mock-aa on host 1, then in another window in the ec-intc container:

    admin@ip-172-31-33-164:~$ docker exec -it 61c6cd06c88c /bin/sh

and I see the asdfasdf appear in the aa-intc container, also on host 1, in the shell where it is listening.

So if they are on the same host, all is good. Across hosts ... not so much. All the Amazon hosts in the test are in the same security group, and all the TCP ports are open between computers within that group. The /etc/hosts files on all the containers update appropriately when I add containers or take them out. The 'docker network ls' and 'docker service ls' commands likewise stay nicely in sync between all hosts in the network.

I'm not entirely sure how to debug this, so I'll sleep on it. I find it very intriguing that the telnet command appears to connect but doesn't get there. I would expect a "connection refused" or a "trying ec.intc ...." that eventually times out. I've never seen telnet appear to connect but not actually connect before.
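One check worth running here is whether VXLAN traffic actually leaves the sending host during a cross-host ping (assuming eth0 is the instance's interface):

    # overlay traffic should appear as UDP on the VXLAN port
    tcpdump -ni eth0 udp port 4789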
@McAlister Any update on the connectivity issue? I'm also trying to get Docker networking running on AWS EC2. See https://gist.github.com/felixrabe/bcc8f67d6c262443d2ef for my work-in-progress provisioning script. It's mostly Bash.
@McAlister I seem to remember that I once got two Docker containers on two AWS hosts successfully pinging each other, but I'm stuck trying to reproduce how I did that right now.
I figured out the missing part thanks to @mrjana on IRC (opening UDP port 4789 for VXLAN) and rewrote my provisioning script to
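For anyone else on EC2, the fix amounts to a security-group rule like the following; the group ID is a placeholder, and using the group as its own source limits the port to cluster members:

    aws ec2 authorize-security-group-ingress \
        --group-id sg-0123456789abcdef0 \
        --protocol udp --port 4789 \
        --source-group sg-0123456789abcdef0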
@felixrabe Sorry I didn't respond faster. The UDP port makes sense; I'll try it tomorrow and let you know. Thank you for digging into it too.
I am trying to set up containers which need to talk to some external servers in a meshed topology. For example, I have servers s1, s2, and s3 with IP addresses 10.0.0.1, 10.0.0.2, and 10.0.0.3, and three Docker containers c1, c2, and c3 on three separate servers s4, s5, and s6 with server IPs 10.0.0.4, 10.0.0.5, and 10.0.0.6. The containers use the overlay driver, and the IPs associated with c1, c2, and c3 are 172.17.0.1, 172.17.0.2, and 172.17.0.3 respectively. What routing mechanism should be used so that all three servers s1, s2, and s3 can talk to any container c1, c2, or c3? Do I need a route to the 172.17 network with one of the container-hosting servers acting as the default gateway to the container network, or is another mechanism suggested? So far I am able to establish connectivity to only a single container IP from an external server, by adding a route add command with the container-hosting node acting as gateway; the other containers are not reachable from external nodes.
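For concreteness, the workaround described in the last sentence looks something like this (run on an external server such as s1, with 10.0.0.4 standing in for the container-hosting node used as gateway):

    # route the whole container subnet via one docker host; this is why
    # only that host's containers end up reachable
    route add -net 172.17.0.0 netmask 255.255.0.0 gw 10.0.0.4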
What is the process for allocating IP addresses to containers when they are launched on different hosts? How are unique IPs ensured across hosts? When launching containers on two different hosts, I get the same IP address, 172.17.0.1, allocated with the default setup.
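A quick way to see what each host handed out (standard docker inspect; the container name is a placeholder):

    docker inspect -f '{{ .NetworkSettings.IPAddress }}' mycontainer

With the default non-overlay setup, each host's docker0 bridge allocates from 172.17.0.0/16 independently, which is why the same address can show up on two hosts.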
A command to retrieve the network types available during creation would be useful, e.g.:

    docker network create --help
    # Available Network Types:
    # - bridge
    # - host
    # - weave

or:

    docker network list-types
    # Available Network Types:
    # - bridge
    # - ...
This should probably be closed since #16645 was merged. |
Agreed, let's close this (but feel free to comment after it's closed) |
Using this issue to solicit feedback on the exciting new networking experimental features introduced during the Docker 1.7 / 1.8 release time-frame.
https://github.com/docker/docker/blob/master/experimental/networking.md