This repository has been archived by the owner on Feb 1, 2021. It is now read-only.

Proposal: Support the cross-host linking on swarm cluster #221

Closed
wants to merge 3 commits into from

Conversation

denverdino
Contributor

To enable cross-host linking on Swarm, I made the following enhancements to the current Swarm code base:

  1. Create an ambassador container if the linked container is on a different host.
    The ambassador container is created when a create/start container request has "Links" in its HostConfig, and it is removed automatically when the corresponding container is removed.

By default, the ambassador container uses the image "svendowideit/ambassador:latest"; this can be overridden with the AMBASSADOR_IMAGE environment variable, so a dynamic cross-host Docker link ambassador could be provided in the future.

  2. Provide new filters to handle --volumes-from and --net=container:xxx.
    The new container will be co-located with the container whose volumes or network configuration it shares.
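The two rules above can be sketched as follows. This is a minimal Python sketch of the intended scheduling behavior, not the actual Swarm (Go) code; the `place` helper, the `placement` map, and the simplified HostConfig fields are all illustrative assumptions:

```python
# Sketch of the proposed rules (hypothetical helper, not Swarm's real scheduler).
# `placement` maps an existing container name to the node it runs on;
# HostConfig fields are simplified for illustration.

AMBASSADOR_IMAGE = "svendowideit/ambassador:latest"  # overridable via env

def place(host_config, placement, default_node):
    """Return (node, ambassadors) for a new container.

    - --volumes-from / --net=container:xxx force co-location (the new filters).
    - A link to a container on a different host triggers an ambassador."""
    # Co-location filters: the new container must land on the same node as
    # any container whose volumes or network namespace it shares.
    for name in host_config.get("VolumesFrom", []) + host_config.get("NetFrom", []):
        return placement[name], []

    node = default_node
    ambassadors = []
    for link in host_config.get("Links", []):  # entries like "some_mysql:mysql"
        target = link.split(":")[0]
        if placement[target] != node:
            # Linked container lives elsewhere: create a local ambassador for it.
            ambassadors.append((target, AMBASSADOR_IMAGE))
    return node, ambassadors
```

For example, creating a container with `"Links": ["some_mysql:mysql"]` on a node other than the one running `some_mysql` yields one ambassador entry, while `"VolumesFrom": ["data"]` simply pins the new container to `data`'s node.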

NOTE:
To make cross-host linking work properly on Swarm, network access between the containers in the cluster must be enabled. I leveraged Kubernetes code to create a sample Swarm cluster config with Vagrant. Details can be found at https://github.com/denverdino/docker-swarm-vagrant

Test cases

a) For volume filter

docker run -d -v /foo --name data busybox true
CID=$(docker run -d --volumes-from data busybox true)
docker inspect --format="{{.Volumes}}" $CID

You will see the volume created from the "data" container.

b) For net filter

docker run -d --name nettest nginx
CID=$(docker run -d --net=container:nettest busybox /bin/sh -c "while true; do echo hello world; sleep 1; done")
docker inspect --format="{{.NetworkSettings}}" $CID

You will see that no IPAddress is assigned to the 2nd container.

c) For linked containers

docker run --name some_mysql -e MYSQL_ROOT_PASSWORD=password -d mysql
docker run --name some_wp -p 8888:80 --link some_mysql:mysql -d wordpress
docker run --name some_wp1 -p 8888:80 --link some_mysql:mysql -d wordpress
docker run --name some_wp2 -p 8888:80 --link some_mysql:mysql -d wordpress
docker ps

The wordpress containers will be placed on different hosts, and they will all connect to the same mysql database. An ambassador container for mysql will be created if the wordpress and mysql containers are not on the same host.

d) For fig template
fig.yml is as follows:

web:
  image: wordpress
  ports:
    - "8000:80"
  links:
    - db:mysql
db:
  image: mysql
  environment:
    MYSQL_ROOT_PASSWORD: password

It needs fig with the enhancement for Swarm cluster support from https://github.com/denverdino/fig

Thanks


Signed-off-by: Li Yi <denverdino@gmail.com>
Signed-off-by: Li Yi <denverdino@gmail.com>
@SvenDowideit
Contributor

ohwow. this is really an awesome demo of swarm - almost magic :)

Signed-off-by: Li Yi <denverdino@gmail.com>
@phemmer

phemmer commented Jan 6, 2015

I really don't like this (the linking & proxying).

  1. This re-implements the userspace proxy, the thing we have been trying to get rid of in Docker for ages because of all the problems it causes.
  2. This presents security issues, as a previously unexposed container is now exposed.
  3. There is no advantage over just exposing the destination container directly. Instead of exposing container B, which redirects to container A, just expose container A.
  4. Docker is supposed to be lightweight. Having to start another container goes against this goal.

I think if cross-host container communication is to be supported via links, it needs to be done via some sort of virtual layer 2 or layer 3 network linking the hosts together. docker/8951 exists for just this purpose. I think we should be pushing that proposal forward (or something similar) instead of trying to implement some substitute in swarm.

@denverdino
Contributor Author

@phemmer

Container linking and networking support are related, but they are still different.

I like the proposal for Docker multi-host networking; it will solve connectivity between containers.

In my point of view, linking is more than connectivity: it provides an abstraction for how other containers access a service endpoint. Cross-host container linking is important for building distributed applications on Swarm. With it, developers can build and test a Docker Compose/Fig template on a single Docker host, then deploy it on a Swarm cluster. There are several discussions related to that: Issue 146, issues/144

To enable cross-host linking, besides container connectivity, there is still some work to do.

The container can get the hostname from env variables, or use the alias as a hostname, to access the linked container. We can get the destination container's IP and set the env variables for the source container. That is the reason we introduce the ambassador container and leverage Docker to do that.

Docker is supposed to be lightweight. Having to start another container goes against this goal.

I agree. This approach mitigates the current gap. Another way is to use --add-host to add a line to /etc/hosts, simulating the link when the containers are on different hosts.
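The --add-host alternative could be sketched like this. This is a hypothetical Python helper (not part of this PR's code); it assumes the linked container's IP has already been obtained, e.g. via `docker inspect` against the remote node:

```python
def add_host_args(links, resolve_ip):
    """Build `docker run` --add-host flags that simulate links across hosts.

    links: entries like "some_mysql:mysql" from HostConfig.Links
    resolve_ip: callable returning the linked container's IP address
                (e.g. from `docker inspect` on the node where it runs)."""
    args = []
    for link in links:
        name, _, alias = link.partition(":")
        alias = alias or name  # a link without an alias uses the container name
        # --add-host writes "alias ip" into the new container's /etc/hosts,
        # so the alias hostname resolves just like it would for a local link.
        args += ["--add-host", f"{alias}:{resolve_ip(name)}"]
    return args
```

For `["some_mysql:mysql"]` and an IP of 10.0.0.5, this yields the flags `--add-host mysql:10.0.0.5` to append to the container creation request.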

This presents security issues as this previously unexposed container is now exposed.

Actually, linked containers can provide more fine-grained security control. E.g., on a single host, when --icc=false is specified, a container can only talk with its linked containers. We can enhance that for multi-host as well, letting Swarm control the firewall rules for access protection.

BTW, I hope the Swarm networking model is pluggable, as there will be different requirements for network settings. This proposal doesn't make many assumptions about the underlying network implementation.

Thanks

@ghost

ghost commented Jan 7, 2015

+1

@aluzzardi
Contributor

Thank you so much for this proposal, @denverdino.

While I strongly agree that we need to have a solution for cross-host linking, I believe the foundation should be provided by Docker itself rather than Swarm.

We should definitely support this use case, but it shouldn't be built into Swarm - rather, it should be possible to build your solution on top of Swarm by using the API.

Also, what I'd like to see long-term is advanced networking support in Docker with Swarm leveraging it to build cross-linking.

@aluzzardi aluzzardi closed this Jan 12, 2015
@denverdino
Contributor Author

@aluzzardi Sure, if Docker can provide built-in cross-host linking, that will be great. Looking forward to it.

@gastonmorixe

Guys, I am struggling to understand the use case of swarm without links.

Say a startup's stack uses redis, elasticsearch, rails, and postgres. Why is it so difficult for swarm to pass the correct IP+PORT env variables to each container?

Thanks

@docteurklein

@imton First, there is no way to ensure the 2 containers can communicate with each other without opening a port on the host and doing a NAT redirection, which means opening the service to the rest of the world.

Only a private virtual switch can make them communicate in a secure way, kind of the same as the docker0 one, but attached to multiple hosts.

Then, swarm is just a smart proxy: it delegates container creation to whatever node can fulfill the constraints.
The problem lies in the way docker handles links: each docker node has a local database that is used to resolve links. If you try to link to a container that is not in this local database, it will fail.

So in order to make cross-host links work, we'd need to distribute the database across all nodes, or something similar.

@denverdino
Contributor Author

@docteurklein
Yes, to enable cross-host linking between containers:

  1. Native Docker multi-host networking.
    Proposal: Native Docker Multi-Host Networking moby/moby#8951
    That means containers on different hosts can interact with each other directly.
  2. Same behavior between cross-host linking and single-host linking.
    The container uses the env variables and the alias hostname to communicate with the linked container.
    In Docker Swarm, before creating a container, we can inspect the linked container on another host and emulate the link by setting the env variables and using --add-host to set the alias hostname in the container creation request. After that, the cross-host link will behave the same as a local one.
  3. When a container is restarted, its IP will change. The IP address resolved via the alias hostname in the containers linked to it should be updated as well.
    For that, we need a distributed database to manage the relationships of linked containers, and we use that information to update the cross-host links.
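Point 2 (emulating the env variables a local link would inject) could look roughly like this. The variable names follow Docker's documented link conventions (e.g. MYSQL_PORT_3306_TCP_ADDR for a link aliased "mysql"); the helper itself is a hypothetical sketch, and the IP/ports are assumed to come from inspecting the remote container:

```python
def link_env(alias, ip, ports):
    """Emulate the env variables Docker injects for a local link.

    alias: the link alias (e.g. "mysql")
    ip:    the linked container's IP, obtained by inspecting it on its node
    ports: the exposed ports, e.g. [(3306, "tcp")]"""
    prefix = alias.upper()
    env = {}
    for port, proto in ports:
        base = f"{prefix}_PORT_{port}_{proto.upper()}"
        env[base] = f"{proto}://{ip}:{port}"          # e.g. MYSQL_PORT_3306_TCP
        env[f"{base}_ADDR"] = ip                      # ..._TCP_ADDR
        env[f"{base}_PORT"] = str(port)               # ..._TCP_PORT
        env[f"{base}_PROTO"] = proto                  # ..._TCP_PROTO
    if ports:
        # ALIAS_PORT points at the first exposed port, as with local links.
        port, proto = ports[0]
        env[f"{prefix}_PORT"] = f"{proto}://{ip}:{port}"
    return env
```

Injecting these variables into the container creation request, together with the --add-host entry for the alias, gives the new container the same environment it would see from a local link.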

Thanks

@gastonmorixe

@docteurklein @denverdino Thank you guys for being kind and explaining this. For the past 5 days I have been reading about multi-host Docker networking, and I now completely understand what all these problems are about.

How is the "distribute the database across all nodes" part planned to be solved? Consul seems to do exactly this.

@denverdino
Contributor Author

@imton
I am using etcd, but Consul or other registries are OK as well. Thanks

@imshashank

I think it will be super useful to have simple cross-host linking on Swarm clusters, as it will make it very easy to create a cluster with Docker nodes on different hosts.

7 participants