Questions #1420

Closed
alexec opened this Issue Nov 16, 2015 · 4 comments

alexec commented Nov 16, 2015

I'm trying to compare various solutions in the container cluster management space. I wondered if anyone would be so kind as to help me with the following:

  • Is there any out of the box service discovery?
  • How does Swarm self-heal when, for example, a container is up but returns HTTP 500 errors?
  • Can Swarm auto-scale?
  • Can Swarm schedule based on CPU or memory requirements?
  • Does Swarm automatically clean up defunct images and containers?
  • Does Swarm support any kind of persistent storage?
  • Does Swarm support the sharing of "secrets"? E.g. encrypted files with passwords in them.

Your help would be really useful -- thank you!

abronan commented Nov 16, 2015

Hi @alexec:

Basically, Swarm is the base clustering layer, managing the Docker daemons. You can run any other service on top of it and integrate with pretty much any other service discovery or orchestration tool.

To answer all the questions:

1 - Yes, you can use file/node discovery (see the docs on node discovery), or Consul, registrator, etcd, or any other DNS-based mechanism for service discovery (wagl is one example) on top of Swarm.
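
For illustration, here's a minimal sketch of bootstrapping Swarm on top of Consul (the IPs and key prefix are placeholders, not values from this thread):

```
# Run a Consul agent to act as the discovery backend (placeholder address).
docker run -d -p 8500:8500 --name consul progrium/consul -server -bootstrap

# Start the Swarm manager, pointing it at Consul.
docker run -d -p 4000:4000 swarm manage -H :4000 consul://<consul-ip>:8500/swarm

# On each node, advertise the local Docker daemon to the same backend.
docker run -d swarm join --advertise=<node-ip>:2375 consul://<consul-ip>:8500/swarm
```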

2 - Monitoring of containers is not baked in; the reason is that we support only the limited subset that is the Docker Remote API. You can, however, integrate monitoring directly on top of Swarm with tools like Sysdig.
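
As a rough illustration of what such an integration can look like, here's a hedged watchdog sketch (nothing below is built into Swarm; the endpoint, port, and container name are placeholders):

```
# Poll an app's HTTP endpoint and restart the container if it starts serving 5xx.
while true; do
  status=$(curl -s -o /dev/null -w '%{http_code}' http://<app-host>:8080/health)
  if [ "$status" -ge 500 ]; then
    # Placeholder names: a Swarm manager on :4000 and a container called my-app.
    docker -H tcp://<swarm-manager>:4000 restart my-app
  fi
  sleep 10
done
```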

3 - No, it can't by itself, but it's fairly easy to listen to events and to cluster-wide information from docker info, and to spin up another machine with docker-machine when no more resources are available or when the existing machines are running a high number of containers. In the future, docker-machine could provide a high-level API for us to spin up more machines and auto-scale (up or down).
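
As a sketch of that kind of glue (the threshold, driver, and manager address are made-up values for illustration):

```
# Naive scale-up check: add a machine when the cluster runs "too many" containers.
MAX_CONTAINERS=50   # arbitrary threshold for this sketch
running=$(docker -H tcp://<swarm-manager>:4000 info | awk '/Containers:/ {print $2; exit}')
if [ "$running" -ge "$MAX_CONTAINERS" ]; then
  docker-machine create --driver virtualbox "node-$(date +%s)"
  # ...then run 'swarm join' on the new machine so discovery picks it up.
fi
```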

4 - Yes, it can: for example, docker run -c 1 -m 2GB will schedule the container on a machine with 1 CPU and 2GB of RAM available. Swarm still does not support lower/upper limits for memory (the Docker engine supports that, though), but we can improve the scheduler based on soft memory limits.
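
Concretely, pointed at a Swarm manager (the endpoint and image are placeholders):

```
# Swarm will only place this container on a node with 1 CPU and 2GB of RAM free.
docker -H tcp://<swarm-manager>:4000 run -d -c 1 -m 2GB nginx
```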

5 - No, it doesn't, and the reason is actually pretty simple: we don't want to infer information that could lead to a wrong decision cluster-wide. If Swarm made the default assumption that an image is no longer used, a lot of images would be deleted wrongfully (just before they are used again globally; if an image is 2GB in size, that can be painful), also leading to useless pulls that consume a lot of network bandwidth. Image deletion policies should be defined by the user/admin, for example as a description that explicitly tells when an image of a certain type (identified by labels, say) should be deleted and after how long. This could definitely be an improvement, but Swarm should make the decision based on a model/pattern rather than on its own default assumptions (or, if it does use default assumptions, they should be very conservative).
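
To sketch what such a user-defined policy might look like (the label name is invented for illustration, and this assumes a Docker version whose docker images supports label filters; none of this is a Swarm feature):

```
# Periodically delete only images the operator explicitly marked as disposable.
docker -H tcp://<swarm-manager>:4000 images -q --filter "label=cleanup.policy=expendable" \
  | xargs -r docker -H tcp://<swarm-manager>:4000 rmi
```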

6 - You can use the Volume Plugin API supported since Docker 1.9. This way you can create distributed volumes on ZFS, GlusterFS, or Ceph and not worry about your containers moving around, as the data is distributed across hosts rather than localized on a single node.
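
For example, with a volume plugin installed (flocker is just one real driver name; the volume name and image are arbitrary):

```
# Create a named volume through the plugin, then mount it like any other volume.
docker volume create -d flocker --name dbdata
docker run -d -v dbdata:/var/lib/postgresql/data postgres
```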

7 - It does not by default, but you can use Swarm with other components like Keywhiz or Vault to manage secrets. The same goes for load balancing: you can use HAProxy/nginx or the excellent Interlock.
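
A minimal sketch of the Vault approach (the secret path, field name, and image are hypothetical):

```
# Fetch the secret at container start instead of baking it into the image.
DB_PASSWORD=$(vault read -field=password secret/myapp/db)
docker run -d -e DB_PASSWORD="$DB_PASSWORD" myapp
```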

We want to keep Swarm simple to deploy and maintain. You begin with the exact same Docker API, and if you want more services you can deploy them on top to add more capabilities (load balancing, monitoring, service discovery, secrets, storage, etc.). There are many excellent tools out there that can either be plugged in as is (for example, volumes) or deployed on top. Having them all by default would hinder the deployment scenario, which is extremely simple with Swarm for regular usage.

Hope this helps! Let us know if you have more questions :)

alexec commented Nov 17, 2015

Wow! Thanks for such detailed answers to my questions. They have been incredibly useful!

abronan commented Dec 1, 2015

Closing this one, feel free to open another issue if you run into any trouble using Swarm :)

pikeszfish commented Mar 23, 2017

Same question on "Can Swarm schedule based on CPU or memory requirements?":

https://github.com/docker/swarm/blob/master/scheduler/node/node.go#L58
https://github.com/docker/swarm/blob/v1.2.0/scheduler/strategy/weighted_node.go#L59

Looks like Swarm still takes the Docker memory limit as node.UsedMemory.
So if I have a container with a 10GB memory limit on a machine with 2GB of memory, I'm not able to create containers with any memory reservation (Memory > 0) any more.
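
For reference, a rough reproduction of what I mean (assuming a single 2GB node behind the manager; the addresses and names are placeholders):

```
# Engine-side: -m is only a cgroup ceiling, so the engine accepts a limit
# larger than the node's physical 2GB of RAM.
docker -H tcp://<node-ip>:2375 run -d -m 10GB --name big busybox sleep 3600

# Swarm-side: the scheduler now counts all 10GB as node.UsedMemory, so any
# further request with a memory reservation fails to find a node.
docker -H tcp://<swarm-manager>:4000 run -d -m 512MB busybox sleep 3600
```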

Any improvements since this is a 2015 issue?
