
Container Management Systems Comparison

Admiral is a highly scalable and very lightweight container management platform for deploying and managing container-based applications. Its main goals are to provide policy-based application provisioning and management, including a well-integrated API, CLI and UI. From an architectural point of view, the goal is a very lightweight system with minimal operational complexity. In this regard, Admiral does not compete directly with popular container scheduling systems like Kubernetes, Mesos or Swarm; rather, the plan is to work together with such systems.

There is overlapping functionality, however, and the next section provides a rough comparison to answer some of the questions that are often asked about Admiral.

The comparison below covers Kubernetes, Mesos, Swarm and Admiral.
**Types of Workloads**

- Kubernetes: Cloud Native applications
- Mesos: Cloud Native applications
- Swarm: Cloud Native applications
- Admiral: 2.5 Gen Applications

**Application Definition**

- Kubernetes: A combination of Pods, Replication Controllers, Replica Sets, Services and Deployments. A Pod is a group of co-located containers and the atomic unit of deployment. Pods do not express dependencies among the individual containers within them. Containers in a single Pod are guaranteed to run on a single Kubernetes node.
- Mesos: An "Application Group" models dependencies as a tree of groups; components are started in dependency order. Co-location of a group's containers on the same Mesos slave is not supported. A Pod abstraction is on the roadmap, but not yet available.
- Swarm: Apps defined in Docker Compose can be deployed on a Swarm cluster (see the Compose sketch after this list). The built-in scheduler has several filters, such as node tags, affinity and strategies, that assign containers to the underlying nodes so as to maximize performance and resource utilization.
- Admiral: Apps defined in a YAML template or Docker Compose can be deployed on the cluster. The policy-based placement has several groupings and filters, such as node tags, affinity and placement zones, that assign containers to the underlying nodes so as to not only maximize performance and resource utilization but also comply with business grouping constraints and rules. Additionally, it allows for a dependency order between the containers in an application, as well as property bindings from one container to another. The ability to execute scripts between container provisioning steps is also available.

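To make the Compose-style definition concrete, here is a minimal sketch of a two-tier application of the kind both Swarm and Admiral can deploy. The service names, images, ports and credential are illustrative, not taken from either project's documentation.

```yaml
# docker-compose.yml -- minimal two-tier app (illustrative names and images)
version: "2"
services:
  web:
    image: nginx:1.11
    ports:
      - "80:80"        # publish the web tier on the host
    depends_on:
      - db             # expresses start-up ordering between the containers
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example   # illustrative credential
```
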
**Application Scalability Constructs**

- Kubernetes: Each application tier is defined as a Pod and can be scaled when managed by a Deployment or Replication Controller (see the Deployment sketch after this list). The scaling can be manual or automated.
- Mesos: It is possible to scale an individual group; its dependents in the tree are scaled as well.
- Swarm: It is possible to scale the individual containers defined in the Compose file.
- Admiral: It is possible to scale the individual containers defined in the application's Compose YAML file.

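For the Kubernetes column, the scaling knob is the replicas field of the controller that manages the pods. A minimal Deployment sketch, reusing the illustrative web tier from the Compose example above (the API group shown is the one current around Kubernetes 1.2-1.5):

```yaml
# deployment.yaml -- 'replicas' is the per-tier scaling construct
apiVersion: extensions/v1beta1   # Deployment API group circa Kubernetes 1.2-1.5
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                    # desired number of pods for this tier
  template:
    metadata:
      labels:
        app: web                 # pods carry this label; the Service below selects on it
    spec:
      containers:
        - name: web
          image: nginx:1.11
```

Manual scaling then amounts to changing replicas (for example with kubectl scale); automated scaling is covered under auto-scaling below.
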
**High Availability**

- Kubernetes: Pods are distributed among worker nodes. Services are also highly available: unhealthy pods are detected and removed.
- Mesos: Applications are distributed among slave nodes.
- Swarm: Containers are distributed among Swarm nodes. The Swarm manager is responsible for the entire cluster and manages the resources of multiple Docker hosts at scale. To make the Swarm manager highly available, a single primary manager instance and multiple replica instances must be created. Requests issued against a replica are automatically proxied to the primary manager. If the primary manager fails, tools like Consul, ZooKeeper or etcd pick a replica as the new manager.
- Admiral: Applications are distributed among a cluster of nodes. Any Admiral node can handle a container provisioning request, using shared state to determine the best placement. The shared state is replicated using the Xenon replication mechanisms.

**Load Balancing**

- Kubernetes: Pods are exposed through a Service, which can act as a load balancer (see the Service sketch after this list).
- Mesos: An application can be reached via Mesos-DNS, which can act as a rudimentary load balancer.
- Swarm: A load balancer is typically just another service defined in a Compose file. The Swarm manager uses ingress load balancing to expose the services you want to make available externally to the swarm. Internally, the swarm lets you specify how to distribute service containers between nodes.
- Admiral: Similarly to Swarm, a load balancer is typically just another service defined in a YAML template file. There is a definition for a clustered container, which provides a soft affinity hint to the scheduler to spread those containers across multiple nodes for higher availability.

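A sketch of the Kubernetes Service mentioned above, exposing the illustrative web pods from the earlier Deployment example; the LoadBalancer type assumes a cloud provider integration that can provision one:

```yaml
# service.yaml -- exposes pods labelled app=web behind a load balancer
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer   # assumes a cloud provider that can provision one
  selector:
    app: web           # routes traffic to pods carrying this label
  ports:
    - port: 80
      targetPort: 80
```
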
**Auto-scaling for the Application**

- Kubernetes: Auto-scaling using a simple number-of-pods target is defined declaratively with the API exposed by Replication Controllers (see the autoscaler sketch after this list). Load-sensitive autoscaling is available as a proof-of-concept application.
- Mesos: Rate-sensitive autoscaling is available for Mesosphere's enterprise customers.
- Swarm: Not directly available. For each service, you can declare the number of tasks you want to run; when you scale up or down, the Swarm manager automatically adapts by adding or removing tasks to maintain the desired state.
- Admiral: Similar to Swarm, not directly available right now, but on the roadmap. When you scale up or down, the management functionality automatically adapts by adding or removing containers to maintain the desired state.

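For the Kubernetes column, the declarative number-of-pods target is commonly expressed with a Horizontal Pod Autoscaler. A minimal sketch targeting the illustrative web Deployment from above, with made-up bounds and threshold:

```yaml
# hpa.yaml -- keeps average CPU near the target by resizing the Deployment
apiVersion: autoscaling/v1       # HPA API group as of Kubernetes 1.2+
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: web
  minReplicas: 2                       # illustrative bounds
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80   # illustrative threshold
```
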
**Storage**

- Kubernetes: Two storage APIs. The first provides abstractions for individual storage backends (e.g. NFS, AWS EBS, Ceph, Flocker). The second provides an abstraction for a storage resource request (e.g. 8 GB), which can be fulfilled with different storage backends. Modifying the storage resource used by the Docker daemon on a cluster node requires temporarily removing the node from the cluster.
- Mesos: A Marathon container can use persistent volumes, but such volumes are local to the node where they are created, so the container must always run on that node.
- Swarm: Docker Engine and Swarm support mounting volumes into a container. A volume is stored locally by default; volume plugins (e.g. Flocker) mount volumes on networked storage (e.g. AWS EBS, Cinder, Ceph). See the volumes sketch after this list.
- Admiral: Uses the Docker volume abstraction to either mount local storage or use volume plugins to mount volumes on networked storage.

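A sketch of the Docker volume abstraction that both Swarm and Admiral rely on, in Compose v2 form; the flocker driver stands in for any installed volume plugin:

```yaml
# volumes in Compose v2 -- local by default, plugin-backed when a driver is named
version: "2"
services:
  db:
    image: mysql:5.7
    volumes:
      - dbdata:/var/lib/mysql   # named volume mounted into the container
volumes:
  dbdata: {}                    # default local driver
  shared:
    driver: flocker             # volume plugin; must be installed on the hosts
```
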
**Networking**

- Kubernetes: The networking model lets any pod communicate with other pods and with any service. The model requires two networks (one for pods, the other for services). Neither network is assumed to be (or needs to be) reachable from outside the cluster. The most common way of meeting this requirement is to deploy an overlay network on the cluster nodes.
- Mesos: Marathon's Docker integration facilitates mapping container ports to host ports, which are a limited resource. A container does not get its own IP by default, but it can if Mesos is integrated with Calico. Even so, multiple containers cannot share a network namespace (i.e. they cannot talk to one another on localhost).
- Swarm: Docker Engine can create overlay networks on a single host. Docker Swarm can create overlay networks that span the hosts in the cluster. By default, nodes in the swarm encrypt traffic between themselves and other nodes. A container can be assigned an IP on an overlay network; containers that use the same overlay network can communicate, even if they are running on different hosts (see the networks sketch after this list).
- Admiral: Uses the Docker networking abstraction to assign a container an IP on an overlay network. Containers that use the same overlay network can communicate, even if they are running on different hosts.

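A sketch of the overlay networking described for Swarm and Admiral, again in Compose v2 form; the service names and images are illustrative:

```yaml
# overlay network in Compose v2 -- containers on different hosts share 'appnet'
version: "2"
services:
  web:
    image: nginx:1.11
    networks:
      - appnet
  api:
    image: myorg/api:latest     # illustrative image
    networks:
      - appnet                  # reachable from 'web' as hostname 'api'
networks:
  appnet:
    driver: overlay             # spans cluster hosts; pre-1.12 Swarm needs a KV store
```
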
**Performance & Scalability**

- Kubernetes: With the release of 1.2, Kubernetes supports 1,000-node clusters. Kubernetes scalability is benchmarked against the following Service Level Objectives (SLOs): API responsiveness (99% of all API calls return in less than 1s) and pod startup time (99% of pods and their containers, with pre-pulled images, start within 5s).
- Mesos: Mesos has been simulated to scale to 50,000 nodes, although it is not clear how far scale has been pushed in production environments. Mesos can run LXC or Docker containers directly from the Marathon framework, or it can fire up Kubernetes or Docker Swarm (the Docker-branded container manager) and let them do it.
- Swarm: According to the Swarm website, Swarm is production-ready and tested to scale up to one thousand (1,000) nodes and fifty thousand (50,000) containers with no performance degradation in spinning up incremental containers onto the node cluster.
- Admiral: Admiral hasn't been pushed to its limits yet. The current scalability tests provision 10,000 containers at the same time across 50 nodes, running a cluster of 3 Admiral nodes. The numbers will be much higher if the containers are provisioned sequentially and the maximum number of managed containers is tested.

The comparison data for Kubernetes, Mesos and Swarm are taken from the Platform9 blog.