Deploy and manage OpenStack on Kubernetes


Stackanetes is an initiative to make operating OpenStack as simple as running any application on Kubernetes. Stackanetes deploys standard OpenStack services into containers and uses Kubernetes’ robust application lifecycle management capabilities to deliver a single platform for companies to run OpenStack Infrastructure-as-a-Service (IaaS) and container workloads.


Demonstration Video

Stackanetes: Technical Preview


Stackanetes sets up the following OpenStack components:

  • Cinder
  • Glance
  • Horizon
  • Keystone
  • Neutron
  • Nova
  • Searchlight

In addition to these, a few other applications are deployed:

  • MariaDB
  • Memcached
  • RabbitMQ
  • RADOS Gateway
  • Traefik
  • Elasticsearch
  • Open vSwitch

Services are divided and scheduled into two groups, with the exception of the Open vSwitch agents, which run everywhere:

  • The control plane, which runs all the OpenStack APIs and all other supporting applications,
  • The compute plane, which is dedicated to running Nova's virtual machines.

Gotta go fast

Leaving aside the configuration of the requirements, Stackanetes can fully deploy OpenStack from scratch in roughly 5 to 8 minutes. Speed is not its only strength, however: its true power resides in its ability to help manage OpenStack's lifecycle.


Stackanetes requires Kubernetes 1.3+ with:

  • At least two schedulable nodes,
  • At least one virtualization-ready node,
  • Overlay network & DNS add-on,
  • Kubelet running with --allow-privileged=true.

While Glance may operate with local storage, a Ceph cluster is needed for Cinder. Nova's live-migration feature requires proper DNS resolution of the Kubernetes nodes' hostnames.

The rkt engine can be used in place of the default runtime with Kubernetes 1.4+ and rkt 1.20+. Note however that a known issue with mount propagation flags may prevent the Kubernetes service account secret from being mounted properly in Nova's libvirt pod, causing it to fail at startup.

High-availability & Networking

Thanks to Kubernetes' Deployments, OpenStack APIs can be made highly available using a single parameter, called deployment.replicas.
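As an illustration, scaling the control-plane APIs is a one-line change in the configuration. The fragment below is a sketch only: the exact key layout is defined by the stackanetes meta-package's parameters.yaml, so treat the structure as an assumption.

```yaml
# Illustrative sketch -- consult stackanetes/parameters.yaml for the real layout.
deployment:
  replicas: 3   # run three replicas of each OpenStack API for high availability
```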

Internal traffic (i.e. inside the Kubernetes cluster) is load-balanced natively using Kubernetes services. When Ingress is enabled, external traffic (i.e. from outside the Kubernetes cluster) to OpenStack is routed from any Kubernetes node to a Traefik instance, which then selects the appropriate service and forwards the requests accordingly. By leveraging Kubernetes services and health checks, high availability of the OpenStack endpoints is achieved transparently: a simple round-robin DNS that resolves to a few of the Kubernetes nodes is sufficient.

When it comes to data availability for Cinder and Glance, Stackanetes relies on the storage backend being used.

High availability is not yet guaranteed for Elasticsearch (Searchlight).

Getting started

Preparing the environment


To set up Kubernetes, the CoreOS guides may be used.

At least two nodes must be labelled for Stackanetes' usage:

kubectl label node minion1 openstack-control-plane=enabled
kubectl label node minion2 openstack-compute-node=enabled

Following Galera guidelines, an odd number of openstack-control-plane nodes should be maintained. For development purposes, a one-node cluster is acceptable.


To enable Nova's live-migration, there must be a DNS server, accessible inside the cluster, able to resolve each hostname of the Kubernetes nodes. The IP address of this server will then have to be provided in the Stackanetes configuration.
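Resolution can be checked ahead of time from any node or pod. A small sketch, assuming `getent` is available; the hostnames below are placeholders matching the labeling examples later in this document:

```shell
# Verify that each Kubernetes node hostname resolves inside the cluster.
# "minion1" and "minion2" are placeholder hostnames -- substitute your own.
for node in minion1 minion2; do
  if getent hosts "$node" >/dev/null 2>&1; then
    echo "$node resolves"
  else
    echo "$node does NOT resolve -- live-migration will fail"
  fi
done
```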

If external access is desired, the Ingress feature should be enabled in the Stackanetes configuration and the external DNS environment should be configured to resolve the following names (modulo any custom host that may have been configured) to at least some of the Kubernetes nodes:



If data high availability, Nova's live migration, or Cinder is desired, Ceph must be used. Deploying Ceph can be achieved easily using bare containers or even on Kubernetes.

A few users and pools have to be created. The user and pool names can be customized. Note down the keyrings; they will be used in the configuration.

ceph osd pool create volumes 128
ceph osd pool create images 128
ceph osd pool create vms 128
ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
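The keys from these keyrings typically end up base64-encoded in the configuration, since Kubernetes secrets carry base64 values. A minimal sketch with a placeholder key; in a real deployment the key would come from `ceph auth get-key client.cinder`, and the exact parameter name it feeds depends on the meta-package:

```shell
# Placeholder key -- a real one comes from: ceph auth get-key client.cinder
CINDER_KEY="demo-keyring"

# Kubernetes secrets expect base64-encoded values.
printf '%s' "$CINDER_KEY" | base64
```

The resulting string is what goes into the corresponding keyring field of parameters.yaml.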


kpm is the package manager and command-line tool used to deploy Stackanetes. The most straightforward way to install it is via PyPI:

apt-get update && apt-get install -y python-pip python-dev
pip install 'kpm>=0.24.2'



Technically, cloning Stackanetes is only necessary to obtain the default configuration file, but it is good practice for understanding the architecture of the project, and required if modifying the project is intended.

git clone
cd stackanetes


All the configuration is done in one place: the parameters.yaml file in the stackanetes meta-package. The file is self-documented.

While it is not strictly necessary, it is possible to persist changes to that file for reproducible deployments across environments, without the need to share it out of band. To do this, the stackanetes meta-package has to be renamed and pushed to the CNR registry. Pushing is also required when any modifications are made to the Stackanetes packages.

cd stackanetes
kpm login
kpm push -f<USERNAME>
cd ..


All we have to do is ask kpm to deploy Stackanetes. In the example below, we specify a namespace, a configuration file containing all non-default parameters (stackanetes/parameters.yaml if the changes have been made in place), and the registry from which the packages should be pulled.

kpm deploy --namespace openstack --variables stackanetes/parameters.yaml

For a finer-grained deployment story, kpm also supports versioning and release channels.


Once Stackanetes is fully deployed, we can log in to Horizon or use the CLI directly.

If Ingress is enabled, Horizon may be accessed on http://horizon.openstack.cluster:30080/. Otherwise, it will be available on port 80 of any defined external IP. The default credentials are admin / password.

The file contains the default environment variables that will enable interaction using the various OpenStack clients.
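These variables typically follow the standard OpenStack openrc convention. A hypothetical example matching the default credentials above; the auth URL is an assumption based on the Ingress hostname scheme, and every value is illustrative:

```shell
# Hypothetical openrc-style environment -- all values below are illustrative.
export OS_USERNAME=admin
export OS_PASSWORD=password                  # default credentials from this README
export OS_PROJECT_NAME=admin
export OS_AUTH_URL=http://identity.openstack.cluster:30080/v3   # assumed Ingress host
export OS_IDENTITY_API_VERSION=3

echo "OpenStack CLI configured for $OS_USERNAME against $OS_AUTH_URL"
```

Once sourced, the various OpenStack clients pick these variables up automatically.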


When the configuration is updated (e.g. a new Ceph monitor is added) or customized packages are pushed, Stackanetes can be updated with the exact same command that was used to deploy it. kpm computes the differences between the actual deployment and the desired one and updates the required resources: for instance, it will trigger a rolling upgrade when a Deployment is modified.

Note that manual rollouts still have to be done when only ConfigMaps are modified.
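One common way to force such a rollout is to bump an annotation on the Deployment's pod template: changing the template triggers a rolling update, which re-mounts the updated ConfigMap. A sketch; the annotation key and deployment name are placeholders, not part of Stackanetes:

```shell
# Build a patch that bumps a timestamp annotation on the pod template.
PATCH=$(printf '{"spec":{"template":{"metadata":{"annotations":{"stackanetes/restarted-at":"%s"}}}}}' \
  "$(date -u +%Y-%m-%dT%H:%M:%SZ)")
echo "$PATCH"

# Apply it against the cluster (commented out here, since it needs kubectl access;
# "keystone-api" is a placeholder deployment name):
#   kubectl --namespace openstack patch deployment keystone-api -p "$PATCH"
```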
