Clear Containers and Kubernetes

Running Kubernetes on top of Clear Containers 2.1

Kubernetes

Kubernetes is an open source project, originally created by Google, and the dominant container orchestration engine.

Kubernetes clusters run containers grouped into pods. Inside a pod, all containers share the pod's resources (networking, storage, etc.), and each pod in the cluster gets its own IP address.
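
As a concrete (purely illustrative) example, the pod below runs two containers in one shared network namespace; they can reach each other over localhost and the pod is addressed through a single IP:

# cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: shared-net-demo
spec:
  containers:
  - name: web
    image: nginx
  - name: sidecar
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
EOF
# kubectl get pod shared-net-demo -o wide   # the IP column shows the single pod IP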

By default, Kubernetes runs the full Docker stack to start pods and the containers within them. rkt is an alternative container runtime for Kubernetes.

Problem statement

A Docker-controlled Clear Container starts one VM per container. Providing the Kubernetes pod semantics with one VM per container is very challenging, especially from a networking standpoint. Instead, Clear Containers should be able to start one VM per pod and launch containers within those VMs/pods.

Solution

With the recent addition of the Container Runtime Interface (CRI) to Kubernetes, Clear Containers can now be controlled by any OCI-compatible CRI implementation. The CRI implementation passes container annotations down to the runtime, letting it know when to start a pod VM and when to run a container workload within an existing one.
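
For example, CRI-O tells the runtime whether it is creating the pod sandbox or a container inside it through an annotation in each container's OCI config.json (key and values as defined in the CRI-O sources):

"annotations": {
    "io.kubernetes.cri-o.ContainerType": "sandbox"
}

Workload containers carry the value "container" instead, which is the runtime's cue to enter the existing pod VM rather than create a new one.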

CRI-O is the main OCI-compatible CRI implementation, and it now supports Clear Containers.

Clear Containers as a bare metal runtime for Kubernetes

We are going to describe how to deploy a bare-metal Kubernetes cluster that uses Clear Containers as its container runtime and CRI-O as its CRI server.

Requirements

We will be running a bare-metal k8s cluster on two physical machines: k8s-master will be the master node and k8s-node will be the minion.

Both machines run Ubuntu 16.04.2 server.

Install Kubernetes 1.6

  1. Update your machines
# apt-get update && apt-get upgrade && apt-get install -y apt-transport-https
  2. Use the unstable Ubuntu packages to install the Kubernetes 1.6 packages (with default CRI support)
# cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial-unstable main
EOF
# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
# apt-get update
# apt-get install -y docker.io kubelet kubeadm kubectl kubernetes-cni
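
Since these packages come from an unstable channel, you may want to pin them so that a later apt-get upgrade does not silently move the cluster to a different Kubernetes release:

# apt-mark hold kubelet kubeadm kubectl kubernetes-cni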

Install Clear Containers

# sh -c "echo 'deb http://download.opensuse.org/repositories/home:/clearlinux:/preview:/clear-containers-2.1/xUbuntu_$(lsb_release -rs)/ /' >> /etc/apt/sources.list.d/cc-oci-runtime.list"
# curl -fsSL https://download.opensuse.org/repositories/home:clearlinux:preview:clear-containers-2.1/xUbuntu_$(lsb_release -rs)/Release.key | apt-key add -
# apt-get update
# apt-get install -y cc-oci-runtime
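
As a quick sanity check, verify that the runtime is installed and that the host exposes KVM, which Clear Containers needs to start VMs (the --version flag is an assumption, but cc-oci-runtime follows the usual OCI runtime CLI conventions):

# cc-oci-runtime --version
# ls -l /dev/kvm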

Install CRI-O

  1. Install all CRI-O dependencies
# add-apt-repository ppa:longsleep/golang-backports && apt-get update && apt-get install libseccomp2 libseccomp-dev seccomp libdevmapper-dev libdevmapper1.02.1 libgpgme11 libgpgme11-dev libglib2.0-dev aufs-tools golang-go btrfs-tools
  2. Fetch, build, and install CRI-O
# mkdir ~/go
# export GOPATH=~/go
# go get github.com/kubernetes-incubator/cri-o
# go get github.com/cpuguy83/go-md2man
# cd $GOPATH/src/github.com/kubernetes-incubator/cri-o
# git branch --track kube-1.6.x origin/kube-1.6.x 
# git checkout kube-1.6.x
# make && make install
# mkdir /etc/crio
# mkdir /etc/containers
# mkdir /var/lib/etcd
# cp seccomp.json /etc/crio/
# cp test/policy.json /etc/containers/
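
Before wiring CRI-O into systemd, it is worth confirming that the build produced a working binary and that the configuration files are in place (treating crio's support for --version as an assumption, though most CLI tools provide it):

# /usr/local/bin/crio --version
# ls /etc/crio/seccomp.json /etc/containers/policy.json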
  3. Install the CRI-O systemd service file
# sh -c 'echo "[Unit]
Description=OCI-based implementation of Kubernetes Container Runtime Interface
Documentation=https://github.com/kubernetes-incubator/cri-o

[Service]
ExecStart=/usr/local/bin/crio --debug
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target" > /etc/systemd/system/crio.service'

IMPORTANT NOTE: If you're running behind a proxy, you'll need to specify that through environment variables. For example:

Environment="HTTP_PROXY=http://myproxy.example.com:8080" "NO_PROXY=example.com,.example.com,localhost"

And your CRI-O systemd service file would then look like:

# sh -c 'echo "[Unit]
Description=OCI-based implementation of Kubernetes Container Runtime Interface
Documentation=https://github.com/kubernetes-incubator/cri-o

[Service]
ExecStart=/usr/local/bin/crio --debug
Environment="HTTP_PROXY=http://myproxy.example.com:8080" "NO_PROXY=example.com,.example.com,localhost"
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target" > /etc/systemd/system/crio.service'
  4. Install the CRI-O configuration file

We need to install a CRI-O configuration file customized for Clear Containers:

# wget https://gist.githubusercontent.com/sameo/5db12babc44c8195eac49d6d8817dcbc/raw/8b72990230f4e5f4132566081542dc12e9919248/crio.conf -O /etc/crio/crio.conf
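
The key difference from a stock configuration is that CRI-O's runtime is pointed at Clear Containers instead of runc. You can check with something like the following (key names vary across CRI-O versions, so treat the expected output as an assumption):

# grep runtime /etc/crio/crio.conf   # expect a line such as: runtime = "/usr/bin/cc-oci-runtime"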
  5. Enable and start crio
# systemctl daemon-reload
# systemctl enable crio
# systemctl start crio
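
If everything went well, the service should now be active and listening on the socket that kubelet will be pointed at later:

# systemctl --no-pager status crio
# ls -l /var/run/crio.sock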

Install runc 1.0.0-rc3

CRI-O 0.2+ needs a runtime that supports at least version 1.0.0-rc5 of the OCI spec. runc 1.0.0-rc3 meets that requirement:

# go get -u github.com/opencontainers/runc
# cd $GOPATH/src/github.com/opencontainers/runc
# git reset --hard cf630c6ae8dc83dc9520a23fd54faa0d485b4ac3
# make && make install
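
runc reports the OCI spec version it was built against, so you can check that the rc5 requirement is met:

# runc --version   # the output should include a line like: spec: 1.0.0-rc5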

Configure, start and stop the Kubernetes cluster

  1. Modify the kubelet systemd service drop-in so that kubelet uses crio
# sh -c 'echo "[Service]
Environment=\"KUBELET_KUBECONFIG_ARGS=--kubeconfig=/etc/kubernetes/kubelet.conf --require-kubeconfig=true\"
Environment=\"KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true\"
Environment=\"KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin\"
Environment=\"KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10 --cluster-domain=cluster.local\"
Environment=\"KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt\"
Environment=\"KUBELET_EXTRA_ARGS=--enable-cri --container-runtime=remote --container-runtime-endpoint=/var/run/crio.sock --runtime-request-timeout=30m\"
ExecStart=
ExecStart=/usr/bin/kubelet \$KUBELET_KUBECONFIG_ARGS \$KUBELET_SYSTEM_PODS_ARGS \$KUBELET_NETWORK_ARGS \$KUBELET_DNS_ARGS \$KUBELET_AUTHZ_ARGS \$KUBELET_EXTRA_ARGS" > /etc/systemd/system/kubelet.service.d/10-kubeadm.conf'
  2. Restart kubelet
# systemctl daemon-reload
# systemctl restart kubelet
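
You can confirm that the drop-in took effect and that kubelet now talks to CRI-O rather than Docker:

# systemctl cat kubelet
# ps -ef | grep kubelet | grep -v grep   # look for --container-runtime=remote and --container-runtime-endpoint=/var/run/crio.sock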
  3. Bring the Kubernetes master up

We will use kubeadm to bring the Kubernetes master up, through a wrapper script:

# wget https://gist.githubusercontent.com/sameo/cf92f65ae54a87807ed294f3de658bcf/raw/95d9a66a2268b779dbb25988541136d1ed2fbfe2/flannel.yaml -O /etc/kubernetes/flannel.yaml
# wget https://gist.githubusercontent.com/sameo/c2ae717bb8404068235164572acff16d/raw/cd3e3bb2f6d534addb2a3312e784ef9090bca9e8/k8s-bring-up.sh -O ~/k8s-bring-up.sh
# chmod a+x ~/k8s-bring-up.sh
# ~/k8s-bring-up.sh
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.6.0
[init] Using Authorization mode: RBAC
[preflight] Running pre-flight checks
[preflight] Starting the kubelet service
[certificates] Generated CA certificate and key.
[certificates] Generated API server certificate and key.
[certificates] API Server serving cert is signed for DNS names [k8s-node kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.26]
[certificates] Generated API server kubelet client certificate and key.
[certificates] Generated service account token signing key and public key.
[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 15.022481 seconds
[apiclient] Waiting for at least one node to register
[apiclient] First node has registered after 5.501618 seconds
[token] Using token: dada4d.1a1eeac3808e6d6d
[apiconfig] Created RBAC rules
[addons] Created essential addon: kube-proxy
[addons] Created essential addon: kube-dns

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  sudo cp /etc/kubernetes/admin.conf $HOME/
  sudo chown $(id -u):$(id -g) $HOME/admin.conf
  export KUBECONFIG=$HOME/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token dada4d.1a1eeac3808e6d6d 192.168.1.26:6443

Starting flannel...serviceaccount "flannel" created
clusterrolebinding "flannel" created
clusterrole "flannel" created
serviceaccount "calico-policy-controller" created
configmap "kube-flannel-cfg" created
daemonset "kube-flannel-ds" created
Done.

At that point your Kubernetes master node is up and running, with a flannel networking overlay. You can now add nodes to the cluster by running (from the cluster nodes):

kubeadm join --token dada4d.1a1eeac3808e6d6d 192.168.1.26:6443

Your cluster is now ready to schedule container workloads.
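
To check that workloads really end up inside Clear Containers VMs, schedule a simple pod (the name and image here are just examples) and then look for the corresponding hypervisor process on the node running it:

# cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: cc-test
spec:
  containers:
  - name: nginx
    image: nginx
EOF
# kubectl get pod cc-test -o wide

On the node hosting the pod, a qemu process (qemu-lite in the Clear Containers case) should be running the pod VM:

# ps -ef | grep qemu | grep -v grep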

  4. Tear the k8s cluster down

Unfortunately, kubeadm reset does not completely clean up after CRI-O. We need to do some additional cleaning.

Both kubeadm reset and the additional CRI-O cleanup are combined in one script:

# wget https://gist.githubusercontent.com/sameo/de7830848f3a65535f4e9660277f766f/raw/7a8f6d5429e72a33ee13c8e1a115757d95a1f59f/k8s-tear-down.sh -O ~/k8s-tear-down.sh
# chmod a+x ~/k8s-tear-down.sh
# ~/k8s-tear-down.sh