
Topsail PoC: Manually Self-hosted Kubernetes via Bootkube

A small howto on bringing up a self-hosted Kubernetes cluster

We'll use bootkube to initiate the master components. First, we'll render the assets necessary for bringing up the control plane (apiserver, controller-manager, scheduler, etc.). Then we'll start the kubelet, whose job is to run those assets, although it can't do much yet because there is no apiserver. Running bootkube once then kicks things off. At a high level, the bootstrapping process looks like this:

[Diagram: the self-hosted bootstrap process, taken from the self-hosted proposal.]

This is how the final cluster looks from a kubectl perspective:

[Screenshot: kubectl output for the finished cluster.]

Let's start!

Temporary apiserver: bootkube

Download

wget https://github.com/kubernetes-incubator/bootkube/releases/download/v0.3.9/bootkube.tar.gz
tar xvzf bootkube.tar.gz
sudo cp bin/linux/bootkube /usr/bin/

Render the Assets

Replace 10.7.183.59 with the address of the node you are working on. If you have DNS available, group all master node IP addresses behind a single name (e.g. round-robin A records) and provide that name instead.

bootkube render --asset-dir=assets --experimental-self-hosted-etcd --etcd-servers=http://10.3.0.15:2379 --api-servers=https://10.7.183.59:443
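
If you went the DNS route instead, the same call points at the name rather than a single IP (kube-master.example.com is a placeholder for your record):

bootkube render --asset-dir=assets --experimental-self-hosted-etcd --etcd-servers=http://10.3.0.15:2379 --api-servers=https://kube-master.example.com:443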

This will generate several things:

  • manifests for running apiserver, controller-manager, scheduler, flannel, etcd, dns and kube-proxy
  • a kubeconfig file for connecting to and authenticating with the apiserver
  • TLS assets
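
The resulting layout, which the rest of this guide refers to (exact contents vary by bootkube version):

assets/
  auth/kubeconfig    # credentials for kubelets and kubectl (copied below)
  manifests/         # control-plane manifests, incl. kube-apiserver-secret.yaml
  tls/               # CA, server and client certificates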

Start the Master Kubelet

Download hyperkube

wget http://storage.googleapis.com/kubernetes-release/release/v1.5.3/bin/linux/amd64/hyperkube -O ./hyperkube
sudo mv hyperkube /usr/bin/hyperkube
sudo chmod 755 /usr/bin/hyperkube

Install CNI

sudo mkdir -p /opt/cni/bin
wget https://github.com/containernetworking/cni/releases/download/v0.4.0/cni-amd64-v0.4.0.tbz2
sudo tar xjf cni-amd64-v0.4.0.tbz2 -C /opt/cni/bin/

Copy Configuration Files

sudo mkdir -p /etc/kubernetes
sudo cp assets/auth/kubeconfig /etc/kubernetes/
sudo cp -a assets/manifests /etc/kubernetes/

Start the Kubelet

sudo hyperkube kubelet --kubeconfig=/etc/kubernetes/kubeconfig \
    --require-kubeconfig \
    --cni-conf-dir=/etc/kubernetes/cni/net.d \
    --network-plugin=cni \
    --lock-file=/var/run/lock/kubelet.lock \
    --exit-on-lock-contention \
    --pod-manifest-path=/etc/kubernetes/manifests \
    --allow-privileged \
    --node-labels=master=true \
    --minimum-container-ttl-duration=6m0s \
    --cluster-dns=10.3.0.10 \
    --cluster-domain=cluster.local \
    --hostname-override=10.7.183.59
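
For anything beyond this first experiment you'd want the kubelet supervised across reboots. A minimal systemd sketch (the unit name and restart policy are our choice; the flags are exactly the ones above):

sudo tee /etc/systemd/system/kubelet.service <<'EOF'
[Unit]
Description=Kubernetes kubelet (hyperkube)
Requires=docker.service
After=docker.service

[Service]
ExecStart=/usr/bin/hyperkube kubelet \
    --kubeconfig=/etc/kubernetes/kubeconfig \
    --require-kubeconfig \
    --cni-conf-dir=/etc/kubernetes/cni/net.d \
    --network-plugin=cni \
    --lock-file=/var/run/lock/kubelet.lock \
    --exit-on-lock-contention \
    --pod-manifest-path=/etc/kubernetes/manifests \
    --allow-privileged \
    --node-labels=master=true \
    --minimum-container-ttl-duration=6m0s \
    --cluster-dns=10.3.0.10 \
    --cluster-domain=cluster.local \
    --hostname-override=10.7.183.59
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable --now kubelet.service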

The TLS credentials generated by bootkube render in assets/tls/ are copied to a secret: assets/manifests/kube-apiserver-secret.yaml.

Start the Temporary API Server

bootkube will serve as the temporary apiserver so that the kubelet from above can start the real apiserver in a pod:

sudo bootkube start --asset-dir=./assets  --experimental-self-hosted-etcd --etcd-server=http://127.0.0.1:12379

bootkube should exit on its own after successfully bootstrapping the master components. It's only needed for the very first bootstrap.

Check the Output

watch hyperkube kubectl get pods -o wide --all-namespaces
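
Once bootstrapping finishes, the pods rendered earlier (apiserver, controller-manager, scheduler, flannel, etcd, DNS and kube-proxy) should all reach Running.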

Join Nodes to the Cluster

Copy over the kubeconfig, which tells the kubelet where to find the apiserver and how to authenticate:

scp 10.7.183.59:assets/auth/kubeconfig .
sudo mkdir -p /etc/kubernetes
sudo mv kubeconfig /etc/kubernetes/

Install the CNI binaries and download hyperkube:

sudo mkdir -p /opt/cni/bin
wget https://github.com/containernetworking/cni/releases/download/v0.4.0/cni-amd64-v0.4.0.tbz2
sudo tar xjf cni-amd64-v0.4.0.tbz2 -C /opt/cni/bin/
wget http://storage.googleapis.com/kubernetes-release/release/v1.5.3/bin/linux/amd64/hyperkube -O ./hyperkube
sudo mv hyperkube /usr/bin/hyperkube
sudo chmod 755 /usr/bin/hyperkube
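
Before starting a kubelet you can sanity-check the copied kubeconfig against the bootstrap master (assumes the control plane from above is up):

hyperkube kubectl --kubeconfig=/etc/kubernetes/kubeconfig get nodes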

Master Nodes

Start the kubelet:

sudo hyperkube kubelet --kubeconfig=/etc/kubernetes/kubeconfig \
    --require-kubeconfig \
    --cni-conf-dir=/etc/kubernetes/cni/net.d \
    --network-plugin=cni \
    --lock-file=/var/run/lock/kubelet.lock \
    --exit-on-lock-contention \
    --pod-manifest-path=/etc/kubernetes/manifests \
    --allow-privileged \
    --node-labels=master=true \
    --minimum-container-ttl-duration=6m0s \
    --cluster-dns=10.3.0.10 \
    --cluster-domain=cluster.local \
    --hostname-override=10.7.183.60

Worker Nodes

Note that the only difference is the removal of --node-labels=master=true:

sudo hyperkube kubelet --kubeconfig=/etc/kubernetes/kubeconfig \
    --require-kubeconfig \
    --cni-conf-dir=/etc/kubernetes/cni/net.d \
    --network-plugin=cni \
    --lock-file=/var/run/lock/kubelet.lock \
    --exit-on-lock-contention \
    --pod-manifest-path=/etc/kubernetes/manifests \
    --allow-privileged \
    --minimum-container-ttl-duration=6m0s \
    --cluster-dns=10.3.0.10 \
    --cluster-domain=cluster.local \
    --hostname-override=10.7.183.60
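
Once the kubelet is running, the node should register with the apiserver shortly; the master=true label distinguishes masters from workers:

hyperkube kubectl --kubeconfig=/etc/kubernetes/kubeconfig get nodes --show-labels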

Scale Etcd

kubectl apply doesn't work for ThirdPartyResources (TPRs) at the moment, see kubernetes/kubernetes#29542. As a workaround, we use cURL to resize the cluster.

hyperkube kubectl --namespace=kube-system get cluster.etcd kube-etcd -o json > etcd.json && \
vim etcd.json && \
curl -H 'Content-Type: application/json' -X PUT --data @etcd.json http://127.0.0.1:8080/apis/etcd.coreos.com/v1beta1/namespaces/kube-system/clusters/kube-etcd

If that doesn't work, re-run until it does. See kubernetes-retired/bootkube#346 (comment)
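
If jq is available, the manual vim step can be scripted; this bumps the desired member count to 3 (assuming your etcd-operator version stores it in spec.size):

hyperkube kubectl --namespace=kube-system get cluster.etcd kube-etcd -o json \
    | jq '.spec.size = 3' > etcd.json
curl -H 'Content-Type: application/json' -X PUT --data @etcd.json \
    http://127.0.0.1:8080/apis/etcd.coreos.com/v1beta1/namespaces/kube-system/clusters/kube-etcd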

Challenges

Node setup

Some Broadcom NICs panicked with the default Ubuntu kernel.

  • upgrade the kernel to >4.8 because of the Broadcom NIC failure
  • move to --storage-driver=overlay2 instead of aufs as the Docker storage driver
  • disable swap on the node (will be a fatal error in kube 1.6; see the commands below)
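
To turn swap off immediately and keep it off across reboots (the sed pattern assumes the usual /etc/fstab layout; double-check before editing):

sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab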

ToDo Items:

Apiserver resilience

The master apiservers need to be reachable at one single, stable address. Possible solutions:

  • use a load balancer from the DC
  • use DNS from the DC with a programmable API (e.g. PowerDNS)
  • use something like kube-keepalive-vip?
  • bootstrap DNS itself (SkyDNS, CoreDNS)

Etcd Challenges

Notes

Clean up Docker:

sudo su -
docker rm -f $(docker ps -a -q)
exit

Compile Bootkube

# on the host: start a Go build container with a workspace mounted at /go/src
sudo docker run --rm -it -v $(pwd)/golang/src:/go/src/ -w /go/src golang:1.7 bash
# inside the container (GOPATH defaults to /go in the golang image):
go get -u github.com/kubernetes-incubator/bootkube
cd $GOPATH/src/github.com/kubernetes-incubator/bootkube
make

RBAC

./bootkube-rbac render --asset-dir assets-rbac --experimental-self-hosted-etcd --etcd-servers=http://10.3.0.15:2379 --api-servers=https://10.7.183.59:443
sudo rm -rf /etc/kubernetes/*
sudo cp -a assets-rbac/manifests /etc/kubernetes/
sudo cp assets-rbac/auth/kubeconfig /etc/kubernetes/
sudo ./bootkube-rbac start --asset-dir=./assets-rbac --experimental-self-hosted-etcd --etcd-server=http://127.0.0.1:12379

Containerized Kubelet

The benefit here is running the kubelet as a Docker container instead of a host binary; the hyperkube image also packages and installs the CNI binaries. The downside is that something still needs to start the container upon a node reboot; usually that something is systemd, and systemd is better at managing plain binaries than Docker containers. Either way, this is how you would run a containerized kubelet:

sudo docker run \
    --rm \
    -it \
    --privileged \
    -v /dev:/dev \
    -v /run:/run \
    -v /sys:/sys \
    -v /etc/kubernetes:/etc/kubernetes \
    -v /usr/share/ca-certificates:/etc/ssl/certs \
    -v /var/lib/docker:/var/lib/docker \
    -v /var/lib/kubelet:/var/lib/kubelet \
    -v /:/rootfs \
    quay.io/coreos/hyperkube:v1.5.3_coreos.0 \
    ./hyperkube \
        kubelet \
        --network-plugin=cni \
        --cni-conf-dir=/etc/kubernetes/cni/net.d \
        --cni-bin-dir=/opt/cni/bin \
        --pod-manifest-path=/etc/kubernetes/manifests \
        --allow-privileged \
        --hostname-override=10.7.183.60 \
        --cluster-dns=10.3.0.10 \
        --cluster-domain=cluster.local \
        --kubeconfig=/etc/kubernetes/kubeconfig \
        --require-kubeconfig \
        --lock-file=/var/run/lock/kubelet.lock \
        --containerized

It's not quite working yet, though. The node comes up, registers successfully with the master, and starts daemonsets. Everything comes up except flannel:

main.go:127] Failed to create SubnetManager: unable to initialize inclusterconfig: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory

Resources and References
