
Kops Workshop

This workshop will cover usage of kubernetes/kops and utilities such as channels.

Workshop Introduction

The kops tool aims to manage Kubernetes clusters in the same way Kubernetes itself manages resources: through desired-state manifests.

Kubernetes uses etcd for state storage; similarly, kops uses a state store, which can be either a Google Cloud Storage or an S3 bucket.

An S3 bucket for state storage is created as part of this Workshop setup.

Additionally, kops bundles a utility called channels to deploy Kubernetes add-ons, which we will also cover in this workshop.

Kops cluster maintenance

Load env vars

On your workstation, an .env file has been created with all configuration kops needs for the following exercises.

Verify the contents of the .env file, then load these variables into your shell environment:

export $(cat .env | xargs)
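
For illustration only, a minimal .env might define the cluster name and the kops state store; the values below are placeholders and your pre-generated file will differ:

CLUSTER_NAME=bee02-cluster.training.honestbee.com
KOPS_STATE_STORE=s3://<your-kops-state-store-bucket>
AWS_DEFAULT_REGION=ap-southeast-1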

Create cluster spec

Similar to kubectl, kops provides imperative commands to generate cluster definitions:

kops create cluster \
    --node-count 3 \
    --zones ap-southeast-1a \
    --master-zones ap-southeast-1a \
    --node-size t2.medium \
    --master-size t2.medium \
    --ssh-public-key ~/.ssh/kops_key.pub \
    ${CLUSTER_NAME}

Now verify that the cluster definition was created in the kops state store.

kops get cluster

# Also list instance groups related to the cluster
kops get --name $CLUSTER_NAME instancegroups

Get/Set cluster definitions

Ideally, we keep these cluster definitions as manifests under source control (infrastructure as code).

To download these manifests, use the get subcommand with --output yaml, just like in Kubernetes:

Get:

kops get cluster ${CLUSTER_NAME} -o yaml > ${CLUSTER_NAME}-cluster.yaml
kops get --name ${CLUSTER_NAME} instancegroups -o yaml > ${CLUSTER_NAME}-ig.yaml

Note: use the --full flag to see all defaults.
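
For example, to see the fully populated cluster spec including all defaults:

kops get cluster ${CLUSTER_NAME} -o yaml --full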

Review / Edit the cluster and instance group manifests

vim ${CLUSTER_NAME}-cluster.yaml
vim ${CLUSTER_NAME}-ig.yaml
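
As an alternative, kops can open the stored manifests directly in your $EDITOR (the default node instance group is typically named nodes); for an infrastructure-as-code workflow, editing the exported files and pushing them back with kops replace (below) is preferred:

kops edit cluster ${CLUSTER_NAME}
kops edit ig --name ${CLUSTER_NAME} nodes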

Read more about these manifests in the kops documentation.

During cluster bootstrap, manifests are read from the state store by the bootstrapping components, so we need to ensure any edits are pushed back to the state store.

Set:

kops replace -f ${CLUSTER_NAME}-cluster.yaml
kops replace -f ${CLUSTER_NAME}-ig.yaml

Generate Terraform config

kops update cluster --name ${CLUSTER_NAME} \
  --target=terraform \
  --out=modules/clusters/${CLUSTER_NAME} 

Note: at this stage, kops also automatically configures your kubeconfig. The kubeconfig can be exported manually as well:

kops export --name ${CLUSTER_NAME} kubecfg
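
A quick check that the kubeconfig context was set (this only reads the local kubeconfig; the cluster does not need to be up yet):

kubectl config current-context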

Build cluster

As we heavily use Terraform modules and manage infrastructure outside of Kubernetes with Terraform, we import the kops-generated module into our main.tf file:

module "cluster-bee02" {
  source = "./modules/clusters/bee02-cluster.training.honestbee.com"
}

Initialise, plan and apply the Terraform configuration:

terraform init
terraform plan
terraform apply

Wait for the cluster to be ready...

i=0; until kubectl cluster-info; do (( i++ )); echo "Cluster not available yet, waiting for 5 seconds ($i)"; sleep 5; done
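
kops also ships a validation command that checks instance groups, node registration and system pods once the API server is reachable:

kops validate cluster --name ${CLUSTER_NAME}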

Troubleshooting

  • Get the public IP from the master and ssh into it

    ssh -i ~/.ssh/kops_key admin@54.254.203.127
    
  • Check the status of the systemd units (kubelet / docker)

    sudo systemctl status kubelet
    sudo systemctl status docker
    
  • Follow the kubelet journal logs and look for errors

    sudo journalctl -u kubelet -f
    
  • Follow the api-server logs and look for errors

    sudo tail -f /var/log/kube-apiserver.log
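
  • Check node registration and system pods from the workstation (standard kubectl commands, assuming your kubeconfig already points at this cluster)

    kubectl get nodes -o wide
    kubectl -n kube-system get pods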
    

Rolling Updates

...To be completed (note about the danger of unbalanced clusters)

Kops addon channels

Kubernetes addons are bundles of resources that provide specific functionality (such as dashboards, auto-scaling, ...). Multiple addons can be versioned together and managed through the concept of addon channels. The channels tool bundled with kops aims to simplify the management of addons. It is similar to Helm, but without the need for a server-side component; however, it does not provide the templating and release management that Helm offers.

Addon channels are defined as a list of addons stored in an addons.yaml file. This list keeps track of all addon versions applicable to a particular channel. Each addon may consist of multiple Kubernetes resource manifests combined into a single YAML file. The channels tool keeps track of which addon version is deployed in a cluster and automates the creation of all addons in the channel.

Deploy upstream channels

There are several upstream channels, such as the dashboard and heapster; we can install these as follows:

channels apply channel monitoring-standalone --yes
channels apply channel kubernetes-dashboard --yes
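
To verify that the addon workloads came up (both of these upstream addons deploy into the kube-system namespace):

kubectl -n kube-system get deployments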

Currently, channels is hardcoded to expand simple channel names such as kubernetes-dashboard into a path under the addons directory on the master branch of kubernetes/kops, where it looks for the addons.yaml list. See the channels/pkg/cmd/apply_channel.go source.

At this stage, we can review all addons that were deployed by channels (notice that several addons were already deployed as part of the kops cluster bootstrap):

channels get addons

Good to know: behind the scenes, channels uses annotations on the kube-system namespace to keep track of deployed addon versions. We can get similar output to channels get addons using kubectl and jq:

kubectl get ns kube-system -o json | jq '.metadata.annotations | with_entries(select(.value | contains("addons"))) | map_values(fromjson | .version)'

Now that the dashboard is deployed, notice that because we did not make our cluster private, we can access the dashboard from anywhere (it requires basic-auth):

https://api.bee02-cluster.training.honestbee.com/ui

Once we have accepted the untrusted cluster root certificate, we can get a list of basic-auth credentials from our kubeconfig:

kubectl config view -o json | jq '[.users[] | select(.name | contains("basic-auth")) | {(.name): {(.user.username): .user.password}}]'

Deploy custom Honestbee - beekeeper channel

As Honestbee depends on Helm for all of its deployments, we created our own addon channel called beekeeper to bootstrap Helm and other core Kubernetes addons (namespaces, service accounts, registry secrets, RBAC, ...). Sample addons are provided on your workstation for practice purposes.

beekeeper/
├── addons.yaml
├── kube-state-metrics.addons.k8s.io
│   ├── README.md
│   ├── v1.0.1.yaml
│   └── v1.1.0-rc.0.yaml
├── namespaces.honestbee.io
│   └── k8s-1.7.yaml
└── tiller.addons.k8s.io
    └── k8s-1.7.yaml

To apply this channel to the cluster, run the following command:

channels apply channel -f beekeeper/addons.yaml --yes
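
After the apply, channels get addons should also list the beekeeper addons; the tiller addon can additionally be checked directly (assuming it creates the standard tiller-deploy deployment in kube-system):

channels get addons
kubectl -n kube-system get deploy tiller-deploy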

Cleaning up

Delete cluster

As the cloud resources are managed through Terraform, the only thing kops needs to do is remove the cluster definition from the state store (unregister the cluster):

kops delete cluster --name ${CLUSTER_NAME} --unregister --yes
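
The cloud resources themselves are then torn down through Terraform, either by removing the module block from main.tf and re-applying, or, if this workspace only manages the workshop cluster, with a full destroy:

terraform destroy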

Todo

  • Add section about rolling updates
  • Add section about kops toolbox template
  • Add section on how to clean up clusters