
Alpha Cluster Administration


Configure kubectl on workstation

The cluster can be managed entirely from a local workstation using the kubectl command. These instructions assume kubectl has already been installed on the workstation in question.

The steps to use the shared kubectl configuration are as follows:

  1. Decrypt kubectl configuration
  2. Place decrypted configuration in kubectl config directory
  3. Verify kubectl can connect to the cluster

From within the ops repo directory:

(
  set -e
  gpg -do kubectl.kubeconfig kubernetes/alpha-cluster/workstation-resources/kubectl.kubeconfig.asc
  test -e ~/.kube || mkdir ~/.kube
  mv -i kubectl.kubeconfig ~/.kube/config
  kubectl get nodes
)

Deploy a new project to the cluster

These steps assume that the project being deployed has already been published to a container registry reachable by the cluster nodes. The high-level process is as follows:

  1. Create project namespace
  2. Create storage volumes and persistent volume claims, if needed
  3. Create secrets
  4. Create deployments for each project container
  5. Create services for each deployment
  6. Add project to edge routing service
  7. Rebuild edge routing container
  8. Redeploy edge routing container

For a full end-to-end example of this process, refer to the Yadaguru project deployment.

Create project namespace

By convention, each project should have its own namespace on the kubernetes cluster, and all of its related kubernetes object files should be stored in a directory sharing the name of that namespace, located at kubernetes/alpha-cluster/namespaces/${project_name}. The kubernetes object used to initialize the namespace should live at kubernetes/alpha-cluster/namespaces/${project_name}/init.yml and can be as simple as the following:

cat <<EOF > kubernetes/alpha-cluster/namespaces/${project_name?}/init.yml
apiVersion: v1
kind: Namespace
metadata:
  name: ${project_name?}
EOF
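
Once committed, the namespace can be created on the cluster with a standard kubectl apply:

kubectl apply -f kubernetes/alpha-cluster/namespaces/${project_name?}/init.yml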

Create storage volume & persistent volume claims

This should only be necessary for projects which need direct access to filesystem level persistent storage. Otherwise, projects should be encouraged to use a shared database service for storage. For reference on these steps, see section: Create and expose container volume

Create secrets

Each project should have a single secrets file (kubernetes/alpha-cluster/namespaces/${project_name}/secrets.yml by convention) containing all credentials and other secrets that need to be exposed to containers. These files should be blackbox encrypted in the git repository. It will generally be desirable to expose these secrets to containers as environment variables.
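
A minimal sketch of such a file before encryption; the secret name db-credentials and the keys DB_USER/DB_PASS are purely illustrative, not a repository convention:

cat <<EOF > kubernetes/alpha-cluster/namespaces/${project_name?}/secrets.yml
apiVersion: v1
kind: Secret
metadata:
  # illustrative secret name
  name: db-credentials
  namespace: ${project_name?}
type: Opaque
stringData:
  # illustrative keys; exposed to containers as env vars via the deployment
  DB_USER: example-user
  DB_PASS: example-password
EOF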

Create deployments for each project container

The deployment object file for each container should be stored by convention under kubernetes/alpha-cluster/namespaces/${project_name}/${container_name}/deployment.yml.
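
A minimal sketch of such a deployment object; the registry host, image tag, container port, and secret name below are illustrative placeholders, not repository conventions:

cat <<EOF > kubernetes/alpha-cluster/namespaces/${project_name?}/${container_name?}/deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ${container_name?}
  namespace: ${project_name?}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ${container_name?}
  template:
    metadata:
      labels:
        app: ${container_name?}
    spec:
      containers:
        - name: ${container_name?}
          # registry host and tag are placeholders
          image: registry.example.com/${project_name?}/${container_name?}:1
          ports:
            - containerPort: 8080
          # expose the project secrets as environment variables
          envFrom:
            - secretRef:
                name: db-credentials
EOF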

Create services for each project container

The service object file for each container should be stored by convention under kubernetes/alpha-cluster/namespaces/${project_name}/${container_name}/service.yml.
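
A minimal sketch of a matching service object; the ports are placeholders corresponding to the deployment sketch above:

cat <<EOF > kubernetes/alpha-cluster/namespaces/${project_name?}/${container_name?}/service.yml
apiVersion: v1
kind: Service
metadata:
  name: ${container_name?}
  namespace: ${project_name?}
spec:
  selector:
    app: ${container_name?}
  ports:
    # placeholder ports matching the deployment sketch above
    - port: 80
      targetPort: 8080
EOF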

Add project to edge routing service

In order to route public traffic to a project container, an nginx configuration needs to be created under docker/images/openresty-edge/conf.d/${project}-${container}.conf which routes traffic appropriately.
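
An illustrative sketch for the Yadaguru project, assuming a hypothetical container named web and a made-up public hostname; kubernetes service DNS names follow the <service>.<namespace>.svc.cluster.local pattern:

# docker/images/openresty-edge/conf.d/yadaguru-web.conf (illustrative)
server {
    listen 80;
    # placeholder public hostname
    server_name yadaguru.example.org;

    location / {
        # route to the project's kubernetes service by cluster DNS name
        proxy_pass http://web.yadaguru.svc.cluster.local;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}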

Rebuild edge routing container

The edge routing container is currently built locally on each node rather than pulled from a central registry. The container builds are managed by salt. In order to trigger a new build, the container's revision number must first be incremented in the salt pillar. Once that is done, each node must pull down the latest version of the git repository and then be highstated.
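
A sketch of the per-node steps after the revision has been incremented, assuming the ops repository is checked out at /ops as in the pillar symlink example further below:

(
  set -e
  # pull the latest ops repository, including the incremented revision
  git -C /ops pull
  # highstate the node to trigger the salt-managed container rebuild
  salt-call --local state.highstate
)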

Redeploy edge routing container

Prior to performing this step, all project containers and their respective services must be deployed into the kubernetes cluster. Otherwise, the edge service will fail to start because it cannot locate the service in question.

To redeploy, increment the revision number of the container being deployed in the deployment configuration, then apply the updated resource to the cluster:

kubectl apply -f kubernetes/alpha-cluster/namespaces/kube-system/daemonsets/openresty-edge.yml

Access cluster services via kubectl

The alpha cluster runs several services, including web-accessible resources such as Kibana and Grafana. Access to these resources must be proxied through the kubernetes master using kubectl. The steps to access a service are as follows; the example below uses the Kibana dashboard.

  1. Acquire proxy information for cluster services
  2. Open proxy to kubernetes master
  3. Access resource via HTTP proxy

Note: This example is illustrative and is not suitable for copy/paste.

$ kubectl cluster-info
Kubernetes master is running at https://kubmaster01:443
Elasticsearch is running at https://kubmaster01:443/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging
Heapster is running at https://kubmaster01:443/api/v1/proxy/namespaces/kube-system/services/heapster
Kibana is running at https://kubmaster01:443/api/v1/proxy/namespaces/kube-system/services/kibana-logging
KubeDNS is running at https://kubmaster01:443/api/v1/proxy/namespaces/kube-system/services/kube-dns
Grafana is running at https://kubmaster01:443/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
InfluxDB is running at https://kubmaster01:443/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb
# This command will block until you send SIGINT
$ kubectl proxy
Starting to serve on 127.0.0.1:8001
# Now you would access http://127.0.0.1:8001/api/v1/proxy/namespaces/kube-system/services/kibana-logging from your browser
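
While the proxy is running, the same path can be checked from the terminal, e.g.:

curl -s http://127.0.0.1:8001/api/v1/proxy/namespaces/kube-system/services/kibana-logging/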

Create and expose container volume

Creating and exposing new volumes for use by containers is a two-step process:

  1. Create volume on NFS server
  2. Create kubernetes PersistentVolume resource which can be claimed

Create volume on NFS server

In order to create a new volume for a container, define the container volume in the pillar of whichever storage machine will house the volume, then highstate the machine.
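
The exact pillar schema is defined in this repository; purely as a hypothetical illustration (the container_volumes key and its fields below are invented for this sketch):

# pillar for the storage machine (hypothetical schema)
container_volumes:
  myproject:
    path: /srv/volumes/myproject
    mode: '0770'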

Create PersistentVolume resource

The creation of these resources is automated based on the pillar used to create the container volume on the NFS server. From the master, ensure the cluster pillar has been linked into /srv/pillar and perform a highstate to create and update these resources.

(
  set -e
  test -e /srv/pillar || ln -s /ops/kubernetes/alpha-cluster/pillar /srv/pillar
  salt-call --local state.highstate
)

Managing Secrets

In git

The secrets in the ops git repository are encrypted to the GPG keys of all admin users. This process is simplified through the use of [Blackbox](https://github.com/StackExchange/blackbox), which manages encryption/decryption and maintains the keyring of keys to which secrets are encrypted.
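
The most common operations, using commands provided by the upstream Blackbox tool:

# register and encrypt a new secrets file
blackbox_register_new_file kubernetes/alpha-cluster/namespaces/${project_name?}/secrets.yml
# decrypt, edit in $EDITOR, and re-encrypt an existing file
blackbox_edit kubernetes/alpha-cluster/namespaces/${project_name?}/secrets.yml
# add a new admin's GPG key, then re-encrypt all files to include it
blackbox_addadmin new.admin@example.org
blackbox_update_all_files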

Exposing to projects

Project secrets (e.g., database credentials) are stored in the git repository as encrypted kubernetes API secrets. Loading these from the repository into the kubernetes master can easily be done using blackbox and kubectl, e.g.:

blackbox_cat kubernetes/alpha-cluster/namespaces/${namespace?}/secrets.yml | kubectl create -f -

Cluster services

Postgres Database

The most common action performed on the Postgres database is adding new users. The container has a built-in script which adds new users as needed from a list maintained on a storage volume in the container. The process of adding a new user is illustrated in the example below.

Variables:

  • postgres_pod: pod name, as reported by kubectl get pod
  • user: Postgres DB username
  • passwd: Postgres user password
  • project: Project name

# enter the postgres container
kubectl exec -ti ${postgres_pod?} -- /bin/sh
# from inside the container: configure the new user & db
printf '%s\t%s\n' "${user?}" "${passwd?}" > "/userdata/${project?}.txt"
/bin/sh /docker-entrypoint-initdb.d/create-databases.sh

Graylog

Each project is deployed into its own kubernetes namespace, which makes its log messages distinguishable within Graylog by their message tags. All log messages from a project should be assigned to a dedicated stream so that the project's users can be granted access to that stream exclusively.
