Fix #1348: Add YAMLs to deploy development branch using Kubernetes
Also add document explaining the steps
singhpratyush committed Jul 22, 2017
1 parent ee880ff commit 6930c2f
Showing 7 changed files with 168 additions and 0 deletions.
82 changes: 82 additions & 0 deletions docs/installation/deploying-development-kubernetes.md
@@ -0,0 +1,82 @@
# Deploying `development` version of loklak on Kubernetes

## 1. Background of the Deployment

### 1.1. API Server and Elasticsearch
The API server and Elasticsearch coexist in the `web` namespace. The API server uses `NodeBuilder` to create a node-based Elasticsearch cluster, with the dump and index stored on the `/loklak_server/data` volume.

### 1.2. Persistent Storage for Data Dump and Elasticsearch Index
The data dump and Elasticsearch index are mounted on an external persistent disk so that rolling updates do not wipe out the data.

## 2. Steps

### 2.1. Create a Kubernetes cluster

A Kubernetes cluster can be created easily using the `gcloud` CLI. More details are available [here](https://github.com/loklak/loklak_server/blob/development/docs/installation/installation_google_cloud_kubernetes.md#7-creating-a-container-cluster).

### 2.2. Clone the loklak project

```bash
git clone https://github.com/loklak/loklak_server
cd loklak_server
```

Ensure that you are on the `development` branch.

```bash
git checkout development
```

### 2.3. Create a Persistent Disk

```bash
gcloud compute disks create --size=100GB --zone=<same as cluster zone> data-index-disk
```

### 2.4. Create Kubernetes objects using configuration files

```bash
kubectl create -R -f kubernetes/yamls/development
```

### 2.5. Label the Node

The persistent disk is attached to the node running `api-server`. During a rolling update, however, the new container may be created on a different node.

This would cause issues, as the new instance would try to mount a disk that is already in use by the older instance.

To avoid this, we label a node and force new containers to be scheduled on it. A node selector for this is already present in the `api-server` deployment configuration.

You can get the node name by running:

```bash
kubectl get nodes
```

Choose one of the nodes and label it as `server=primary`.

```bash
kubectl label nodes <node-name> server=primary
```
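To confirm the label was applied, you can filter nodes by it using a standard `kubectl` label selector (this check is not part of the original steps):

```shell
# Show only nodes labeled server=primary; the api-server pod will be
# scheduled on this node via the deployment's nodeSelector.
kubectl get nodes -l server=primary
```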

### 2.6. Wait for the LoadBalancer to become active

After some time, we can see the public IP by running the following command:

```bash
kubectl get services --namespace=web
```
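If you only want the IP itself (for scripting, say), `kubectl`'s `jsonpath` output format can extract it once the LoadBalancer has provisioned one; the service name `server` comes from `api-service.yml` below:

```shell
# Print just the external IP of the server service.
# Output is empty while the LoadBalancer is still provisioning.
kubectl get service server --namespace=web \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```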

## 3. Updating deployment

### 3.1. Setting a new Docker image

To update the deployment image, we can use the following command:

```bash
kubectl set image deployment/server --namespace=web server=<image name>
```
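The progress of the rolling update can then be followed, and a bad image rolled back, with the standard `kubectl rollout` subcommands (not covered in the original doc):

```shell
# Watch the rolling update until the new pod is ready.
kubectl rollout status deployment/server --namespace=web

# If the new image misbehaves, go back to the previous revision.
kubectl rollout undo deployment/server --namespace=web
```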

### 3.2. Updating configurations

While updating the configurations, ensure that the `api-service.yml` configuration is not recreated. This retains the previous IP and saves us the trouble of updating DNS records.
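One way to script this (a sketch, not part of the original commit; the file list mirrors this commit's `kubernetes/yamls/development/api-server` directory) is to re-apply everything except the service:

```shell
# Re-apply the development configs, skipping api-service.yml so the
# existing LoadBalancer (and its public IP) is left untouched.
if ! command -v kubectl >/dev/null 2>&1; then
  # Dry-run fallback: print the commands when no cluster is available.
  kubectl() { echo "+ kubectl $*"; }
fi

DIR=kubernetes/yamls/development/api-server
for f in 00-namespace.yml api-claim.yml api-deployment.yml \
         api-persistence.yml configmap.yml api-service.yml; do
  if [ "$f" = "api-service.yml" ]; then
    echo "skipping $DIR/$f (recreating it would change the public IP)"
  else
    kubectl apply --namespace=web -f "$DIR/$f"
  fi
done
```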
4 changes: 4 additions & 0 deletions kubernetes/yamls/development/api-server/00-namespace.yml
@@ -0,0 +1,4 @@
apiVersion: v1
kind: Namespace
metadata:
  name: web
12 changes: 12 additions & 0 deletions kubernetes/yamls/development/api-server/api-claim.yml
@@ -0,0 +1,12 @@
# Setup PVC for smooth switchover while updating image
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: server-data-claim
  namespace: web
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
34 changes: 34 additions & 0 deletions kubernetes/yamls/development/api-server/api-deployment.yml
@@ -0,0 +1,34 @@
# Main deployment for API server
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: server
  namespace: web
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: server
    spec:
      nodeSelector:
        server: primary
      containers:
        - name: server
          image: loklak/loklak_server:development
          volumeMounts:
            - mountPath: /loklak_server/data
              name: data-index
          livenessProbe:
            httpGet:
              path: /api/status.json
              port: 80
            initialDelaySeconds: 30
            timeoutSeconds: 3
          ports:
            - containerPort: 80
              protocol: TCP
      volumes:
        - name: data-index
          persistentVolumeClaim:
            claimName: server-data-claim
17 changes: 17 additions & 0 deletions kubernetes/yamls/development/api-server/api-persistence.yml
@@ -0,0 +1,17 @@
# Configuration of persistent volume
# gcloud compute disks create --size=100GB --zone=... data-index-disk
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-index-disk
  namespace: web
  labels:
    name: data-index-disk
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: data-index-disk
    fsType: ext4
14 changes: 14 additions & 0 deletions kubernetes/yamls/development/api-server/api-service.yml
@@ -0,0 +1,14 @@
# Expose port 80 using load balancer
kind: Service
apiVersion: v1
metadata:
  name: server
  namespace: web
spec:
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: server
  type: LoadBalancer
5 changes: 5 additions & 0 deletions kubernetes/yamls/development/api-server/configmap.yml
@@ -0,0 +1,5 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: server
  namespace: web
