Set up Couchbase Operator 1.2 on open-source Kubernetes using minikube
The deployment uses command-line tools throughout
Prerequisites
Env details
Deploy admission controller
Deploy Couchbase Autonomous Operator
Deploying a Couchbase cluster with the following details
* PV
* TLS certificates
Delete a pod
Check that cluster self-heals
Cluster is healthy
Scaling up and down
Backup and Restore Couchbase server
Run sample Python application using CB Python SDK
-
CLI / UI
$ brew update
-
Install hypervisor from link below
https://download.virtualbox.org/virtualbox/6.0.10/VirtualBox-6.0.10-132072-OSX.dmg
-
Install minikube
$ brew cask install minikube
-
Install kubectl
https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-on-macos
-
Kubernetes cluster with supported version
-
Start minikube
$ sudo minikube start
$ sudo kubectl cluster-info
-
minikube on macOS: v1.2.0
-
Set vCPUs to 4 and memory to 4 GiB so that the Couchbase Operator can run on a laptop
sudo minikube config set memory 4096
sudo minikube config set cpus 4
$ sudo minikube config view
- cpus: 4
- memory: 4096
$ sudo kubectl get nodes
NAME STATUS ROLES AGE VERSION
minikube Ready master 3d11h v1.15.0
- cd into the files dir to access the required YAML files.
First we will create a namespace to localize our deployment
$ sudo kubectl create namespace cbdb
- Deploy the admission controller
$ sudo kubectl create -f admission.yaml --namespace cbdb
-
Query the deployment
$ sudo kubectl get deployments --namespace cbdb
NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
couchbase-operator-admission   1/1     1            1           11m
-
Deploy the Custom Resource Definition
The scope of the CRD can be Kubernetes cluster-wide or localized to a namespace. The choice is up to the DevOps/k8s administrator. In the example below it is localized to a particular namespace.
sudo kubectl create -f crd.yaml --namespace cbdb
-
Deploy Operator Role
sudo kubectl create -f operator-role.yaml --namespace cbdb
-
Create service account
sudo kubectl create serviceaccount couchbase-operator --namespace cbdb
-
Bind the service account 'couchbase-operator' to the operator role
sudo kubectl create rolebinding couchbase-operator --role couchbase-operator --serviceaccount cbdb:couchbase-operator --namespace cbdb
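The imperative command above is equivalent to applying a RoleBinding manifest. A sketch of the declarative form (names mirror the commands above), in case you prefer to keep RBAC in version control:

```yaml
# RoleBinding granting the operator-role to the couchbase-operator
# service account, scoped to the cbdb namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: couchbase-operator
  namespace: cbdb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: couchbase-operator
subjects:
  - kind: ServiceAccount
    name: couchbase-operator
    namespace: cbdb
```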
-
Deploy the Operator
sudo kubectl create -f operator-deployment.yaml --namespace cbdb
-
Query deployment
$ sudo kubectl get deployment --namespace cbdb
NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
couchbase-operator             1/1     1            1           20m
couchbase-operator-admission   1/1     1            1           20m
Using the help file below, make sure to use the appropriate namespace; here I have used 'cbdb'. Link is here
$ sudo kubectl get secrets --namespace cbdb
NAME TYPE DATA AGE
couchbase-operator-tls Opaque 1 14h
couchbase-server-tls Opaque 2 14h
sudo kubectl create -f secret.yaml --namespace cbdb
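In the operator 1.2 package, secret.yaml holds the cluster's admin credentials. A sketch of what it typically contains; the name cb-example-auth and the Administrator/password values come from the Couchbase sample files, so verify against your copy:

```yaml
# Admin credentials referenced by the CouchbaseCluster's authSecret field.
apiVersion: v1
kind: Secret
metadata:
  name: cb-example-auth
type: Opaque
data:
  username: QWRtaW5pc3RyYXRvcg==   # "Administrator", base64-encoded
  password: cGFzc3dvcmQ=           # "password", base64-encoded
```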
$ sudo kubectl get storageclass
NAME PROVISIONER AGE
standard (default) k8s.io/minikube-hostpath 3d14h
sudo kubectl create -f couchbase-persistent-cluster-tls-k8s-minikube.yaml --namespace cbdb
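A minimal sketch of what couchbase-persistent-cluster-tls-k8s-minikube.yaml covers, assuming the operator 1.2 schema. Field names and values here are illustrative; check them against the sample YAML shipped with the operator:

```yaml
apiVersion: couchbase.com/v1
kind: CouchbaseCluster
metadata:
  name: cb-opensource-k8s
spec:
  baseImage: couchbase/server
  version: enterprise-6.0.1        # illustrative version
  authSecret: cb-example-auth      # admin credentials secret
  tls:                             # the custom x509 certs created earlier
    static:
      member:
        serverSecret: couchbase-server-tls
      operatorSecret: couchbase-operator-tls
  servers:
    - size: 3
      name: data
      services:
        - data
      pod:
        volumeMounts:
          default: couchbase       # binds the PVC template below
  volumeClaimTemplates:
    - metadata:
        name: couchbase
      spec:
        storageClassName: standard # minikube's default storage class
        resources:
          requests:
            storage: 1Gi
```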
$ sudo kubectl get pods --namespace cbdb
NAME READY STATUS RESTARTS AGE
cb-opensource-k8s-0000 1/1 Running 0 5h58m
cb-opensource-k8s-0001 1/1 Running 0 5h58m
cb-opensource-k8s-0002 1/1 Running 0 5h57m
couchbase-operator-864685d8b9-j72jd 1/1 Running 0 20h
couchbase-operator-admission-7d7d594748-btnm9 1/1 Running 0 20h
- Get the service details for Couchbase cluster
$ sudo kubectl get svc --namespace cbdb
NAME                   TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)                          AGE
cb-opensource-k8s-ui   NodePort   10.100.90.161   <none>        8091:30477/TCP,18091:30184/TCP   6h11m
$ sudo kubectl port-forward service/cb-opensource-k8s-ui 8091:8091 --namespace cbdb
Forwarding from 127.0.0.1:8091 -> 8091
Forwarding from [::1]:8091 -> 8091
Verify the root CA to check that the custom x509 certificate is being used
Click Security->Root Certificate
Delete a pod at random; let's delete pod 0001
$ sudo kubectl delete pod cb-opensource-k8s-0001 --namespace cbdb
pod "cb-opensource-k8s-0001" deleted
The server would automatically fail over, depending on the autoFailoverTimeout setting.
The lost Couchbase pod is auto-recovered by the Couchbase Operator, since it is constantly watching the cluster definition.
Change the cluster size from 3 to 4
--- a/opensrc-k8s/cmd-line/files/couchbase-persistent-cluster-tls-k8s-minikube.yaml
enableIndexReplica: false
compressionMode: passive
servers:
- - size: 3
+ - size: 4
name: data
services:
- data
Run
sudo kubectl apply -f couchbase-persistent-cluster-tls-k8s-minikube.yaml --namespace cbdb
It is the exact opposite of scaling up: reduce the cluster to any number of nodes, but not fewer than 3. The Couchbase minimum viable deployment is 3 nodes.
Backup and restore the Couchbase server
$ sudo kubectl create namespace apps
namespace/apps created
Deploy the app pod
$ sudo kubectl create -f app_pod.yaml --namespace apps
pod/app01 created
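app_pod.yaml itself is not shown in this post; a plausible minimal sketch is below. The image and command are assumptions (the u'' strings in the output later suggest Python 2, but any image with Python works):

```yaml
# A bare pod that stays alive so we can kubectl exec into it.
apiVersion: v1
kind: Pod
metadata:
  name: app01
spec:
  containers:
    - name: app01
      image: python:2.7-slim            # assumption: sample output is Python 2
      command: ["sleep", "infinity"]    # keep the pod running for exec
```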
- Run the sample Python program to upsert a document into the Couchbase cluster
Log in to the pod's shell (exec into the app pod)
$ sudo kubectl exec -ti app01 bash --namespace apps
Prep the pod for installing the Python SDK
Edit the program with FQDN of the pod
Run the command below after exec'ing into the Couchbase pod
$ sudo kubectl exec -ti cb-opensource-k8s-0000 bash --namespace cbdb
root@cb-opensource-k8s-0000:/# hostname -f
cb-opensource-k8s-0000.cb-opensource-k8s.cbdb.svc.cluster.local
Edit the program with correct connection string
Connection string for me looks like below:
cluster = Cluster('couchbase://cb-opensource-k8s-0000.cb-opensource-k8s.cbdb.svc.cluster.local')
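The connection string is just the pod's cluster-local FQDN with a couchbase:// scheme. A small sketch of how it is assembled; the helper names here are mine, not part of the SDK:

```python
# Build the in-cluster Couchbase connection string from pod, service,
# and namespace. The FQDN pattern kube-dns resolves is:
#   <pod>.<service>.<namespace>.svc.cluster.local
def couchbase_fqdn(pod, service, namespace):
    return "{0}.{1}.{2}.svc.cluster.local".format(pod, service, namespace)

def connection_string(pod, service, namespace):
    return "couchbase://" + couchbase_fqdn(pod, service, namespace)

print(connection_string("cb-opensource-k8s-0000", "cb-opensource-k8s", "cbdb"))
```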
Since both namespaces in minikube share the same kube-dns, the application pod can resolve the Couchbase pod's FQDN.
Run the program
root@app01:/# python python_sdk_example.py
CB Server connection PASSED
Open the bucket...
Done...
Upserting a document...
Done...
Getting non-existent key. Should fail..
Got exception for missing doc
Inserting a doc...
Done...
Getting an existent key. Should pass...
Value for key 'babyliz_liz'
{u'interests': [u'Holy Grail', u'Kingdoms and Dungeons'], u'type': u'Royales', u'name': u'Baby Liz', u'email': u'babyliz@cb.com'}
Delete a doc with key 'u:baby_arthur'...
Done...
Value for key [u:baby_arthur]
Got exception for missing doc for key [u:baby_arthur] with error <Key=u'u:baby_arthur', RC=0xD[The key does not exist on the server], Operational Error, Results=1, C Source=(src/multiresult.c,316), Tracing Output={"u:baby_arthur": {"c": "0000000036fb5729/523b08473029eae3", "b": "default", "i": 1754553113405298788, "l": "172.17.0.9:36304", "s": "kv:Unknown", "r": "cb-opensource-k8s-0001.cb-opensource-k8s.cbdb.svc:11210", "t": 2500000}}>
Closing connection to the bucket...
root@app01:/#
The upserted document should look like this
We deployed Couchbase Autonomous Operator 1.2 on minikube v1.2.0. The Couchbase cluster requires an admission controller and RBACs, with the role limited to the namespace (more secure). The CRD deployed has cluster-wide scope, but that is by design. The Couchbase cluster deployed had PV support and custom x509 certs. We saw how the Couchbase cluster self-heals and brings itself back up healthy without any user intervention.
We also saw how to install the Couchbase Python SDK in an application pod deployed in its own namespace, and how that application can talk to the Couchbase server and perform CRUD operations.
Perform the steps below to tear down all the k8s assets created.
sudo kubectl delete -f secret.yaml --namespace cbdb
sudo kubectl delete -f couchbase-persistent-cluster-tls-k8s-minikube.yaml --namespace cbdb
sudo kubectl delete rolebinding couchbase-operator --namespace cbdb
sudo kubectl delete serviceaccount couchbase-operator --namespace cbdb
sudo kubectl delete -f operator-deployment.yaml --namespace cbdb
sudo kubectl get deployments --namespace cbdb
sudo kubectl delete -f admission.yaml --namespace cbdb
sudo kubectl delete pod app01 --namespace apps