The resources (your apps) you create can be organized into namespaces, which are particularly helpful when working with a larger team.
You can create a namespace via a simple command:
$ kubectl create namespace demo
$ kubectl get namespaces
NAME          STATUS    AGE
default       Active    22h
demo          Active    6m
kube-public   Active    21h
kube-system   Active    22h
The -o yaml trick is a great way to see what the command created
$ kubectl get namespace demo -o yaml
apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: 2018-07-29T22:27:27Z
  name: demo
  resourceVersion: "21330"
  selfLink: /api/v1/namespaces/demo
  uid: 92a63dee-937e-11e8-9a0f-080027cd86aa
spec:
  finalizers:
  - kubernetes
status:
  phase: Active
You can also use

$ kubectl get namespace/demo --export -o yaml

for something cleaner, something you might store and use later with kubectl apply -f
A Namespace can also be deleted via a command. Deleting the namespace will remove all objects that were created within it.
$ kubectl delete namespace/demo
Your objects should be created via a file that can be checked into source control. Create a Namespace via YAML:
$ cd 9stepsawesome
$ kubectl create -f kubefiles/myspace-namespace.yml          # (1)
$ kubectl config set-context --current --namespace=myspace   # (2)
# or if you are using minishift and oc, there is a shortcut
$ oc project myspace
# or if you installed kubectx, try
# kubens myspace
(1) Using the YAML file allows you to keep it in a source repository.
(2) Setting the namespace on the context means you have to type -n myspace less often. When using Minishift, "oc project myspace" is the equivalent command; in OpenShift, projects = namespaces.
kubefiles/myspace-namespace.yml
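That file is presumably just the minimal Namespace manifest, something like:

apiVersion: v1
kind: Namespace
metadata:
  name: myspace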
You can confirm your context with
$ kubectl config view --minify --template '{{ index .contexts 0 "context" "namespace" }}'
# or if you are using minishift and oc then there is a handy shortcut
$ oc project
# or you can use the kubens tool which comes with kubectx (brew install kubectx)
$ kubens
Create a standalone Pod
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: quarkus-demo
spec:
  containers:
  - name: quarkus-demo
    image: quay.io/burrsutter/quarkus-demo:1.0.0
EOF
Delete the Pod
$ kubectl delete pod quarkus-demo
Create a ReplicaSet
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: rs-quarkus-demo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: quarkus-demo
  template:
    metadata:
      labels:
        app: quarkus-demo
        env: dev
    spec:
      containers:
      - name: quarkus-demo
        image: quay.io/burrsutter/quarkus-demo:1.0.0
EOF
See what it produced
$ kubectl get pods --show-labels
NAME                    READY   STATUS    RESTARTS   AGE   LABELS
rs-quarkus-demo-64dnl   1/1     Running   0          49s   app=quarkus-demo,env=dev
rs-quarkus-demo-8pn8b   1/1     Running   0          29s   app=quarkus-demo,env=dev
rs-quarkus-demo-ksm5f   1/1     Running   0          16s   app=quarkus-demo,env=dev
Pick a pod and delete it
$ kubectl delete pod rs-quarkus-demo-rzfs2
and watch a new one spin up to take its place; it will be the youngest:
NAME                    READY   STATUS    RESTARTS   AGE
rs-quarkus-demo-7f56r   1/1     Running   0          6s
rs-quarkus-demo-mzb6m   1/1     Running   0          3m
rs-quarkus-demo-zxbhq   1/1     Running   0          3m
The Pods are owned by the ReplicaSet:
$ kubectl get pod rs-quarkus-demo-vpppb -o json | jq ".metadata.ownerReferences[]"
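The output is the owner reference pointing back at the ReplicaSet, roughly like this (your uid will differ):

{
  "apiVersion": "apps/v1",
  "blockOwnerDeletion": true,
  "controller": true,
  "kind": "ReplicaSet",
  "name": "rs-quarkus-demo",
  "uid": "..."
}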
Delete the ReplicaSet
$ kubectl delete replicaset rs-quarkus-demo
and watch all pods terminate
NAME                    READY   STATUS        RESTARTS   AGE
rs-quarkus-demo-kqbwr   0/1     Terminating   0          1m
rs-quarkus-demo-trc5t   0/1     Terminating   0          5m
rs-quarkus-demo-x92s7   0/1     Terminating   0          5m
Create a Deployment, which creates and manages a ReplicaSet for you

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: quarkus-demo-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: quarkus-demo
  template:
    metadata:
      labels:
        app: quarkus-demo
        env: dev
    spec:
      containers:
      - name: quarkus-demo
        image: quay.io/burrsutter/quarkus-demo:1.0.0
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
EOF
$ kubectl get pods --show-labels
NAME                                     READY   STATUS    RESTARTS   AGE   LABELS
quarkus-demo-deployment-68557484-b6jbr   1/1     Running   0          96s   app=quarkus-demo,env=dev,pod-template-hash=68557484
quarkus-demo-deployment-68557484-ccphv   1/1     Running   0          96s   app=quarkus-demo,env=dev,pod-template-hash=68557484
quarkus-demo-deployment-68557484-pwk9b   1/1     Running   0          96s   app=quarkus-demo,env=dev,pod-template-hash=68557484
Run curl inside of the Pod (more on "exec" later)
$ kubectl exec -it quarkus-demo-deployment-68557484-pwk9b -- curl localhost:8080
Note: no "naked" Pods. A Pod created without an owning controller (like the ReplicaSet or Deployment above) will not be rescheduled if it dies or its node fails.
The Service places those ephemeral, individual Pod IPs behind a single stable IP and load balancer
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: the-service
spec:
  selector:
    app: quarkus-demo
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
  type: NodePort
EOF
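You can also list the Pod IPs the Service selected, i.e. the Endpoints that requests will be load-balanced across:

$ kubectl get endpoints the-service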
curl those Endpoints behind that Service
$ IP=$(minikube --profile 9steps ip)
$ PORT=$(kubectl get service the-service -o jsonpath="{.spec.ports[*].nodePort}")
$ while true; do curl $IP:$PORT; sleep .5; done
Now that you have a Namespace, you need to craft some code
The "hello" directory contains several "hello world" style projects. Let’s use SpringBoot and run a quick test of the endpoint.
$ cd hello/springboot
$ mvn clean package
$ java -jar target/boot-demo-0.0.1.jar
$ curl localhost:8080
Note: You can also just use the image directly from Quay.io https://quay.io/repository/burrsutter/myboot?tab=tags
Next, create the Docker image. First, double-check that you are still connected to the correct Docker daemon:
$ docker images | grep 9steps
$ docker build -t 9stepsawesome/myboot:v1 .
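If that grep comes back empty, your shell is likely talking to your local Docker daemon rather than the VM's; with Minikube you can point it at the VM's daemon with:

$ eval $(minikube -p 9steps docker-env)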
Test that docker image
$ docker run -d -p 8080:8080 --name myboot 9stepsawesome/myboot:v1
# OR
# docker run -d -p 8080:8080 --name myboot quay.io/burrsutter/myboot:v1
# wait for container to come up (1)
$ curl $(minikube -p 9steps ip):8080   (2)
# If working, you can stop the container with
$ docker stop myboot
(1) You can check the status of the container from another window with: docker logs -f myboot
(2) Since the Docker container is running "inside" the minikube/minishift VM, you need to use its IP address.
Next, deploy it as a Pod inside of Kubernetes/OpenShift. You could use the "kubectl run" command, but here we will use a Deployment
$ kubectl create -f kubefiles/myboot-deployment.yml
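For reference, kubefiles/myboot-deployment.yml presumably looks roughly like this sketch (inferred from the output below; check the repo for the actual file):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myboot
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myboot
  template:
    metadata:
      labels:
        app: myboot
    spec:
      containers:
      - name: myboot
        image: 9stepsawesome/myboot:v1
        ports:
        - containerPort: 8080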
And then see what happened
$ kubectl get all
NAME                          READY   STATUS    RESTARTS   AGE
pod/myboot-79bfc988c8-ttz5r   1/1     Running   0          25s

NAME                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/myboot   1         1         1            1           25s

NAME                                DESIRED   CURRENT   READY   AGE
replicaset.apps/myboot-79bfc988c8   1         1         1       25s
Create a Service
$ kubectl create -f kubefiles/myboot-service.yml
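That file is presumably a NodePort Service selecting the myboot Pods, roughly (again a sketch; the repo file is authoritative):

apiVersion: v1
kind: Service
metadata:
  name: myboot
spec:
  selector:
    app: myboot
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
  type: NodePort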
Get the nodePort
$ kubectl get service/myboot -o jsonpath="{.spec.ports[*].nodePort}"
Curl that url + nodePort
$ curl $(minikube -p 9steps ip):$(kubectl get service/myboot -o jsonpath="{.spec.ports[*].nodePort}")
Perhaps build a little loop to curl that endpoint
while true
do
curl $(minikube -p 9steps ip):$(kubectl get service/myboot -o jsonpath="{.spec.ports[*].nodePort}")
sleep .5;
done
Let’s scale the application up to 3 replicas. There are several possible ways to achieve this result.
You can edit the myboot-deployment.yml, updating replicas to 3
$ kubectl replace -f kubefiles/myboot-deployment.yml
Or use the kubectl scale command
$ kubectl scale --replicas=3 deployment/myboot
Or you might patch it
$ kubectl patch deployment/myboot -p '{"spec":{"replicas":3}}'
Or use the kubectl edit command which essentially gives you "vi" for editing the deployment yaml
$ kubectl edit deployment/myboot
# in vi: type /replicas to find the field, change the value, then :wq to save and quit
Note: You can use VS Code as your editor with export KUBE_EDITOR="code -w"
You can then use "kubectl get pods" to see the pods that have been created
$ kubectl get pods
Note: 3 pods might push you out of your memory limits for your VM. Check your memory usage with:
$ minishift -p 9steps ssh
# or
$ minikube ssh
$ free -m
$ top -o %MEM
Update MyRESTController.java
greeting = environment.getProperty("GREETING","Bonjour");
Compile & Build the fat jar
mvn clean package
You can test with "java -jar target/boot-demo-0.0.1.jar" and "curl localhost:8080". Ideally, you would have unit tests executed with "mvn test" as well.
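Since the controller reads GREETING from the environment with "Bonjour" as the fallback, you can also sanity-check that behavior locally; "Namaste" here is just a hypothetical value, and the exact response text depends on the controller:

$ GREETING="Namaste" java -jar target/boot-demo-0.0.1.jar &
$ curl localhost:8080   # should answer with the "Namaste ..." greeting
$ kill %1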
Build the new docker image with a v2 tag
$ docker build -t 9stepsawesome/myboot:v2 .
$ docker images | grep myboot
Rollout the update
$ kubectl set image deployment/myboot myboot=9stepsawesome/myboot:v2
# OR
$ kubectl set image deployment/myboot myboot=quay.io/burrsutter/myboot:v2
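You can watch the rollout progress with:

$ kubectl rollout status deployment/myboot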
And if you were running your polling curl command, you might see
Aloha from Spring Boot! 346 on myboot-79bfc988c8-ttz5r
Aloha from Spring Boot! 270 on myboot-79bfc988c8-ntb8d
Aloha from Spring Boot! 348 on myboot-79bfc988c8-ttz5r
curl: (7) Failed to connect to 192.168.99.105 port 31218: Connection refused
curl: (7) Failed to connect to 192.168.99.105 port 31218: Connection refused
curl: (7) Failed to connect to 192.168.99.105 port 31218: Connection refused
curl: (7) Failed to connect to 192.168.99.105 port 31218: Connection refused
curl: (7) Failed to connect to 192.168.99.105 port 31218: Connection refused
curl: (7) Failed to connect to 192.168.99.105 port 31218: Connection refused
curl: (7) Failed to connect to 192.168.99.105 port 31218: Connection refused
Bonjour from Spring Boot! 0 on myboot-757658fc4c-qnvqx
Bonjour from Spring Boot! 1 on myboot-5955897c9b-zthj9
We will be working on those errors when we address Liveness and Readiness probes in Step 7
For now, undo the update, going back to v1
$ kubectl rollout undo deployment/myboot
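If you are curious which revisions undo can roll back to:

$ kubectl rollout history deployment/myboot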
Scale to 1
$ kubectl scale --replicas=1 deployment/myboot
Hit the sysresources endpoint
$ curl $(minikube -p 9steps ip):$(kubectl get service/myboot -o jsonpath="{.spec.ports[*].nodePort}")/sysresources

Memory: 1324  Cores: 2
Note: you can use echo to see what this URL really looks like on your machine
$ echo $(minikube -p 9steps ip):$(kubectl get service/myboot -o jsonpath="{.spec.ports[*].nodePort}")/sysresources
192.168.99.105:30479/sysresources
The "sysresources" call should report about 1/4 of the memory and all of the cores that were configured for the VM. You can double-check this with the following command:
$ minikube config view
- cpus: 2
- memory: 6144
- vm-driver: virtualbox
Now, let’s apply resource constraints via Kubernetes and the deployment YAML. Look at myboot-deployment-resources.yml
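The interesting part of that file is the resources stanza on the container. The values below are illustrative, not necessarily the repo's exact numbers:

resources:
  requests:
    memory: "300Mi"
    cpu: "250m"
  limits:
    memory: "400Mi"
    cpu: "1000m"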
$ kubectl replace -f kubefiles/myboot-deployment-resources.yml
Curl sysresources again
$ curl $(minikube -p 9steps ip):$(kubectl get service/myboot -o jsonpath="{.spec.ports[*].nodePort}")/sysresources

Memory: 1324  Cores: 2
In another terminal window, watch the pods
$ kubectl get pods -w
and now curl the consume endpoint
$ curl $(minikube -p 9steps ip):$(kubectl get service/myboot -o jsonpath="{.spec.ports[*].nodePort}")/consume
and you should see an OOMKilled status
$ kubectl get pods -w
NAME                      READY   STATUS      RESTARTS   AGE
myboot-68d666dd8d-m9m5r   1/1     Running     0          1m
myboot-68d666dd8d-m9m5r   0/1     OOMKilled   0          2m
myboot-68d666dd8d-m9m5r   1/1     Running     1          2m
And you can also see the OOMKilled with the kubectl describe command
$ kubectl describe pod -l app=myboot
...
    Last State:     Terminated
      Reason:       OOMKilled
      Exit Code:    137
      Started:      Tue, 31 Jul 2018 13:42:18 -0400
      Finished:     Tue, 31 Jul 2018 13:44:24 -0400
    Ready:          True
    Restart Count:  1
...
Mystery to be solved in future steps!