Merge pull request #41 from 0kashi/master

Updates to this lab, spelling/grammar corrections.
sabbour committed Oct 18, 2019
2 parents 403621f + ec75901 commit 40fadfc57054a5f73a8459d05d408d98797228c2
@@ -16,7 +16,7 @@ If not logged in via the CLI, click on the dropdown arrow next to your name in t
Then go to your terminal and paste that command and press enter. You will see a similar confirmation message if you successfully logged in.

```sh
-[okashi@ok-vm ostoy]# oc login https://openshift.abcd1234.eastus.azmosa.io --token=hUXXXXXX
+$ oc login https://openshift.abcd1234.eastus.azmosa.io --token=hUXXXXXX
Logged into "https://openshift.abcd1234.eastus.azmosa.io:443" as "okashi" using the token provided.
You have access to the following projects and can switch between them with 'oc project <projectname>':
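If you want to double-check the session before moving on, the standard `oc` identity commands work here; a minimal sketch (nothing below is specific to this lab's output):

```sh
# Confirm which user is logged in and which API server the CLI is pointed at
oc whoami
oc whoami --show-server

# List the projects this user can access
oc projects
```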
@@ -41,7 +41,7 @@ Use the following command
You should receive the following response

```sh
-[okashi@ok-vm ostoy]# oc new-project ostoy
+$ oc new-project ostoy
Now using project "ostoy" on server "https://openshift.abcd1234.eastus.azmosa.io:443".
You can add applications to this project with the 'new-app' command. For example, try:
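As a quick sanity check, you can confirm that the CLI context has switched to the new project; a minimal sketch using standard `oc` commands:

```sh
# Show the project the CLI is currently using; it should report "ostoy"
oc project

# Summarize what is in the (still empty) project
oc status
```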
@@ -83,7 +83,7 @@ In your command line deploy the microservice using the following command:

You should see the following response:
```
-[okashi@ok-vm ostoy]# oc apply -f ostoy-microservice-deployment.yaml
+$ oc apply -f ostoy-microservice-deployment.yaml
deployment.apps/ostoy-microservice created
service/ostoy-microservice-svc created
```
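To confirm both objects really exist, you can query them by the names shown in the output above; a minimal sketch:

```sh
# The microservice deployment should eventually show its pod as available
oc get deployment ostoy-microservice

# The internal service that fronts it
oc get service ostoy-microservice-svc
```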
@@ -112,7 +112,7 @@ In your command line deploy the frontend along with creating all objects mention
You should see all objects created successfully

```sh
-[okashi@ok-vm ostoy]# oc apply -f ostoy-fe-deployment.yaml
+$ oc apply -f ostoy-fe-deployment.yaml
persistentvolumeclaim/ostoy-pvc created
deployment.apps/ostoy-frontend created
service/ostoy-frontend-svc created
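Before moving on, you can wait for the frontend rollout to complete and confirm the other objects from the output above; a minimal sketch:

```sh
# Block until the frontend deployment finishes rolling out
oc rollout status deployment/ostoy-frontend

# Confirm the persistent volume claim and the frontend service were created
oc get pvc ostoy-pvc
oc get service ostoy-frontend-svc
```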
@@ -20,15 +20,15 @@ Click in the message box for "Log Message (stderr)" and write any message you wa
Go to the CLI and enter the following command to retrieve the name of your frontend pod which we will use to view the pod logs:

```sh
-[okashi@ok-vm ~]# oc get pods -o name
+$ oc get pods -o name
pod/ostoy-frontend-679cb85695-5cn7x
pod/ostoy-microservice-86b4c6f559-p594d
```

So the pod name in this case is **ostoy-frontend-679cb85695-5cn7x**. Then run `oc logs ostoy-frontend-679cb85695-5cn7x` and you should see your messages:

```sh
-[okashi@ok-vm ostoy]# oc logs ostoy-frontend-679cb85695-5cn7x
+$ oc logs ostoy-frontend-679cb85695-5cn7x
[...]
ostoy-frontend-679cb85695-5cn7x: server starting on port 8080
Redirecting to /home
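If you prefer not to copy the pod name by hand, you can capture it in a shell variable and stream the log so new messages appear as you send them; a minimal sketch (the `FRONTEND_POD` variable name is just for illustration):

```sh
# Grab the frontend pod reference, e.g. pod/ostoy-frontend-679cb85695-5cn7x
FRONTEND_POD=$(oc get pods -o name | grep ostoy-frontend)

# Follow the log; each "Log Message (stderr)" you submit should show up here
oc logs -f "$FRONTEND_POD"
```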
@@ -5,17 +5,17 @@ title: Exploring Health Checks
parent-id: lab-clusterapp
---

-In this section we will intentionally crash our pods as well as make a pod non-responsive to the liveliness probes from Kubernetes and see how Kubernetes behaves. We will first intentionally crash our pod and see that Kubernetes will self-heal and immediately spin it back up. Then we will trigger the health check by stopping the response on the `/health` endpoint in our app. After three consecutive failures Kubernetes should kill the pod and then recreate it.
+In this section we will intentionally crash our pods as well as make a pod non-responsive to the liveness probes and see how Kubernetes behaves. We will first intentionally crash our pod and see that Kubernetes will self-heal by immediately spinning it back up. Then we will trigger the health check by stopping the response on the `/health` endpoint in our app. After three consecutive failures, Kubernetes should kill the pod and then recreate it.
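If you want to see the probe this behavior depends on, it is defined in the frontend deployment's pod template; a hedged sketch, assuming the probe is declared under the standard `livenessProbe` key and points at the `/health` endpoint mentioned above:

```sh
# Print the liveness probe section of the frontend deployment's pod template
oc get deployment ostoy-frontend -o yaml | grep -A 8 livenessProbe
```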

{% collapsible %}

It would be best to prepare by splitting your screen between the OpenShift Web UI and the OSToy application so that you can see the results of our actions immediately.

![Splitscreen](/media/managedlab/23-ostoy-splitscreen.png)

-But if your screen is too small or that just won't work, then open the OSToy application in another tab so you can quickly switch to OpenShift Web Console once you click the button. To get to this deployment in the OpenShift Web Console go to:
+But if your screen is too small or that just won't work, then open the OSToy application in another tab so you can quickly switch to the OpenShift Web Console once you click the button. To get to this deployment in the OpenShift Web Console go to:

-Applications > Deployments > click the number in the "Last Version" column for the "ostoy-frontend" row
+*Applications > Deployments >* click the number in the "Last Version" column for the "ostoy-frontend" row

![Deploy Num](/media/managedlab/11-ostoy-deploynum.png)

@@ -31,11 +31,11 @@ You can also check in the pod events and further verify that the container has c

![Pod Events](/media/managedlab/14-ostoy-podevents.png)

-Keep the page from the pod events still open from step 4. Then in the OSToy app click on the "Toggle Health" button, in the "Toggle Health Status" tile. You will see the "Current Health" switch to "I'm not feeling all that well".
+Keep the pod events page from the previous step open. Then in the OSToy app, click the "Toggle Health" button in the "Toggle Health Status" tile. You will see the "Current Health" switch to "I'm not feeling all that well".

![Pod Events](/media/managedlab/15-ostoy-togglehealth.png)

-This will cause the app to stop responding with a "200 HTTP code". After 3 such consecutive failures ("A"), Kubernetes will kill the pod ("B") and restart it ("C"). Quickly switch back to the pod events tab and you will see that the liveliness probe failed and the pod as being restarted.
+This will cause the app to stop responding with a "200 HTTP code". After 3 such consecutive failures ("A"), Kubernetes will kill the pod ("B") and restart it ("C"). Quickly switch back to the pod events tab and you will see that the liveness probe failed and the pod is being restarted.

![Pod Events2](/media/managedlab/16-ostoy-podevents2.png)
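You can watch the same self-healing from the CLI as well; a minimal sketch (press Ctrl+C to stop the watch):

```sh
# Watch the pod list; the RESTARTS counter for the frontend pod should increase
oc get pods -w

# Recent events also record the failed liveness probe and the container restart
oc get events --sort-by=.lastTimestamp
```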

@@ -50,12 +50,12 @@ if you enter `ls` you can see all the files you created. Next, let's open the f
You should see the text you entered in the UI.

```
-[okashi@ok-vm ostoy]# oc get pods
+$ oc get pods
NAME                                  READY   STATUS    RESTARTS   AGE
ostoy-frontend-5fc8d486dc-wsw24       1/1     Running   0          18m
ostoy-microservice-6cf764974f-hx4qm   1/1     Running   0          18m
-[okashi@ok-vm ostoy]# oc rsh ostoy-frontend-5fc8d486dc-wsw24
+$ oc rsh ostoy-frontend-5fc8d486dc-wsw24
/ $ cd /var/demo_files/
/var/demo_files $ ls
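If you only want to check the directory rather than open an interactive shell, `oc exec` can run a single command inside the pod; a minimal sketch using the pod name from the output above:

```sh
# List the files under the mounted volume without starting a remote shell
oc exec ostoy-frontend-5fc8d486dc-wsw24 -- ls /var/demo_files
```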
@@ -11,7 +11,7 @@ Let's review how this application is set up...

![OSToy Diagram](/media/managedlab/4-ostoy-arch.png)

-As can be seen in the image above we see we have defined at least 2 separate pods, each with its own service. One is the frontend web application (with a service and a publicly accessible route) and the other is the backend microservice with a service object created so that the frontend pod can communicate with the microservice (accross the pods if more than one). Therefore this microservice is not accessible from outside this cluster, nor from other namespaces/projects (due to ARO's network policy, **ovs-networkpolicy**). The sole purpose of this microservice is to serve internal web requests and return a JSON object containing the current hostname and a randomly generated color string. This color string is used to display a box with that color displayed in the tile (titled "Intra-cluster Communication").
+As can be seen in the image above, we have defined at least 2 separate pods, each with its own service. One is the frontend web application (with a service and a publicly accessible route) and the other is the backend microservice with a service object created so that the frontend pod can communicate with the microservice (across the pods if more than one). Therefore this microservice is not accessible from outside this cluster, nor from other namespaces/projects (due to ARO's network policy, **ovs-networkpolicy**). The sole purpose of this microservice is to serve internal web requests and return a JSON object containing the current hostname and a randomly generated color string. This color string is used to display a box of that color in the tile titled "Intra-cluster Communication".
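You can see this split from the CLI as well: both pods have internal services, but only the frontend is exposed by a route; a minimal sketch:

```sh
# Both ostoy-frontend-svc and ostoy-microservice-svc should be listed
oc get services

# Only the frontend should appear here; the microservice has no route
oc get routes
```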

### Networking

@@ -48,7 +48,7 @@ We will see an IP address returned. In our example it is ```172.30.165.246```.

### Scaling

-OpenShift allows one to scale up/down the number of pods for each part of an application as needed. This can be accomplished via changing our *replicaset/deployment* definition (declarative), by the command line (imperative), or via the web UI (imperative). In our deployment definition (part of our `ostoy-fe-deployment.yaml`) we stated that we only want one pod for our microservice to start with. This means that the Kubernetes Replication Controler will always strive to keep one pod alive. (We can also define [autoscalling](https://docs.openshift.com/container-platform/3.11/dev_guide/pod_autoscaling.html) based on load to expand past what we defined if needed)
+OpenShift allows one to scale up/down the number of pods for each part of an application as needed. This can be accomplished via changing our *replicaset/deployment* definition (declarative), by the command line (imperative), or via the web UI (imperative). In our deployment definition (part of our `ostoy-fe-deployment.yaml`) we stated that we only want one pod for our microservice to start with. This means that the Kubernetes Replication Controller will always strive to keep one pod alive.
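For reference, the imperative path is a single command; a minimal sketch that scales the microservice and checks the result (the lab's own scaling steps follow below):

```sh
# Imperatively request 2 replicas of the microservice
oc scale deployment ostoy-microservice --replicas=2

# A second microservice pod should be spun up to match the desired count
oc get pods
```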

{% collapsible %}

@@ -101,6 +101,6 @@ Lastly let's use the web UI to scale back down to one pod. In the project you c

![UI Scale](/media/managedlab/21-ostoy-uiscale.png)

-See this visually by visiting the OSToy app and seeing how many boxes you now see. It should be one.
+See this visually by visiting the OSToy app and checking how many boxes you now see. It should be one. You can also confirm this via the CLI or the web UI.
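A minimal sketch of that CLI confirmation, assuming it was the microservice that was scaled, as in the earlier steps:

```sh
# Only one microservice pod should remain after scaling back down
oc get pods

# The deployment should report 1/1 replicas ready
oc get deployment ostoy-microservice
```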

{% endcollapsible %}
