
Merge pull request #54 from 0kashi/master

Added steps for Azure Monitor and HPA
sabbour committed Mar 18, 2020
2 parents d1dc69a + bf55168 commit 2cd6dc0c0ddc001f6f4993a4b88961d4794c87bb
@@ -1,13 +1,11 @@
---
sectionid: lab2-logging
sectionclass: h2
-title: Logging
+title: Logging and Metrics
parent-id: lab-clusterapp
---


{% collapsible %}
Assuming you can access the application via the Route provided and are still logged into the CLI (please go back to part 2 if you need to do any of those) we'll start to use this application. As stated earlier, this application will allow you to "push the buttons" of OpenShift and see how it works. We will do this to test the logs.

Click on the *Home* menu item and then click in the message box for "Log Message (stdout)" and write any message you want to output to the *stdout* stream. You can try "**All is well!**". Then click "Send Message".

@@ -17,6 +15,10 @@

Click in the message box for "Log Message (stderr)" and write any message you want to output to the *stderr* stream. You can try "**Oh no! Error!**". Then click "Send Message".

![Logging stderr](/media/managedlab/9-ostoy-stderr.png)

### View logs directly from the pod

{% collapsible %}

Go to the CLI and enter the following command to retrieve the name of your frontend pod, which we will use to view the pod logs:

```sh
...
stderr: Oh no! Error!
```
You should see both the *stdout* and *stderr* messages.

{% endcollapsible %}

### View logs using Azure Monitor Integration

{% collapsible %}

You can use the native Azure Monitor service to view and retain application logs along with metrics. This lab assumes the cluster was already configured to use Azure Monitor for application logs at cluster creation. If you want more information on how to connect this for a new or existing cluster, see the [documentation](https://docs.microsoft.com/en-us/azure/azure-monitor/insights/container-insights-azure-redhat-setup).


Access the Azure portal at [https://portal.azure.com/](https://portal.azure.com/).

Click on "Monitor".

![Monitor](/media/managedlab/24-ostoy-azuremonitor.png)

Click Logs in the left menu.

> **Note:** If you are asked to select a scope, select the Log Analytics scope for your cluster.

![container logs](/media/managedlab/29-ostoy-logs.png)

Expand "ContainerInsights".

Double click "ContainerLog".

Then click the "Run" button at the top.

![container logs](/media/managedlab/30-ostoy-logs.png)

In the bottom pane you will see the results of the application logs returned. You may need to sort the results, but you should see the two lines we output to *stdout* and *stderr*.

![container logs](/media/managedlab/31-ostoy-logout.png)
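If you would rather not build the query by clicking through the table list, you can type one directly into the query pane. A sketch in Kusto, assuming the default Container Insights schema (a `ContainerLog` table with `TimeGenerated` and `LogEntry` columns); the message text matches what we sent from OSToy:

```
ContainerLog
| where TimeGenerated > ago(1h)
| where LogEntry has "All is well" or LogEntry has "Error"
| project TimeGenerated, LogEntry
| order by TimeGenerated desc
```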

{% endcollapsible %}


### View Metrics using Azure Monitor Integration

{% collapsible %}

Click on "Containers" in the left menu under Insights.

![Containers](/media/managedlab/25-ostoy-monitorcontainers.png)

Click on your cluster that is integrated with Azure Monitor.

![Cluster](/media/managedlab/26-ostoy-monitorcluster.png)

You will see metrics for your cluster such as resource consumption over time and pod counts. Feel free to explore the metrics here.

![Metrics](/media/managedlab/27-ostoy-metrics.png)

For example, if you want to see how many resources our OSToy pods are using, click on the "Containers" tab.

Enter "ostoy" into the search box near the top left.

You will see the two pods we have, one for the front-end and one for the microservice, along with the relevant metric. Feel free to select other options to see the min, max, or other percentile usage of the pods. You can also switch the metric to memory consumption.

![container metrics](/media/managedlab/28-ostoy-metrics.png)

{% endcollapsible %}
@@ -0,0 +1,60 @@
---
sectionid: lab2-HPA
sectionclass: h2
title: Autoscaling
parent-id: lab-clusterapp
---

### Autoscaling

In this section we will explore how the [Horizontal Pod Autoscaler](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/) (HPA) can be used and works within Kubernetes/OpenShift.

As defined in the Kubernetes documentation:
> Horizontal Pod Autoscaler automatically scales the number of pods in a replication controller, deployment, replica set or stateful set based on observed CPU utilization.

We will create an HPA and then use OSToy to generate CPU-intensive workloads. We will then observe how the HPA scales up the number of pods in order to handle the increased load.

{% collapsible %}

#### 1. Create the Horizontal Pod Autoscaler

Run the following command to create the autoscaler. This will create an HPA that maintains between 1 and 10 replicas of the pods controlled by the *ostoy-microservice* deployment we created. Roughly speaking, the HPA will increase and decrease the number of replicas (via the deployment) to maintain an average CPU utilization across all pods of 80%. (Since each pod requests 50 millicores, this means an average CPU usage of 40 millicores.)

`oc autoscale deployment/ostoy-microservice --cpu-percent=80 --min=1 --max=10`
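The same autoscaler can also be expressed declaratively. A sketch of an equivalent manifest, assuming the `autoscaling/v1` API, which you could save as e.g. `hpa.yaml` and create with `oc apply -f hpa.yaml`:

```yaml
# Equivalent to the `oc autoscale` command above (sketch)
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: ostoy-microservice
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ostoy-microservice
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80
```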

#### 2. View the current number of pods

In the OSToy app in the left menu click on "Autoscaling" to access this portion of the workshop.

![HPA Menu](/media/managedlab/32-hpa-menu.png)

As in the networking section, you will see the total number of pods available for the microservice by counting the number of colored boxes. In this case we have only one. This can be verified through the web UI or from the CLI.

You can use the following command to see the running microservice pods only:
`oc get pods --field-selector=status.phase=Running | grep microservice`
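As a quick illustration of what that filter does, here is the same pipeline run against hypothetical `oc get pods` output (the pod names and ages are made up):

```sh
# Hypothetical `oc get pods` output; the greps keep only microservice
# pods and count how many are in the Running phase
printf '%s\n' \
  'ostoy-frontend-7d7b4d9f6c-abcde       1/1   Running   0   10m' \
  'ostoy-microservice-5c96f4b7d4-fghij   1/1   Running   0   10m' \
  | grep microservice | grep -c Running
# prints 1
```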

![HPA Main](/media/managedlab/33-hpa-mainpage.png)

#### 3. Increase the load

Now that we know we only have one pod, let's increase the workload that the pod needs to perform. Click the link in the center of the card that says "increase the load". **Please click only *ONCE*!**

This will generate some CPU intensive calculations. (If you are curious about what it is doing you can click [here](https://github.com/openshift-cs/ostoy/blob/master/microservice/app.js#L32)).

> **Note:** The page may become slightly unresponsive. This is normal; be patient while the new pods spin up.

#### 4. See the pods scale up

After about a minute the new pods will show up on the page (represented by the colored rectangles). Confirm that the pods did indeed scale up through the OpenShift Web Console or the CLI (you can use the command above).
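Under the hood, the HPA chooses its target using the formula from the Kubernetes docs: desiredReplicas = ceil(currentReplicas × currentMetricValue / desiredMetricValue). A sketch with assumed numbers (one pod spiking to 240% of its requested CPU against the 80% target we set):

```sh
# HPA scaling decision: desired = ceil(current_replicas * current_util / target_util)
# Assumed load: 1 replica at 240% utilization, target 80%
current_replicas=1; current_util=240; target_util=80
desired=$(( (current_replicas * current_util + target_util - 1) / target_util ))
echo "$desired"   # prints 3
```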

> **Note:** The page may still lag a bit, which is normal.

#### 5. Review resources in Azure Monitor

After confirming that the autoscaler did spin up new pods, revisit Azure Monitor as we did in the logging section. By clicking on the "Containers" tab we can see the resource consumption of the pods and see that three pods were created to handle the load.

![HPA Metrics](/media/managedlab/34-ostoy-hpametrics.png)



{% endcollapsible %}
@@ -29,7 +29,7 @@ spec:
     spec:
       containers:
       - name: ostoy-frontend
-        image: quay.io/aroworkshop/ostoy-frontend:1.2.2
+        image: quay.io/ostoylab/ostoy-frontend:1.3.0
         imagePullPolicy: IfNotPresent
         ports:
         - name: ostoy-port
@@ -16,7 +16,7 @@ spec:
     spec:
       containers:
       - name: ostoy-microservice
-        image: quay.io/aroworkshop/ostoy-microservice:1.2.2
+        image: quay.io/ostoylab/ostoy-microservice:1.3.0
         imagePullPolicy: IfNotPresent
         ports:
         - containerPort: 8080
