
Commit

edits
mburke5678 committed Apr 11, 2019
1 parent 42b847e commit 0de7325
Showing 6 changed files with 40 additions and 64 deletions.
25 changes: 20 additions & 5 deletions modules/nodes-cluster-overcommit-configure-nodes.adoc
@@ -12,17 +12,32 @@ When the node starts, it ensures that the kernel tunable flags for memory
management are set properly. The kernel should never fail memory allocations
unless it runs out of physical memory.

-To ensure this behavior, the node instructs the kernel to always overcommit
-memory:
+To ensure this behavior, {product-title} configures the kernel to always overcommit
+memory by setting the `vm.overcommit_memory` parameter to `1`, overriding the
+default operating system setting.

+{product-title} also configures the kernel not to panic when it runs out of memory
+by setting the `vm.panic_on_oom` parameter to `0`. A setting of 0 instructs the
+kernel to call oom_killer in an Out of Memory (OOM) condition, which kills
+processes based on priority.

+You can view the current settings by running the following commands on your node:

----
-$ sysctl -w vm.overcommit_memory=1
+$ sysctl -a |grep commit
+vm.overcommit_memory = 0
----

-The node also instructs the kernel not to panic when it runs out of memory.
-Instead, the kernel OOM killer should kill processes based on priority:
----
+$ sysctl -a |grep panic
+vm.panic_on_oom = 0
----

+You can change these settings using:

+----
+$ sysctl -w vm.overcommit_memory=1
+$ sysctl -w vm.panic_on_oom=0
+----
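The `sysctl -a` output shown above is plain `key = value` text, so it is easy to check programmatically. The following is an illustrative sketch (not part of this commit) that parses such output into a dictionary; the sample values are assumptions for the demo:

```python
def parse_sysctl(output):
    """Parse `sysctl -a`-style lines ("key = value") into a dict of strings."""
    settings = {}
    for line in output.splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            settings[key.strip()] = value.strip()
    return settings


# Sample output resembling a node configured as the text describes:
# overcommit always on (1), panic-on-OOM off (0).
sample = """\
vm.overcommit_memory = 1
vm.panic_on_oom = 0
"""
parsed = parse_sysctl(sample)
assert parsed["vm.overcommit_memory"] == "1"
assert parsed["vm.panic_on_oom"] == "0"
```

Such a check could be run against real `sysctl -a` output on a node to confirm the kernel flags were applied.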
22 changes: 0 additions & 22 deletions modules/nodes-cluster-overcommit-master-disabling-swap.adoc

This file was deleted.

@@ -36,7 +36,7 @@ apiVersion: v1
kind: Pod
metadata:
  name: dapi-env-test-pod
-spec:bash
+spec:
  containers:
  - name: env-test-container
    image: gcr.io/google_containers/busybox
@@ -47,7 +47,7 @@ spec:bash
          configMapKeyRef:
            name: myconfigmap
            key: mykey
-  restartPolicy: Never
+  restartPolicy: Always
----

. Create the pod from the `*_pod.yaml_*` file:
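For readers who build such Pod manifests in code rather than YAML, here is a minimal sketch of the same manifest as a Python dict, checking the two fields this commit corrects (`spec`, not `spec:bash`, and `restartPolicy: Always`). The environment variable name `MY_ENV_VAR` is hypothetical; the diff only shows the `configMapKeyRef` fragment:

```python
# Hypothetical reconstruction of the example Pod manifest; only the keys
# visible in the diff are taken from the document, the rest is illustrative.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "dapi-env-test-pod"},
    "spec": {
        "containers": [
            {
                "name": "env-test-container",
                "image": "gcr.io/google_containers/busybox",
                "env": [
                    {
                        "name": "MY_ENV_VAR",  # assumed variable name
                        "valueFrom": {
                            "configMapKeyRef": {
                                "name": "myconfigmap",
                                "key": "mykey",
                            }
                        },
                    }
                ],
            }
        ],
        # The commit changes this from "Never" to "Always".
        "restartPolicy": "Always",
    },
}

assert "spec" in pod and "spec:bash" not in pod  # the typo the commit fixes
assert pod["spec"]["restartPolicy"] == "Always"
```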
25 changes: 2 additions & 23 deletions modules/nodes-nodes-problem-detector-installing.adoc
@@ -15,41 +15,20 @@ You can use the {product-title} console to install the Node Problem Detector Ope
$ oc adm new-project openshift-node-problem-detector --node-selector ""
----

-. Create an Operator Group:
-
-.. Add the following code to a YAML file:
-+
-----
-apiVersion: operators.coreos.com/v1alpha2
-kind: OperatorGroup
-metadata:
-  name: npd-operators
-  namespace: openshift-node-problem-detector
-spec:
-  targetNamespaces:
-  - openshift-node-problem-detector
-----
-
-.. Create the Operator Group:
-+
-----
-$ oc create -f <file-name>.yaml
-----
-
+.Procedure

+The process to install the Node Problem Detector involves installing the Node Problem Detector Operator and creating a Node Problem Detector instance.

. In the {product-title} console, click *Catalog* -> *OperatorHub*.

. Choose *Node Problem Detector* from the list of available Operators, and click *Install*.

. On the *Create Operator Subscription* page:

.. Select the `openshift-node-problem-detector` project from the *A specific namespace on the cluster* drop-down list.

.. Click *Subscribe*.
-
-.. Click *Subscribe*.

. On the *Catalog* -> *Installed Operators* page, verify that the NodeProblemDetector (CSV) eventually shows up and its *Status* ultimately resolves to *InstallSucceeded*.
+
If it does not, switch to the *Catalog* -> *Operator Management* page and inspect the *Operator Subscriptions* and *Install Plans* tabs for any failure or errors under *Status*. Then, check the logs in any Pods in the `openshift-operators` project (on the *Workloads* -> *Pods* page) that are reporting issues to troubleshoot further.
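The verification step above is a poll-until-ready loop: keep checking the operator's status until it reaches the desired state or a timeout expires. A hedged sketch of that logic (the phase strings are illustrative, and `get_phase` stands in for whatever actually queries the CSV, for example an `oc get csv` call):

```python
import time


def wait_for_csv_phase(get_phase, want="Succeeded", timeout=300, interval=5,
                       sleep=time.sleep):
    """Poll get_phase() until it returns `want` or `timeout` seconds elapse.

    Returns True on success, False if the timeout is reached first.
    `sleep` is injectable so the loop can be tested without real delays.
    """
    waited = 0
    while waited <= timeout:
        if get_phase() == want:
            return True
        sleep(interval)
        waited += interval
    return False


# Simulated phases an install might move through (assumed names).
phases = iter(["Pending", "Installing", "Succeeded"])
assert wait_for_csv_phase(lambda: next(phases), sleep=lambda _: None)
```

If the loop returns False, that corresponds to the troubleshooting branch in the text: inspect the Subscription, Install Plan, and pod logs.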
17 changes: 10 additions & 7 deletions modules/nodes-nodes-viewing-memory.adoc
@@ -7,7 +7,7 @@

You can display usage statistics about nodes, which provide the runtime
environments for containers. These usage statistics include CPU, memory, and
storage consumption.

.Prerequisites

@@ -21,12 +21,15 @@ storage consumption.
+
----
$ oc adm top nodes
-NAME        CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
-node-1      297m         29%    4263Mi          55%
-node-0      55m          5%     1201Mi          15%
-infra-1     85m          8%     1319Mi          17%
-infra-0     182m         18%    2524Mi          32%
-master-0    178m         8%     2584Mi          16%
+NAME                                   CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
+ip-10-0-12-143.ec2.compute.internal    1503m        100%   4533Mi          61%
+ip-10-0-132-16.ec2.compute.internal    76m          5%     1391Mi          18%
+ip-10-0-140-137.ec2.compute.internal   398m         26%    2473Mi          33%
+ip-10-0-142-44.ec2.compute.internal    656m         43%    6119Mi          82%
+ip-10-0-146-165.ec2.compute.internal   188m         12%    3367Mi          45%
+ip-10-0-19-62.ec2.compute.internal     896m         59%    5754Mi          77%
+ip-10-0-44-193.ec2.compute.internal    632m         42%    5349Mi          72%
----

* To view the usage statistics for nodes with labels:
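The `oc adm top nodes` output above uses millicore (`m`) and mebibyte (`Mi`) suffixes. As an illustrative sketch (not part of the commit), a few lines of Python can turn that table into structured data, converting `1503m` to 1.503 cores:

```python
def parse_top_nodes(output):
    """Parse `oc adm top nodes` output into per-node dicts,
    converting CPU millicores to cores and memory Mi to an int."""
    rows = []
    for line in output.strip().splitlines()[1:]:  # skip the NAME... header
        name, cpu, cpu_pct, mem, mem_pct = line.split()
        rows.append({
            "name": name,
            "cpu_cores": int(cpu.rstrip("m")) / 1000,
            "cpu_pct": int(cpu_pct.rstrip("%")),
            "memory_mib": int(mem.rstrip("Mi")),
            "memory_pct": int(mem_pct.rstrip("%")),
        })
    return rows


# First data row from the example output above.
sample = """NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
ip-10-0-12-143.ec2.compute.internal 1503m 100% 4533Mi 61%"""
nodes = parse_top_nodes(sample)
assert nodes[0]["cpu_cores"] == 1.503
assert nodes[0]["memory_mib"] == 4533
```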
11 changes: 6 additions & 5 deletions modules/nodes-pods-viewing-usage.adoc
@@ -28,11 +28,12 @@ $ oc adm top pods
For example:
+
----
-$ oc adm top pods
-NAME                         CPU(cores)   MEMORY(bytes)
-hawkular-cassandra-1-pqx6l   219m         1240Mi
-hawkular-metrics-rddnv       20m          1765Mi
-heapster-n94r4               3m           37Mi
+$ oc adm top pods -n openshift-console
+NAME                         CPU(cores)   MEMORY(bytes)
+console-7f58c69899-q8c8k     0m           22Mi
+console-7f58c69899-xhbgg     0m           25Mi
+downloads-594fcccf94-bcxk8   3m           18Mi
+downloads-594fcccf94-kv4p6   2m           15Mi
----

. Run the following command to view the usage statistics for pods with labels:
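A common follow-up to `oc adm top pods` is aggregating a column across pods. This sketch (illustrative, not part of the commit) sums the `MEMORY(bytes)` column of the example output, which is reported in Mi:

```python
def total_pod_memory_mib(output):
    """Sum the MEMORY(bytes) column of `oc adm top pods` output (values in Mi)."""
    total = 0
    for line in output.strip().splitlines()[1:]:  # skip the NAME... header
        _name, _cpu, mem = line.split()
        total += int(mem.rstrip("Mi"))
    return total


# Data rows from the openshift-console example above: 22 + 25 + 18 + 15.
sample = """NAME CPU(cores) MEMORY(bytes)
console-7f58c69899-q8c8k 0m 22Mi
console-7f58c69899-xhbgg 0m 25Mi
downloads-594fcccf94-bcxk8 3m 18Mi
downloads-594fcccf94-kv4p6 2m 15Mi"""
assert total_pod_memory_mib(sample) == 80
```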
