// * observability/monitoring/configuring-the-monitoring-stack.adoc

:_mod-docs-content-type: PROCEDURE
[id="configuring-a-persistent-volume-claim_{context}"]
= Configuring a persistent volume claim

To use a persistent volume (PV) for monitoring components, you must configure a persistent volume claim (PVC).

.Prerequisites

metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    <component>: #<1>
      volumeClaimTemplate:
        spec:
          storageClassName: <storage_class> #<2>
          resources:
            requests:
              storage: <amount_of_storage> #<3>
----
<1> Specify the core monitoring component for which you want to configure the PVC.
<2> Specify an existing storage class. If a storage class is not specified, the default storage class is used.
<3> Specify the amount of required storage.
+
See the link:https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims[Kubernetes documentation on PersistentVolumeClaims] for information on how to specify `volumeClaimTemplate`.
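The nesting of `volumeClaimTemplate` under the component key is easy to get wrong when hand-editing YAML. As a minimal illustration, the fragment can be built and checked programmatically with a hypothetical helper (not part of any OpenShift tooling; nothing here is validated against a live cluster):

```python
# Hypothetical helper that builds the config.yaml fragment shown above.
# The component name, storage class, and size are caller-supplied values.

def pvc_fragment(component: str, storage_class: str, amount: str) -> dict:
    """Return the mapping that goes under data -> config.yaml."""
    return {
        component: {
            "volumeClaimTemplate": {
                "spec": {
                    "storageClassName": storage_class,
                    "resources": {"requests": {"storage": amount}},
                }
            }
        }
    }

fragment = pvc_fragment("prometheusK8s", "my-storage-class", "40Gi")
spec = fragment["prometheusK8s"]["volumeClaimTemplate"]["spec"]
print(spec["resources"]["requests"]["storage"])  # 40Gi
```

Serializing this mapping to YAML reproduces the documented structure, with `resources.requests.storage` as a sibling of `storageClassName` under `spec`.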
+
The following example configures a PVC that claims persistent storage for the Prometheus instance that monitors core {product-title} components:
+
[source,yaml,subs=quotes]
----
data:
  config.yaml: |
    *prometheusK8s*:
      volumeClaimTemplate:
        spec:
          storageClassName: my-storage-class
          resources:
            requests:
              storage: *40Gi*
----

** *To configure a PVC for a component that monitors user-defined projects*:
.. Edit the `user-workload-monitoring-config` `ConfigMap` object in the `openshift-user-workload-monitoring` project:
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    <component>: #<1>
      volumeClaimTemplate:
        spec:
          storageClassName: <storage_class> #<2>
          resources:
            requests:
              storage: <amount_of_storage> #<3>
----
<1> Specify the component for user-defined monitoring for which you want to configure the PVC.
<2> Specify an existing storage class. If a storage class is not specified, the default storage class is used.
<3> Specify the amount of required storage.
+
See the link:https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims[Kubernetes documentation on PersistentVolumeClaims] for information on how to specify `volumeClaimTemplate`.
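The `<amount_of_storage>` value is a Kubernetes resource quantity such as `40Gi` or `10Gi`. As a rough sketch of how the binary suffixes scale (binary suffixes only; the real Kubernetes quantity grammar also accepts decimal suffixes such as `G` and `M`, and signed and fractional values):

```python
# Minimal parser for binary (power-of-two) storage quantity suffixes.
# This is an illustration, not the full Kubernetes quantity grammar.
_BINARY = {"Ki": 2**10, "Mi": 2**20, "Gi": 2**30, "Ti": 2**40}

def quantity_to_bytes(quantity: str) -> int:
    for suffix, factor in _BINARY.items():
        if quantity.endswith(suffix):
            return int(quantity[: -len(suffix)]) * factor
    return int(quantity)  # a plain integer means bytes

print(quantity_to_bytes("40Gi"))  # 42949672960
```

So the `40Gi` request in the Prometheus examples asks for 40 × 2^30 bytes, not 40 × 10^9.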
+
The following example configures a PVC that claims persistent storage for Thanos Ruler:
+
[source,yaml,subs=quotes]
----
data:
  config.yaml: |
    *thanosRuler*:
      volumeClaimTemplate:
        spec:
          storageClassName: my-storage-class
          resources:
            requests:
              storage: *10Gi*
----
// modules/monitoring-configuring-persistent-storage.adoc
Run cluster monitoring with persistent storage to gain the following benefits:
* Protect your metrics and alerting data from data loss by storing them in a persistent volume (PV). As a result, they can survive pods being restarted or recreated.
* Avoid getting duplicate notifications and losing silences for alerts when the Alertmanager pods are restarted.

For production environments, it is highly recommended to configure persistent storage.

[id="persistent-storage-prerequisites_{context}"]
== Persistent storage prerequisites

ifdef::openshift-dedicated,openshift-rosa[]
* Use the block type of storage.
endif::openshift-dedicated,openshift-rosa[]

ifndef::openshift-dedicated,openshift-rosa[]
* Dedicate sufficient persistent storage to ensure that the disk does not become full.

* Use `Filesystem` as the storage type value for the `volumeMode` parameter when you configure the persistent volume.

* Do not use a raw block volume, which is described with `volumeMode: Block` in the `PersistentVolume` resource. Prometheus cannot use raw block volumes.

* Prometheus does not support file systems that are not POSIX compliant.
For example, some NFS file system implementations are not POSIX compliant.
If you want to use an NFS file system for storage, verify with the vendor that their NFS implementation is fully POSIX compliant.
endif::openshift-dedicated,openshift-rosa[]
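The `volumeMode` requirement above can be illustrated with a minimal `PersistentVolume` definition. All specific values here (the name, capacity, storage class, and `hostPath` backend) are assumptions for illustration only; use the backend and sizing appropriate for your cluster:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: monitoring-pv-example          # hypothetical name
spec:
  capacity:
    storage: 40Gi                      # assumed size
  volumeMode: Filesystem               # required: Prometheus cannot use raw block volumes
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: my-storage-class   # must match the storageClassName in the PVC
  hostPath:
    path: /var/lib/monitoring-data     # illustrative backend only
```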
// modules/monitoring-resizing-a-persistent-storage-volume.adoc: this file was deleted.