From b3adac8e2d0ec428231200324463b09a5aeb999e Mon Sep 17 00:00:00 2001 From: Eliska Romanova Date: Mon, 21 Oct 2024 17:12:46 +0200 Subject: [PATCH] Separate UWM and core platform monitoring (part 2) --- ...configuring-a-persistent-volume-claim.adoc | 148 ++++++++--------- ...-and-size-for-prometheus-metrics-data.adoc | 137 +++++++--------- ...nitoring-resizing-a-persistent-volume.adoc | 155 +++++++++--------- .../configuring-the-monitoring-stack.adoc | 25 ++- 4 files changed, 219 insertions(+), 246 deletions(-) diff --git a/modules/monitoring-configuring-a-persistent-volume-claim.adoc b/modules/monitoring-configuring-a-persistent-volume-claim.adoc index 68a6a40c3d5c..fd82e2ba8408 100644 --- a/modules/monitoring-configuring-a-persistent-volume-claim.adoc +++ b/modules/monitoring-configuring-a-persistent-volume-claim.adoc @@ -3,143 +3,122 @@ // * observability/monitoring/configuring-the-monitoring-stack.adoc :_mod-docs-content-type: PROCEDURE -[id="configuring-a-persistent-volume-claim_{context}"] -= Configuring a persistent volume claim +// The ultimate solution DOES NOT NEED separate IDs, it is just needed for now so that the tests will not break + +// tag::CPM[] +[id="configuring-a-persistent-volume-claim-cpm_{context}"] += Configuring a persistent volume claim for core platform monitoring +// end::CPM[] + +// tag::UWM[] +[id="configuring-a-persistent-volume-claim-uwm_{context}"] += Configuring a persistent volume claim for monitoring of user-defined projects +// end::UWM[] + +// Set attributes to distinguish between cluster monitoring example (core platform monitoring - CPM) and user workload monitoring (UWM) examples + +// tag::CPM[] +:configmap-name: cluster-monitoring-config +:namespace-name: openshift-monitoring +:component: prometheusK8s +// end::CPM[] +// tag::UWM[] +:configmap-name: user-workload-monitoring-config +:namespace-name: openshift-user-workload-monitoring +:component: thanosRuler +// end::UWM[] To use a persistent volume (PV) for monitoring 
components, you must configure a persistent volume claim (PVC). .Prerequisites +// tag::CPM[] +* You have access to the cluster as a user with the `cluster-admin` cluster role. +* You have created the `cluster-monitoring-config` `ConfigMap` object. +// end::CPM[] +// tag::UWM[] ifndef::openshift-dedicated,openshift-rosa[] -* *If you are configuring core {product-title} monitoring components*: -** You have access to the cluster as a user with the `cluster-admin` cluster role. -** You have created the `cluster-monitoring-config` `ConfigMap` object. -* *If you are configuring components that monitor user-defined projects*: -** You have access to the cluster as a user with the `cluster-admin` cluster role, or as a user with the `user-workload-monitoring-config-edit` role in the `openshift-user-workload-monitoring` project. -** A cluster administrator has enabled monitoring for user-defined projects. +* You have access to the cluster as a user with the `cluster-admin` cluster role, or as a user with the `user-workload-monitoring-config-edit` role in the `openshift-user-workload-monitoring` project. +* A cluster administrator has enabled monitoring for user-defined projects. endif::openshift-dedicated,openshift-rosa[] ifdef::openshift-dedicated,openshift-rosa[] * You have access to the cluster as a user with the `dedicated-admin` role. * The `user-workload-monitoring-config` `ConfigMap` object exists. This object is created by default when the cluster is created. endif::openshift-dedicated,openshift-rosa[] +// end::UWM[] * You have installed the OpenShift CLI (`oc`). .Procedure -. Edit the `ConfigMap` object: -ifndef::openshift-dedicated,openshift-rosa[] -** *To configure a PVC for a component that monitors core {product-title} projects*: -.. Edit the `cluster-monitoring-config` `ConfigMap` object in the `openshift-monitoring` project: +. 
Edit the `{configmap-name}` config map in the `{namespace-name}` project: + -[source,terminal] +[source,terminal,subs="attributes+"] ---- -$ oc -n openshift-monitoring edit configmap cluster-monitoring-config +$ oc -n {namespace-name} edit configmap {configmap-name} ---- -.. Add your PVC configuration for the component under `data/config.yaml`: +. Add your PVC configuration for the component under `data/config.yaml`: + -[source,yaml] +[source,yaml,subs="attributes+"] ---- apiVersion: v1 kind: ConfigMap metadata: - name: cluster-monitoring-config - namespace: openshift-monitoring + name: {configmap-name} + namespace: {namespace-name} data: config.yaml: | - : #<1> + : # <1> volumeClaimTemplate: spec: - storageClassName: #<2> + storageClassName: # <2> resources: requests: - storage: #<3> + storage: # <3> ---- -<1> Specify the core monitoring component for which you want to configure the PVC. +<1> Specify the monitoring component for which you want to configure the PVC. <2> Specify an existing storage class. If a storage class is not specified, the default storage class is used. <3> Specify the amount of required storage. + -See the link:https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims[Kubernetes documentation on PersistentVolumeClaims] for information on how to specify `volumeClaimTemplate`. 
-+ -The following example configures a PVC that claims persistent storage for the Prometheus instance that monitors core {product-title} components: +The following example configures a PVC that claims persistent storage for +// tag::CPM[] +Prometheus: +// end::CPM[] +// tag::UWM[] +Thanos Ruler: +// end::UWM[] + -[source,yaml] +.Example PVC configuration +[source,yaml,subs="attributes+"] ---- apiVersion: v1 kind: ConfigMap metadata: - name: cluster-monitoring-config - namespace: openshift-monitoring + name: {configmap-name} + namespace: {namespace-name} data: config.yaml: | - prometheusK8s: + {component}: volumeClaimTemplate: spec: storageClassName: my-storage-class resources: requests: +# tag::CPM[] storage: 40Gi ----- - -** *To configure a PVC for a component that monitors user-defined projects*: -endif::openshift-dedicated,openshift-rosa[] -.. Edit the `user-workload-monitoring-config` `ConfigMap` object in the `openshift-user-workload-monitoring` project: -+ -[source,terminal] ----- -$ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config ----- - -.. Add your PVC configuration for the component under `data/config.yaml`: -+ -[source,yaml] ----- -apiVersion: v1 -kind: ConfigMap -metadata: - name: user-workload-monitoring-config - namespace: openshift-user-workload-monitoring -data: - config.yaml: | - : #<1> - volumeClaimTemplate: - spec: - storageClassName: #<2> - resources: - requests: - storage: #<3> ----- -<1> Specify the component for user-defined monitoring for which you want to configure the PVC. -<2> Specify an existing storage class. If a storage class is not specified, the default storage class is used. -<3> Specify the amount of required storage. -+ -See the link:https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims[Kubernetes documentation on PersistentVolumeClaims] for information on how to specify `volumeClaimTemplate`. 
-+ -The following example configures a PVC that claims persistent storage for Thanos Ruler: -+ -[source,yaml] ----- -apiVersion: v1 -kind: ConfigMap -metadata: - name: user-workload-monitoring-config - namespace: openshift-user-workload-monitoring -data: - config.yaml: | - thanosRuler: - volumeClaimTemplate: - spec: - storageClassName: my-storage-class - resources: - requests: +# end::CPM[] +# tag::UWM[] storage: 10Gi +# end::UWM[] ---- +// tag::UWM[] + [NOTE] ==== Storage requirements for the `thanosRuler` component depend on the number of rules that are evaluated and how many samples each rule generates. ==== +// end::UWM[] . Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed and the new storage configuration is applied. + @@ -147,3 +126,8 @@ Storage requirements for the `thanosRuler` component depend on the number of rul ==== When you update the config map with a PVC configuration, the affected `StatefulSet` object is recreated, resulting in a temporary service outage. ==== + +// Unset the source code block attributes just to be safe. 
+:!configmap-name: +:!namespace-name: +:!component: diff --git a/modules/monitoring-modifying-retention-time-and-size-for-prometheus-metrics-data.adoc b/modules/monitoring-modifying-retention-time-and-size-for-prometheus-metrics-data.adoc index a761b57f16b6..00f99ad40845 100644 --- a/modules/monitoring-modifying-retention-time-and-size-for-prometheus-metrics-data.adoc +++ b/modules/monitoring-modifying-retention-time-and-size-for-prometheus-metrics-data.adoc @@ -3,22 +3,39 @@ // * observability/monitoring/configuring-the-monitoring-stack.adoc :_mod-docs-content-type: PROCEDURE -[id="modifying-retention-time-and-size-for-prometheus-metrics-data_{context}"] -= Modifying the retention time and size for Prometheus metrics data +// The ultimate solution DOES NOT NEED separate IDs, it is just needed for now so that the tests will not break + +// tag::CPM[] +[id="modifying-retention-time-and-size-for-prometheus-metrics-data-cpm_{context}"] += Modifying the retention time and size for Prometheus metrics data for core platform monitoring +// end::CPM[] + +// tag::UWM[] +[id="modifying-retention-time-and-size-for-prometheus-metrics-data-uwm_{context}"] += Modifying the retention time and size for Prometheus metrics data for user-defined monitoring +// end::UWM[] + +// Set attributes to distinguish between cluster monitoring example (core platform monitoring - CPM) and user workload monitoring (UWM) examples + +// tag::CPM[] +:configmap-name: cluster-monitoring-config +:namespace-name: openshift-monitoring +:component: prometheusK8s +// end::CPM[] +// tag::UWM[] +:configmap-name: user-workload-monitoring-config +:namespace-name: openshift-user-workload-monitoring +:component: prometheus +// end::UWM[] + +// The following section will be removed and made into its separate concept module. 
By default, Prometheus retains metrics data for the following durations: * *Core platform monitoring*: 15 days * *Monitoring for user-defined projects*: 24 hours -You can modify the retention time for -ifndef::openshift-dedicated,openshift-rosa[] -Prometheus -endif::openshift-dedicated,openshift-rosa[] -ifdef::openshift-dedicated,openshift-rosa[] -the Prometheus instance that monitors user-defined projects, -endif::openshift-dedicated,openshift-rosa[] -to change how soon the data is deleted. You can also set the maximum amount of disk space the retained metrics data uses. If the data reaches this size limit, Prometheus deletes the oldest data first until the disk space used is again below the limit. +You can modify the retention time for the Prometheus instance to change how soon the data is deleted. You can also set the maximum amount of disk space the retained metrics data uses. If the data reaches this size limit, Prometheus deletes the oldest data first until the disk space used is again below the limit. Note the following behaviors of these data retention settings: @@ -37,113 +54,73 @@ If any data blocks exceed the defined retention time or the defined size limit, ==== Data compaction occurs every two hours. Therefore, a persistent volume (PV) might fill up before compaction, potentially exceeding the `retentionSize` limit. In such cases, the `KubePersistentVolumeFillingUp` alert fires until the space on a PV is lower than the `retentionSize` limit. ==== +// The section above will be removed and made into its separate concept module. .Prerequisites +// tag::CPM[] +* You have access to the cluster as a user with the `cluster-admin` cluster role. +* You have created the `cluster-monitoring-config` `ConfigMap` object. +// end::CPM[] +// tag::UWM[] ifndef::openshift-dedicated,openshift-rosa[] -* *If you are configuring core {product-title} monitoring components*: -** You have access to the cluster as a user with the `cluster-admin` cluster role. 
-** You have created the `cluster-monitoring-config` `ConfigMap` object. -* *If you are configuring components that monitor user-defined projects*: -** You have access to the cluster as a user with the `cluster-admin` cluster role, or as a user with the `user-workload-monitoring-config-edit` role in the `openshift-user-workload-monitoring` project. -** A cluster administrator has enabled monitoring for user-defined projects. +* You have access to the cluster as a user with the `cluster-admin` cluster role, or as a user with the `user-workload-monitoring-config-edit` role in the `openshift-user-workload-monitoring` project. +* A cluster administrator has enabled monitoring for user-defined projects. endif::openshift-dedicated,openshift-rosa[] ifdef::openshift-dedicated,openshift-rosa[] * You have access to the cluster as a user with the `dedicated-admin` role. * The `user-workload-monitoring-config` `ConfigMap` object exists. This object is created by default when the cluster is created. endif::openshift-dedicated,openshift-rosa[] +// end::UWM[] * You have installed the OpenShift CLI (`oc`). .Procedure -. Edit the `ConfigMap` object: -ifndef::openshift-dedicated,openshift-rosa[] -** *To modify the retention time and size for the Prometheus instance that monitors core {product-title} projects*: -.. Edit the `cluster-monitoring-config` `ConfigMap` object in the `openshift-monitoring` project: +. Edit the `{configmap-name}` config map in the `{namespace-name}` project: + -[source,terminal] +[source,terminal,subs="attributes+"] ---- -$ oc -n openshift-monitoring edit configmap cluster-monitoring-config +$ oc -n {namespace-name} edit configmap {configmap-name} ---- -.. Add the retention time and size configuration under `data/config.yaml`: +. 
Add the retention time and size configuration under `data/config.yaml`: + -[source,yaml] +[source,yaml,subs="attributes+"] ---- apiVersion: v1 kind: ConfigMap metadata: - name: cluster-monitoring-config - namespace: openshift-monitoring + name: {configmap-name} + namespace: {namespace-name} data: config.yaml: | - prometheusK8s: - retention: <1> - retentionSize: <2> + {component}: + retention: # <1> + retentionSize: # <2> ---- -+ <1> The retention time: a number directly followed by `ms` (milliseconds), `s` (seconds), `m` (minutes), `h` (hours), `d` (days), `w` (weeks), or `y` (years). You can also combine time values for specific times, such as `1h30m15s`. <2> The retention size: a number directly followed by `B` (bytes), `KB` (kilobytes), `MB` (megabytes), `GB` (gigabytes), `TB` (terabytes), `PB` (petabytes), and `EB` (exabytes). + -The following example sets the retention time to 24 hours and the retention size to 10 gigabytes for the Prometheus instance that monitors core {product-title} components: -+ -[source,yaml] ----- -apiVersion: v1 -kind: ConfigMap -metadata: - name: cluster-monitoring-config - namespace: openshift-monitoring -data: - config.yaml: | - prometheusK8s: - retention: 24h - retentionSize: 10GB ----- - -** *To modify the retention time and size for the Prometheus instance that monitors user-defined projects*: -endif::openshift-dedicated,openshift-rosa[] -.. Edit the `user-workload-monitoring-config` `ConfigMap` object in the `openshift-user-workload-monitoring` project: -+ -[source,terminal] ----- -$ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config ----- - -.. 
Add the retention time and size configuration under `data/config.yaml`: +The following example sets the retention time to 24 hours and the retention size to 10 gigabytes for the Prometheus instance: + -[source,yaml] +.Example of setting retention time for Prometheus +[source,yaml,subs="attributes+"] ---- apiVersion: v1 kind: ConfigMap metadata: - name: user-workload-monitoring-config - namespace: openshift-user-workload-monitoring + name: {configmap-name} + namespace: {namespace-name} data: config.yaml: | - prometheus: - retention: <1> - retentionSize: <2> ----- -+ -<1> The retention time: a number directly followed by `ms` (milliseconds), `s` (seconds), `m` (minutes), `h` (hours), `d` (days), `w` (weeks), or `y` (years). -You can also combine time values for specific times, such as `1h30m15s`. -<2> The retention size: a number directly followed by `B` (bytes), `KB` (kilobytes), `MB` (megabytes), `GB` (gigabytes), `TB` (terabytes), `PB` (petabytes), or `EB` (exabytes). -+ -The following example sets the retention time to 24 hours and the retention size to 10 gigabytes for the Prometheus instance that monitors user-defined projects: -+ -[source,yaml] ----- -apiVersion: v1 -kind: ConfigMap -metadata: - name: user-workload-monitoring-config - namespace: openshift-user-workload-monitoring -data: - config.yaml: | - prometheus: + {component}: retention: 24h retentionSize: 10GB ---- . Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. + +// Unset the source code block attributes just to be safe. 
+:!configmap-name:
+:!namespace-name:
+:!component:
\ No newline at end of file
diff --git a/modules/monitoring-resizing-a-persistent-volume.adoc b/modules/monitoring-resizing-a-persistent-volume.adoc
index 229eefa28aaf..9977896478d5 100644
--- a/modules/monitoring-resizing-a-persistent-volume.adoc
+++ b/modules/monitoring-resizing-a-persistent-volume.adoc
@@ -3,10 +3,39 @@
 // * observability/monitoring/configuring-the-monitoring-stack.adoc
 :_mod-docs-content-type: PROCEDURE
-[id="resizing-a-persistent-volume_{context}"]
-= Resizing a persistent volume
-You can resize a persistent volume (PV) for monitoring components, such as Prometheus, Thanos Ruler, or Alertmanager. You need to manually expand a persistent volume claim (PVC), and then update the config map in which the component is configured.
+// The ultimate solution DOES NOT NEED separate IDs, it is just needed for now so that the tests will not break
+
+// tag::CPM[]
+[id="resizing-a-persistent-volume-cpm_{context}"]
+= Resizing a persistent volume for core platform monitoring
+// end::CPM[]
+
+// tag::UWM[]
+[id="resizing-a-persistent-volume-uwm_{context}"]
+= Resizing a persistent volume for monitoring of user-defined projects
+// end::UWM[]
+
+// Set attributes to distinguish between cluster monitoring example (core platform monitoring - CPM) and user workload monitoring (UWM) examples
+
+// tag::CPM[]
+:configmap-name: cluster-monitoring-config
+:namespace-name: openshift-monitoring
+:component: prometheusK8s
+// end::CPM[]
+// tag::UWM[]
+:configmap-name: user-workload-monitoring-config
+:namespace-name: openshift-user-workload-monitoring
+:component: thanosRuler
+// end::UWM[]
+
+// tag::CPM[]
+You can resize a persistent volume (PV) for monitoring components, such as Prometheus or Alertmanager.
+// end::CPM[]
+// tag::UWM[]
+You can resize a persistent volume (PV) for the Prometheus, Thanos Ruler, and Alertmanager instances that monitor user-defined projects.
+// end::UWM[] +You need to manually expand a persistent volume claim (PVC), and then update the config map in which the component is configured. [IMPORTANT] ==== @@ -14,128 +43,87 @@ You can only expand the size of the PVC. Shrinking the storage size is not possi ==== .Prerequisites - +// tag::CPM[] +* You have access to the cluster as a user with the `cluster-admin` cluster role. +* You have created the `cluster-monitoring-config` `ConfigMap` object. +* You have configured at least one PVC for core {product-title} monitoring components. +// end::CPM[] +// tag::UWM[] +* You have access to the cluster as a user with the `cluster-admin` cluster role, or as a user with the `user-workload-monitoring-config-edit` role in the `openshift-user-workload-monitoring` project. +* A cluster administrator has enabled monitoring for user-defined projects. +* You have configured at least one PVC for components that monitor user-defined projects. +// end::UWM[] * You have installed the OpenShift CLI (`oc`). -* *If you are configuring core {product-title} monitoring components*: -** You have access to the cluster as a user with the `cluster-admin` cluster role. -** You have created the `cluster-monitoring-config` `ConfigMap` object. -** You have configured at least one PVC for core {product-title} monitoring components. -* *If you are configuring components that monitor user-defined projects*: -** You have access to the cluster as a user with the `cluster-admin` cluster role, or as a user with the `user-workload-monitoring-config-edit` role in the `openshift-user-workload-monitoring` project. -** A cluster administrator has enabled monitoring for user-defined projects. -** You have configured at least one PVC for components that monitor user-defined projects. .Procedure . Manually expand a PVC with the updated storage request. For more information, see "Expanding persistent volume claims (PVCs) with a file system" in _Expanding persistent volumes_. -. 
Edit the `ConfigMap` object: -** *If you are configuring core {product-title} monitoring components*: -.. Edit the `cluster-monitoring-config` `ConfigMap` object in the `openshift-monitoring` project: +. Edit the `{configmap-name}` config map in the `{namespace-name}` project: + -[source,terminal] +[source,terminal,subs="attributes+"] ---- -$ oc -n openshift-monitoring edit configmap cluster-monitoring-config +$ oc -n {namespace-name} edit configmap {configmap-name} ---- -.. Add a new storage size for the PVC configuration for the component under `data/config.yaml`: +. Add a new storage size for the PVC configuration for the component under `data/config.yaml`: + -[source,yaml] +[source,yaml,subs="attributes+"] ---- apiVersion: v1 kind: ConfigMap metadata: - name: cluster-monitoring-config - namespace: openshift-monitoring + name: {configmap-name} + namespace: {namespace-name} data: config.yaml: | - : #<1> + : # <1> volumeClaimTemplate: spec: resources: requests: - storage: #<2> + storage: # <2> ---- <1> The component for which you want to change the storage size. <2> Specify the new size for the storage volume. It must be greater than the previous value. 
+ -The following example sets the new PVC request to 100 gigabytes for the Prometheus instance that monitors core {product-title} components: +The following example sets the new PVC request to +// tag::CPM[] +100 gigabytes for the Prometheus instance: +// end::CPM[] +// tag::UWM[] +20 gigabytes for Thanos Ruler: +// end::UWM[] + -[source,yaml] +.Example storage configuration for `{component}` +[source,yaml,subs="attributes+"] ---- apiVersion: v1 kind: ConfigMap metadata: - name: cluster-monitoring-config - namespace: openshift-monitoring + name: {configmap-name} + namespace: {namespace-name} data: config.yaml: | - prometheusK8s: + {component}: volumeClaimTemplate: spec: resources: requests: +# tag::CPM[] storage: 100Gi ----- - -** *If you are configuring components that monitor user-defined projects*: -+ -[NOTE] -==== -You can resize the volumes for the Thanos Ruler and for instances of Alertmanager and Prometheus that monitor user-defined projects. -==== -+ -.. Edit the `user-workload-monitoring-config` `ConfigMap` object in the `openshift-user-workload-monitoring` project: -+ -[source,terminal] ----- -$ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config ----- - -.. Update the PVC configuration for the monitoring component under `data/config.yaml`: -+ -[source,yaml] ----- -apiVersion: v1 -kind: ConfigMap -metadata: - name: user-workload-monitoring-config - namespace: openshift-user-workload-monitoring -data: - config.yaml: | - : #<1> - volumeClaimTemplate: - spec: - resources: - requests: - storage: #<2> ----- -<1> The component for which you want to change the storage size. -<2> Specify the new size for the storage volume. It must be greater than the previous value. 
-+ -The following example sets the new PVC request to 20 gigabytes for Thanos Ruler: -+ -[source,yaml] ----- -apiVersion: v1 -kind: ConfigMap -metadata: - name: user-workload-monitoring-config - namespace: openshift-user-workload-monitoring -data: - config.yaml: | - thanosRuler: - volumeClaimTemplate: - spec: - resources: - requests: +# end::CPM[] +# tag::UWM[] storage: 20Gi +# end::UWM[] ---- +// tag::UWM[] + [NOTE] ==== Storage requirements for the `thanosRuler` component depend on the number of rules that are evaluated and how many samples each rule generates. ==== +// end::UWM[] . Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. + @@ -143,3 +131,8 @@ Storage requirements for the `thanosRuler` component depend on the number of rul ==== When you update the config map with a new storage size, the affected `StatefulSet` object is recreated, resulting in a temporary service outage. ==== + +// Unset the source code block attributes just to be safe. +:!configmap-name: +:!namespace-name: +:!component: diff --git a/observability/monitoring/configuring-the-monitoring-stack.adoc b/observability/monitoring/configuring-the-monitoring-stack.adoc index 61f4881b9518..2e697a6ab6e4 100644 --- a/observability/monitoring/configuring-the-monitoring-stack.adoc +++ b/observability/monitoring/configuring-the-monitoring-stack.adoc @@ -168,14 +168,28 @@ You can ensure that the containers that run monitoring components have enough CP You can configure these limits and requests for core platform monitoring components in the `openshift-monitoring` namespace and for the components that monitor user-defined projects in the `openshift-user-workload-monitoring` namespace. 
include::modules/monitoring-about-specifying-limits-and-requests-for-monitoring-components.adoc[leveloffset=+2]
+
include::modules/monitoring-specifying-limits-and-requests-for-monitoring-components.adoc[leveloffset=+2]

// Configuring persistent storage
include::modules/monitoring-configuring-persistent-storage.adoc[leveloffset=+1]
-include::modules/monitoring-configuring-a-persistent-volume-claim.adoc[leveloffset=+2]
+
+// Configuring a persistent volume claim
+// The following module should only include core platform monitoring (CPM tags)
+include::modules/monitoring-configuring-a-persistent-volume-claim.adoc[leveloffset=+2,tags=**;CPM;!UWM]
+// The following module should only include monitoring for user-defined projects (UWM tags)
+include::modules/monitoring-configuring-a-persistent-volume-claim.adoc[leveloffset=+2,tags=**;!CPM;UWM]
+
+[role="_additional-resources"]
+.Additional resources
+* link:https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims[PersistentVolumeClaims] (Kubernetes documentation on how to specify `volumeClaimTemplate`)

ifndef::openshift-dedicated,openshift-rosa[]
-include::modules/monitoring-resizing-a-persistent-volume.adoc[leveloffset=+2]
+// Resizing a persistent volume
+// The following module should only include core platform monitoring (CPM tags)
+include::modules/monitoring-resizing-a-persistent-volume.adoc[leveloffset=+2,tags=**;CPM;!UWM]
+// The following module should only include monitoring for user-defined projects (UWM tags)
+include::modules/monitoring-resizing-a-persistent-volume.adoc[leveloffset=+2,tags=**;!CPM;UWM]

[role="_additional-resources"]
.Additional resources
@@ -183,7 +197,12 @@ include::modules/monitoring-resizing-a-persistent-volume.adoc[leveloffset=+2]
* xref:../../storage/expanding-persistent-volumes.adoc#expanding-pvc-filesystem_expanding-persistent-volumes[Expanding persistent volume claims (PVCs) with a file system]
endif::openshift-dedicated,openshift-rosa[]
-include::modules/monitoring-modifying-retention-time-and-size-for-prometheus-metrics-data.adoc[leveloffset=+2] +// Modifying the retention time and size for Prometheus metrics data +// The following module should only include core platform monitoring (CPM tags) +include::modules/monitoring-modifying-retention-time-and-size-for-prometheus-metrics-data.adoc[leveloffset=+2,tags=**;CPM;!UWM] +// The following module should only include monitoring for user-defined projects (UWM tags) +include::modules/monitoring-modifying-retention-time-and-size-for-prometheus-metrics-data.adoc[leveloffset=+2,tags=**;!CPM;UWM] + include::modules/monitoring-modifying-the-retention-time-for-thanos-ruler-metrics-data.adoc[leveloffset=+2] [role="_additional-resources"]
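// Note on the single-sourcing mechanism used throughout this patch: an
// include such as `tags=**;CPM;!UWM` keeps the untagged lines of a module
// (the `**` base) plus the regions between `tag::CPM[]`/`end::CPM[]`, and
// drops the `UWM` regions, so one module renders as two audience-specific
// procedures. As an illustration only, the following hedged Python sketch
// (the `filter_tags` helper and the sample `module` are hypothetical, not
// Asciidoctor code) mimics that selection logic; nested tags and other
// wildcard forms are ignored.

```python
import re

# Matches AsciiDoc tag directives in either line-comment style used in
# this patch: "// tag::CPM[]" in prose and "# tag::CPM[]" inside YAML.
TAG_RE = re.compile(r"\s*(?://|#)\s*(tag|end)::([\w-]+)\[\]\s*$")

def filter_tags(text, include, exclude):
    """Simplified sketch of Asciidoctor-style tag filtering.

    Untagged lines are always kept (the '**' base); lines inside a tagged
    region are kept only if the tag is selected and not excluded.
    """
    out = []
    active = []  # stack of currently open tag names
    for line in text.splitlines():
        m = TAG_RE.match(line)
        if m:
            kind, name = m.groups()
            if kind == "tag":
                active.append(name)
            elif active and active[-1] == name:
                active.pop()
            continue  # tag directives never appear in the output
        if all(t in include and t not in exclude for t in active):
            out.append(line)
    return "\n".join(out)

# Hypothetical miniature of the pattern used by the modules above.
module = """\
// tag::CPM[]
:configmap-name: cluster-monitoring-config
// end::CPM[]
// tag::UWM[]
:configmap-name: user-workload-monitoring-config
// end::UWM[]
Edit the {configmap-name} config map."""

cpm = filter_tags(module, include={"CPM"}, exclude={"UWM"})
uwm = filter_tags(module, include={"UWM"}, exclude={"CPM"})
```

// In the real build this selection happens inside the `include::`
// directive itself; the sketch only shows why the shared step text
// survives in both renderings while `:configmap-name:` resolves to a
// different value in each.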