diff --git a/modules/cluster-logging-about-crd.adoc b/modules/cluster-logging-about-crd.adoc
index 81c28d8e83b0..4a3bca94a358 100644
--- a/modules/cluster-logging-about-crd.adoc
+++ b/modules/cluster-logging-about-crd.adoc
@@ -43,10 +43,11 @@ spec:
       request:
         memory: 8G
     proxy:
-      limits:
-        memory: 100Mi
-      requests:
-        memory: 100Mi
+      resources:
+        limits:
+          memory: 256Mi
+        requests:
+          memory: 256Mi
   visualization:
     type: "kibana"
     kibana:
diff --git a/modules/cluster-logging-deploy-cli.adoc b/modules/cluster-logging-deploy-cli.adoc
index a850fd9853f9..c12ee14d014e 100644
--- a/modules/cluster-logging-deploy-cli.adoc
+++ b/modules/cluster-logging-deploy-cli.adoc
@@ -364,10 +364,11 @@ spec:
       requests:
         memory: "8Gi"
     proxy: <8>
-      limits:
-        memory: 256Mi
-      requests:
-        memory: 256Mi
+      resources:
+        limits:
+          memory: 256Mi
+        requests:
+          memory: 256Mi
     redundancyPolicy: "SingleRedundancy"
   visualization:
     type: "kibana" <9>
diff --git a/modules/cluster-logging-deploy-console.adoc b/modules/cluster-logging-deploy-console.adoc
index 324242bfc6f5..2d5ff7eee64c 100644
--- a/modules/cluster-logging-deploy-console.adoc
+++ b/modules/cluster-logging-deploy-console.adoc
@@ -196,10 +196,11 @@ spec:
     storage:
       storageClassName: "" <6>
       size: 200G
-    resources: <7>
-      requests:
-        memory: "8Gi"
-    proxy: <8>
+    resources: <7>
+      requests:
+        memory: "8Gi"
+    proxy: <8>
+      resources:
         limits:
           memory: 256Mi
         requests:
diff --git a/modules/cluster-logging-logstore-limits.adoc b/modules/cluster-logging-logstore-limits.adoc
index 5cf6d2b9dd84..2761aa51fbe5 100644
--- a/modules/cluster-logging-logstore-limits.adoc
+++ b/modules/cluster-logging-logstore-limits.adoc
@@ -3,7 +3,7 @@
 // * logging/cluster-logging-elasticsearch.adoc
 
 [id="cluster-logging-logstore-limits_{context}"]
-= Configuring CPU and memory requests for the log store 
+= Configuring CPU and memory requests for the log store
 
 Each component specification allows for adjustments to both the CPU and memory requests.
 You should not have to manually adjust these values as the Elasticsearch
@@ -14,7 +14,7 @@ Operator sets values sufficient for your environment.
 In large-scale clusters, the default memory limit for the Elasticsearch proxy container might not be sufficient, causing the proxy container to be OOMKilled. If you experience this issue, increase the memory requests and limits for the Elasticsearch proxy.
 ====
 
-Each Elasticsearch node can operate with a lower memory setting though this is *not* recommended for production deployments. 
+Each Elasticsearch node can operate with a lower memory setting, though this is *not* recommended for production deployments.
 For production use, you should have no less than the default 16Gi allocated to each pod. Preferably you should allocate as much as possible, up to 64Gi per pod.
 
 .Prerequisites
@@ -50,16 +50,17 @@ spec:
       cpu: "1"
      memory: "64Gi"
     proxy: <2>
-      limits:
-        memory: 100Mi
-      requests:
-        memory: 100Mi
+      resources:
+        limits:
+          memory: 100Mi
+        requests:
+          memory: 100Mi
 ----
 <1> Specify the CPU and memory requests for Elasticsearch as needed. If you leave these values blank, the Elasticsearch Operator sets default values that should be sufficient for most deployments. The default values are `16Gi` for the memory request and `1` for the CPU request.
 <2> Specify the CPU and memory requests for the Elasticsearch proxy as needed. If you leave these values blank, the Elasticsearch Operator sets default values that should be sufficient for most deployments. The default values are `256Mi` for the memory request and `100m` for the CPU request.
 
-If you adjust the amount of Elasticsearch memory, you must change both the request value and the limit value. 
+If you adjust the amount of Elasticsearch memory, you must change both the request value and the limit value.
 
 For example:
 
@@ -73,6 +74,6 @@ For example:
       memory: "32Gi"
 ----
 
-Kubernetes generally adheres the node configuration and does not allow Elasticsearch to use the specified limits. 
+Kubernetes generally adheres to the node configuration and does not allow Elasticsearch to use the specified limits.
 Setting the same value for the `requests` and `limits` ensures that Elasticsearch can use the memory you want, assuming the node has the memory available.
diff --git a/modules/differences-between-machinesets-and-machineconfigpool.adoc b/modules/differences-between-machinesets-and-machineconfigpool.adoc
index d93a63b11322..def5913d9301 100644
--- a/modules/differences-between-machinesets-and-machineconfigpool.adoc
+++ b/modules/differences-between-machinesets-and-machineconfigpool.adoc
@@ -9,7 +9,7 @@
 
 `MachineSet` objects describe {product-title} nodes with respect to the cloud or machine provider.
 
-The `MachineConfigPool` object allows `MachineConfigControlle`r components to define and provide the status of machines in the context of upgrades.
+The `MachineConfigPool` object allows `MachineConfigController` components to define and provide the status of machines in the context of upgrades.
 
 The `MachineConfigPool` object allows users to configure how upgrades are rolled out to the {product-title} nodes in the machine config pool.
diff --git a/modules/pipelines-document-attributes.adoc b/modules/pipelines-document-attributes.adoc
index eb88bc153c57..6a78fe7b1ae0 100644
--- a/modules/pipelines-document-attributes.adoc
+++ b/modules/pipelines-document-attributes.adoc
@@ -9,4 +9,5 @@
 //
 :pipelines-title: Red Hat OpenShift Pipelines
 :pipelines-shortname: Pipelines
-:pipelines-ver: release-tech-preview-3
+:pipelines-ver: release-tech-preview-2
+// Checked for impact in pages using the {pipelines-ver} attribute; all good.
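
All four logging hunks above make the same structural change: the proxy `limits` and `requests` move under a new nested `resources` key. For reference, here is a minimal sketch of the resulting `ClusterLogging` custom resource, assembled from the stanzas and default values shown in the hunks; the `metadata` values and `nodeCount` are illustrative assumptions, not taken from the diff:

[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance                 # illustrative assumption
  namespace: openshift-logging   # illustrative assumption
spec:
  logStore:
    type: "elasticsearch"
    elasticsearch:
      nodeCount: 3               # illustrative assumption
      resources:                 # Elasticsearch container; defaults per callout <1>,
        limits:                  # with the request set equal to the limit as the
          memory: "16Gi"         # surrounding text recommends
        requests:
          cpu: "1"
          memory: "16Gi"
      proxy:
        resources:               # nested stanza introduced by these hunks
          limits:
            memory: 256Mi
          requests:
            memory: 256Mi
----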
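
To verify the "same request and limit" guidance from the last logging hunk on a running cluster (the pod name and namespace here are assumptions about your deployment), `oc describe pod <pod_name> -n openshift-logging` prints the effective `Limits` and `Requests` for each container, and `oc get pod <pod_name> -n openshift-logging -o jsonpath='{.status.qosClass}'` reports `Guaranteed` only when every container in the pod has requests equal to limits for both CPU and memory.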