Merged
12 changes: 6 additions & 6 deletions modules/coo-troubleshooting-ui-plugin-using.adoc
@@ -30,7 +30,7 @@ Other signal types require optional components to be installed:
image::coo-troubleshooting-panel-link.png[Troubleshooting Panel link]
+
Click on the **Troubleshooting Panel** link to display the panel.
. The panel consists of query details and a topology graph of the query results. The selected alert is converted into a Korrel8r query string and sent to the `korrel8r` service.
The results are displayed as a graph network connecting the returned signals and resources. This is a _neighbourhood_ graph, starting at the current resource and including related objects up to 3 steps away from the starting point.
Clicking on nodes in the graph takes you to the corresponding web console pages for those resources.
. You can use the troubleshooting panel to find resources relating to the chosen alert.
@@ -62,11 +62,11 @@ image::coo-troubleshooting-experimental.png[Experimental features]
[arabic]
... **Hide Query** hides the experimental features.

... The query that identifies the starting point for the graph.
The query language, part of the link:https://korrel8r.github.io/korrel8r[Korrel8r] correlation engine used to create the graphs, is experimental and may change in future.
The query is updated by the **Focus** button to correspond to the resources in the main web console window.

... **Neighbourhood depth** is used to display a smaller or larger neighbourhood.
+
[NOTE]
====
@@ -80,4 +80,4 @@ Setting a large value in a large cluster might cause the query to fail, if the n

**** `netflow:network` representing any network observability network event.

**** `log:__LOG_TYPE__` representing stored logs, where `__LOG_TYPE__` must be one of `application`, `infrastructure` or `audit`.
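+
For orientation, a Korrel8r query string combines a domain, a class within that domain, and selector data, in the form `DOMAIN:CLASS:SELECTOR`. As a hypothetical sketch, not taken from this diff (the alert name is a placeholder, and the selector syntax is domain-specific and, as noted above, experimental), a starting-point query in the `alert` domain might look like:
+
[source,text]
----
alert:alert:{"alertname":"KubeNodeNotReady"}
----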
16 changes: 8 additions & 8 deletions modules/dr-restoring-cluster-state.adoc
@@ -3,12 +3,12 @@
// * disaster_recovery/scenario-2-restoring-cluster-state.adoc
// * post_installation_configuration/cluster-tasks.adoc

// Contributors: Some changes for the `etcd` restore procedure are only valid for 4.14+.
// In the 4.14+ documentation, OVN-K requires different steps because there is no centralized OVN
// control plane to be converted. For more information, see PR #64939.
// Do not cherry pick from "main" to "enterprise-4.12" or "enterprise-4.13" because the cherry pick
// procedure is different for these versions. Instead, open a separate PR for 4.13 and
// cherry pick to 4.12 or make the updates directly in 4.12.

:_mod-docs-content-type: PROCEDURE
[id="dr-scenario-2-restoring-cluster-state_{context}"]
@@ -123,7 +123,7 @@ $ sudo mv -v /etc/kubernetes/manifests/kube-scheduler-pod.yaml /tmp
[source,terminal]
----
$ sudo crictl ps | grep kube-scheduler | egrep -v "operator|guard"
----
If the output of this command is not empty, wait a few minutes and check again.

.. Move the `etcd` data directory to a different location with the following example:
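+
The example block itself is collapsed in this diff view. Assuming the default `etcd` data directory `/var/lib/etcd` (an assumption, not taken from this diff), the move typically looks like:
+
[source,terminal]
----
$ sudo mv -v /var/lib/etcd/ /tmp
----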
@@ -464,8 +464,8 @@ $ oc get csr
+
[source,terminal]
----
NAME         AGE     SIGNERNAME                      REQUESTOR                   CONDITION
csr-<uuid>   8m3s    kubernetes.io/kubelet-serving   system:node:<node_name>     Pending
----

... Approve all new CSRs by running the following command, replacing `csr-<uuid>` with the name of the CSR:
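+
The command itself is collapsed in this view; CSR approval uses `oc adm certificate approve`, along these lines:
+
[source,terminal]
----
$ oc adm certificate approve csr-<uuid>
----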
4 changes: 2 additions & 2 deletions modules/network-observability-filtering-ebpf-rule.adoc
@@ -52,7 +52,7 @@ spec:
----
<1> To enable eBPF flow filtering, set `spec.agent.ebpf.flowFilter.enable` to `true`.
<2> To define the action for the flow filter rule, set the required `action` parameter. Valid values are `Accept` or `Reject`.
-<3> To define the IP address and CIDR mask for the flow filter rule, set the required `cidr` parameter. This parameter supports both IPv4 and IPv6 address formats. To match any IP address, use `0.0.0.0/0` for IPv4 or ``::/0` for IPv6.
+<3> To define the IP address and CIDR mask for the flow filter rule, set the required `cidr` parameter. This parameter supports both IPv4 and IPv6 address formats. To match any IP address, use `0.0.0.0/0` for IPv4 or `::/0` for IPv6.
<4> To define the sampling rate for matched flows and override the global sampling setting `spec.agent.ebpf.sampling`, set the `sampling` parameter.
<5> To filter flows by Peer IP CIDR, set the `peerCIDR` parameter.
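
The YAML the callouts refer to is collapsed in this view. A minimal sketch, assuming the `flows.netobserv.io/v1beta2` API and placeholder CIDR and sampling values, might look like:

[source,yaml]
----
apiVersion: flows.netobserv.io/v1beta2
kind: FlowCollector
metadata:
  name: cluster
spec:
  agent:
    type: eBPF
    ebpf:
      flowFilter:
        enable: true          # callout <1>: enable the eBPF flow filter
        action: Accept        # callout <2>: Accept or Reject matched flows
        cidr: 10.0.62.0/24    # callout <3>: placeholder; 0.0.0.0/0 (IPv4) or ::/0 (IPv6) matches any address
        sampling: 1           # callout <4>: overrides the global spec.agent.ebpf.sampling for matched flows
        peerCIDR: 10.0.0.0/8  # callout <5>: placeholder peer CIDR
----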

@@ -86,4 +86,4 @@ spec:
<2> To report packet drops for each network flow, add the `PacketDrop` value to the `spec.agent.ebpf.features` list.
<3> To enable eBPF flow filtering, set `spec.agent.ebpf.flowFilter.enable` to `true`.
<4> To define the action for the flow filter rule, set the required `action` parameter. Valid values are `Accept` or `Reject`.
<5> To filter flows containing drops, set `pktDrops` to `true`.
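
Again as a hedged sketch of the collapsed YAML, combining the `PacketDrop` feature with a drop-based filter might look like the following; the `privileged` setting is an assumption about what the elided callout <1> covers:

[source,yaml]
----
spec:
  agent:
    type: eBPF
    ebpf:
      privileged: true    # assumption: packet-drop reporting requires privileged mode
      features:
        - PacketDrop      # callout <2>: report packet drops for each flow
      flowFilter:
        enable: true      # callout <3>
        action: Accept    # callout <4>
        pktDrops: true    # callout <5>: keep only flows that contain drops
----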
<5> To filter flows containing drops, set `pktDrops` to `true`.
@@ -54,28 +54,28 @@ spec:
- uid=0
- gid=0
- cache=strict <6>
- nosharesock <7>
- actimeo=30 <8>
- nobrl <9>
csi:
driver: file.csi.azure.com
volumeHandle: "{resource-group-name}#{account-name}#{file-share-name}" <10>
volumeAttributes:
shareName: EXISTING_FILE_SHARE_NAME <11>
nodeStageSecretRef:
name: azure-secret <12>
namespace: <my-namespace> <13>
----
<1> Volume size.
-<2> Access mode. Defines the read-write and mount permissions. For more information, under _Additional Resources_, see _Access modes_.
+<2> Access mode. Defines the read-write and mount permissions. For more information, under _Additional resources_, see _Access modes_.
<3> Reclaim policy. Tells the cluster what to do with the volume after it is released. Accepted values are `Retain`, `Recycle`, or `Delete`.
<4> Storage class name. This name is used by the PVC to bind to this specific PV. For static provisioning, a `StorageClass` object does not need to exist, but the name in the PV and PVC must match.
<5> Modify this permission if you want to enhance the security.
<6> Cache mode. Accepted values are `none`, `strict`, and `loose`. The default is `strict`.
<7> Use to reduce the probability of a reconnect race.
<8> The time (in seconds) that the CIFS client caches attributes of a file or directory before it requests attribute information from a server.
<9> Disables sending byte range lock requests to the server; use for applications that have challenges with POSIX locks.
<10> Ensure that `volumeHandle` is unique across the cluster. The `resource-group-name` is the Azure resource group where the storage account resides.
<11> File share name. Use only the file share name; do not use full path.
<12> Provide the name of the secret created in step 1 of this procedure. In this example, it is _azure-secret_.
<13> The namespace that the secret was created in. This must be the namespace where the PV is consumed.
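
Step 1 of the procedure sits outside this diff. The secret referenced in callouts <12> and <13> is typically created from the storage account credentials, along these lines (the key names assume the Azure File CSI convention):

[source,terminal]
----
$ oc create secret generic azure-secret \
    --from-literal=azurestorageaccountname=<storage_account_name> \
    --from-literal=azurestorageaccountkey=<storage_account_key> \
    -n <my-namespace>
----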
@@ -103,7 +103,7 @@ spec:
<2> Namespace for the PVC.
<3> The name of the PV that you created in the previous step.
<4> Storage class name. This name is used by the PVC to bind to this specific PV. For static provisioning, a `StorageClass` object does not need to exist, but the name in the PV and PVC must match.
-<5> Access mode. Defines the requested read-write access for the PVC. Claims use the same conventions as volumes when requesting storage with specific access modes. For more information, under _Additional Resources_, see _Access modes_.
+<5> Access mode. Defines the requested read-write access for the PVC. Claims use the same conventions as volumes when requesting storage with specific access modes. For more information, under _Additional resources_, see _Access modes_.
Collaborator comment:
🤖 [error] RedHat.TermsErrors: Use 'read/write' rather than 'read-write'. For more information, see RedHat.TermsErrors.

<6> PVC size.

. Ensure that the PVC is created and in `Bound` status after a while by running the following command:
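+
The command is collapsed in this view; a typical check, with placeholder names, is:
+
[source,terminal]
----
$ oc get pvc <pvc-name> -n <my-namespace>
----
+
The `STATUS` column should show `Bound`.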
8 changes: 4 additions & 4 deletions modules/persistent-storage-csi-drivers-supported.adoc
@@ -28,12 +28,12 @@ ifndef::openshift-rosa,openshift-rosa-hcp[]
If your CSI driver is not listed in the following table, you must follow the installation instructions provided by your CSI storage vendor to use their supported CSI features.
====

-For a list of third-party-certified CSI drivers, see the _Red Hat ecosystem portal_ under _Additional Resources_.
+For a list of third-party-certified CSI drivers, see the _Red Hat ecosystem portal_ under _Additional resources_.

endif::openshift-rosa,openshift-rosa-hcp[]

ifdef::openshift-rosa,openshift-rosa-hcp,openshift-aro[]
In addition to the drivers listed in the following table, ROSA functions with CSI drivers from third-party storage vendors. Red Hat does not oversee third-party provisioners or the connected CSI drivers, and the vendors fully control source code, deployment, operation, and Kubernetes compatibility. These volume provisioners are considered customer-managed, and the respective vendors are responsible for providing support. See the link:https://docs.openshift.com/rosa/rosa_architecture/rosa_policy_service_definition/rosa-policy-responsibility-matrix.html#rosa-policy-responsibilities_rosa-policy-responsibility-matrix[Shared responsibilities for {product-title}] matrix for more information.
endif::openshift-rosa,openshift-rosa-hcp,openshift-aro[]

.Supported CSI drivers and features in {product-title}
@@ -91,5 +91,5 @@ If your CSI driver is not listed in the preceding table, you must follow the ins
====
endif::openshift-rosa[]
ifdef::openshift-rosa[]
In addition to the drivers listed in the preceding table, ROSA functions with CSI drivers from third-party storage vendors. Red Hat does not oversee third-party provisioners or the connected CSI drivers, and the vendors fully control source code, deployment, operation, and Kubernetes compatibility. These volume provisioners are considered customer-managed, and the respective vendors are responsible for providing support. See the link:https://docs.openshift.com/rosa/rosa_architecture/rosa_policy_service_definition/rosa-policy-responsibility-matrix.html#rosa-policy-responsibilities_rosa-policy-responsibility-matrix[Shared responsibilities for {product-title}] matrix for more information.
endif::openshift-rosa[]
4 changes: 2 additions & 2 deletions modules/rosa-sdpolicy-platform.adoc
@@ -26,7 +26,7 @@ endif::rosa-with-hcp[]

[IMPORTANT]
====
Red Hat does not provide a backup method for ROSA clusters with STS, which is the default. It is critical that customers have a backup plan for their applications and application data.
ifndef::rosa-with-hcp[]
The table below only applies to clusters created with IAM user credentials.
endif::rosa-with-hcp[]
@@ -183,4 +183,4 @@ All Operators listed in the OperatorHub marketplace should be available for inst

ifeval::["{context}" == "rosa-hcp-service-definition"]
:!rosa-with-hcp:
endif::[]