diff --git a/modules/modifying-an-existing-ingress-controller.adoc b/modules/modifying-an-existing-ingress-controller.adoc
index a833f1f981d5..e6733fc166ad 100644
--- a/modules/modifying-an-existing-ingress-controller.adoc
+++ b/modules/modifying-an-existing-ingress-controller.adoc
@@ -16,12 +16,14 @@ As a cluster administrator, you can modify an existing Ingress Controller to man
 .Procedure
 . Modify the chosen `IngressController` to set `dnsManagementPolicy`:
-+
+
 [source,terminal]
 ----
 SCOPE=$(oc -n openshift-ingress-operator get ingresscontroller <name> -o=jsonpath="{.status.endpointPublishingStrategy.loadBalancer.scope}")
-
+----
++
+[source,terminal]
+----
 oc -n openshift-ingress-operator patch ingresscontrollers/<name> --type=merge --patch='{"spec":{"endpointPublishingStrategy":{"type":"LoadBalancerService","loadBalancer":{"dnsManagementPolicy":"Unmanaged", "scope":"${SCOPE}"}}}}'
 ----
diff --git a/modules/monitoring-enabling-query-logging-for-thanos-querier.adoc b/modules/monitoring-enabling-query-logging-for-thanos-querier.adoc
index c85b85d2ff25..0569d0a3346e 100644
--- a/modules/monitoring-enabling-query-logging-for-thanos-querier.adoc
+++ b/modules/monitoring-enabling-query-logging-for-thanos-querier.adoc
@@ -49,7 +49,7 @@ data:
 ----
 <1> Set the value to `true` to enable logging and `false` to disable logging. The default value is `false`.
 <2> Set the value to `debug`, `info`, `warn`, or `error`. If no value exists for `logLevel`, the log level defaults to `error`.
-+
+
 . Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.
 .Verification
@@ -60,14 +60,19 @@ data:
 ----
 $ oc -n openshift-monitoring get pods
 ----
-+
+
 . Run a test query using the following sample commands as a model:
 +
 [source,terminal]
 ----
 $ token=`oc create token prometheus-k8s -n openshift-monitoring`
+----
++
+[source,terminal]
+----
 $ oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -k -H "Authorization: Bearer $token" 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_version'
 ----
+
 . Run the following command to read the query log:
 +
 [source,terminal]
@@ -79,6 +84,6 @@ $ oc -n openshift-monitoring logs -c thanos-query
 ====
 Because the `thanos-querier` pods are highly available (HA) pods, you might be able to see logs in only one pod.
 ====
-+
+
 . After you examine the logged query information, disable query logging by changing the `enableRequestLogging` value to `false` in the config map.
diff --git a/modules/nodes-cluster-worker-latency-profiles-examining.adoc b/modules/nodes-cluster-worker-latency-profiles-examining.adoc
index cde14729522e..1ef8a40ff6b3 100644
--- a/modules/nodes-cluster-worker-latency-profiles-examining.adoc
+++ b/modules/nodes-cluster-worker-latency-profiles-examining.adoc
@@ -2,7 +2,6 @@
 //
 // scalability_and_performance/scaling-worker-latency-profiles.adoc
-
 :_mod-docs-content-type: PROCEDURE
 [id="nodes-cluster-worker-latency-profiles-examining_{context}"]
 = Example steps for displaying resulting values of workerLatencyProfile
@@ -46,14 +45,22 @@ node-monitor-grace-period:
 [source,terminal]
 ----
 $ oc debug node/<worker-node-name>
+----
++
+[source,terminal]
+----
 $ chroot /host
+----
++
+[source,terminal]
+----
 # cat /etc/kubernetes/kubelet.conf|grep nodeStatusUpdateFrequency
 ----
 +
 .Example output
 [source,terminal]
 ----
- “nodeStatusUpdateFrequency”: “10s”
+"nodeStatusUpdateFrequency": "10s"
 ----
-These outputs validate the set of timing variables for the Worker Latency Profile. 
\ No newline at end of file
+These outputs validate the set of timing variables for the Worker Latency Profile.
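The worker-latency hunk above splits the node inspection into three interactive terminal blocks. A compact, non-interactive variant that could accompany it; a sketch, assuming a worker node named `<worker-node-name>` (placeholder):

[source,terminal]
----
$ oc debug node/<worker-node-name> -- chroot /host grep nodeStatusUpdateFrequency /etc/kubernetes/kubelet.conf
----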
diff --git a/modules/nodes-containers-dev-fuse-configuring.adoc b/modules/nodes-containers-dev-fuse-configuring.adoc
index 8564759d2bb8..82c8750bf2a9 100644
--- a/modules/nodes-containers-dev-fuse-configuring.adoc
+++ b/modules/nodes-containers-dev-fuse-configuring.adoc
@@ -12,7 +12,7 @@ By exposing the `/dev/fuse` device to an unprivileged pod, you grant it the capa
 . Define the pod with `/dev/fuse` access:
 +
-* Create a YAML file named `fuse-builder-pod.yaml` with the following content:
+.. Create a YAML file named `fuse-builder-pod.yaml` with the following content:
 +
 [source,yaml]
 ----
@@ -21,29 +21,30 @@ kind: Pod
 metadata:
   name: fuse-builder-pod
   annotations:
-    io.kubernetes.cri-o.Devices: "/dev/fuse" <1>
+    io.kubernetes.cri-o.Devices: "/dev/fuse"
 spec:
   containers:
   - name: build-container
-    image: quay.io/podman/stable <2>
+    image: quay.io/podman/stable
     command: ["/bin/sh", "-c"]
-    args: ["echo 'Container is running. Use oc exec to get a shell.'; sleep infinity"] <3>
-    securityContext: <4>
+    args: ["echo 'Container is running. Use oc exec to get a shell.'; sleep infinity"]
+    securityContext:
       runAsUser: 1000
 ----
 +
-<1> The `io.kubernetes.cri-o.Devices: "/dev/fuse"` annotation makes the FUSE device available.
-<2> This annotation specifies a container that uses an image that includes `podman` (for example, `quay.io/podman/stable`).
-<3> This command keeps the container running so you can `exec` into it.
-<4> This annotation specifies a `securityContext` that runs the container as an unprivileged user (for example, `runAsUser: 1000`).
-*
+where:
++
+`io.kubernetes.cri-o.Devices`:: The `io.kubernetes.cri-o.Devices: "/dev/fuse"` annotation makes the FUSE device available.
+`image`:: Specifies a container image that includes `podman` (for example, `quay.io/podman/stable`).
+`args`:: Keeps the container running so you can `exec` into it.
+`securityContext`:: Runs the container as an unprivileged user (for example, `runAsUser: 1000`).
 +
 [NOTE]
 ====
 Depending on your cluster's Security Context Constraints (SCCs) or other policies, you might need to further adjust the `securityContext` specification, for example, by allowing specific capabilities if `/dev/fuse` alone is not sufficient for `fuse-overlayfs` to operate.
 ====
 +
-* Create the pod by running the following command:
+.. Create the pod by running the following command:
 +
 [source,terminal]
 ----
@@ -71,7 +72,15 @@ You are now inside the container. Because the default working directory might no
 [source,terminal]
 ----
 $ cd /tmp
+----
++
+[source,terminal]
+----
 $ pwd
+----
++
+[source,terminal]
+----
 /tmp
 ----
@@ -115,21 +124,21 @@ This should output the content of the `/app/build_info.txt` file and the copied
 . Exit the pod and clean up:
 +
-* After you are done, exit the shell session in the pod:
+.. After you are done, exit the shell session in the pod:
 +
 [source,terminal]
 ----
 $ exit
 ----
 +
-* You can then delete the pod if it's no longer needed:
+.. Delete the pod if it's no longer needed:
 +
 [source,terminal]
 ----
 $ oc delete pod fuse-builder-pod
 ----
 +
-* Remove the local YAML file:
+.. Remove the local YAML file:
 +
 [source,terminal]
 ----
 $ rm fuse-builder-pod.yaml
 ----
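Before building with `podman` inside the pod, it can help to confirm that the CRI-O annotation actually exposed the device. A minimal check, assuming the `fuse-builder-pod` name from the example above:

[source,terminal]
----
$ oc exec fuse-builder-pod -- ls -l /dev/fuse
----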
diff --git a/modules/nw-control-dns-records-public-hosted-zone-azure.adoc b/modules/nw-control-dns-records-public-hosted-zone-azure.adoc
index 1892096ebdd7..19f09a85c42b 100644
--- a/modules/nw-control-dns-records-public-hosted-zone-azure.adoc
+++ b/modules/nw-control-dns-records-public-hosted-zone-azure.adoc
@@ -20,9 +20,25 @@ You can create Domain Name Server (DNS) records on a public or private DNS zone
 [source,terminal]
 ----
 $ CLIENT_ID=$(oc get secrets azure-credentials -n kube-system --template={{.data.azure_client_id}} | base64 -d)
+----
++
+[source,terminal]
+----
 $ CLIENT_SECRET=$(oc get secrets azure-credentials -n kube-system --template={{.data.azure_client_secret}} | base64 -d)
+----
++
+[source,terminal]
+----
 $ RESOURCE_GROUP=$(oc get secrets azure-credentials -n kube-system --template={{.data.azure_resourcegroup}} | base64 -d)
+----
++
+[source,terminal]
+----
 $ SUBSCRIPTION_ID=$(oc get secrets azure-credentials -n kube-system --template={{.data.azure_subscription_id}} | base64 -d)
+----
++
+[source,terminal]
+----
 $ TENANT_ID=$(oc get secrets azure-credentials -n kube-system --template={{.data.azure_tenant_id}} | base64 -d)
 ----
@@ -64,7 +80,6 @@ $ az network dns zone list --resource-group "${RESOURCE_GROUP}"
 $ az network private-dns zone list -g "${RESOURCE_GROUP}"
 ----
-
 . Create a YAML file, for example, `external-dns-sample-azure.yaml`, that defines the `ExternalDNS` object:
 +
 .Example `external-dns-sample-azure.yaml` file
diff --git a/modules/oadp-using-ca-certificates-with-velero-command.adoc b/modules/oadp-using-ca-certificates-with-velero-command.adoc
index d40efde57760..b72f1c6d4f45 100644
--- a/modules/oadp-using-ca-certificates-with-velero-command.adoc
+++ b/modules/oadp-using-ca-certificates-with-velero-command.adoc
@@ -44,7 +44,10 @@ Server:
 [source,terminal]
 ----
 $ CA_CERT=$(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}')
-
+----
++
+[source,terminal]
+----
 $ [[ -n $CA_CERT ]] && echo "$CA_CERT" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "cat > /tmp/your-cacert.txt" || echo "DPA BSL has no caCert"
 ----
 +
@@ -72,4 +75,4 @@ $ oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "ls /tmp/your-c
 /tmp/your-cacert.txt
 ----
-In a future release of OpenShift API for Data Protection (OADP), we plan to mount the certificate to the Velero pod so that this step is not required.
\ No newline at end of file
+In a future release of OpenShift API for Data Protection (OADP), we plan to mount the certificate to the Velero pod so that this step is not required.
diff --git a/modules/op-authenticating-to-an-oci-registry.adoc b/modules/op-authenticating-to-an-oci-registry.adoc
index 33211290f37d..64a86cc5119e 100644
--- a/modules/op-authenticating-to-an-oci-registry.adoc
+++ b/modules/op-authenticating-to-an-oci-registry.adoc
@@ -15,11 +15,21 @@ Before pushing signatures to an OCI registry, cluster administrators must config
 +
 [source,terminal]
 ----
-$ export NAMESPACE=<namespace> <1>
-$ export SERVICE_ACCOUNT_NAME=<service_account_name> <2>
+$ export NAMESPACE=<namespace>
 ----
-<1> The namespace associated with the service account.
-<2> The name of the service account.
++
+where:
++
+`<namespace>`:: The namespace associated with the service account.
++
+[source,terminal]
+----
+$ export SERVICE_ACCOUNT_NAME=<service_account_name>
+----
++
+where:
++
+`<service_account_name>`:: The name of the service account.
 . Create a Kubernetes secret.
 +
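The body of the secret-creation step falls outside this hunk. One hypothetical shape for it, assuming docker-registry-style credentials and the `$NAMESPACE` variable exported above; every bracketed value is a placeholder:

[source,terminal]
----
$ oc create secret docker-registry <secret_name> \
    --docker-server=<registry_url> \
    --docker-username=<username> \
    --docker-password=<password> \
    -n $NAMESPACE
----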
@@ -41,14 +51,14 @@ $ oc patch serviceaccount $SERVICE_ACCOUNT_NAME \
 ----
 +
 If you patch the default `pipeline` service account that {pipelines-title} assigns to all task runs, the {pipelines-title} Operator will override the service account. As a best practice, you can perform the following steps:
-
++
 .. Create a separate service account to assign to user's task runs.
 +
 [source,terminal]
 ----
 $ oc create serviceaccount <service_account_name>
 ----
-
++
 .. Associate the service account to the task runs by setting the value of the `serviceaccountname` field in the task run template.
 +
 [source,yaml]
 ----
@@ -58,9 +68,12 @@
 kind: TaskRun
 metadata:
   name: build-push-task-run-2
 spec:
-  serviceAccountName: build-bot <1>
+  serviceAccountName: build-bot
   taskRef:
     name: build-push
 ...
 ----
-<1> Substitute with the name of the newly created service account.
++
+where:
++
+`build-bot`:: Substitute with the name of the newly created service account.
diff --git a/modules/op-using-tekton-chains-to-sign-and-verify-image-and-provenance.adoc b/modules/op-using-tekton-chains-to-sign-and-verify-image-and-provenance.adoc
index 24bd141625d3..020c350c9e2f 100644
--- a/modules/op-using-tekton-chains-to-sign-and-verify-image-and-provenance.adoc
+++ b/modules/op-using-tekton-chains-to-sign-and-verify-image-and-provenance.adoc
@@ -36,9 +36,9 @@ $ cosign generate-key-pair k8s://openshift-pipelines/signing-secrets
 Provide a password when prompted. Cosign stores the resulting private key as part of the `signing-secrets` Kubernetes secret in the `openshift-pipelines` namespace, and writes the public key to the `cosign.pub` local file.
 . Configure authentication for the image registry.
-
++
 .. To configure the {tekton-chains} controller for pushing signature to an OCI registry, use the credentials associated with the service account of the task run. For detailed information, see the "Authenticating to an OCI registry" section.
-
++
 .. To configure authentication for a Kaniko task that builds and pushes image to the registry, create a Kubernetes secret of the docker `config.json` file containing the required credentials.
 +
 [source,terminal]
 ----
 $ oc create secret generic \ <1>
 [source,terminal]
 ----
 $ oc patch configmap chains-config -n openshift-pipelines -p='{"data":{"artifacts.taskrun.format": "in-toto"}}'
-
+----
++
+[source,terminal]
+----
 $ oc patch configmap chains-config -n openshift-pipelines -p='{"data":{"artifacts.taskrun.storage": "oci"}}'
-
+----
++
+[source,terminal]
+----
 $ oc patch configmap chains-config -n openshift-pipelines -p='{"data":{"transparency.enabled": "true"}}'
 ----
 . Start the Kaniko task.
-
++
 .. Apply the Kaniko task to the cluster.
 +
 [source,terminal]
 ----
 $ oc apply -f examples/kaniko/kaniko.yaml <1>
 ----
-<1> Substitute with the URI or file path to your Kaniko task.
-
++
+where:
++
+`examples/kaniko/kaniko.yaml`:: Substitute with the URI or file path to your Kaniko task.
++
 .. Set the appropriate environment variables.
 +
 [source,terminal]
 ----
-$ export REGISTRY=<registry_url> <1>
-
+$ export REGISTRY=<registry_url>
+----
++
+where:
++
+`<registry_url>`:: Substitute with the URL of the registry where you want to push the image.
++
+[source,terminal]
+----
-$ export DOCKERCONFIG_SECRET_NAME=<docker_config_secret_name> <2>
+$ export DOCKERCONFIG_SECRET_NAME=<docker_config_secret_name>
 ----
-<1> Substitute with the URL of the registry where you want to push the image.
-<2> Substitute with the name of the secret in the docker `config.json` file.
-
++
+where:
++
+`<docker_config_secret_name>`:: Substitute with the name of the secret in the docker `config.json` file.
++
 .. Start the Kaniko task.
 +
 [source,terminal]
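The start command itself is truncated from this hunk. A hypothetical invocation with the `tkn` CLI, assuming the upstream Kaniko example task and the variables exported earlier:

[source,terminal]
----
$ tkn task start kaniko-chains \
    --param IMAGE=$REGISTRY/kaniko-chains \
    --use-param-defaults \
    --workspace name=source,emptyDir="" \
    --workspace name=dockerconfig,secret=$DOCKERCONFIG_SECRET_NAME
----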
@@ -109,14 +127,17 @@ $ oc get tr \ <1>
 [source,terminal]
 ----
 $ cosign verify --key cosign.pub $REGISTRY/kaniko-chains
-
+----
++
+[source,terminal]
+----
 $ cosign verify-attestation --key cosign.pub $REGISTRY/kaniko-chains
 ----
 . Find the provenance for the image in Rekor.
-
++
 .. Get the digest of the $REGISTRY/kaniko-chains image. You can search for it in the task run, or pull the image to extract the digest.
-
++
 .. Search Rekor to find all entries that match the `sha256` digest of the image.
 +
 [source,terminal]
 ----
 $ rekor-cli search --sha <1>
 <3> The second matching UUID.
 +
 The search result displays UUIDs of the matching entries. One of those UUIDs holds the attestation.
-
++
 .. Check the attestation.
 +
 [source,terminal]
 ----
diff --git a/modules/ossm-cert-manage-verify-cert.adoc b/modules/ossm-cert-manage-verify-cert.adoc
index bab0b77450a6..6ed1cf68bdb7 100644
--- a/modules/ossm-cert-manage-verify-cert.adoc
+++ b/modules/ossm-cert-manage-verify-cert.adoc
@@ -13,8 +13,20 @@ Use the Bookinfo sample application to verify that the workload certificates are
 [source,terminal]
 ----
 $ sleep 60
+----
++
+[source,terminal]
+----
 $ oc -n bookinfo exec "$(oc -n bookinfo get pod -l app=productpage -o jsonpath={.items..metadata.name})" -c istio-proxy -- openssl s_client -showcerts -connect details:9080 > bookinfo-proxy-cert.txt
+----
++
+[source,terminal]
+----
 $ sed -n '/-----BEGIN CERTIFICATE-----/{:start /-----END CERTIFICATE-----/!{N;b start};/.*/p}' bookinfo-proxy-cert.txt > certs.pem
+----
++
+[source,terminal]
+----
 $ awk 'BEGIN {counter=0;} /BEGIN CERT/{counter++} { print > "proxy-cert-" counter ".pem"}' < certs.pem
 ----
 +
@@ -44,25 +56,27 @@ $ diff -s /tmp/root-cert.crt.txt /tmp/pod-root-cert.crt.txt
 You should see the following result:
 `Files /tmp/root-cert.crt.txt and /tmp/pod-root-cert.crt.txt are identical`
-
 . Verify that the CA certificate is the same as the one specified by the administrator. Replace `<path>` with the path to your certificates.
 +
 [source,terminal]
 ----
 $ openssl x509 -in <path>/ca-cert.pem -text -noout > /tmp/ca-cert.crt.txt
 ----
++
 Run the following syntax at the terminal window.
 +
 [source,terminal]
 ----
 $ openssl x509 -in ./proxy-cert-2.pem -text -noout > /tmp/pod-cert-chain-ca.crt.txt
 ----
++
 Compare the certificates by running the following syntax at the terminal window.
 +
 [source,terminal]
 ----
 $ diff -s /tmp/ca-cert.crt.txt /tmp/pod-cert-chain-ca.crt.txt
 ----
++
 You should see the following result:
 `Files /tmp/ca-cert.crt.txt and /tmp/pod-cert-chain-ca.crt.txt are identical.`
 
@@ -72,5 +86,6 @@ You should see the following result:
 ----
 $ openssl verify -CAfile <(cat <path>/ca-cert.pem <path>/root-cert.pem) ./proxy-cert-1.pem
 ----
++
 You should see the following result:
 `./proxy-cert-1.pem: OK`
\ No newline at end of file
diff --git a/modules/ossm-migrating-to-20.adoc b/modules/ossm-migrating-to-20.adoc
index bef0ed47f9d8..623730d6b636 100644
--- a/modules/ossm-migrating-to-20.adoc
+++ b/modules/ossm-migrating-to-20.adoc
@@ -38,6 +38,10 @@ $ oc get smcp -o yaml
 ----
 $ oc get smcp.v1.maistra.io > smcp-resource.yaml
 #Edit the smcp-resource.yaml file.
+----
++
+[source,terminal]
+----
 $ oc replace -f smcp-resource.yaml
 ----
 +
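After `oc replace` succeeds, a quick status check can confirm the control plane object is still healthy. A sketch that assumes the `ServiceMeshControlPlane` reports a `Ready` condition:

[source,terminal]
----
$ oc get smcp -o jsonpath='{.items[*].status.conditions[?(@.type=="Ready")].status}'
----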
diff --git a/modules/persistent-storage-csi-azure-file-cross-sub-dynamic-provisioning-procedure.adoc b/modules/persistent-storage-csi-azure-file-cross-sub-dynamic-provisioning-procedure.adoc
index e4344453490c..966b489e5beb 100644
--- a/modules/persistent-storage-csi-azure-file-cross-sub-dynamic-provisioning-procedure.adoc
+++ b/modules/persistent-storage-csi-azure-file-cross-sub-dynamic-provisioning-procedure.adoc
@@ -23,7 +23,10 @@ To use Azure File dynamic provisioning across subscriptions:
 [source,terminal]
 ----
 $ sp_id=$(oc -n openshift-cluster-csi-drivers get secret azure-file-credentials -o jsonpath='{.data.azure_client_id}' | base64 --decode)
-
+----
++
+[source,terminal]
+----
 $ az ad sp show --id ${sp_id} --query displayName --output tsv
 ----
 +
@@ -32,7 +35,10 @@ $ az ad sp show --id ${sp_id} --query displayName --output tsv
 [source,terminal]
 ----
 $ mi_id=$(oc -n openshift-cluster-csi-drivers get secret azure-file-credentials -o jsonpath='{.data.azure_client_id}' | base64 --decode)
-
+----
++
+[source,terminal]
+----
 $ az identity list --query "[?clientId=='${mi_id}'].{Name:name}" --output tsv
 ----
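Identifying the managed identity is only half of the cross-subscription setup; the identity typically also needs access to the other subscription. A hypothetical follow-up, assuming a Contributor role grant is required and using placeholder subscription and resource group values:

[source,terminal]
----
$ az role assignment create --assignee ${mi_id} \
    --role Contributor \
    --scope /subscriptions/<remote_subscription_id>/resourceGroups/<resource_group>
----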