6 changes: 4 additions & 2 deletions modules/modifying-an-existing-ingress-controller.adoc
@@ -16,12 +16,14 @@ As a cluster administrator, you can modify an existing Ingress Controller to man
.Procedure

. Modify the chosen `IngressController` to set `dnsManagementPolicy`:

+
[source,terminal]
----
SCOPE=$(oc -n openshift-ingress-operator get ingresscontroller <name> -o=jsonpath="{.status.endpointPublishingStrategy.loadBalancer.scope}")

----
+
[source,terminal]
----
oc -n openshift-ingress-operator patch ingresscontrollers/<name> --type=merge --patch='{"spec":{"endpointPublishingStrategy":{"type":"LoadBalancerService","loadBalancer":{"dnsManagementPolicy":"Unmanaged", "scope":"${SCOPE}"}}}}'
----
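+
You can optionally confirm that the patch was applied by reading the field back. This is a quick sanity check; the command only echoes the value you just set, and the output should be `Unmanaged`:
+
[source,terminal]
----
oc -n openshift-ingress-operator get ingresscontroller <name> -o jsonpath='{.spec.endpointPublishingStrategy.loadBalancer.dnsManagementPolicy}'
----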

@@ -49,7 +49,7 @@ data:
----
<1> Set the value to `true` to enable logging and `false` to disable logging. The default value is `false`.
<2> Set the value to `debug`, `info`, `warn`, or `error`. If no value exists for `logLevel`, the log level defaults to `error`.
+

. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.

.Verification
@@ -60,14 +60,19 @@ data:
----
$ oc -n openshift-monitoring get pods
----
+

. Run a test query using the following sample commands as a model:
+
[source,terminal]
----
$ token=`oc create token prometheus-k8s -n openshift-monitoring`
----
+
[source,terminal]
----
$ oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -k -H "Authorization: Bearer $token" 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_version'
----
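+
If the raw JSON response is difficult to read, you can pretty-print it on your workstation. This is a minimal sketch that assumes `jq` is installed locally; it does not change the query itself:
+
[source,terminal]
----
$ oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -k -H "Authorization: Bearer $token" 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_version' | jq '.data.result'
----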

. Run the following command to read the query log:
+
[source,terminal]
@@ -79,6 +84,6 @@ $ oc -n openshift-monitoring logs <thanos_querier_pod_name> -c thanos-query
====
Because the `thanos-querier` pods are highly available (HA) pods, you might be able to see logs in only one pod.
====
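+
Because any of the replicas might have served the query, you can check the last few log lines of every replica with a small loop. This is a sketch that assumes the pod names start with `thanos-querier`:
+
[source,terminal]
----
$ for pod in $(oc -n openshift-monitoring get pods -o name | grep thanos-querier); do echo "== ${pod} =="; oc -n openshift-monitoring logs "${pod}" -c thanos-query | tail -n 5; done
----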
+

. After you examine the logged query information, disable query logging by changing the `enableRequestLogging` value to `false` in the config map.

13 changes: 10 additions & 3 deletions modules/nodes-cluster-worker-latency-profiles-examining.adoc
@@ -2,7 +2,6 @@
//
// scalability_and_performance/scaling-worker-latency-profiles.adoc


:_mod-docs-content-type: PROCEDURE
[id="nodes-cluster-worker-latency-profiles-examining_{context}"]
= Example steps for displaying resulting values of workerLatencyProfile
@@ -46,14 +45,22 @@ node-monitor-grace-period:
[source,terminal]
----
$ oc debug node/<worker-node-name>
----
+
[source,terminal]
----
$ chroot /host
----
+
[source,terminal]
----
# cat /etc/kubernetes/kubelet.conf|grep nodeStatusUpdateFrequency
----
+
.Example output
[source,terminal]
----
"nodeStatusUpdateFrequency": "10s"
----
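+
If you prefer not to open an interactive debug shell, the following one-line variant returns the same value. This is a sketch that assumes the debug container can run `chroot /host` non-interactively:
+
[source,terminal]
----
$ oc debug node/<worker-node-name> -- chroot /host grep nodeStatusUpdateFrequency /etc/kubernetes/kubelet.conf
----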

These outputs validate the set of timing variables for the Worker Latency Profile.
37 changes: 23 additions & 14 deletions modules/nodes-containers-dev-fuse-configuring.adoc
@@ -12,7 +12,7 @@ By exposing the `/dev/fuse` device to an unprivileged pod, you grant it the capa

. Define the pod with `/dev/fuse` access:
+
* Create a YAML file named `fuse-builder-pod.yaml` with the following content:
.. Create a YAML file named `fuse-builder-pod.yaml` with the following content:
+
[source,yaml]
----
@@ -21,29 +21,30 @@ kind: Pod
metadata:
  name: fuse-builder-pod
  annotations:
    io.kubernetes.cri-o.Devices: "/dev/fuse" <1>
    io.kubernetes.cri-o.Devices: "/dev/fuse"
spec:
  containers:
  - name: build-container
    image: quay.io/podman/stable <2>
    image: quay.io/podman/stable
    command: ["/bin/sh", "-c"]
    args: ["echo 'Container is running. Use oc exec to get a shell.'; sleep infinity"] <3>
    securityContext: <4>
    args: ["echo 'Container is running. Use oc exec to get a shell.'; sleep infinity"]
    securityContext:
      runAsUser: 1000
----
+
<1> The `io.kubernetes.cri-o.Devices: "/dev/fuse"` annotation makes the FUSE device available.
<2> This annotation specifies a container that uses an image that includes `podman` (for example, `quay.io/podman/stable`).
<3> This command keeps the container running so you can `exec` into it.
<4> This annotation specifies a `securityContext` that runs the container as an unprivileged user (for example, `runAsUser: 1000`).
where:
+
`io.kubernetes.cri-o.Devices`:: The `io.kubernetes.cri-o.Devices: "/dev/fuse"` annotation makes the FUSE device available to the pod.
`image`:: Specifies a container image that includes `podman` (for example, `quay.io/podman/stable`).
`args`:: Keeps the container running so that you can `exec` into it.
`securityContext`:: Runs the container as an unprivileged user (for example, `runAsUser: 1000`).
+
[NOTE]
====
Depending on your cluster's Security Context Constraints (SCCs) or other policies, you might need to further adjust the `securityContext` specification, for example, by allowing specific capabilities if `/dev/fuse` alone is not sufficient for `fuse-overlayfs` to operate.
====
+
* Create the pod by running the following command:
.. Create the pod by running the following command:
+
[source,terminal]
----
@@ -71,7 +72,15 @@ You are now inside the container. Because the default working directory might no
[source,terminal]
----
$ cd /tmp
----
+
[source,terminal]
----
$ pwd
----
+
[source,terminal]
----
/tmp
----
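+
While you are inside the container, you can also confirm that the annotation exposed the device. The exact ownership, permissions, and timestamp in the output vary by environment:
+
[source,terminal]
----
$ ls -l /dev/fuse
----
+
.Example output
[source,terminal]
----
crw-rw-rw-. 1 root root 10, 229 Jan  1 00:00 /dev/fuse
----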

@@ -115,21 +124,21 @@ This should output the content of the `/app/build_info.txt` file and the copied

. Exit the pod and clean up:
+
* After you are done, exit the shell session in the pod:
.. After you are done, exit the shell session in the pod:
+
[source,terminal]
----
$ exit
----
+
* You can then delete the pod if it's no longer needed:
.. Delete the pod if it's no longer needed:
+
[source,terminal]
----
$ oc delete pod fuse-builder-pod
----
+
* Remove the local YAML file:
.. Remove the local YAML file:
+
[source,terminal]
----
16 changes: 16 additions & 0 deletions modules/nw-control-dns-records-public-hosted-zone-azure.adoc
@@ -20,9 +20,25 @@ You can create DNS records on a public DNS zone for Azure by using the External
[source,terminal]
----
$ CLIENT_ID=$(oc get secrets azure-credentials -n kube-system --template={{.data.azure_client_id}} | base64 -d)
----
+
[source,terminal]
----
$ CLIENT_SECRET=$(oc get secrets azure-credentials -n kube-system --template={{.data.azure_client_secret}} | base64 -d)
----
+
[source,terminal]
----
$ RESOURCE_GROUP=$(oc get secrets azure-credentials -n kube-system --template={{.data.azure_resourcegroup}} | base64 -d)
----
+
[source,terminal]
----
$ SUBSCRIPTION_ID=$(oc get secrets azure-credentials -n kube-system --template={{.data.azure_subscription_id}} | base64 -d)
----
+
[source,terminal]
----
$ TENANT_ID=$(oc get secrets azure-credentials -n kube-system --template={{.data.azure_tenant_id}} | base64 -d)
----
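+
Before you continue, you can verify that none of the variables are empty. This is a small sketch that only prints a warning for any value that did not resolve:
+
[source,terminal]
----
$ for var in CLIENT_ID CLIENT_SECRET RESOURCE_GROUP SUBSCRIPTION_ID TENANT_ID; do [[ -n "${!var}" ]] || echo "Warning: $var is empty"; done
----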

7 changes: 5 additions & 2 deletions modules/oadp-using-ca-certificates-with-velero-command.adoc
@@ -44,7 +44,10 @@ Server:
[source,terminal]
----
$ CA_CERT=$(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}')

----
+
[source,terminal]
----
$ [[ -n $CA_CERT ]] && echo "$CA_CERT" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "cat > /tmp/your-cacert.txt" || echo "DPA BSL has no caCert"
----
+
@@ -72,4 +75,4 @@ $ oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "ls /tmp/your-c
/tmp/your-cacert.txt
----
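+
After the certificate file is in place, you can pass it to the Velero CLI inside the pod, for example to describe a backup. This is a sketch: it assumes that the `velero` binary is invoked as `./velero` from the container's working directory and that `<backup_name>` is an existing backup:
+
[source,terminal]
----
$ oc exec -n openshift-adp -i deploy/velero -c velero -- ./velero backup describe <backup_name> --details --cacert /tmp/your-cacert.txt
----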

In a future release of OpenShift API for Data Protection (OADP), we plan to mount the certificate to the Velero pod so that this step is not required.
29 changes: 21 additions & 8 deletions modules/op-authenticating-to-an-oci-registry.adoc
@@ -15,11 +15,21 @@ Before pushing signatures to an OCI registry, cluster administrators must config
+
[source,terminal]
----
$ export NAMESPACE=<namespace> <1>
$ export SERVICE_ACCOUNT_NAME=<service_account> <2>
$ export NAMESPACE=<namespace>
----
<1> The namespace associated with the service account.
<2> The name of the service account.
+
where:
+
`<namespace>`:: The namespace associated with the service account.
+
[source,terminal]
----
$ export SERVICE_ACCOUNT_NAME=<service_account>
----
+
where:
+
`<service_account>`:: The name of the service account.

. Create a Kubernetes secret.
+
@@ -41,14 +51,14 @@ $ oc patch serviceaccount $SERVICE_ACCOUNT_NAME \
----
+
If you patch the default `pipeline` service account that {pipelines-title} assigns to all task runs, the {pipelines-title} Operator overrides the service account. As a best practice, perform the following steps instead:

+
.. Create a separate service account to assign to a user's task runs.
+
[source,terminal]
----
$ oc create serviceaccount <service_account_name>
----

+
.. Associate the service account with the task runs by setting the value of the `serviceAccountName` field in the task run template.
+
[source,yaml]
@@ -58,9 +68,12 @@ kind: TaskRun
metadata:
  name: build-push-task-run-2
spec:
  serviceAccountName: build-bot <1>
  serviceAccountName: build-bot
  taskRef:
    name: build-push
...
----
<1> Substitute with the name of the newly created service account.
+
where:
+
`serviceAccountName`:: Substitute `build-bot` with the name of the newly created service account.
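+
After you apply the task run, you can confirm that it picked up the intended service account. This is a quick check that assumes the task run shown above has been created:
+
[source,terminal]
----
$ oc get taskrun build-push-task-run-2 -n $NAMESPACE -o jsonpath='{.spec.serviceAccountName}'
----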
@@ -36,9 +36,9 @@ $ cosign generate-key-pair k8s://openshift-pipelines/signing-secrets
Provide a password when prompted. Cosign stores the resulting private key as part of the `signing-secrets` Kubernetes secret in the `openshift-pipelines` namespace, and writes the public key to the `cosign.pub` local file.

. Configure authentication for the image registry.

+
.. To configure the {tekton-chains} controller for pushing signatures to an OCI registry, use the credentials associated with the service account of the task run. For detailed information, see the "Authenticating to an OCI registry" section.

+
.. To configure authentication for a Kaniko task that builds and pushes an image to the registry, create a Kubernetes secret from the docker `config.json` file that contains the required credentials.
+
[source,terminal]
@@ -54,33 +54,51 @@ $ oc create secret generic <docker_config_secret_name> \ <1>
[source,terminal]
----
$ oc patch configmap chains-config -n openshift-pipelines -p='{"data":{"artifacts.taskrun.format": "in-toto"}}'

----
+
[source,terminal]
----
$ oc patch configmap chains-config -n openshift-pipelines -p='{"data":{"artifacts.taskrun.storage": "oci"}}'

----
+
[source,terminal]
----
$ oc patch configmap chains-config -n openshift-pipelines -p='{"data":{"transparency.enabled": "true"}}'
----
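+
You can confirm that all three settings were applied by inspecting the config map. The output should show the values that you just patched:
+
[source,terminal]
----
$ oc get configmap chains-config -n openshift-pipelines -o jsonpath='{.data}'
----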

. Start the Kaniko task.

+
.. Apply the Kaniko task to the cluster.
+
[source,terminal]
----
$ oc apply -f examples/kaniko/kaniko.yaml
----
<1> Substitute with the URI or file path to your Kaniko task.

+
where:
+
`examples/kaniko/kaniko.yaml`:: Substitute with the URI or file path to your Kaniko task.
+
.. Set the appropriate environment variables.
+
[source,terminal]
----
$ export REGISTRY=<url_of_registry> <1>

$ export REGISTRY=<url_of_registry>
----
+
where:
+
`<url_of_registry>`:: Substitute with the URL of the registry where you want to push the image.
+
[source,terminal]
----
$ export DOCKERCONFIG_SECRET_NAME=<name_of_the_secret_in_docker_config_json>
----
<1> Substitute with the URL of the registry where you want to push the image.
<2> Substitute with the name of the secret in the docker `config.json` file.

+
where:
+
`<name_of_the_secret_in_docker_config_json>`:: Substitute with the name of the secret in the docker `config.json` file.
+
.. Start the Kaniko task.
+
[source,terminal]
@@ -109,14 +127,17 @@ $ oc get tr <task_run_name> \ <1>
[source,terminal]
----
$ cosign verify --key cosign.pub $REGISTRY/kaniko-chains

----
+
[source,terminal]
----
$ cosign verify-attestation --key cosign.pub $REGISTRY/kaniko-chains
----
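+
The next step requires the digest of the signed image. One way to obtain it is to inspect the image directly. This is a sketch that assumes your `oc` client provides the `oc image info` subcommand and that you are authenticated to the registry:
+
[source,terminal]
----
$ oc image info $REGISTRY/kaniko-chains
----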

. Find the provenance for the image in Rekor.

+
.. Get the digest of the `$REGISTRY/kaniko-chains` image. You can search for it in the task run, or pull the image to extract the digest.

+
.. Search Rekor to find all entries that match the `sha256` digest of the image.
+
[source,terminal]
@@ -132,7 +153,7 @@ $ rekor-cli search --sha <image_digest> <1>
<3> The second matching UUID.
+
The search result displays UUIDs of the matching entries. One of those UUIDs holds the attestation.

+
.. Check the attestation.
+
[source,terminal]