6 changes: 4 additions & 2 deletions modules/modifying-an-existing-ingress-controller.adoc
@@ -16,12 +16,14 @@ As a cluster administrator, you can modify an existing Ingress Controller to man
.Procedure

. Modify the chosen `IngressController` to set `dnsManagementPolicy`:

+
[source,terminal]
----
SCOPE=$(oc -n openshift-ingress-operator get ingresscontroller <name> -o=jsonpath="{.status.endpointPublishingStrategy.loadBalancer.scope}")

----
+
[source,terminal]
----
oc -n openshift-ingress-operator patch ingresscontrollers/<name> --type=merge --patch='{"spec":{"endpointPublishingStrategy":{"type":"LoadBalancerService","loadBalancer":{"dnsManagementPolicy":"Unmanaged", "scope":"${SCOPE}"}}}}'
----

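+
To confirm the change, you can read back the policy from the `IngressController` spec (a quick check added here for illustration; it is not part of the original procedure):
+
[source,terminal]
----
$ oc -n openshift-ingress-operator get ingresscontroller <name> -o=jsonpath='{.spec.endpointPublishingStrategy.loadBalancer.dnsManagementPolicy}'
----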
@@ -49,7 +49,7 @@ data:
----
<1> Set the value to `true` to enable logging and `false` to disable logging. The default value is `false`.
<2> Set the value to `debug`, `info`, `warn`, or `error`. If no value exists for `logLevel`, the log level defaults to `error`.
+

. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.

.Verification
@@ -60,14 +60,19 @@ data:
----
$ oc -n openshift-monitoring get pods
----
+

. Run a test query using the following sample commands as a model:
+
[source,terminal]
----
$ token=`oc create token prometheus-k8s -n openshift-monitoring`
----
+
[source,terminal]
----
$ oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -k -H "Authorization: Bearer $token" 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_version'
----
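+
If `jq` is installed on your workstation (an assumption; the query works the same without it), you can filter the response to confirm that the query succeeded:
+
[source,terminal]
----
$ oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- \
    curl -sk -H "Authorization: Bearer $token" \
    'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_version' \
  | jq '.status'  # jq runs locally and is assumed to be installed
----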

. Run the following command to read the query log:
+
[source,terminal]
@@ -79,6 +84,6 @@ $ oc -n openshift-monitoring logs <thanos_querier_pod_name> -c thanos-query
====
Because the `thanos-querier` pods are highly available (HA) pods, you might be able to see logs in only one pod.
====
+

. After you examine the logged query information, disable query logging by changing the `enableRequestLogging` value to `false` in the config map.

13 changes: 10 additions & 3 deletions modules/nodes-cluster-worker-latency-profiles-examining.adoc
@@ -2,7 +2,6 @@
//
// scalability_and_performance/scaling-worker-latency-profiles.adoc


:_mod-docs-content-type: PROCEDURE
[id="nodes-cluster-worker-latency-profiles-examining_{context}"]
= Example steps for displaying resulting values of workerLatencyProfile
@@ -46,14 +45,22 @@ node-monitor-grace-period:
[source,terminal]
----
$ oc debug node/<worker-node-name>
----
+
[source,terminal]
----
$ chroot /host
----
+
[source,terminal]
----
# cat /etc/kubernetes/kubelet.conf | grep nodeStatusUpdateFrequency
----
+
.Example output
[source,terminal]
----
"nodeStatusUpdateFrequency": "10s"
----
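+
The same value can also be read non-interactively (a convenience sketch; it assumes the debug pod can run commands directly against the node):
+
[source,terminal]
----
$ oc debug node/<worker-node-name> -- chroot /host grep nodeStatusUpdateFrequency /etc/kubernetes/kubelet.conf
----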

These outputs validate the set of timing variables for the Worker Latency Profile.
16 changes: 16 additions & 0 deletions modules/nw-control-dns-records-public-hosted-zone-azure.adoc
@@ -20,9 +20,25 @@ You can create DNS records on a public DNS zone for Azure by using the External
[source,terminal]
----
$ CLIENT_ID=$(oc get secrets azure-credentials -n kube-system --template={{.data.azure_client_id}} | base64 -d)
----
+
[source,terminal]
----
$ CLIENT_SECRET=$(oc get secrets azure-credentials -n kube-system --template={{.data.azure_client_secret}} | base64 -d)
----
+
[source,terminal]
----
$ RESOURCE_GROUP=$(oc get secrets azure-credentials -n kube-system --template={{.data.azure_resourcegroup}} | base64 -d)
----
+
[source,terminal]
----
$ SUBSCRIPTION_ID=$(oc get secrets azure-credentials -n kube-system --template={{.data.azure_subscription_id}} | base64 -d)
----
+
[source,terminal]
----
$ TENANT_ID=$(oc get secrets azure-credentials -n kube-system --template={{.data.azure_tenant_id}} | base64 -d)
----
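+
As a quick sanity check (a sketch, not part of the documented steps), you can confirm that each variable was populated without printing the secret values:
+
[source,terminal]
----
$ for v in CLIENT_ID CLIENT_SECRET RESOURCE_GROUP SUBSCRIPTION_ID TENANT_ID; do
    # indirect expansion checks each variable without echoing its value
    [[ -n "${!v}" ]] && echo "$v is set" || echo "$v is EMPTY"
  done
----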

7 changes: 5 additions & 2 deletions modules/oadp-using-ca-certificates-with-velero-command.adoc
@@ -44,7 +44,10 @@ Server:
[source,terminal]
----
$ CA_CERT=$(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}')

----
+
[source,terminal]
----
$ [[ -n $CA_CERT ]] && echo "$CA_CERT" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "cat > /tmp/your-cacert.txt" || echo "DPA BSL has no caCert"
----
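+
Optionally, if `$CA_CERT` is non-empty, you can inspect the decoded certificate locally before it is written to the pod (assuming `openssl` is available on your workstation; this is an extra check, not a required step):
+
[source,terminal]
----
$ echo "$CA_CERT" | base64 -d | openssl x509 -noout -subject -enddate  # openssl is assumed to be installed locally
----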
+
@@ -72,4 +75,4 @@ $ oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "ls /tmp/your-c
/tmp/your-cacert.txt
----

In a future release of OpenShift API for Data Protection (OADP), we plan to mount the certificate to the Velero pod so that this step is not required.
29 changes: 21 additions & 8 deletions modules/op-authenticating-to-an-oci-registry.adoc
@@ -15,11 +15,21 @@ Before pushing signatures to an OCI registry, cluster administrators must config
+
[source,terminal]
----
$ export NAMESPACE=<namespace> <1>
$ export SERVICE_ACCOUNT_NAME=<service_account> <2>
$ export NAMESPACE=<namespace>
----
<1> The namespace associated with the service account.
<2> The name of the service account.
+
where:
+
`<namespace>`:: The namespace associated with the service account.
+
[source,terminal]
----
$ export SERVICE_ACCOUNT_NAME=<service_account>
----
+
where:
+
`<service_account>`:: The name of the service account.

. Create a Kubernetes secret.
+
@@ -41,14 +51,14 @@ $ oc patch serviceaccount $SERVICE_ACCOUNT_NAME \
----
+
If you patch the default `pipeline` service account that {pipelines-title} assigns to all task runs, the {pipelines-title} Operator will override the service account. As a best practice, you can perform the following steps:

+
.. Create a separate service account to assign to a user's task runs.
+
[source,terminal]
----
$ oc create serviceaccount <service_account_name>
----

+
.. Associate the service account to the task runs by setting the value of the `serviceaccountname` field in the task run template.
+
[source,yaml]
@@ -58,9 +68,12 @@ kind: TaskRun
metadata:
name: build-push-task-run-2
spec:
serviceAccountName: build-bot <1>
serviceAccountName: build-bot
taskRef:
name: build-push
...
----
<1> Substitute with the name of the newly created service account.
+
where:
+
`<serviceAccountName>`:: Substitute with the name of the newly created service account.
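+
Once the task run has been created, you can confirm which service account it uses (an illustrative check; it assumes the task run name from the example above):
+
[source,terminal]
----
$ oc get taskrun build-push-task-run-2 -o jsonpath='{.spec.serviceAccountName}'
----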
@@ -36,9 +36,9 @@ $ cosign generate-key-pair k8s://openshift-pipelines/signing-secrets
Provide a password when prompted. Cosign stores the resulting private key as part of the `signing-secrets` Kubernetes secret in the `openshift-pipelines` namespace, and writes the public key to the `cosign.pub` local file.

. Configure authentication for the image registry.

+
.. To configure the {tekton-chains} controller for pushing signatures to an OCI registry, use the credentials associated with the service account of the task run. For detailed information, see the "Authenticating to an OCI registry" section.

+
.. To configure authentication for a Kaniko task that builds and pushes an image to the registry, create a Kubernetes secret from the docker `config.json` file containing the required credentials.
+
[source,terminal]
@@ -54,33 +54,51 @@ $ oc create secret generic <docker_config_secret_name> \ <1>
[source,terminal]
----
$ oc patch configmap chains-config -n openshift-pipelines -p='{"data":{"artifacts.taskrun.format": "in-toto"}}'

----
+
[source,terminal]
----
$ oc patch configmap chains-config -n openshift-pipelines -p='{"data":{"artifacts.taskrun.storage": "oci"}}'

----
+
[source,terminal]
----
$ oc patch configmap chains-config -n openshift-pipelines -p='{"data":{"transparency.enabled": "true"}}'
----
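+
To confirm that all three keys were applied, you can print the config map data (an optional check, not part of the original steps):
+
[source,terminal]
----
$ oc get configmap chains-config -n openshift-pipelines -o jsonpath='{.data}'
----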

. Start the Kaniko task.

+
.. Apply the Kaniko task to the cluster.
+
[source,terminal]
----
$ oc apply -f examples/kaniko/kaniko.yaml <1>
----
<1> Substitute with the URI or file path to your Kaniko task.

+
where:
+
`<examples/kaniko/kaniko.yaml>`:: Substitute with the URI or file path to your Kaniko task.
+
.. Set the appropriate environment variables.
+
[source,terminal]
----
$ export REGISTRY=<url_of_registry> <1>

$ export REGISTRY=<url_of_registry>
----
+
where:
+
`<url_of_registry>`:: Substitute with the URL of the registry where you want to push the image.
+
[source,terminal]
----
$ export DOCKERCONFIG_SECRET_NAME=<name_of_the_secret_in_docker_config_json> <2>
----
<1> Substitute with the URL of the registry where you want to push the image.
<2> Substitute with the name of the secret in the docker `config.json` file.

+
where:
+
`<name_of_the_secret_in_docker_config_json>`:: Substitute with the name of the secret in the docker `config.json` file.
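+
A quick way to confirm that the secret exists before starting the task (an optional check; it assumes the secret lives in the current project, so adjust the namespace if the task runs elsewhere):
+
[source,terminal]
----
$ oc get secret "$DOCKERCONFIG_SECRET_NAME"
----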
+
.. Start the Kaniko task.
+
[source,terminal]
@@ -109,14 +127,17 @@ $ oc get tr <task_run_name> \ <1>
[source,terminal]
----
$ cosign verify --key cosign.pub $REGISTRY/kaniko-chains

----
+
[source,terminal]
----
$ cosign verify-attestation --key cosign.pub $REGISTRY/kaniko-chains
----
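+
If your version of Cosign provides the `tree` subcommand (an assumption; older releases may not include it), you can also list the signature and attestation artifacts attached to the image:
+
[source,terminal]
----
$ cosign tree $REGISTRY/kaniko-chains  # requires a Cosign release that ships the tree subcommand
----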

. Find the provenance for the image in Rekor.

+
.. Get the digest of the $REGISTRY/kaniko-chains image. You can search for it in the task run, or pull the image to extract the digest, as in the sketch that follows.
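+
For example, if `skopeo` is available on your workstation (an assumption; any method of reading the image digest works):
+
[source,terminal]
----
$ skopeo inspect --format '{{.Digest}}' docker://$REGISTRY/kaniko-chains  # skopeo is assumed to be installed locally
----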

+
.. Search Rekor to find all entries that match the `sha256` digest of the image.
+
[source,terminal]
@@ -132,7 +153,7 @@ $ rekor-cli search --sha <image_digest> <1>
<3> The second matching UUID.
+
The search result displays UUIDs of the matching entries. One of those UUIDs holds the attestation.

+
.. Check the attestation.
+
[source,terminal]
17 changes: 16 additions & 1 deletion modules/ossm-cert-manage-verify-cert.adoc
@@ -13,8 +13,20 @@ Use the Bookinfo sample application to verify that the workload certificates are
[source,terminal]
----
$ sleep 60
----
+
[source,terminal]
----
$ oc -n bookinfo exec "$(oc -n bookinfo get pod -l app=productpage -o jsonpath={.items..metadata.name})" -c istio-proxy -- openssl s_client -showcerts -connect details:9080 > bookinfo-proxy-cert.txt
----
+
[source,terminal]
----
$ sed -n '/-----BEGIN CERTIFICATE-----/{:start /-----END CERTIFICATE-----/!{N;b start};/.*/p}' bookinfo-proxy-cert.txt > certs.pem
----
+
[source,terminal]
----
$ awk 'BEGIN {counter=0;} /BEGIN CERT/{counter++} { print > "proxy-cert-" counter ".pem"}' < certs.pem
----
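+
After the `awk` command splits the chain, each certificate is written to its own file; a quick listing (an optional check) shows how many certificates were extracted, which depends on the length of the chain:
+
[source,terminal]
----
$ ls proxy-cert-*.pem
----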
+
@@ -44,25 +56,27 @@ $ diff -s /tmp/root-cert.crt.txt /tmp/pod-root-cert.crt.txt
You should see the following result:
`Files /tmp/root-cert.crt.txt and /tmp/pod-root-cert.crt.txt are identical`


. Verify that the CA certificate is the same as the one specified by the administrator. Replace `<path>` with the path to your certificates.
+
[source,terminal]
----
$ openssl x509 -in <path>/ca-cert.pem -text -noout > /tmp/ca-cert.crt.txt
----
+
Run the following command in a terminal window.
+
[source,terminal]
----
$ openssl x509 -in ./proxy-cert-2.pem -text -noout > /tmp/pod-cert-chain-ca.crt.txt
----
+
Compare the certificates by running the following command in a terminal window.
+
[source,terminal]
----
$ diff -s /tmp/ca-cert.crt.txt /tmp/pod-cert-chain-ca.crt.txt
----
+
You should see the following result:
`Files /tmp/ca-cert.crt.txt and /tmp/pod-cert-chain-ca.crt.txt are identical.`

@@ -72,5 +86,6 @@ You should see the following result:
----
$ openssl verify -CAfile <(cat <path>/ca-cert.pem <path>/root-cert.pem) ./proxy-cert-1.pem
----
+
You should see the following result:
`./proxy-cert-1.pem: OK`
4 changes: 4 additions & 0 deletions modules/ossm-migrating-to-20.adoc
@@ -38,6 +38,10 @@ $ oc get smcp -o yaml
----
$ oc get smcp.v1.maistra.io <smcp_name> > smcp-resource.yaml
# Edit the smcp-resource.yaml file.
----
+
[source,terminal]
----
$ oc replace -f smcp-resource.yaml
----
+