diff --git a/modules/admin-limit-operations.adoc b/modules/admin-limit-operations.adoc index 1adf5e01b81c..a1252d04ebf6 100644 --- a/modules/admin-limit-operations.adoc +++ b/modules/admin-limit-operations.adoc @@ -12,7 +12,7 @@ Shown here is an example procedure to follow for creating a limit range. .Procedure -. Create the object: +* Create the object: + [source,terminal] ---- @@ -67,9 +67,8 @@ openshift.io/ImageStream openshift.io/image-tags - 10 - == Deleting a limit range To remove a limit range, run the following command: -+ + [source,terminal] ---- $ oc delete limits ---- -S \ No newline at end of file diff --git a/modules/admin-quota-usage.adoc b/modules/admin-quota-usage.adoc index b4c9524c469e..995c7e77e094 100644 --- a/modules/admin-quota-usage.adoc +++ b/modules/admin-quota-usage.adoc @@ -40,11 +40,11 @@ metadata: name: core-object-counts spec: hard: - configmaps: "10" <1> - persistentvolumeclaims: "4" <2> - replicationcontrollers: "20" <3> - secrets: "10" <4> - services: "10" <5> + configmaps: "10" # <1> + persistentvolumeclaims: "4" # <2> + replicationcontrollers: "20" # <3> + secrets: "10" # <4> + services: "10" # <5> ---- <1> The total number of `ConfigMap` objects that can exist in the project. <2> The total number of persistent volume claims (PVCs) that can exist in the project. @@ -63,7 +63,7 @@ metadata: name: openshift-object-counts spec: hard: - openshift.io/imagestreams: "10" <1> + openshift.io/imagestreams: "10" # <1> ---- <1> The total number of image streams that can exist in the project. @@ -78,13 +78,13 @@ metadata: name: compute-resources spec: hard: - pods: "4" <1> - requests.cpu: "1" <2> - requests.memory: 1Gi <3> - requests.ephemeral-storage: 2Gi <4> - limits.cpu: "2" <5> - limits.memory: 2Gi <6> - limits.ephemeral-storage: 4Gi <7> + pods: "4" # <1> + requests.cpu: "1" # <2> + requests.memory: 1Gi # <3> + requests.ephemeral-storage: 2Gi # <4> + limits.cpu: "2" # <5> + limits.memory: 2Gi # <6> + limits.ephemeral-storage: 4Gi # <7> ---- <1> The total number of pods in a non-terminal state that can exist in the project. <2> Across all pods in a non-terminal state, the sum of CPU requests cannot exceed 1 core. @@ -105,9 +105,9 @@ metadata: name: besteffort spec: hard: - pods: "1" <1> + pods: "1" # <1> scopes: - - BestEffort <2> + - BestEffort # <2> ---- <1> The total number of pods in a non-terminal state with *BestEffort* quality of service that can exist in the project. <2> Restricts the quota to only matching pods that have *BestEffort* quality of service for either memory or CPU. 
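For context on how the quota definitions shown above are used (an illustrative aside from the editor, not part of the diff itself; the file name `core-object-counts.yaml` and the project name `demo-project` are assumed placeholders), a quota object is created in a project and then inspected to compare usage against the hard limits:

[source,terminal]
----
$ oc create -f core-object-counts.yaml -n demo-project
----

[source,terminal]
----
$ oc describe quota core-object-counts -n demo-project
----

The `oc describe quota` output lists each tracked resource with its `Used` and `Hard` values, which is how you confirm that the counts in the examples above are being enforced.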
@@ -122,10 +122,10 @@ metadata: name: compute-resources-long-running spec: hard: - pods: "4" <1> - limits.cpu: "4" <2> - limits.memory: "2Gi" <3> - limits.ephemeral-storage: "4Gi" <4> + pods: "4" # <1> + limits.cpu: "4" # <2> + limits.memory: "2Gi" # <3> + limits.ephemeral-storage: "4Gi" # <4> scopes: - NotTerminating <5> ---- @@ -145,10 +145,10 @@ metadata: name: compute-resources-time-bound spec: hard: - pods: "2" <1> - limits.cpu: "1" <2> - limits.memory: "1Gi" <3> - limits.ephemeral-storage: "1Gi" <4> + pods: "2" # <1> + limits.cpu: "1" # <2> + limits.memory: "1Gi" # <3> + limits.ephemeral-storage: "1Gi" # <4> scopes: - Terminating <5> ---- @@ -169,13 +169,13 @@ metadata: name: storage-consumption spec: hard: - persistentvolumeclaims: "10" <1> - requests.storage: "50Gi" <2> - gold.storageclass.storage.k8s.io/requests.storage: "10Gi" <3> - silver.storageclass.storage.k8s.io/requests.storage: "20Gi" <4> - silver.storageclass.storage.k8s.io/persistentvolumeclaims: "5" <5> - bronze.storageclass.storage.k8s.io/requests.storage: "0" <6> - bronze.storageclass.storage.k8s.io/persistentvolumeclaims: "0" <7> + persistentvolumeclaims: "10" # <1> + requests.storage: "50Gi" # <2> + gold.storageclass.storage.k8s.io/requests.storage: "10Gi" # <3> + silver.storageclass.storage.k8s.io/requests.storage: "20Gi" # <4> + silver.storageclass.storage.k8s.io/persistentvolumeclaims: "5" # <5> + bronze.storageclass.storage.k8s.io/requests.storage: "0" # <6> + bronze.storageclass.storage.k8s.io/persistentvolumeclaims: "0" # <7> ---- <1> The total number of persistent volume claims in a project <2> Across all persistent volume claims in a project, the sum of storage requested cannot exceed this value. @@ -221,8 +221,16 @@ $ oc create quota --hard=count/.=,count/ The group or resource to whose consumption is limited by default. <2> The name of the resource tracked by quota associated with the group/resource to limit by default. - In the above example, the quota system intercepts every operation that creates or updates a `PersistentVolumeClaim`. It checks what resources controlled by quota would be consumed. If there is no covering quota for those resources in the project, the request is denied. In this example, if a user creates a `PersistentVolumeClaim` that uses storage associated with the gold storage class and there is no matching quota in the project, the request is denied. endif::[] diff --git a/modules/ai-adding-worker-nodes-to-cluster.adoc b/modules/ai-adding-worker-nodes-to-cluster.adoc index b7e5cba0741f..097423b49965 100644 --- a/modules/ai-adding-worker-nodes-to-cluster.adoc +++ b/modules/ai-adding-worker-nodes-to-cluster.adoc @@ -31,14 +31,14 @@ $ export API_URL= <1> <1> Replace `` with the Assisted Installer API URL, for example, `https://api.openshift.com` . Import the {sno} cluster by running the following commands: - ++ .. Set the `$OPENSHIFT_CLUSTER_ID` variable. Log in to the cluster and run the following command: + [source,terminal] ---- $ export OPENSHIFT_CLUSTER_ID=$(oc get clusterversion -o jsonpath='{.items[].spec.clusterID}') ---- - ++ .. Set the `$CLUSTER_REQUEST` variable that is used to import the cluster: + [source,terminal] @@ -51,7 +51,7 @@ $ export CLUSTER_REQUEST=$(jq --null-input --arg openshift_cluster_id "$OPENSHIF ---- <1> Replace `` with the hostname for the cluster's API server. This can be the DNS domain for the API server or the IP address of the single node which the worker node can reach. For example, `api.compute-1.example.com`. 
<2> Replace `` with the plain text name for the cluster. The cluster name should match the cluster name that was set during the Day 1 cluster installation. - ++ .. Import the cluster and set the `$CLUSTER_ID` variable. Run the following command: + [source,terminal] @@ -61,9 +61,9 @@ $ CLUSTER_ID=$(curl "$API_URL/api/assisted-install/v2/clusters/import" -H "Autho ---- . Generate the `InfraEnv` resource for the cluster and set the `$INFRA_ENV_ID` variable by running the following commands: - ++ .. Download the pull secret file from Red Hat OpenShift Cluster Manager at link:console.redhat.com/openshift/install/pull-secret[console.redhat.com]. - ++ .. Set the `$INFRA_ENV_REQUEST` variable: + [source,terminal] @@ -83,7 +83,7 @@ export INFRA_ENV_REQUEST=$(jq --null-input \ <2> Replace `` with the path to the public SSH key required to access the host. If you do not set this value, you cannot access the host while in discovery mode. <3> Replace `` with the plain text name for the `InfraEnv` resource. <4> Replace `` with the ISO image type, either `full-iso` or `minimal-iso`. - ++ .. Post the `$INFRA_ENV_REQUEST` to the link:https://api.openshift.com/?urls.primaryName=assisted-service%20service#/installer/RegisterInfraEnv[/v2/infra-envs] API and set the `$INFRA_ENV_ID` variable: + [source,terminal] diff --git a/modules/albo-deleting.adoc b/modules/albo-deleting.adoc index c5c6cfba2005..8902b44d7a39 100644 --- a/modules/albo-deleting.adoc +++ b/modules/albo-deleting.adoc @@ -22,6 +22,10 @@ $ oc delete subscription aws-load-balancer-operator -n aws-load-balancer-operato $ aws iam detach-role-policy \ --role-name "-alb-operator" \ --policy-arn +---- ++ +[source,terminal] +---- $ aws iam delete-role \ --role-name "-alb-operator" ---- @@ -31,4 +35,4 @@ $ aws iam delete-role \ [source,terminal] ---- $ aws iam delete-policy --policy-arn ----- \ No newline at end of file +---- diff --git a/modules/albo-installation.adoc b/modules/albo-installation.adoc index 1046fd818a31..b5eac8bf9a1f 100644 --- a/modules/albo-installation.adoc +++ b/modules/albo-installation.adoc @@ -70,6 +70,7 @@ $ aws iam create-role --role-name "${CLUSTER_NAME}-alb-operator" \ Take note of the Operator role ARN in the output. This is referred to as the `$OPERATOR_ROLE_ARN` for the remainder of this process. .. Associate the Operator role and policy: + +[source,terminal] ---- $ aws iam attach-role-policy --role-name "${CLUSTER_NAME}-alb-operator" \ --policy-arn $OPERATOR_POLICY_ARN @@ -160,6 +161,7 @@ Take note of the Controller role ARN in the output. This is referred to as the ` .. Associate the Controller role and policy: + +[source,terminal] ---- $ aws iam attach-role-policy \ --role-name "${CLUSTER_NAME}-albo-controller" \ diff --git a/modules/albo-prerequisites.adoc b/modules/albo-prerequisites.adoc index d6a8ad08e8e3..09c7a3c61d08 100644 --- a/modules/albo-prerequisites.adoc +++ b/modules/albo-prerequisites.adoc @@ -44,10 +44,30 @@ $ oc login --token= --server= [source,terminal] ---- $ export CLUSTER_NAME=$(oc get infrastructure cluster -o=jsonpath="{.status.apiServerURL}" | sed 's|^https://||' | awk -F . 
'{print $2}') +---- ++ +[source,terminal] +---- $ export REGION=$(oc get infrastructure cluster -o=jsonpath="{.status.platformStatus.aws.region}") +---- ++ +[source,terminal] +---- $ export OIDC_ENDPOINT=$(oc get authentication.config.openshift.io cluster -o jsonpath='{.spec.serviceAccountIssuer}' | sed 's|^https://||') +---- ++ +[source,terminal] +---- $ export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text) +---- ++ +[source,terminal] +---- $ export SCRATCH="/tmp/${CLUSTER_NAME}/alb-operator" +---- ++ +[source,terminal] +---- $ mkdir -p ${SCRATCH} ---- + @@ -91,7 +111,15 @@ You must tag your AWS VPC resources before you install the AWS Load Balancer Ope [source,terminal] ---- $ export VPC_ID= +---- ++ +[source,terminal] +---- $ export PUBLIC_SUBNET_IDS=" " +---- ++ +[source,terminal] +---- $ export PRIVATE_SUBNET_IDS=" " ---- @@ -127,4 +155,4 @@ EOF [source,bash] ---- bash ${SCRATCH}/tag-subnets.sh ----- \ No newline at end of file +---- diff --git a/modules/albo-validate-install.adoc b/modules/albo-validate-install.adoc index f56428c0c67c..b8f8b0b618dc 100644 --- a/modules/albo-validate-install.adoc +++ b/modules/albo-validate-install.adoc @@ -82,6 +82,10 @@ ALB provisioning takes a few minutes. If you receive an error that says `curl: ( ---- $ ALB_INGRESS=$(oc -n hello-world get ingress hello-openshift-alb \ -o jsonpath='{.status.loadBalancer.ingress[0].hostname}') +---- ++ +[source,terminal] +---- $ curl "http://${ALB_INGRESS}" ---- + @@ -127,18 +131,18 @@ NLB provisioning takes a few minutes. If you receive an error that says `curl: ( ---- $ NLB=$(oc -n hello-world get service hello-openshift-nlb \ -o jsonpath='{.status.loadBalancer.ingress[0].hostname}') -$ curl "http://${NLB}" ---- + -.Example output -[source,text] +[source,terminal] ---- -Hello OpenShift! +$ curl "http://${NLB}" ---- ++ +Expected output shows `Hello OpenShift!`. . You can now delete the sample application and all resources in the `hello-world` namespace. + [source,terminal] ---- $ oc delete project hello-world ----- \ No newline at end of file +---- diff --git a/modules/cluster-logging-collector-log-forward-cloudwatch.adoc b/modules/cluster-logging-collector-log-forward-cloudwatch.adoc index d57b6dd53a93..7804264fa847 100644 --- a/modules/cluster-logging-collector-log-forward-cloudwatch.adoc +++ b/modules/cluster-logging-collector-log-forward-cloudwatch.adoc @@ -106,7 +106,16 @@ To generate log data for this example, you run a `busybox` pod in a namespace ca [source,terminal] ---- $ oc run busybox --image=busybox -- sh -c 'while true; do echo "My life is my message"; sleep 3; done' +---- + +[source,terminal] +---- $ oc logs -f busybox +---- + +.Example output +[source,terminal] +---- My life is my message My life is my message My life is my message diff --git a/modules/cluster-logging-logcli-reference.adoc b/modules/cluster-logging-logcli-reference.adoc index 4dd09f0a557a..81c129983ab8 100644 --- a/modules/cluster-logging-logcli-reference.adoc +++ b/modules/cluster-logging-logcli-reference.adoc @@ -11,7 +11,15 @@ You can use Loki's command-line interface `logcli` to query logs.
[source,terminal] ---- $ oc extract cm/lokistack-sample-ca-bundle --to=lokistack --confirm +---- + +[source,terminal] +---- $ cat lokistack/*.crt >lokistack_ca.crt +---- + +[source,terminal] +---- $ logcli -o raw --bearer-token="${bearer_token}" --ca-cert="lokistack_ca.crt" --addr xxxxxx ---- diff --git a/modules/cnf-image-based-upgrade-generate-seed-image.adoc b/modules/cnf-image-based-upgrade-generate-seed-image.adoc index 6f900d662e47..398e4509daa0 100644 --- a/modules/cnf-image-based-upgrade-generate-seed-image.adoc +++ b/modules/cnf-image-based-upgrade-generate-seed-image.adoc @@ -28,18 +28,18 @@ Use the {lcao} to generate a seed image from a managed cluster. The Operator che .Procedure . Detach the managed cluster from the hub to delete any {rh-rhacm}-specific resources from the seed cluster that must not be in the seed image: - ++ .. Manually detach the seed cluster by running the following command: + [source,terminal] ---- $ oc delete managedcluster sno-worker-example ---- - ++ ... Wait until the managed cluster is removed. After the cluster is removed, create the proper `SeedGenerator` CR. The {lcao} cleans up the {rh-rhacm} artifacts. - ++ .. If you are using {ztp}, detach your cluster by removing the seed cluster's `SiteConfig` CR from the `kustomization.yaml`. - ++ ... If you have a `kustomization.yaml` file that references multiple `SiteConfig` CRs, remove your seed cluster's `SiteConfig` CR from the `kustomization.yaml`: + [source,yaml] @@ -52,7 +52,7 @@ generators: - example-target-sno2.yaml - example-target-sno3.yaml ---- - ++ ... If you have a `kustomization.yaml` that references one `SiteConfig` CR, remove your seed cluster's `SiteConfig` CR from the `kustomization.yaml` and add the `generators: {}` line: + [source,yaml] @@ -62,32 +62,37 @@ kind: Kustomization generators: {} ---- - ++ ... Commit the `kustomization.yaml` changes in your Git repository and push the changes to your repository. + The ArgoCD pipeline detects the changes and removes the managed cluster. . Create the `Secret` object so that you can push the seed image to your registry. - ++ .. Create the authentication file by running the following commands: + --- [source,terminal] ---- $ MY_USER=myuserid +---- ++ +[source,terminal] +---- $ AUTHFILE=/tmp/my-auth.json +---- ++ +[source,terminal] +---- $ podman login --authfile ${AUTHFILE} -u ${MY_USER} quay.io/${MY_USER} ---- - ++ [source,terminal] ---- $ base64 -w 0 ${AUTHFILE} ; echo ---- --- - ++ .. Copy the output into the `seedAuth` field in the `Secret` YAML file named `seedgen` in the `openshift-lifecycle-agent` namespace: + --- [source,yaml] ---- apiVersion: v1 @@ -101,8 +106,7 @@ data: ---- <1> The `Secret` resource must have the `name: seedgen` and `namespace: openshift-lifecycle-agent` fields. <2> Specifies a base64-encoded authfile for write-access to the registry for pushing the generated seed images. --- - ++ .. Apply the `Secret` by running the following command: + [source,terminal] @@ -112,7 +116,6 @@ $ oc apply -f secretseedgenerator.yaml . Create the `SeedGenerator` CR: + --- [source,yaml] ---- apiVersion: lca.openshift.io/v1 @@ -124,7 +127,6 @@ spec: ---- <1> The `SeedGenerator` CR must be named `seedimage`. <2> Specify the container image URL, for example, `quay.io/example/seed-container-image:`. It is recommended to use the `:` format. --- . 
Generate the seed image by running the following command: + @@ -132,7 +134,6 @@ spec: ---- $ oc apply -f seedgenerator.yaml ---- - + [IMPORTANT] ==== @@ -146,7 +147,6 @@ If you want to generate more seed images, you must provision a new seed cluster * After the cluster recovers and it is available, you can check the status of the `SeedGenerator` CR by running the following command: + --- [source,terminal] ---- $ oc get seedgenerator -o yaml @@ -171,5 +171,4 @@ status: type: SeedGenCompleted <1> observedGeneration: 1 ---- -<1> The seed image generation is complete. --- \ No newline at end of file +<1> The seed image generation is complete. \ No newline at end of file diff --git a/modules/cnf-topology-aware-lifecycle-manager-preparing-for-updates.adoc b/modules/cnf-topology-aware-lifecycle-manager-preparing-for-updates.adoc index 2e3a525b5b73..4fc787aa449f 100644 --- a/modules/cnf-topology-aware-lifecycle-manager-preparing-for-updates.adoc +++ b/modules/cnf-topology-aware-lifecycle-manager-preparing-for-updates.adoc @@ -27,14 +27,14 @@ imageContentSources: ---- . Save the image signature of the desired platform image that was mirrored. You must add the image signature to the `{policy-gen-cr}` CR for platform updates. To get the image signature, perform the following steps: - ++ .. Specify the desired {product-title} tag by running the following command: + [source,terminal] ---- $ OCP_RELEASE_NUMBER= ---- - ++ .. Specify the architecture of the cluster by running the following command: + [source,terminal] @@ -42,58 +42,61 @@ $ OCP_RELEASE_NUMBER= $ ARCHITECTURE= <1> ---- <1> Specify the architecture of the cluster, such as `x86_64`, `aarch64`, `s390x`, or `ppc64le`. - - ++ .. Get the release image digest from Quay by running the following command + [source,terminal] ---- $ DIGEST="$(oc adm release info quay.io/openshift-release-dev/ocp-release:${OCP_RELEASE_NUMBER}-${ARCHITECTURE} | sed -n 's/Pull From: .*@//p')" ---- - ++ .. Set the digest algorithm by running the following command: + [source,terminal] ---- $ DIGEST_ALGO="${DIGEST%%:*}" ---- - ++ .. Set the digest signature by running the following command: + [source,terminal] ---- $ DIGEST_ENCODED="${DIGEST#*:}" ---- - ++ .. Get the image signature from the link:https://mirror.openshift.com/pub/openshift-v4/signatures/openshift/release/[mirror.openshift.com] website by running the following command: + [source,terminal] ---- $ SIGNATURE_BASE64=$(curl -s "https://mirror.openshift.com/pub/openshift-v4/signatures/openshift/release/${DIGEST_ALGO}=${DIGEST_ENCODED}/signature-1" | base64 -w0 && echo) ---- - ++ .. Save the image signature to the `checksum-.yaml` file by running the following commands: + [source,terminal] ---- $ cat >checksum-${OCP_RELEASE_NUMBER}.yaml < +---- + +[source,terminal] +---- $ oc adm policy add-role-to-group ----