5 changes: 4 additions & 1 deletion modules/oadp-using-ca-certificates-with-velero-command.adoc
@@ -29,9 +29,12 @@ $ alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./ve
. Check that the alias is working by running the following command:
+
[source,terminal]
.Example
----
$ velero version
----
+
[source,terminal]
----
Client:
Version: v1.12.1-OADP
Git commit: -
@@ -142,15 +142,18 @@ $ cosign verify-attestation --key cosign.pub $REGISTRY/kaniko-chains
+
[source,terminal]
----
$ rekor-cli search --sha <image_digest> <1>

$ rekor-cli search --sha <image_digest>
----
* `<image_digest>`: Substitute with the `sha256` digest of the image.
+
[source,terminal]
----
<uuid_1> <2>
<uuid_2> <3>
...
----
<1> Substitute with the `sha256` digest of the image.
<2> The first matching universally unique identifier (UUID).
<3> The second matching UUID.
* `<uuid_1>`: The first matching universally unique identifier (UUID).
* `<uuid_2>`: The second matching UUID.
+
The search result displays UUIDs of the matching entries. One of those UUIDs holds the attestation.
+
5 changes: 1 addition & 4 deletions modules/ossm-cert-manage-verify-cert.adoc
@@ -80,12 +80,9 @@ $ diff -s /tmp/ca-cert.crt.txt /tmp/pod-cert-chain-ca.crt.txt
You should see the following result:
`Files /tmp/ca-cert.crt.txt and /tmp/pod-cert-chain-ca.crt.txt are identical.`

. Verify the certificate chain from the root certificate to the workload certificate. Replace `<path>` with the path to your certificates.
. Verify the certificate chain from the root certificate to the workload certificate. Replace `<path>` with the path to your certificates. After you run the command, the expected output shows `./proxy-cert-1.pem: OK`.
+
[source,terminal]
----
$ openssl verify -CAfile <(cat <path>/ca-cert.pem <path>/root-cert.pem) ./proxy-cert-1.pem
----
+
You should see the following result:
`./proxy-cert-1.pem: OK`
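The `openssl verify` command above relies on Bash process substitution (`<(...)`) to hand the concatenated CA and root certificates to `-CAfile` as a single file argument. As a minimal local sketch of that mechanism, with placeholder strings standing in for certificate contents:

```shell
# Process substitution gives a command's output a file-like path,
# so a program that expects a file argument can read it directly.
# Here two placeholder "certificates" are concatenated, the same
# way ca-cert.pem and root-cert.pem are combined for openssl.
bash -c 'cat <(echo intermediate) <(echo root)'
```

Process substitution is a `bash`/`zsh` feature, not POSIX `sh`, which is one reason these procedures assume a Bash shell.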
90 changes: 66 additions & 24 deletions modules/preparing-aws-credentials-for-oadp.adoc
@@ -10,6 +10,7 @@
An {aws-full} account must be prepared and configured to accept an {oadp-first} installation.

.Procedure

. Create the following environment variables by running the following commands:
+
[IMPORTANT]
@@ -19,33 +20,73 @@ Change the cluster name to match your ROSA cluster, and ensure you are logged in
+
[source,terminal]
----
$ export CLUSTER_NAME=my-cluster <1>
export ROSA_CLUSTER_ID=$(rosa describe cluster -c ${CLUSTER_NAME} --output json | jq -r .id)
export REGION=$(rosa describe cluster -c ${CLUSTER_NAME} --output json | jq -r .region.id)
export OIDC_ENDPOINT=$(oc get authentication.config.openshift.io cluster -o jsonpath='{.spec.serviceAccountIssuer}' | sed 's|^https://||')
export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
export CLUSTER_VERSION=$(rosa describe cluster -c ${CLUSTER_NAME} -o json | jq -r .version.raw_id | cut -f -2 -d '.')
export ROLE_NAME="${CLUSTER_NAME}-openshift-oadp-aws-cloud-credentials"
export SCRATCH="/tmp/${CLUSTER_NAME}/oadp"
mkdir -p ${SCRATCH}
echo "Cluster ID: ${ROSA_CLUSTER_ID}, Region: ${REGION}, OIDC Endpoint:
${OIDC_ENDPOINT}, AWS Account ID: ${AWS_ACCOUNT_ID}"
$ export CLUSTER_NAME=my-cluster
----
+
--
* `my-cluster`: Replace `my-cluster` with your cluster name.
--
+
[source,terminal]
----
$ export ROSA_CLUSTER_ID=$(rosa describe cluster -c ${CLUSTER_NAME} --output json | jq -r .id)
----
+
[source,terminal]
----
$ export REGION=$(rosa describe cluster -c ${CLUSTER_NAME} --output json | jq -r .region.id)
----
+
[source,terminal]
----
$ export OIDC_ENDPOINT=$(oc get authentication.config.openshift.io cluster -o jsonpath='{.spec.serviceAccountIssuer}' | sed 's|^https://||')
----
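As a sketch of the `sed` substitution used in the `OIDC_ENDPOINT` export above, which strips the scheme so only the bare issuer host and path remain (hypothetical URL):

```shell
# The s|^https://|| expression deletes a leading "https://";
# using | as the delimiter avoids escaping the slashes in the URL.
echo "https://oidc.example.com/abc123" | sed 's|^https://||'
```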
+
[source,terminal]
----
$ export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
----
+
[source,terminal]
----
$ export CLUSTER_VERSION=$(rosa describe cluster -c ${CLUSTER_NAME} -o json | jq -r .version.raw_id | cut -f -2 -d '.')
----
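The `cut -f -2 -d '.'` filter in the `CLUSTER_VERSION` export trims a full version string to its major.minor form. A standalone sketch with a hypothetical version:

```shell
# -d '.' splits on dots; -f -2 keeps fields 1 through 2,
# so a patch-level version collapses to major.minor.
echo "4.14.8" | cut -f -2 -d '.'
```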
+
[source,terminal]
----
$ export ROLE_NAME="${CLUSTER_NAME}-openshift-oadp-aws-cloud-credentials"
----
+
<1> Replace `my-cluster` with your ROSA cluster name.
[source,terminal]
----
$ export SCRATCH="/tmp/${CLUSTER_NAME}/oadp"
----
+
[source,terminal]
----
$ mkdir -p ${SCRATCH}
----
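`mkdir -p` creates every missing component of the scratch path and exits successfully if the directory already exists, so the command is safe to re-run. A local sketch with a hypothetical cluster name:

```shell
# Create the nested scratch path in one call; -p also suppresses
# the error if the directory is already present.
SCRATCH="/tmp/my-cluster/oadp"
mkdir -p "${SCRATCH}"
ls -d "${SCRATCH}"
```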
+
[source,terminal]
----
$ echo "Cluster ID: ${ROSA_CLUSTER_ID}, Region: ${REGION}, OIDC Endpoint:
${OIDC_ENDPOINT}, AWS Account ID: ${AWS_ACCOUNT_ID}"
----

. On the {aws-short} account, create an IAM policy to allow access to {aws-short} S3:

+
.. Check to see if the policy exists by running the following command:
+
[source,terminal]
----
$ POLICY_ARN=$(aws iam list-policies --query "Policies[?PolicyName=='RosaOadpVer1'].{ARN:Arn}" --output text) <1>
$ POLICY_ARN=$(aws iam list-policies --query "Policies[?PolicyName=='RosaOadpVer1'].{ARN:Arn}" --output text)
----
+
<1> Replace `RosaOadp` with your policy name.

.. Enter the following command to create the policy JSON file and then create the policy in ROSA:
--
* `RosaOadp`: Replace `RosaOadp` with your policy name.
--
+
.. Enter the following command to create the policy JSON file and then create the policy:
+
[NOTE]
====
@@ -55,7 +96,7 @@ If the policy ARN is not found, the command creates the policy. If the policy AR
[source,terminal]
----
$ if [[ -z "${POLICY_ARN}" ]]; then
cat << EOF > ${SCRATCH}/policy.json <1>
cat << EOF > ${SCRATCH}/policy.json
{
"Version": "2012-10-17",
"Statement": [
@@ -100,18 +141,19 @@ EOF
fi
----
+
<1> `SCRATCH` is a name for a temporary directory created for the environment variables.

--
* `SCRATCH`: A name for a temporary directory created for the environment variables.
--
+
.. View the policy ARN by running the following command:
+
[source,terminal]
----
$ echo ${POLICY_ARN}
----
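The `if [[ -z "${POLICY_ARN}" ]]` guard in the previous step runs the create branch only when the lookup returned an empty string. A minimal sketch of that branch logic, with the empty value simulated locally:

```shell
# An empty POLICY_ARN means list-policies found no match,
# so the create branch runs; otherwise the existing ARN is reused.
POLICY_ARN=""
if [ -z "${POLICY_ARN}" ]; then
  echo "creating policy"
else
  echo "policy exists: ${POLICY_ARN}"
fi
```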


. Create an IAM role trust policy for the cluster:

+
.. Create the trust policy file by running the following command:
+
[source,terminal]
@@ -136,7 +178,7 @@ $ cat <<EOF > ${SCRATCH}/trust-policy.json
}
EOF
----
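The unquoted `EOF` delimiter matters here: the shell expands variables such as `${OIDC_ENDPOINT}` and `${AWS_ACCOUNT_ID}` before writing `trust-policy.json`. A minimal local sketch with a hypothetical value and file path:

```shell
# Because EOF is not quoted, ${OIDC_ENDPOINT} is expanded
# as the heredoc is written to the file.
OIDC_ENDPOINT="oidc.example.com/abc123"
cat <<EOF > /tmp/demo-trust-policy.json
{"issuer": "${OIDC_ENDPOINT}"}
EOF
cat /tmp/demo-trust-policy.json
```

Quoting the delimiter (`<<'EOF'`) would instead write the literal text `${OIDC_ENDPOINT}` into the file, which would produce an invalid trust policy.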

+
.. Create the role by running the following command:
+
[source,terminal]
@@ -147,7 +189,7 @@ $ ROLE_ARN=$(aws iam create-role --role-name \
--tags Key=rosa_cluster_id,Value=${ROSA_CLUSTER_ID} Key=rosa_openshift_version,Value=${CLUSTER_VERSION} Key=rosa_role_prefix,Value=ManagedOpenShift Key=operator_namespace,Value=openshift-adp Key=operator_name,Value=openshift-oadp \
--query Role.Arn --output text)
----

+
.. View the role ARN by running the following command:
+
[source,terminal]
60 changes: 41 additions & 19 deletions modules/preparing-aws-sts-credentials-for-oadp.adoc
@@ -10,6 +10,7 @@
An {aws-full} account must be prepared and configured to accept an {oadp-first} installation. Prepare the {aws-short} credentials by using the following procedure.

.Procedure

. Define the `cluster_name` environment variable by running the following command:
+
[source,terminal]
@@ -23,45 +24,67 @@ $ export CLUSTER_NAME= <AWS_cluster_name> <1>
[source,terminal]
----
$ export CLUSTER_VERSION=$(oc get clusterversion version -o jsonpath='{.status.desired.version}{"\n"}')

export AWS_CLUSTER_ID=$(oc get clusterversion version -o jsonpath='{.spec.clusterID}{"\n"}')

export OIDC_ENDPOINT=$(oc get authentication.config.openshift.io cluster -o jsonpath='{.spec.serviceAccountIssuer}' | sed 's|^https://||')

export REGION=$(oc get infrastructures cluster -o jsonpath='{.status.platformStatus.aws.region}' --allow-missing-template-keys=false || echo us-east-2)

export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)

export ROLE_NAME="${CLUSTER_NAME}-openshift-oadp-aws-cloud-credentials"
----
+
[source,terminal]
----
$ export AWS_CLUSTER_ID=$(oc get clusterversion version -o jsonpath='{.spec.clusterID}{"\n"}')
----
+
[source,terminal]
----
$ export OIDC_ENDPOINT=$(oc get authentication.config.openshift.io cluster -o jsonpath='{.spec.serviceAccountIssuer}' | sed 's|^https://||')
----
+
[source,terminal]
----
$ export REGION=$(oc get infrastructures cluster -o jsonpath='{.status.platformStatus.aws.region}' --allow-missing-template-keys=false || echo us-east-2)
----
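The `|| echo us-east-2` at the end of the `REGION` export supplies a default when the `oc` lookup fails. A sketch of the fallback pattern, with the failure simulated by the `false` command:

```shell
# If the left-hand command fails, the right-hand echo runs,
# so REGION always ends up with a usable value.
REGION=$(false || echo us-east-2)
echo "${REGION}"
```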
+
[source,terminal]
----
$ export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
----
+
[source,terminal]
----
$ export ROLE_NAME="${CLUSTER_NAME}-openshift-oadp-aws-cloud-credentials"
----

. Create a temporary directory to store all of the files by running the following command:
+
[source,terminal]
----
$ export SCRATCH="/tmp/${CLUSTER_NAME}/oadp"
mkdir -p ${SCRATCH}
----

. Display all of the gathered details by running the following command:
+
[source,terminal]
----
$ echo "Cluster ID: ${AWS_CLUSTER_ID}, Region: ${REGION}, OIDC Endpoint:
${OIDC_ENDPOINT}, AWS Account ID: ${AWS_ACCOUNT_ID}"
----
. On the {aws-short} account, create an IAM policy to allow access to {aws-short} S3:

. On the {aws-short} account, create an IAM policy to allow access to {aws-short} S3:
+
.. Check to see if the policy exists by running the following commands:
+
[source,terminal]
----
$ export POLICY_NAME="OadpVer1" <1>
$ export POLICY_NAME="OadpVer1"
----
<1> The variable can be set to any value.
+
--
* `POLICY_NAME`: The variable can be set to any value.
--
+
[source,terminal]
----
$ POLICY_ARN=$(aws iam list-policies --query "Policies[?PolicyName=='$POLICY_NAME'].{ARN:Arn}" --output text)
----
+
.. Enter the following command to create the policy JSON file and then create the policy:
+
[NOTE]
@@ -113,12 +136,11 @@ EOF
POLICY_ARN=$(aws iam create-policy --policy-name $POLICY_NAME \
--policy-document file:///${SCRATCH}/policy.json --query Policy.Arn \
--tags Key=openshift_version,Value=${CLUSTER_VERSION} Key=operator_namespace,Value=openshift-adp Key=operator_name,Value=oadp \
--output text) <1>
--output text)
fi
----
* `SCRATCH`: The name for a temporary directory created for storing the files.
+
<1> `SCRATCH` is a name for a temporary directory created for storing the files.

.. View the policy ARN by running the following command:
+
[source,terminal]
@@ -127,7 +149,7 @@ $ echo ${POLICY_ARN}
----

. Create an IAM role trust policy for the cluster:

+
.. Create the trust policy file by running the following command:
+
[source,terminal]
@@ -152,7 +174,7 @@ $ cat <<EOF > ${SCRATCH}/trust-policy.json
}
EOF
----

+
.. Create an IAM role trust policy for the cluster by running the following command:
+
[source,terminal]
@@ -162,7 +184,7 @@ $ ROLE_ARN=$(aws iam create-role --role-name \
--assume-role-policy-document file://${SCRATCH}/trust-policy.json \
--tags Key=cluster_id,Value=${AWS_CLUSTER_ID} Key=openshift_version,Value=${CLUSTER_VERSION} Key=operator_namespace,Value=openshift-adp Key=operator_name,Value=oadp --query Role.Arn --output text)
----

+
.. View the role ARN by running the following command:
+
[source,terminal]
5 changes: 3 additions & 2 deletions modules/querying-cluster-node-journal-logs.adoc
@@ -32,11 +32,12 @@ endif::openshift-rosa,openshift-dedicated[]
+
[source,terminal]
----
$ oc adm node-logs --role=master -u kubelet <1>
$ oc adm node-logs --role=worker -u kubelet
----
<1> Replace `kubelet` as appropriate to query other unit logs.
* `kubelet`: Replace as appropriate to query other unit logs.

. Collect logs from specific subdirectories under `/var/log/` on cluster nodes.
+
.. Retrieve a list of logs contained within a `/var/log/` subdirectory. The following example lists files in `/var/log/openshift-apiserver/` on all control plane nodes:
+
[source,terminal]
20 changes: 14 additions & 6 deletions modules/rhcos-enabling-multipath.adoc
@@ -41,33 +41,41 @@ The following procedure enables multipath at installation time and appends kerne
----
$ mpathconf --enable && systemctl start multipathd.service
----
** Optional: If booting the PXE or ISO, you can instead enable multipath by adding `rd.multipath=default` from the kernel command line.
+
.. Optional: If booting the PXE or ISO, you can instead enable multipath by adding `rd.multipath=default` from the kernel command line.

. Append the kernel arguments by invoking the `coreos-installer` program:
+
* If there is only one multipath device connected to the machine, it should be available at path `/dev/mapper/mpatha`. For example:
+
[source,terminal]
----
$ coreos-installer install /dev/mapper/mpatha \// <1>
$ coreos-installer install /dev/mapper/mpatha \
--ignition-url=http://host/worker.ign \
--append-karg rd.multipath=default \
--append-karg root=/dev/disk/by-label/dm-mpath-root \
--append-karg rw
----
<1> Indicates the path of the single multipathed device.
+
--
* `/dev/mapper/mpatha`: Indicates the path of the single multipathed device.
--
+
* If there are multiple multipath devices connected to the machine, or to be more explicit, instead of using `/dev/mapper/mpatha`, it is recommended to use the World Wide Name (WWN) symlink available in `/dev/disk/by-id`. For example:
+
[source,terminal]
----
$ coreos-installer install /dev/disk/by-id/wwn-<wwn_ID> \// <1>
$ coreos-installer install /dev/disk/by-id/wwn-<wwn_ID> \
--ignition-url=http://host/worker.ign \
--append-karg rd.multipath=default \
--append-karg root=/dev/disk/by-label/dm-mpath-root \
--append-karg rw
--append-karg rw \
--offline
----
<1> Indicates the WWN ID of the target multipathed device. For example, `0xx194e957fcedb4841`.
+
--
* `<wwn_ID>`: Indicates the WWN ID of the target multipathed device. For example, `0xx194e957fcedb4841`.
--
+
This symlink can also be used as the `coreos.inst.install_dev` kernel argument when using special `coreos.inst.*` arguments to direct the live installer. For more information, see "Installing {op-system} and starting the {product-title} bootstrap process".
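The by-id WWN path is a symbolic link that resolves to the underlying device node. As a local sketch of that resolution, using hypothetical names under `/tmp` in place of real `/dev/disk/by-id` entries:

```shell
# Create a stand-in device node and a wwn-style symlink to it,
# then resolve the link to the real target path.
mkdir -p /tmp/by-id-demo
touch /tmp/by-id-demo/mpatha
ln -sf /tmp/by-id-demo/mpatha /tmp/by-id-demo/wwn-0xdemo
readlink -f /tmp/by-id-demo/wwn-0xdemo
```

Because the WWN is tied to the storage target rather than to a device enumeration order, the symlink stays stable across reboots, which is why it is preferred over `/dev/mapper/mpatha` when several multipath devices are attached.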
