5 changes: 2 additions & 3 deletions modules/admin-limit-operations.adoc
@@ -12,7 +12,7 @@ Shown here is an example procedure to follow for creating a limit range.

.Procedure

. Create the object:
* Create the object:
+
[source,terminal]
----
@@ -67,9 +67,8 @@ openshift.io/ImageStream openshift.io/image-tags - 10 -
== Deleting a limit range

To remove a limit range, run the following command:
+

[source,terminal]
----
$ oc delete limits <limit_name>
----
86 changes: 52 additions & 34 deletions modules/admin-quota-usage.adoc
@@ -40,11 +40,11 @@ metadata:
name: core-object-counts
spec:
hard:
configmaps: "10" <1>
persistentvolumeclaims: "4" <2>
replicationcontrollers: "20" <3>
secrets: "10" <4>
services: "10" <5>
configmaps: "10" # <1>
persistentvolumeclaims: "4" # <2>
replicationcontrollers: "20" # <3>
secrets: "10" # <4>
services: "10" # <5>
----
<1> The total number of `ConfigMap` objects that can exist in the project.
<2> The total number of persistent volume claims (PVCs) that can exist in the project.
@@ -63,7 +63,7 @@ metadata:
name: openshift-object-counts
spec:
hard:
openshift.io/imagestreams: "10" <1>
openshift.io/imagestreams: "10" # <1>
----
<1> The total number of image streams that can exist in the project.

@@ -78,13 +78,13 @@ metadata:
name: compute-resources
spec:
hard:
pods: "4" <1>
requests.cpu: "1" <2>
requests.memory: 1Gi <3>
requests.ephemeral-storage: 2Gi <4>
limits.cpu: "2" <5>
limits.memory: 2Gi <6>
limits.ephemeral-storage: 4Gi <7>
pods: "4" # <1>
requests.cpu: "1" # <2>
requests.memory: 1Gi # <3>
requests.ephemeral-storage: 2Gi # <4>
limits.cpu: "2" # <5>
limits.memory: 2Gi # <6>
limits.ephemeral-storage: 4Gi # <7>
----
<1> The total number of pods in a non-terminal state that can exist in the project.
<2> Across all pods in a non-terminal state, the sum of CPU requests cannot exceed 1 core.
@@ -105,9 +105,9 @@ metadata:
name: besteffort
spec:
hard:
pods: "1" <1>
pods: "1" # <1>
scopes:
- BestEffort <2>
- BestEffort # <2>
----
<1> The total number of pods in a non-terminal state with *BestEffort* quality of service that can exist in the project.
<2> Restricts the quota to only matching pods that have *BestEffort* quality of service for either memory or CPU.
@@ -122,10 +122,10 @@ metadata:
name: compute-resources-long-running
spec:
hard:
pods: "4" <1>
limits.cpu: "4" <2>
limits.memory: "2Gi" <3>
limits.ephemeral-storage: "4Gi" <4>
pods: "4" # <1>
limits.cpu: "4" # <2>
limits.memory: "2Gi" # <3>
limits.ephemeral-storage: "4Gi" # <4>
scopes:
- NotTerminating <5>
----
@@ -145,10 +145,10 @@ metadata:
name: compute-resources-time-bound
spec:
hard:
pods: "2" <1>
limits.cpu: "1" <2>
limits.memory: "1Gi" <3>
limits.ephemeral-storage: "1Gi" <4>
pods: "2" # <1>
limits.cpu: "1" # <2>
limits.memory: "1Gi" # <3>
limits.ephemeral-storage: "1Gi" # <4>
scopes:
- Terminating <5>
----
@@ -169,13 +169,13 @@ metadata:
name: storage-consumption
spec:
hard:
persistentvolumeclaims: "10" <1>
requests.storage: "50Gi" <2>
gold.storageclass.storage.k8s.io/requests.storage: "10Gi" <3>
silver.storageclass.storage.k8s.io/requests.storage: "20Gi" <4>
silver.storageclass.storage.k8s.io/persistentvolumeclaims: "5" <5>
bronze.storageclass.storage.k8s.io/requests.storage: "0" <6>
bronze.storageclass.storage.k8s.io/persistentvolumeclaims: "0" <7>
persistentvolumeclaims: "10" # <1>
requests.storage: "50Gi" # <2>
gold.storageclass.storage.k8s.io/requests.storage: "10Gi" # <3>
silver.storageclass.storage.k8s.io/requests.storage: "20Gi" # <4>
silver.storageclass.storage.k8s.io/persistentvolumeclaims: "5" # <5>
bronze.storageclass.storage.k8s.io/requests.storage: "0" # <6>
bronze.storageclass.storage.k8s.io/persistentvolumeclaims: "0" # <7>
----
<1> The total number of persistent volume claims in a project.
<2> Across all persistent volume claims in a project, the sum of storage requested cannot exceed this value.
@@ -221,8 +221,8 @@ $ oc create quota <name> --hard=count/<resource>.<group>=<quota>,count/<resource
----
$ oc create quota test --hard=count/deployments.extensions=2,count/replicasets.extensions=4,count/pods=3,count/secrets=4
resourcequota "test" created
----

[source,terminal]
----
$ oc describe quota test
----

.Example output
[source,terminal]
----
Name: test
Namespace: quota
Resource Used Hard
@@ -243,23 +251,30 @@ You can also use the CLI to view quota details:

. First, get the list of quotas defined in the project. For example, for a project called `demoproject`:
+

[source,terminal]
----
$ oc get quota -n demoproject
----
+
.Example output
[source,terminal]
----
NAME AGE
besteffort 11m
compute-resources 2m
core-object-counts 29m
----


. Describe the quota you are interested in, for example the `core-object-counts` quota:
+

[source,terminal]
----
$ oc describe quota core-object-counts -n demoproject
----
+
.Example output
[source,terminal]
----
Name: core-object-counts
Namespace: demoproject
Resource Used Hard
@@ -299,6 +314,10 @@ After making any changes, restart the controller services to apply them.
[source,terminal]
----
$ master-restart api
----

[source,terminal]
----
$ master-restart controllers
----

@@ -337,7 +356,6 @@ admissionConfig:
<1> The group or resource whose consumption is limited by default.
<2> The name of the resource tracked by quota associated with the group/resource to limit by default.


In the above example, the quota system intercepts every operation that creates or updates a `PersistentVolumeClaim`. It checks what resources controlled by quota would be consumed. If there is no covering quota for those resources in the project, the request is denied. In this example, if a user creates a `PersistentVolumeClaim` that uses storage associated with the gold storage class and there is no matching quota in the project, the request is denied.
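
For illustration, a covering quota for that scenario might look like the following minimal sketch. The quota name and the limit values are assumptions chosen for this example; the key names follow the same pattern as the storage-consumption example earlier in this module:

[source,yaml]
----
apiVersion: v1
kind: ResourceQuota
metadata:
  name: gold-storage-quota # <1>
spec:
  hard:
    gold.storageclass.storage.k8s.io/requests.storage: "10Gi" # <2>
    gold.storageclass.storage.k8s.io/persistentvolumeclaims: "5" # <3>
----
<1> An illustrative name; any valid quota name can be used.
<2> Across all claims that use the gold storage class, the sum of storage requested cannot exceed this value.
<3> The total number of persistent volume claims that can use the gold storage class.

With a covering quota like this present in the project, a `PersistentVolumeClaim` that uses the gold storage class is evaluated against these limits instead of being denied outright.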

endif::[]
12 changes: 6 additions & 6 deletions modules/ai-adding-worker-nodes-to-cluster.adoc
@@ -31,14 +31,14 @@ $ export API_URL=<api_url> <1>
<1> Replace `<api_url>` with the Assisted Installer API URL, for example, `https://api.openshift.com`

. Import the {sno} cluster by running the following commands:

+
.. Set the `$OPENSHIFT_CLUSTER_ID` variable. Log in to the cluster and run the following command:
+
[source,terminal]
----
$ export OPENSHIFT_CLUSTER_ID=$(oc get clusterversion -o jsonpath='{.items[].spec.clusterID}')
----

+
.. Set the `$CLUSTER_REQUEST` variable that is used to import the cluster:
+
[source,terminal]
@@ -51,7 +51,7 @@ $ export CLUSTER_REQUEST=$(jq --null-input --arg openshift_cluster_id "$OPENSHIF
----
<1> Replace `<api_vip>` with the hostname for the cluster's API server. This can be the DNS domain for the API server or the IP address of the single node which the worker node can reach. For example, `api.compute-1.example.com`.
<2> Replace `<openshift_cluster_name>` with the plain text name for the cluster. The cluster name should match the cluster name that was set during the Day 1 cluster installation.

+
.. Import the cluster and set the `$CLUSTER_ID` variable. Run the following command:
+
[source,terminal]
@@ -61,9 +61,9 @@ $ CLUSTER_ID=$(curl "$API_URL/api/assisted-install/v2/clusters/import" -H "Autho
----

. Generate the `InfraEnv` resource for the cluster and set the `$INFRA_ENV_ID` variable by running the following commands:

+
.. Download the pull secret file from Red Hat OpenShift Cluster Manager at link:console.redhat.com/openshift/install/pull-secret[console.redhat.com].

+
.. Set the `$INFRA_ENV_REQUEST` variable:
+
[source,terminal]
@@ -83,7 +83,7 @@ export INFRA_ENV_REQUEST=$(jq --null-input \
<2> Replace `<path_to_ssh_pub_key>` with the path to the public SSH key required to access the host. If you do not set this value, you cannot access the host while in discovery mode.
<3> Replace `<infraenv_name>` with the plain text name for the `InfraEnv` resource.
<4> Replace `<iso_image_type>` with the ISO image type, either `full-iso` or `minimal-iso`.

+
.. Post the `$INFRA_ENV_REQUEST` to the link:https://api.openshift.com/?urls.primaryName=assisted-service%20service#/installer/RegisterInfraEnv[/v2/infra-envs] API and set the `$INFRA_ENV_ID` variable:
+
[source,terminal]
6 changes: 5 additions & 1 deletion modules/albo-deleting.adoc
@@ -22,6 +22,10 @@ $ oc delete subscription aws-load-balancer-operator -n aws-load-balancer-operato
$ aws iam detach-role-policy \
--role-name "<cluster-id>-alb-operator" \
--policy-arn <operator-policy-arn>
----
+
[source,terminal]
----
$ aws iam delete-role \
--role-name "<cluster-id>-alb-operator"
----
@@ -31,4 +35,4 @@ $ aws iam delete-role \
[source,terminal]
----
$ aws iam delete-policy --policy-arn <operator-policy-arn>
----
----
2 changes: 2 additions & 0 deletions modules/albo-installation.adoc
@@ -70,6 +70,7 @@ $ aws iam create-role --role-name "${CLUSTER_NAME}-alb-operator" \
Take note of the Operator role ARN in the output. This is referred to as the `$OPERATOR_ROLE_ARN` for the remainder of this process.
.. Associate the Operator role and policy:
+
[source,terminal]
----
$ aws iam attach-role-policy --role-name "${CLUSTER_NAME}-alb-operator" \
--policy-arn $OPERATOR_POLICY_ARN
@@ -160,6 +161,7 @@ Take note of the Controller role ARN in the output. This is referred to as the `

.. Associate the Controller role and policy:
+
[source,terminal]
----
$ aws iam attach-role-policy \
--role-name "${CLUSTER_NAME}-albo-controller" \
30 changes: 29 additions & 1 deletion modules/albo-prerequisites.adoc
@@ -44,10 +44,30 @@ $ oc login --token=<token> --server=<cluster_url>
[source,terminal]
----
$ export CLUSTER_NAME=$(oc get infrastructure cluster -o=jsonpath="{.status.apiServerURL}" | sed 's|^https://||' | awk -F . '{print $2}')
----
+
[source,terminal]
----
$ export REGION=$(oc get infrastructure cluster -o=jsonpath="{.status.platformStatus.aws.region}")
----
+
[source,terminal]
----
$ export OIDC_ENDPOINT=$(oc get authentication.config.openshift.io cluster -o jsonpath='{.spec.serviceAccountIssuer}' | sed 's|^https://||')
----
+
[source,terminal]
----
$ export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
----
+
[source,terminal]
----
$ export SCRATCH="/tmp/${CLUSTER_NAME}/alb-operator"
----
+
[source,terminal]
----
$ mkdir -p ${SCRATCH}
----
+
@@ -91,7 +111,15 @@ You must tag your AWS VPC resources before you install the AWS Load Balancer Ope
[source,terminal]
----
$ export VPC_ID=<vpc-id>
----
+
[source,terminal]
----
$ export PUBLIC_SUBNET_IDS="<public-subnet-a-id> <public-subnet-b-id> <public-subnet-c-id>"
----
+
[source,terminal]
----
$ export PRIVATE_SUBNET_IDS="<private-subnet-a-id> <private-subnet-b-id> <private-subnet-c-id>"
----

@@ -127,4 +155,4 @@ EOF
[source,bash]
----
bash ${SCRATCH}/tag-subnets.sh
----
----
14 changes: 9 additions & 5 deletions modules/albo-validate-install.adoc
@@ -82,6 +82,10 @@ ALB provisioning takes a few minutes. If you receive an error that says `curl: (
----
$ ALB_INGRESS=$(oc -n hello-world get ingress hello-openshift-alb \
-o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
----
+
[source,terminal]
----
$ curl "http://${ALB_INGRESS}"
----
+
@@ -127,18 +131,18 @@ NLB provisioning takes a few minutes. If you receive an error that says `curl: (
----
$ NLB=$(oc -n hello-world get service hello-openshift-nlb \
-o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
$ curl "http://${NLB}"
----
+
.Example output
[source,text]
[source,terminal]
----
Hello OpenShift!
$ curl "http://${NLB}"
----
+
Expected output shows `Hello OpenShift!`.

. You can now delete the sample application and all resources in the `hello-world` namespace.
+
[source,terminal]
----
$ oc delete project hello-world
----
----
9 changes: 9 additions & 0 deletions modules/cluster-logging-collector-log-forward-cloudwatch.adoc
@@ -106,7 +106,16 @@ To generate log data for this example, you run a `busybox` pod in a namespace ca
[source,terminal]
----
$ oc run busybox --image=busybox -- sh -c 'while true; do echo "My life is my message"; sleep 3; done'
----

[source,terminal]
----
$ oc logs -f busybox
----

.Example output
[source,terminal]
----
My life is my message
My life is my message
My life is my message