45 changes: 27 additions & 18 deletions modules/ai-adding-worker-nodes-to-cluster.adoc
@@ -26,9 +26,10 @@ You can add worker nodes to clusters using the Assisted Installer REST API.
+
[source,terminal]
----
$ export API_URL=<api_url> <1>
$ export API_URL=<api_url>
----
<1> Replace `<api_url>` with the Assisted Installer API URL, for example, `https://api.openshift.com`
+
Replace `<api_url>` with the Assisted Installer API URL, for example, `https://api.openshift.com`

. Import the {sno} cluster by running the following commands:
+
@@ -44,14 +45,17 @@ $ export OPENSHIFT_CLUSTER_ID=$(oc get clusterversion -o jsonpath='{.items[].spe
[source,terminal]
----
$ export CLUSTER_REQUEST=$(jq --null-input --arg openshift_cluster_id "$OPENSHIFT_CLUSTER_ID" '{
"api_vip_dnsname": "<api_vip>", <1>
"api_vip_dnsname": "<api_vip>",
"openshift_cluster_id": $openshift_cluster_id,
"name": "<openshift_cluster_name>" <2>
"name": "<openshift_cluster_name>"
}')
----
<1> Replace `<api_vip>` with the hostname for the cluster's API server. This can be the DNS domain for the API server or the IP address of the single node which the worker node can reach. For example, `api.compute-1.example.com`.
<2> Replace `<openshift_cluster_name>` with the plain text name for the cluster. The cluster name should match the cluster name that was set during the Day 1 cluster installation.
+
where:

`<api_vip>`:: Specifies the hostname for the cluster's API server. This can be the DNS domain for the API server or the IP address of the single node which the worker node can reach. For example, `api.compute-1.example.com`.
`<openshift_cluster_name>`:: Specifies the plain text name for the cluster. The cluster name should match the cluster name that was set during the Day 1 cluster installation.
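As a local sketch of what this request construction produces, the following fills in hypothetical values (`api.compute-1.example.com`, `mycluster`, and a fabricated cluster UUID stand in for the real placeholders) and prints the resulting JSON body, assuming `jq` is installed:

```shell
# Hypothetical values for illustration only; substitute your own.
OPENSHIFT_CLUSTER_ID="aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"
CLUSTER_REQUEST=$(jq --null-input --arg openshift_cluster_id "$OPENSHIFT_CLUSTER_ID" '{
  "api_vip_dnsname": "api.compute-1.example.com",
  "openshift_cluster_id": $openshift_cluster_id,
  "name": "mycluster"
}')
# Print the JSON payload that would be posted to the import endpoint.
echo "$CLUSTER_REQUEST"
```

The `--arg` flag binds a shell value as a jq string variable, so the cluster ID is interpolated safely without manual quoting.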

.. Import the cluster and set the `$CLUSTER_ID` variable. Run the following command:
+
[source,terminal]
@@ -69,20 +73,23 @@ $ CLUSTER_ID=$(curl "$API_URL/api/assisted-install/v2/clusters/import" -H "Autho
[source,terminal]
----
export INFRA_ENV_REQUEST=$(jq --null-input \
--slurpfile pull_secret <path_to_pull_secret_file> \//<1>
--arg ssh_pub_key "$(cat <path_to_ssh_pub_key>)" \//<2>
--slurpfile pull_secret <path_to_pull_secret_file> \
--arg ssh_pub_key "$(cat <path_to_ssh_pub_key>)" \
--arg cluster_id "$CLUSTER_ID" '{
"name": "<infraenv_name>", <3>
"name": "<infraenv_name>",
"pull_secret": $pull_secret[0] | tojson,
"cluster_id": $cluster_id,
"ssh_authorized_key": $ssh_pub_key,
"image_type": "<iso_image_type>" <4>
"image_type": "<iso_image_type>"
}')
----
<1> Replace `<path_to_pull_secret_file>` with the path to the local file containing the downloaded pull secret from Red Hat OpenShift Cluster Manager at link:console.redhat.com/openshift/install/pull-secret[console.redhat.com].
<2> Replace `<path_to_ssh_pub_key>` with the path to the public SSH key required to access the host. If you do not set this value, you cannot access the host while in discovery mode.
<3> Replace `<infraenv_name>` with the plain text name for the `InfraEnv` resource.
<4> Replace `<iso_image_type>` with the ISO image type, either `full-iso` or `minimal-iso`.
+
where:

`<path_to_pull_secret_file>`:: Specifies the path to the local file containing the downloaded pull secret from Red Hat OpenShift Cluster Manager at link:console.redhat.com/openshift/install/pull-secret[console.redhat.com].
`<path_to_ssh_pub_key>`:: Specifies the path to the public SSH key required to access the host. If you do not set this value, you cannot access the host while in discovery mode.
`<infraenv_name>`:: Specifies the plain text name for the `InfraEnv` resource.
`<iso_image_type>`:: Specifies the ISO image type, either `full-iso` or `minimal-iso`.
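To see how the `--slurpfile` and `--arg` bindings come together, this sketch uses throwaway stand-in files (a fake pull secret and SSH key) and a fabricated cluster ID in place of the real inputs; it only demonstrates the shape of the payload, not valid credentials:

```shell
# Throwaway stand-ins for <path_to_pull_secret_file> and <path_to_ssh_pub_key>.
printf '{"auths":{"registry.example.com":{"auth":"ZmFrZQ=="}}}' > /tmp/pull-secret.json
printf 'ssh-ed25519 AAAAexamplekey user@host' > /tmp/id_ed25519.pub
CLUSTER_ID="11111111-2222-3333-4444-555555555555"   # hypothetical

INFRA_ENV_REQUEST=$(jq --null-input \
  --slurpfile pull_secret /tmp/pull-secret.json \
  --arg ssh_pub_key "$(cat /tmp/id_ed25519.pub)" \
  --arg cluster_id "$CLUSTER_ID" '{
  "name": "myinfraenv",
  "pull_secret": $pull_secret[0] | tojson,
  "cluster_id": $cluster_id,
  "ssh_authorized_key": $ssh_pub_key,
  "image_type": "minimal-iso"
}')
echo "$INFRA_ENV_REQUEST"
```

Note that `--slurpfile` parses the pull secret file as JSON into an array, and `tojson` re-serializes it so the API receives the secret as an embedded JSON string rather than a nested object.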
+
.. Post the `$INFRA_ENV_REQUEST` to the link:https://api.openshift.com/?urls.primaryName=assisted-service%20service#/installer/RegisterInfraEnv[/v2/infra-envs] API and set the `$INFRA_ENV_ID` variable:
+
@@ -108,9 +115,10 @@ https://api.openshift.com/api/assisted-images/images/41b91e72-c33e-42ee-b80f-b5c
+
[source,terminal]
----
$ curl -L -s '<iso_url>' --output rhcos-live-minimal.iso <1>
$ curl -L -s '<iso_url>' --output rhcos-live-minimal.iso
----
<1> Replace `<iso_url>` with the URL for the ISO from the previous step.
+
Replace `<iso_url>` with the URL for the ISO from the previous step.

. Boot the new worker host from the downloaded `rhcos-live-minimal.iso`.

@@ -131,9 +139,10 @@ $ curl -s "$API_URL/api/assisted-install/v2/clusters/$CLUSTER_ID" -H "Authorizat
+
[source,terminal]
----
$ HOST_ID=<host_id> <1>
$ HOST_ID=<host_id>
----
<1> Replace `<host_id>` with the host ID from the previous step.
+
Replace `<host_id>` with the host ID from the previous step.
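As an offline illustration of the lookup in the previous step, the host ID can be extracted from the cluster response with `jq`; the JSON below is a fabricated sample, not real API output:

```shell
# Fabricated sample of the .hosts portion of a cluster response.
CLUSTER_JSON='{"hosts":[{"id":"aaa-111","requested_hostname":"worker-1","status":"known"}]}'
# Pull the ID of the first host; in practice, select by requested_hostname.
HOST_ID=$(echo "$CLUSTER_JSON" | jq -r '.hosts[0].id')
echo "$HOST_ID"
```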

. Check that the host is ready to install by running the following command:
+
25 changes: 15 additions & 10 deletions modules/binding-infra-node-workloads-using-taints-tolerations.adoc
@@ -76,7 +76,7 @@ spec:
----
====
+
These examples place a taint on `node1` that has the `node-role.kubernetes.io/infra` key and the `NoSchedule` taint effect. Nodes with the `NoSchedule` effect schedule only pods that tolerate the taint, but allow existing pods to remain scheduled on the node.
These examples place a taint on `node1` that has the `node-role.kubernetes.io/infra` key and the `NoSchedule` taint effect. Nodes with the `NoSchedule` effect schedule only pods that tolerate the taint, but allow existing pods to remain scheduled on the node.
+
If you added a `NoSchedule` taint to the infrastructure node, any pods that are controlled by a daemon set on that node are marked as `misscheduled`. You must either delete the pods or add a toleration to the pods as shown in the Red Hat Knowledgebase solution link:https://access.redhat.com/solutions/6592171[add toleration on `misscheduled` DNS pods]. Note that you cannot add a toleration to a daemon set object that is managed by an operator.
+
@@ -98,15 +98,20 @@ metadata:
spec:
# ...
tolerations:
- key: node-role.kubernetes.io/infra <1>
value: reserved <2>
effect: NoSchedule <3>
operator: Equal <4>
- key: node-role.kubernetes.io/infra
value: reserved
effect: NoSchedule
operator: Equal
----
<1> Specify the key that you added to the node.
<2> Specify the value of the key-value pair taint that you added to the node.
<3> Specify the effect that you added to the node.
<4> Specify the `Equal` Operator to require a taint with the key `node-role.kubernetes.io/infra` to be present on the node.
+
where:
+
--
`spec.tolerations.key`:: Specifies the key that you added to the node.
`spec.tolerations.value`:: Specifies the value of the key-value pair taint that you added to the node.
`spec.tolerations.effect`:: Specifies the effect that you added to the node.
`spec.tolerations.operator`:: Specifies the `Equal` Operator to require a taint with the key `node-role.kubernetes.io/infra` to be present on the node.
--
+
This toleration matches the taint created by the `oc adm taint` command. A pod with this toleration can be scheduled onto the infrastructure node.
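The `Equal` operator's matching rule can be sanity-checked offline: a toleration matches a taint only when the key, value, and effect all agree. This sketch compares fabricated taint and toleration objects with `jq`, mirroring what the scheduler checks:

```shell
# Fabricated taint (as applied to the node) and toleration (from the pod spec).
TAINT='{"key":"node-role.kubernetes.io/infra","value":"reserved","effect":"NoSchedule"}'
TOLERATION='{"key":"node-role.kubernetes.io/infra","operator":"Equal","value":"reserved","effect":"NoSchedule"}'
# With operator Equal, key, value, and effect must all match.
MATCH=$(jq -n --argjson t "$TAINT" --argjson tol "$TOLERATION" \
  '$t.key == $tol.key and $t.value == $tol.value and $t.effect == $tol.effect')
echo "$MATCH"
```

If the toleration instead used `operator: Exists`, the value comparison would be skipped and only the key (and effect, if set) would need to match.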
+
@@ -117,4 +122,4 @@ Moving pods for an Operator installed via OLM to an infrastructure node is not a

. Schedule the pod to the infrastructure node by using a scheduler. See the documentation for "Controlling pod placement using the scheduler" for details.

. Remove any workloads that you do not want, or that do not belong, on the new infrastructure node. See the list of workloads supported for use on infrastructure nodes in "{product-title} infrastructure components".
. Remove any workloads that you do not want, or that do not belong, on the new infrastructure node. See the list of workloads supported for use on infrastructure nodes in "{product-title} infrastructure components".
25 changes: 15 additions & 10 deletions modules/customize-certificates-replace-default-router.adoc
@@ -24,10 +24,11 @@ You can replace the default ingress certificate for all applications under the `
[source,terminal]
----
$ oc create configmap custom-ca \
--from-file=ca-bundle.crt=</path/to/example-ca.crt> \//<1>
--from-file=ca-bundle.crt=</path/to/example-ca.crt> \
-n openshift-config
----
<1> `</path/to/example-ca.crt>` is the path to the root CA certificate file on your local file system. For example, `/etc/pki/ca-trust/source/anchors`.
+
`</path/to/example-ca.crt>` is the path to the root CA certificate file on your local file system. For example, `/etc/pki/ca-trust/source/anchors`.

. Update the cluster-wide proxy configuration with the newly created config map:
+
@@ -49,25 +50,29 @@ If you change any other parameter in the `openshift-config-user-ca-bundle.crt` f
+
[source,terminal]
----
$ oc create secret tls <secret> \//<1>
--cert=</path/to/cert.crt> \//<2>
--key=</path/to/cert.key> \//<3>
$ oc create secret tls <secret> \
--cert=</path/to/cert.crt> \
--key=</path/to/cert.key> \
-n openshift-ingress
----
<1> `<secret>` is the name of the secret that will contain the certificate chain and private key.
<2> `</path/to/cert.crt>` is the path to the certificate chain on your local file system.
<3> `</path/to/cert.key>` is the path to the private key associated with this certificate.
+
where:

`<secret>`:: Specifies the name of the secret that will contain the certificate chain and private key.
`</path/to/cert.crt>`:: Specifies the path to the certificate chain on your local file system.
`</path/to/cert.key>`:: Specifies the path to the private key associated with this certificate.
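For local testing only, a throwaway key and self-signed certificate can be generated with `openssl` before creating the secret; a production cluster needs a certificate chain issued by a trusted CA. The wildcard common name below is a placeholder:

```shell
# Generate a throwaway RSA key and self-signed certificate (testing only).
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/cert.key -out /tmp/cert.crt \
  -days 1 -subj "/CN=*.apps.example.com"
# Inspect the subject to confirm the certificate was created as expected.
openssl x509 -in /tmp/cert.crt -noout -subject
```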

. Update the Ingress Controller configuration with the newly created secret:
+
[source,terminal]
----
$ oc patch ingresscontroller.operator default \
--type=merge -p \
'{"spec":{"defaultCertificate": {"name": "<secret>"}}}' \//<1>
'{"spec":{"defaultCertificate": {"name": "<secret>"}}}' \
-n openshift-ingress-operator
----
<1> Replace `<secret>` with the name used for the secret in the previous step.
+
Replace `<secret>` with the name used for the secret in the previous step.
+
[IMPORTANT]
====
81 changes: 42 additions & 39 deletions modules/installation-gcp-user-infra-shared-vpc-config-yaml.adoc
@@ -16,30 +16,30 @@ This sample YAML file is provided for reference only. You must obtain your `inst
[source,yaml]
----
apiVersion: v1
baseDomain: example.com <1>
controlPlane: <2>
hyperthreading: Enabled <3> <4>
baseDomain: example.com
controlPlane:
hyperthreading: Enabled
name: master
platform:
gcp:
type: n2-standard-4
zones:
- us-central1-a
- us-central1-c
tags: <5>
tags:
- control-plane-tag1
- control-plane-tag2
replicas: 3
compute: <2>
- hyperthreading: Enabled <3>
compute:
- hyperthreading: Enabled
name: worker
platform:
gcp:
type: n2-standard-4
zones:
- us-central1-a
- us-central1-c
tags: <5>
tags:
- compute-tag1
- compute-tag2
replicas: 0
@@ -51,62 +51,65 @@ networking:
hostPrefix: 23
machineNetwork:
- cidr: 10.0.0.0/16
networkType: OVNKubernetes <6>
networkType: OVNKubernetes
serviceNetwork:
- 172.30.0.0/16
platform:
gcp:
defaultMachinePlatform:
tags: <5>
tags:
- global-tag1
- global-tag2
projectID: openshift-production <7>
region: us-central1 <8>
projectID: openshift-production
region: us-central1
pullSecret: '{"auths": ...}'
ifndef::openshift-origin[]
fips: false <9>
sshKey: ssh-ed25519 AAAA... <10>
publish: Internal <11>
endif::openshift-origin[]
ifdef::openshift-origin[]
sshKey: ssh-ed25519 AAAA... <9>
publish: Internal <10>
fips: false
endif::openshift-origin[]
sshKey: ssh-ed25519 AAAA...
publish: Internal
----
<1> Specify the public DNS on the host project.
<2> If you do not provide these parameters and values, the installation program provides the default value.
<3> The `controlPlane` section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the `compute` section must begin with a hyphen, `-`, and the first line of the `controlPlane` section must not. Although both sections currently define a single machine pool, it is possible that future versions of {product-title} will support defining multiple compute pools during installation. Only one control plane pool is used.
<4> Whether to enable or disable simultaneous multithreading, or `hyperthreading`. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to `Disabled`. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.

where:

`baseDomain`:: Specifies the public DNS on the host project.

`controlPlane`:: Specifies the configuration for the machines that form the control plane. The `controlPlane` section is a single mapping and the first line of the `controlPlane` section must not begin with a hyphen (`-`). Only one control plane pool is used. If you do not provide parameters and values for this section, the installation program provides the default values.

`controlPlane.hyperthreading`:: Specifies whether to enable or disable simultaneous multithreading, or `hyperthreading`. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to `Disabled`. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.
+
[IMPORTANT]
====
If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger machine types, such as `n1-standard-8`, for your machines if you disable simultaneous multithreading.
====
<5> Optional: A set of network tags to apply to the control plane or compute machine sets. The `platform.gcp.defaultMachinePlatform.tags` parameter applies to both control plane and compute machines. If the `compute.platform.gcp.tags` or `controlPlane.platform.gcp.tags` parameters are set, they override the `platform.gcp.defaultMachinePlatform.tags` parameter.
<6> The cluster network plugin to install. The default value `OVNKubernetes` is the only supported value.
<7> Specify the main project where the VM instances reside.
<8> Specify the region that your VPC network is in.

`controlPlane.platform.gcp.tags`:: Specifies a set of network tags to apply to the control plane machine sets. If the `controlPlane.platform.gcp.tags` parameter is set, it overrides the `platform.gcp.defaultMachinePlatform.tags` parameter. This value is optional.

`compute`:: Specifies the configuration for the machines that comprise the compute nodes. The `compute` section is a sequence of mappings and the first line of the `compute` section must begin with a hyphen (`-`). Although this section currently defines a single machine pool, it is possible that future versions of {product-title} will support defining multiple compute pools during installation. If you do not provide parameters and values for this section, the installation program provides the default values.

`compute.platform.gcp.tags`:: Specifies a set of network tags to apply to the compute machine sets. If the `compute.platform.gcp.tags` parameter is set, it overrides the `platform.gcp.defaultMachinePlatform.tags` parameter. This value is optional.

`networking.networkType`:: Specifies the cluster network plugin to install. The default value `OVNKubernetes` is the only supported value.

`platform.gcp.defaultMachinePlatform.tags`:: Specifies a default set of network tags to apply to the control plane or compute machine sets. The `platform.gcp.defaultMachinePlatform.tags` parameter applies to both control plane and compute machines. If the `compute.platform.gcp.tags` or `controlPlane.platform.gcp.tags` parameters are set, they override the `platform.gcp.defaultMachinePlatform.tags` parameter. This value is optional.

`platform.gcp.projectID`:: Specifies the main project where the VM instances reside.

`platform.gcp.region`:: Specifies the region that your VPC network is in.

ifndef::openshift-origin[]
<9> Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the {op-system-first} machines that {product-title} runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with {op-system} instead.
`fips`:: Specifies whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the {op-system-first} machines that {product-title} runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with {op-system} instead.
+
--
include::snippets/fips-snippet.adoc[]
--
<10> You can optionally provide the `sshKey` value that you use to access the machines in your cluster.
endif::openshift-origin[]
ifdef::openshift-origin[]
<9> You can optionally provide the `sshKey` value that you use to access the machines in your cluster.
endif::openshift-origin[]

`sshKey`:: Specifies the `sshKey` value that you use to access the machines in your cluster. This value is optional.
+
[NOTE]
====
For production {product-title} clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your `ssh-agent` process uses.
====
ifndef::openshift-origin[]
<11> How to publish the user-facing endpoints of your cluster. Set `publish` to `Internal` to deploy a private cluster, which cannot be accessed from the internet. The default value is `External`.
To use a shared VPC in a cluster that uses infrastructure that you provision, you must set `publish` to `Internal`. The installation program will no longer be able to access the public DNS zone for the base domain in the host project.
endif::openshift-origin[]
ifdef::openshift-origin[]
<10> How to publish the user-facing endpoints of your cluster. Set `publish` to `Internal` to deploy a private cluster, which cannot be accessed from the internet. The default value is `External`.
To use a shared VPC in a cluster that uses infrastructure that you provision, you must set `publish` to `Internal`. The installation program will no longer be able to access the public DNS zone for the base domain in the host project.
endif::openshift-origin[]

`publish`:: Specifies how to publish the user-facing endpoints of your cluster. Set `publish` to `Internal` to deploy a private cluster, which cannot be accessed from the internet. The default value is `External`. To use a shared VPC in a cluster that uses infrastructure that you provision, you must set `publish` to `Internal`. The installation program will no longer be able to access the public DNS zone for the base domain in the host project.
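The shared-VPC requirement above is easy to verify before running the installation program. This sketch writes a minimal, hypothetical fragment of the fields discussed in this section to a temporary file and checks the `publish` setting; the values are placeholders, not a complete `install-config.yaml`:

```shell
# Minimal, hypothetical fragment of the fields discussed above.
cat > /tmp/install-config-fragment.yaml <<'EOF'
baseDomain: example.com
platform:
  gcp:
    projectID: openshift-production
    region: us-central1
publish: Internal
EOF
# A shared-VPC user-provisioned install requires publish: Internal.
grep -q '^publish: Internal$' /tmp/install-config-fragment.yaml && echo "publish is Internal"
```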