diff --git a/modules/installation-adding-registry-pull-secret.adoc b/modules/installation-adding-registry-pull-secret.adoc index b02a0a561f01..3d59b8ddcd71 100644 --- a/modules/installation-adding-registry-pull-secret.adoc +++ b/modules/installation-adding-registry-pull-secret.adoc @@ -56,6 +56,7 @@ endif::[] . Log in to your registry by using the following command: + +[source,terminal] ---- $ oc registry login --to ./pull-secret.json --registry "" --auth-basic=: ---- diff --git a/modules/installation-mirror-repository.adoc b/modules/installation-mirror-repository.adoc index b43451b078f1..b2803bdc4f24 100644 --- a/modules/installation-mirror-repository.adoc +++ b/modules/installation-mirror-repository.adoc @@ -30,38 +30,84 @@ link:https://access.redhat.com/downloads/content/290/[{product-title} downloads to determine the version of {product-title} that you want to install and determine the corresponding tag on the link:https://quay.io/repository/openshift-release-dev/ocp-release?tab=tags[Repository Tags] page. . Set the required environment variables: +.. Export the release version: + +[source,terminal] ---- -$ export OCP_RELEASE= <1> -$ export LOCAL_REGISTRY=':' <2> -$ export LOCAL_REPOSITORY='' <3> -$ export PRODUCT_REPO='openshift-release-dev' <4> -$ export LOCAL_SECRET_JSON='' <5> -$ export RELEASE_NAME="ocp-release" <6> -$ export ARCHITECTURE= <7> -$ REMOVABLE_MEDIA_PATH= <8> +$ OCP_RELEASE= ---- -<1> For ``, specify the tag that corresponds to the version of {product-title} to ++ +For ``, specify the tag that corresponds to the version of {product-title} to install, such as `4.5.4`. -<2> For ``, specify the registry domain name for your mirror + +.. Export the local registry name and host port: ++ +[source,terminal] +---- +$ LOCAL_REGISTRY=':' +---- ++ +For ``, specify the registry domain name for your mirror repository, and for ``, specify the port that it serves content on. -<3> For ``, specify the name of the repository to create in your + +.. 
Export the local repository name: ++ +[source,terminal] +---- +$ LOCAL_REPOSITORY='' +---- ++ +For ``, specify the name of the repository to create in your registry, such as `ocp4/openshift4`. -<4> The repository to mirror. For a production release, you must specify -`openshift-release-dev`. -<5> For ``, specify the absolute path to and file name of -the pull secret for your mirror registry that you created. -<6> The release mirror. For a production release, you must specify -`ocp-release`. -<7> For `server_architecture`, specify the architecture of the server, such as `x86_64`. -<8> For ``, specify the path to the directory to host the mirrored images. + +.. Export the name of the repository to mirror: ++ +[source,terminal] +---- +$ PRODUCT_REPO='openshift-release-dev' +---- ++ +For a production release, you must specify `openshift-release-dev`. + +.. Export the path to your registry pull secret: ++ +[source,terminal] +---- +$ LOCAL_SECRET_JSON='' +---- ++ +For ``, specify the absolute path to and file name of the pull secret for your mirror registry that you created. + +.. Export the release mirror: ++ +[source,terminal] +---- +$ RELEASE_NAME="ocp-release" +---- ++ +For a production release, you must specify `ocp-release`. + +.. Export the type of architecture for your server, such as `x86_64`: ++ +[source,terminal] +---- +$ ARCHITECTURE= +---- + +.. Export the path to the directory to host the mirrored images: ++ +[source,terminal] +---- +$ REMOVABLE_MEDIA_PATH= +---- . Mirror the version images to the internal container registry: ** If your mirror host does not have internet access, take the following actions: ... Connect the removable media to a system that is connected to the internet. ...
Review the images and configuration manifests to mirror: + +[source,terminal] ---- $ oc adm -a ${LOCAL_SECRET_JSON} release mirror \ --from=quay.io/${PRODUCT_REPO}/${RELEASE_NAME}:${OCP_RELEASE}-${ARCHITECTURE} \ @@ -72,11 +118,13 @@ $ oc adm -a ${LOCAL_SECRET_JSON} release mirror \ command. The information about your mirrors is unique to your mirrored repository, and you must add the `imageContentSources` section to the `install-config.yaml` file during installation. ... Mirror the images to a directory on the removable media: + +[source,terminal] ---- $ oc adm release mirror -a ${LOCAL_SECRET_JSON} --to-dir=${REMOVABLE_MEDIA_PATH}/mirror quay.io/${PRODUCT_REPO}/${RELEASE_NAME}:${OCP_RELEASE}-${ARCHITECTURE} ---- ... Take the media to the restricted network environment and upload the images to the local container registry. + +[source,terminal] ---- $ oc image mirror -a ${LOCAL_SECRET_JSON} --from-dir=${REMOVABLE_MEDIA_PATH}/mirror 'file://openshift/release:${OCP_RELEASE}*' ${LOCAL_REGISTRY}/${LOCAL_REPOSITORY} ---- @@ -84,6 +132,7 @@ $ oc image mirror -a ${LOCAL_SECRET_JSON} --from-dir=${REMOVABLE_MEDIA_PATH}/mir ** If the local container registry is connected to the mirror host, take the following actions: ... Directly push the release images to the local registry by using the following command: + +[source,terminal] ---- $ oc adm -a ${LOCAL_SECRET_JSON} release mirror \ --from=quay.io/${PRODUCT_REPO}/${RELEASE_NAME}:${OCP_RELEASE}-${ARCHITECTURE} \ @@ -100,6 +149,7 @@ command. The information about your mirrors is unique to your mirrored repositor .
To create the installation program that is based on the content that you mirrored, extract it and pin it to the release: + +[source,terminal] ---- $ oc adm -a ${LOCAL_SECRET_JSON} release extract --command=openshift-install "${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}:${OCP_RELEASE}-${ARCHITECTURE}" ---- diff --git a/modules/installation-preparing-restricted-cluster-to-gather-support-data.adoc b/modules/installation-preparing-restricted-cluster-to-gather-support-data.adoc index b61857f2b324..4c1794f0fea6 100644 --- a/modules/installation-preparing-restricted-cluster-to-gather-support-data.adoc +++ b/modules/installation-preparing-restricted-cluster-to-gather-support-data.adoc @@ -11,6 +11,7 @@ Clusters using a restricted network must imporat the default must-gather image i . Import the default must-gather image from your installation payload: + +[source,terminal] ---- $ oc import-image is/must-gather -n openshift ---- diff --git a/modules/installation-restricted-network-samples.adoc b/modules/installation-restricted-network-samples.adoc index 8501a9551912..37a39b8ff757 100644 --- a/modules/installation-restricted-network-samples.adoc +++ b/modules/installation-restricted-network-samples.adoc @@ -57,6 +57,7 @@ not addressed in this procedure. . Access the images of a specific imagestream to mirror, for example: + +[source,terminal] ---- $ oc get is -n openshift -o json | jq .spec.tags[].from.name | grep registry.redhat.io ---- @@ -69,22 +70,31 @@ ifdef::configsamplesoperator[] into your defined preferred registry, for example: endif::[] + +[source,terminal] ---- $ oc image mirror registry.redhat.io/rhscl/ruby-25-rhel7:latest ${MIRROR_ADDR}/rhscl/ruby-25-rhel7:latest ---- + +. Create the cluster’s image configuration object: + +[source,terminal] +---- +$ oc create configmap registry-config --from-file=${MIRROR_ADDR_HOSTNAME}..5000=$path/ca.crt -n openshift-config +---- + . 
Add the required trusted CAs for the mirror in the cluster’s image configuration object: + +[source,terminal] ---- -$ oc create configmap registry-config --from-file=${MIRROR_ADDR_HOSTNAME}..5000=$path/ca.crt -n openshift-config $ oc patch image.config.openshift.io/cluster --patch '{"spec":{"additionalTrustedCA":{"name":"registry-config"}}}' --type=merge ---- -+ + . Update the `samplesRegistry` field in the Samples Operator configuration object to contain the `hostname` portion of the mirror location defined in the mirror configuration: + +[source,terminal] ---- $ oc get configs.samples.operator.openshift.io -n openshift-cluster-samples-operator ---- diff --git a/modules/installation-special-config-crony.adoc b/modules/installation-special-config-crony.adoc index 5aed5e973d69..2d2377a765e3 100644 --- a/modules/installation-special-config-crony.adoc +++ b/modules/installation-special-config-crony.adoc @@ -13,6 +13,7 @@ to your nodes as a MachineConfig. . Create the contents of the `chrony.conf` file and encode it as base64. For example: + +[source,terminal] ---- $ cat << EOF | base64 server clock.redhat.com iburst @@ -21,7 +22,11 @@ $ cat << EOF | base64 rtcsync logdir /var/log/chrony EOF - +---- ++ +.Example output +[source,terminal] +---- ICAgIHNlcnZlciBjbG9jay5yZWRoYXQuY29tIGlidXJzdAogICAgZHJpZnRmaWxlIC92YXIvbGli L2Nocm9ueS9kcmlmdAogICAgbWFrZXN0ZXAgMS4wIDMKICAgIHJ0Y3N5bmMKICAgIGxvZ2RpciAv dmFyL2xvZy9jaHJvbnkK @@ -31,6 +36,7 @@ dmFyL2xvZy9jaHJvbnkK This example adds the file to `master` nodes. You can change it to `worker` or make an additional MachineConfig for the `worker` role: + +[source,terminal] ---- $ cat << EOF > ./99-masters-chrony-configuration.yaml apiVersion: machineconfiguration.openshift.io/v1 @@ -68,6 +74,7 @@ directory, then continue to create the cluster. . 
If the cluster is already running, apply the file as follows: + +[source,terminal] ---- - $ oc apply -f ./masters-chrony-configuration.yaml + $ oc apply -f ./99-masters-chrony-configuration.yaml ---- diff --git a/modules/installation-special-config-encrypt-disk-tang.adoc b/modules/installation-special-config-encrypt-disk-tang.adoc index 203e37774246..9ea51570280b 100644 --- a/modules/installation-special-config-encrypt-disk-tang.adoc +++ b/modules/installation-special-config-encrypt-disk-tang.adoc @@ -19,25 +19,43 @@ If you miss this step, the second boot will fail. For example, to configure DHCP networking, identify `ip=dhcp` or set static networking when you add parameters to the kernel command line. -. Generate the thumbprint. Install the clevis package, it is not already -installed, and generate a thumbprint -from the Tang server. Replace the value of `url` with the Tang server URL: +. Install the clevis package, if it is not already installed: + +[source,terminal] ---- $ sudo yum install clevis -y +---- + +. Generate a thumbprint from the Tang server. + +.. In the following command, replace the value of `url` with the Tang server URL: ++ +[source,terminal] +---- $ echo nifty random wordwords \ | clevis-encrypt-tang \ '{"url":"https://tang.example.org"}' - +---- ++ +.Example output +[source,terminal] +---- The advertisement contains the following signing keys: PLjNyRdGw03zlRoGjQYMahSZGu9 +---- -Do you wish to trust these keys? [ynYN] y +.. When the `Do you wish to trust these keys? [ynYN]` prompt is displayed, type `Y`. The thumbprint is then displayed: ++ +.Example output +[source,terminal] +---- eyJhbmc3SlRyMXpPenc3ajhEQ01tZVJiTi1oM... ---- + .
Create a Base64 encoded file, replacing the URL of the Tang server (`url`) and thumbprint (`thp`) you just generated: + +[source,terminal] ---- $ (cat < ./99-openshift-worker-tang-encryption.yaml apiVersion: machineconfiguration.openshift.io/v1 @@ -75,7 +99,9 @@ spec: EOF ---- +** For master nodes, use the following command: + +[source,terminal] ---- $ cat << EOF > ./99-openshift-master-encryption.yaml apiVersion: machineconfiguration.openshift.io/v1 @@ -88,7 +114,7 @@ spec: config: ignition: version: 2.2.0 - storage: + storage: files: - contents: source: data:text/plain;base64,e30K diff --git a/modules/installation-special-config-encrypt-disk-tpm2.adoc b/modules/installation-special-config-encrypt-disk-tpm2.adoc index 1e732546985c..6979228c2420 100644 --- a/modules/installation-special-config-encrypt-disk-tpm2.adoc +++ b/modules/installation-special-config-encrypt-disk-tpm2.adoc @@ -18,8 +18,9 @@ This is required on most Dell systems. Check the manual for your computer. $ ./openshift-install create manifests --dir= ---- -. In the `openshift` directory, create a master and/or worker file to encrypt -disks for those nodes. Here are examples of those two files: +. In the `openshift` directory, create master or worker files to encrypt +disks for those nodes. +** To create a worker file, run the following command: + [source,terminal] ---- @@ -43,7 +44,7 @@ spec: path: /etc/clevis.json EOF ---- - +** To create a master file, run the following command: + [source,terminal] ---- diff --git a/modules/installation-special-config-kmod.adoc b/modules/installation-special-config-kmod.adoc index 407a05a2ef14..be502b5c2216 100644 --- a/modules/installation-special-config-kmod.adoc +++ b/modules/installation-special-config-kmod.adoc @@ -50,34 +50,63 @@ that on a RHEL 8 system, do the following: .Procedure -. Get a RHEL 8 system, then register and subscribe it: +. 
Register a RHEL 8 system: + +[source,terminal] ---- # subscription-manager register -Username: yourname -Password: *************** +---- + +. Attach a subscription to the RHEL 8 system: ++ +[source,terminal] +---- # subscription-manager attach --auto ---- -. Install software needed to build the software and container: +. Install software that is required to build the software and container: + +[source,terminal] ---- # yum install podman make git -y ---- -. Clone the kmod-via-containers repository: +. Clone the `kmods-via-containers` repository: +.. Create a folder for the repository: + +[source,terminal] ---- $ mkdir kmods; cd kmods +---- + +.. Clone the repository: ++ +[source,terminal] +---- $ git clone https://github.com/kmods-via-containers/kmods-via-containers ---- . Install a KVC framework instance on your RHEL 8 build host to test the module. -This adds a kmods-via-container systemd service and loads it: +This adds a `kmods-via-containers` systemd service and loads it: + +.. Change to the `kmods-via-containers` directory: + +[source,terminal] ---- $ cd kmods-via-containers/ +---- + +.. Install the KVC framework instance: ++ +[source,terminal] +---- $ sudo make install +---- + +.. Reload the systemd manager configuration: ++ +[source,terminal] +---- $ sudo systemctl daemon-reload ---- @@ -87,19 +116,31 @@ have control over, but is supplied by others. You will need content similar to the content shown in the `kvc-simple-kmod` example that can be cloned to your system as follows: + +[source,terminal] ---- -$ cd .. -$ git clone https://github.com/kmods-via-containers/kvc-simple-kmod +$ cd .. ; git clone https://github.com/kmods-via-containers/kvc-simple-kmod ---- -. Edit the configuration file, `simple-kmod.conf`, in his example, and -change the name of the Dockerfile to `Dockerfile.rhel` so the file appears as -shown here: +. Edit the `simple-kmod.conf` configuration file in this example, and +change the name of the Dockerfile to `Dockerfile.rhel`: + +..
Change to the `kvc-simple-kmod` directory: + +[source,terminal] ---- $ cd kvc-simple-kmod -$ cat simple-kmod.conf +---- +.. Review the configuration file and confirm that `KMOD_CONTAINER_BUILD_FILE` is set to `Dockerfile.rhel`: ++ +[source,terminal] +---- +$ cat simple-kmod.conf +---- ++ +.Example output +[source,terminal] +---- KMOD_CONTAINER_BUILD_CONTEXT="https://github.com/kmods-via-containers/kvc-simple-kmod.git" KMOD_CONTAINER_BUILD_FILE=Dockerfile.rhel KMOD_SOFTWARE_VERSION=dd1a7d4 @@ -107,18 +148,45 @@ KMOD_NAMES="simple-kmod simple-procfs-kmod" ---- . Create an instance of `kmods-via-containers@.service` for your kernel module, -`simple-kmod` in this example, and enable it: +`simple-kmod` in this example: + +[source,terminal] ---- $ sudo make install +---- + +. Build the kernel module and its container image for your kernel version: ++ +[source,terminal] +---- $ sudo kmods-via-containers build simple-kmod $(uname -r) ---- -. Enable and start the systemd service, then check the status: + +. Enable and start the systemd service: +.. Enable the service: + +[source,terminal] ---- $ sudo systemctl enable kmods-via-containers@simple-kmod.service +---- + +.. Start the service: ++ +[source,terminal] +---- $ sudo systemctl start kmods-via-containers@simple-kmod.service +---- + +.. Review the service status: ++ +[source,terminal] +---- $ sudo systemctl status kmods-via-containers@simple-kmod.service +---- ++ +.Example output +[source,terminal] +---- ● kmods-via-containers@simple-kmod.service - Kmods Via Containers - simple-kmod Loaded: loaded (/etc/systemd/system/kmods-via-containers@.service; enabled; vendor preset: disabled) @@ -127,31 +195,55 @@ $ sudo systemctl status kmods-via-containers@simple-kmod.service . To confirm that the kernel modules are loaded, use the `lsmod` command to list the modules: + +[source,terminal] ---- $ lsmod | grep simple_ +---- ++ +.Example output +[source,terminal] +---- simple_procfs_kmod 16384 0 simple_kmod 16384 0 ---- -. The simple-kmod example has a few other ways to test that it is working.
-Look for a "Hello world" message in the kernel ring buffer with `dmesg`: +. Optional: Use other methods to check that the `simple-kmod` example is working: +** Look for a "Hello world" message in the kernel ring buffer with `dmesg`: + +[source,terminal] ---- $ dmesg | grep 'Hello world' -[ 6420.761332] Hello world from simple_kmod. ---- + -Check the value of `simple-procfs-kmod` in `/proc`: +.Example output +[source,terminal] +---- +[ 6420.761332] Hello world from simple_kmod. +---- + +** Check the value of `simple-procfs-kmod` in `/proc`: + +[source,terminal] ---- $ sudo cat /proc/simple-procfs-kmod -simple-procfs-kmod number = 0 ---- + -Run the spkut command to get more information from the module: +.Example output +[source,terminal] +---- +simple-procfs-kmod number = 0 +---- + +** Run the `spkut` command to get more information from the module: + +[source,terminal] ---- $ sudo spkut 44 +---- ++ +.Example output +[source,terminal] +---- KVC: wrapper simple-kmod for 4.18.0-147.3.1.el8_1.x86_64 Running userspace wrapper using the kernel module container... + podman run -i --rm --privileged @@ -211,25 +303,39 @@ and makes sure that the `kmods-via-containers@simple-kmod.service` gets started on boot: ==== -. Get a RHEL 8 system, then register and subscribe it: +. Register a RHEL 8 system: + +[source,terminal] ---- # subscription-manager register -Username: yourname -Password: *************** +---- + +. Attach a subscription to the RHEL 8 system: ++ +[source,terminal] +---- # subscription-manager attach --auto ---- . Install software needed to build the software: + +[source,terminal] ---- # yum install podman make git -y ---- . Create an Ignition config file that creates a systemd unit file: +.. Create a directory to host the Ignition config file: + +[source,terminal] ---- $ mkdir kmods; cd kmods +---- + +..
Create the Ignition config file that creates a systemd unit file: ++ +[source,terminal] +---- $ cat <<EOF > ./baseconfig.ign { "ignition": { "version": "2.2.0" }, @@ -262,8 +368,9 @@ to use the file during `openshift-install`. The public SSH key is not needed if you create the MachineConfig via the MCO. ==== -. Create a base MCO YAML snippet that looks like: - +. Create a base MCO YAML snippet that uses the following configuration: ++ +[source,terminal] ---- $ cat <<EOF > mc-base.yaml apiVersion: machineconfiguration.openshift.io/v1 @@ -286,9 +393,18 @@ for the two types of deployments. ==== . Get the `kmods-via-containers` software: + +.. Clone the `kmods-via-containers` repository: + +[source,terminal] ---- $ git clone https://github.com/kmods-via-containers/kmods-via-containers +---- + +.. Clone the `kvc-simple-kmod` repository: ++ +[source,terminal] +---- $ git clone https://github.com/kmods-via-containers/kvc-simple-kmod ---- @@ -296,20 +412,47 @@ $ git clone https://github.com/kmods-via-containers/kvc-simple-kmod . Create a fakeroot directory and populate it with files that you want to deliver via Ignition, using the repositories cloned earlier: + +.. Create the directory: + +[source,terminal] ---- $ FAKEROOT=$(mktemp -d) +---- + +.. Change to the `kmods-via-containers` directory: ++ +[source,terminal] +---- $ cd kmods-via-containers +---- + +.. Install the KVC framework instance: ++ +[source,terminal] +---- $ make install DESTDIR=${FAKEROOT}/usr/local CONFDIR=${FAKEROOT}/etc/ +---- + +.. Change to the `kvc-simple-kmod` directory: ++ +[source,terminal] +---- $ cd ../kvc-simple-kmod +---- + +.. Create the instance: ++ +[source,terminal] +---- $ make install DESTDIR=${FAKEROOT}/usr/local CONFDIR=${FAKEROOT}/etc/ ---- . Get a tool called `filetranspiler` and dependent software: + +[source,terminal] ---- -$ cd .. -$ sudo yum install -y python3 +$ cd ..
; sudo yum install -y python3 git clone https://github.com/ashcrow/filetranspiler.git ---- @@ -317,6 +460,7 @@ git clone https://github.com/ashcrow/filetranspiler.git and have it include the base Ignition config, base MachineConfig, and the fakeroot directory with files you would like to deliver: + +[source,terminal] ---- $ ./filetranspiler/filetranspile -i ./baseconfig.ign \ -f ${FAKEROOT} --format=yaml --dereference-symlinks \ @@ -326,6 +470,7 @@ $ ./filetranspiler/filetranspile -i ./baseconfig.ign \ . If the cluster is not up yet, generate manifest files and add this file to the `openshift` directory. If the cluster is already running, apply the file as follows: + +[source,terminal] ---- $ oc create -f 99-simple-kmod.yaml ---- @@ -337,8 +482,14 @@ service and the kernel modules will be loaded. (using `oc debug node/`, then `chroot /host`). To list the modules, use the `lsmod` command: + +[source,terminal] ---- $ lsmod | grep simple_ +---- ++ +.Example output +[source,terminal] +---- simple_procfs_kmod 16384 0 simple_kmod 16384 0 ---- diff --git a/modules/manually-gathering-logs-with-ssh.adoc b/modules/manually-gathering-logs-with-ssh.adoc index 3acb0cb70f23..aebf3eca474f 100644 --- a/modules/manually-gathering-logs-with-ssh.adoc +++ b/modules/manually-gathering-logs-with-ssh.adoc @@ -17,6 +17,7 @@ methods do not work. . Collect the `bootkube.service` service logs from the bootstrap host using the `journalctl` command by running: + +[source,terminal] ---- $ journalctl -b -f -u bootkube.service ---- @@ -24,6 +25,7 @@ $ journalctl -b -f -u bootkube.service . Collect the bootstrap host's container logs using the Podman logs. This is shown as a loop to get all of the container logs from the host: + +[source,terminal] ---- $ for pod in $(sudo podman ps -a -q); do sudo podman logs $pod; done ---- @@ -31,6 +33,7 @@ $ for pod in $(sudo podman ps -a -q); do sudo podman logs $pod; done . 
Alternatively, collect the host's container logs using the `tail` command by running: + +[source,terminal] ---- # tail -f /var/lib/containers/storage/overlay-containers/*/userdata/ctr.log ---- @@ -38,6 +41,7 @@ running: . Collect the `kubelet.service` and `crio.service` service logs from the master and worker hosts using the `journalctl` command by running: + +[source,terminal] ---- $ journalctl -b -f -u kubelet.service -u crio.service ---- @@ -45,6 +49,7 @@ $ journalctl -b -f -u kubelet.service -u crio.service . Collect the master and worker host container logs using the `tail` command by running: + +[source,terminal] ---- $ sudo tail -f /var/log/containers/* ---- diff --git a/modules/manually-gathering-logs-without-ssh.adoc b/modules/manually-gathering-logs-without-ssh.adoc index 3db15d73afb1..2e40bed32de0 100644 --- a/modules/manually-gathering-logs-without-ssh.adoc +++ b/modules/manually-gathering-logs-without-ssh.adoc @@ -21,12 +21,14 @@ to investigate what is happening on your host. . Access `journald` unit logs under `/var/log` by running: + +[source,terminal] ---- $ oc adm node-logs --role=master -u kubelet ---- . Access host file paths under `/var/log` by running: + +[source,terminal] ---- $ oc adm node-logs --role=master --path=openshift-apiserver ---- diff --git a/modules/private-clusters-setting-api-private.adoc b/modules/private-clusters-setting-api-private.adoc index 4d615edfc330..1917c06ef285 100644 --- a/modules/private-clusters-setting-api-private.adoc +++ b/modules/private-clusters-setting-api-private.adoc @@ -24,8 +24,14 @@ After you deploy a cluster to Amazon Web Services (AWS) or Microsoft Azure, you . 
From your terminal, list the cluster machines: + +[source,terminal] ---- $ oc get machine -n openshift-machine-api +---- ++ +.Example output +[source,terminal] +---- NAME STATE TYPE REGION ZONE AGE lk4pj-master-0 running m4.xlarge us-east-1 us-east-1a 17m lk4pj-master-1 running m4.xlarge us-east-1 us-east-1b 17m @@ -40,6 +46,7 @@ You modify the control plane machines, which contain `master` in the name, in th . Remove the external load balancer from each control plane machine. .. Edit a `master` Machine object to remove the reference to the external load balancer. + +[source,terminal] ---- $ oc edit machines -n openshift-machine-api <1> ---- diff --git a/modules/private-clusters-setting-dns-private.adoc b/modules/private-clusters-setting-dns-private.adoc index d5b9b9d1d6ad..b31207e42d92 100644 --- a/modules/private-clusters-setting-dns-private.adoc +++ b/modules/private-clusters-setting-dns-private.adoc @@ -11,8 +11,14 @@ After you deploy a cluster, you can modify its DNS to use only a private zone. . Review the DNS custom resource for your cluster: + +[source,terminal] ---- $ oc get dnses.config.openshift.io/cluster -o yaml +---- ++ +.Example output +[source,yaml] +---- apiVersion: config.openshift.io/v1 kind: DNS metadata: @@ -37,6 +43,7 @@ Note that the `spec` section contains both a private and a public zone. . Patch the DNS custom resource to remove the public zone: + +[source,terminal] ---- $ oc patch dnses.config.openshift.io/cluster --type=merge --patch='{"spec": {"publicZone": null}}' dns.config.openshift.io/cluster patched @@ -51,8 +58,14 @@ DNS records for the existing Ingress objects are not modified when you remove th . 
Optional: Review the DNS custom resource for your cluster and confirm that the public zone was removed: + +[source,terminal] ---- $ oc get dnses.config.openshift.io/cluster -o yaml +---- ++ +.Example output +[source,yaml] +---- apiVersion: config.openshift.io/v1 kind: DNS metadata: diff --git a/modules/private-clusters-setting-ingress-private.adoc b/modules/private-clusters-setting-ingress-private.adoc index a44d89aad0b7..38f3fd129142 100644 --- a/modules/private-clusters-setting-ingress-private.adoc +++ b/modules/private-clusters-setting-ingress-private.adoc @@ -11,6 +11,7 @@ After you deploy a cluster, you can modify its Ingress Controller to use only a . Modify the default Ingress Controller to use only an internal endpoint: + +[source,terminal] ---- $ oc replace --force --wait --filename - <
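Several of the modules in this diff pipe a configuration file through `base64` before embedding it in a MachineConfig, as the chrony module does with `chrony.conf`. The encoding step can be sanity-checked locally before the payload is pasted into YAML; this is a sketch only, not part of the documented procedure, and it assumes a POSIX shell with GNU coreutils `base64` available:

```shell
# Build a minimal chrony.conf like the one in the chrony MachineConfig module.
conf='server clock.redhat.com iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
logdir /var/log/chrony'

# Encode it the way the module does; -w0 disables line wrapping
# (GNU base64 flag -- omit it on platforms that lack it).
encoded=$(printf '%s\n' "$conf" | base64 -w0)

# Decode it again; command substitution strips the trailing newline,
# so the round trip should reproduce the original text exactly.
decoded=$(printf '%s' "$encoded" | base64 -d)

[ "$decoded" = "$conf" ] && echo "payload round-trips cleanly"
```

Running the decode step against the base64 block in a drafted MachineConfig is a quick way to catch truncated or re-wrapped payloads before the file is applied to a cluster.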