From 782b2c5ee4e4d4a38117a0ede927f82d2b015e4f Mon Sep 17 00:00:00 2001 From: dfitzmau Date: Mon, 28 Jul 2025 10:49:29 +0100 Subject: [PATCH] OSDOCS-15470-security: Removed unnecessary code blocks for net security docs --- .../ipi/installing-aws-localzone.adoc | 12 +++---- ...ng-end-to-end-tests-junit-test-output.adoc | 4 +-- modules/nw-egressnetworkpolicy-create.adoc | 13 +------- ...licy-allow-application-all-namespaces.adoc | 17 +++------- ...llow-application-particular-namespace.adoc | 24 +++----------- ...-networkpolicy-allow-external-clients.adoc | 14 ++------- modules/nw-networkpolicy-audit-configure.adoc | 19 +++--------- modules/nw-networkpolicy-audit-disable.adoc | 8 ++--- modules/nw-networkpolicy-audit-enable.adoc | 8 ++--- modules/nw-networkpolicy-create-cli.adoc | 31 +++++++------------ modules/nw-networkpolicy-delete-cli.adoc | 15 ++------- .../nw-networkpolicy-deny-all-allowed.adoc | 13 ++------ .../nw-networkpolicy-project-defaults.adoc | 13 +++++--- modules/nw-ovn-ipsec-enable.adoc | 15 +++++---- modules/nw-ovn-ipsec-north-south-enable.adoc | 26 +++++----------- 15 files changed, 67 insertions(+), 165 deletions(-) diff --git a/installing/installing_aws/ipi/installing-aws-localzone.adoc b/installing/installing_aws/ipi/installing-aws-localzone.adoc index c176cb3db7b9..cd7c6f29ae40 100644 --- a/installing/installing_aws/ipi/installing-aws-localzone.adoc +++ b/installing/installing_aws/ipi/installing-aws-localzone.adoc @@ -21,13 +21,13 @@ AWS {zone-type} is an infrastructure that place Cloud Resources close to metropo + [WARNING] ==== -If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-term credentials. To generate appropriate keys, see link:https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html[Managing Access Keys for IAM Users] in the AWS documentation. You can supply the keys when you run the installation program. +If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multifactor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-term credentials. To generate appropriate keys, see link:https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html[Managing Access Keys for IAM Users] in the AWS documentation. You can supply the keys when you run the installation program. ==== * You downloaded the AWS CLI and installed it on your computer. See link:https://docs.aws.amazon.com/cli/latest/userguide/install-bundle.html[Install the AWS CLI Using the Bundled Installer (Linux, macOS, or UNIX)] in the AWS documentation. * If you use a firewall, you xref:../../../installing/install_config/configuring-firewall.adoc#configuring-firewall[configured it to allow the sites] that your cluster must access. * You noted the region and supported link:https://aws.amazon.com/about-aws/global-infrastructure/localzones/locations[AWS Local Zones locations] to create the network resources in. * You read the link:https://aws.amazon.com/about-aws/global-infrastructure/localzones/features/[AWS Local Zones features] in the AWS documentation.
-* You added permissions for creating network resources that support AWS Local Zones to the Identity and Access Management (IAM) user or role. The following example enables a zone group that can provide a user or role access for creating network network resources that support AWS {zone-type}. +* You added permissions for creating network resources that support AWS Local Zones to the Identity and Access Management (IAM) user or role. The following example enables a zone group that can give a user or role access for creating network resources that support AWS {zone-type}. + .Example of an additional IAM policy with the `ec2:ModifyAvailabilityZoneGroup` permission attached to an IAM user or role. + @@ -137,7 +137,7 @@ include::modules/install-creating-install-config-aws-edge-zones.adoc[leveloffset [id="creating-aws-local-zone-environment-existing_{context}"] == Installing a cluster in an existing VPC that has Local Zone subnets -You can install a cluster into an existing Amazon Virtual Private Cloud (VPC) on Amazon Web Services (AWS). The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, modify parameters in the `install-config.yaml` file before you install the cluster. +You can install a cluster into an existing Amazon Virtual Private Cloud (VPC) on Amazon Web Services (AWS). The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, change parameters in the `install-config.yaml` file before you install the cluster. Installing a cluster on AWS into an existing VPC requires extending compute nodes to the edge of the Cloud Infrastructure by using AWS {zone-type}. @@ -145,14 +145,14 @@ Local Zone subnets extend regular compute nodes to edge networks. Each edge comp [NOTE] ==== -If you want to create private subnets, you must either modify the provided CloudFormation template or create your own template. +If you want to create private subnets, you must either change the provided CloudFormation template or create your own template. ==== -You can use a provided CloudFormation template to create network resources. Additionally, you can modify a template to customize your infrastructure or use the information that they contain to create AWS resources according to your company's policies. +You can use a provided CloudFormation template to create network resources. Additionally, you can change a template to customize your infrastructure or use the information that it contains to create AWS resources according to your company's policies. [IMPORTANT] ==== -The steps for performing an installer-provisioned infrastructure installation are provided for example purposes only. Installing a cluster in an existing VPC requires that you have knowledge of the cloud provider and the installation process of {product-title}. You can use a CloudFormation template to assist you with completing these steps or to help model your own cluster installation. Instead of using the CloudFormation template to create resources, you can decide to use other methods for generating these resources. +The documentation provides the steps for performing an installer-provisioned infrastructure installation for example purposes only. Installing a cluster in an existing VPC requires that you have knowledge of the cloud provider and the installation process of {product-title}.
You can use a CloudFormation template to assist you with completing these steps or to help model your own cluster installation. Instead of using the CloudFormation template to create resources, you can decide to use other methods for generating these resources. ==== // Creating a VPC in AWS diff --git a/modules/cnf-performing-end-to-end-tests-junit-test-output.adoc b/modules/cnf-performing-end-to-end-tests-junit-test-output.adoc index 95e08dcc2bac..4feafef5b5bd 100644 --- a/modules/cnf-performing-end-to-end-tests-junit-test-output.adoc +++ b/modules/cnf-performing-end-to-end-tests-junit-test-output.adoc @@ -27,11 +27,11 @@ You must create the `junit` folder before running this command. ---- $ podman run -v $(pwd)/:/kubeconfig:Z -v $(pwd)/junit:/junit \ -e KUBECONFIG=/kubeconfig/kubeconfig registry.redhat.io/openshift4/cnf-tests-rhel9:v{product-version} \ -/usr/bin/test-run.sh --ginkgo.junit-report junit/.xml --ginkgo.v +/usr/bin/test-run.sh --ginkgo.junit-report junit/.xml --ginkgo.v ---- + where: + -- -`junit` :: Is the folder where the junit report is stored. +`file_name` :: The name of the XML report file. -- diff --git a/modules/nw-egressnetworkpolicy-create.adoc b/modules/nw-egressnetworkpolicy-create.adoc index 1106c2679ee4..93ea6e2023e6 100644 --- a/modules/nw-egressnetworkpolicy-create.adoc +++ b/modules/nw-egressnetworkpolicy-create.adoc @@ -44,18 +44,7 @@ policy rules. $ oc create -f .yaml -n ---- + -In the following example, a new {kind} object is created in a project named `project1`: -+ -[source,terminal] ----- -$ oc create -f default.yaml -n project1 ----- -+ -.Example output -[source,terminal,subs="attributes"] ----- -{obj} created ----- +Successful output lists the {obj} name and the `created` status. . Optional: Save the `.yaml` file so that you can make changes later. diff --git a/modules/nw-networkpolicy-allow-application-all-namespaces.adoc b/modules/nw-networkpolicy-allow-application-all-namespaces.adoc index 1539b403af6d..fed51f9741e2 100644 --- a/modules/nw-networkpolicy-allow-application-all-namespaces.adoc +++ b/modules/nw-networkpolicy-allow-application-all-namespaces.adoc @@ -31,7 +31,7 @@ ifndef::microshift[] endif::microshift[] * You installed the OpenShift CLI (`oc`). ifndef::microshift[] -* You are logged in to the cluster with a user with `{role}` privileges. +* You logged in to the cluster with a user with `{role}` privileges. endif::microshift[] * You are working in the namespace that the {name} policy applies to. @@ -71,7 +71,7 @@ spec: + [NOTE] ==== -By default, if you omit specifying a `namespaceSelector` it does not select any namespaces, which means the policy allows traffic only from the namespace the network policy is deployed to. +By default, if you do not specify a `namespaceSelector` parameter in the policy object, no namespaces get selected. This means the policy allows traffic only from the namespace where you deployed the network policy. ==== . Apply the policy by entering the following command: @@ -81,16 +81,7 @@ By default, if you omit specifying a `namespaceSelector` it does not select any $ oc apply -f web-allow-all-namespaces.yaml ---- + -.Example output -[source,terminal] ----- -ifndef::multi[] -networkpolicy.networking.k8s.io/web-allow-all-namespaces created -endif::multi[] -ifdef::multi[] -multinetworkpolicy.k8s.cni.cncf.io/web-allow-all-namespaces created -endif::multi[] ----- +Successful output lists the name of the policy object and the `created` status.
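The full `web-allow-all-namespaces` manifest sits outside the changed hunks of this patch, so only its `spec` fragment is visible above. For reference only, a minimal sketch of such a policy might look like the following; the `default` namespace and the `app=web` pod label are assumptions taken from the verification steps in this module, and the empty `namespaceSelector: {}` is what selects every namespace, as the NOTE above describes:

[source,yaml]
----
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-allow-all-namespaces
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: web                  # protect only the pods that carry this label
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector: {}     # an empty selector matches every namespace in the cluster
----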
.Verification @@ -108,7 +99,7 @@ $ oc run web --namespace=default --image=nginx --labels="app=web" --expose --por $ oc run test-$RANDOM --namespace=secondary --rm -i -t --image=alpine -- sh ---- -. Run the following command in the shell and observe that the request is allowed: +. Run the following command in the shell and observe that the service allows the request: + [source,terminal] ---- diff --git a/modules/nw-networkpolicy-allow-application-particular-namespace.adoc b/modules/nw-networkpolicy-allow-application-particular-namespace.adoc index 643c78a958dc..4a472245b8c1 100644 --- a/modules/nw-networkpolicy-allow-application-particular-namespace.adoc +++ b/modules/nw-networkpolicy-allow-application-particular-namespace.adoc @@ -25,7 +25,7 @@ endif::microshift[] Follow this procedure to configure a policy that allows traffic to a pod with the label `app=web` from a particular namespace. You might want to do this to: -* Restrict traffic to a production database only to namespaces where production workloads are deployed. +* Restrict traffic to a production database only to namespaces that have production workloads deployed. * Enable monitoring tools deployed to a particular namespace to scrape metrics from the current namespace. .Prerequisites @@ -34,7 +34,7 @@ ifndef::microshift[] endif::microshift[] * You installed the OpenShift CLI (`oc`). ifndef::microshift[] -* You are logged in to the cluster with a user with `{role}` privileges. +* You logged in to the cluster with a user with `{role}` privileges. endif::microshift[] * You are working in the namespace that the {name} policy applies to. @@ -81,16 +81,7 @@ spec: $ oc apply -f web-allow-prod.yaml ---- + -.Example output -[source,terminal] ----- -ifndef::multi[] -networkpolicy.networking.k8s.io/web-allow-prod created -endif::multi[] -ifdef::multi[] -multinetworkpolicy.k8s.cni.cncf.io/web-allow-prod created -endif::multi[] ----- +Successful output lists the name of the policy object and the `created` status. .Verification @@ -136,19 +127,12 @@ $ oc label namespace/dev purpose=testing $ oc run test-$RANDOM --namespace=dev --rm -i -t --image=alpine -- sh ---- -. Run the following command in the shell and observe that the request is blocked: +. Run the following command in the shell and observe that the policy blocks the request. For example, the expected output states `wget: download timed out`. + [source,terminal] ---- # wget -qO- --timeout=2 http://web.default ---- -+ -.Expected output -+ -[source,terminal] ----- -wget: download timed out ----- . Run the following command to deploy an `alpine` image in the `prod` namespace and start a shell: + diff --git a/modules/nw-networkpolicy-allow-external-clients.adoc b/modules/nw-networkpolicy-allow-external-clients.adoc index f4619fe8ba1f..e22fd2b98c30 100644 --- a/modules/nw-networkpolicy-allow-external-clients.adoc +++ b/modules/nw-networkpolicy-allow-external-clients.adoc @@ -38,7 +38,7 @@ ifndef::microshift[] endif::microshift[] * You installed the OpenShift CLI (`oc`). ifndef::microshift[] -* You are logged in to the cluster with a user with `{role}` privileges. +* You logged in to the cluster with a user with `{role}` privileges. endif::microshift[] * You are working in the namespace that the {name} policy applies to.
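The `web-allow-external.yaml` manifest that the next step applies is also outside the changed hunks. As a rough sketch only, a policy that admits traffic from any source, including clients outside the cluster, to pods labeled `app=web` could take the following shape; the single empty `ingress` rule is the assumption that makes the policy allow all sources:

[source,yaml]
----
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-allow-external
  namespace: default
spec:
  policyTypes:
  - Ingress
  podSelector:
    matchLabels:
      app: web      # the policy applies to pods with this label
  ingress:
  - {}              # an empty rule places no restriction on the traffic source
----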
@@ -80,17 +80,7 @@ spec: $ oc apply -f web-allow-external.yaml ---- + -.Example output -+ -[source,terminal] ----- -ifndef::multi[] -networkpolicy.networking.k8s.io/web-allow-external created -endif::multi[] -ifdef::multi[] -multinetworkpolicy.k8s.cni.cncf.io/web-allow-external created -endif::multi[] ----- +Successful output lists the name of the policy object and the `created` status. ifndef::microshift[] This policy allows traffic from all resources, including external traffic as illustrated in the following diagram: diff --git a/modules/nw-networkpolicy-audit-configure.adoc b/modules/nw-networkpolicy-audit-configure.adoc index 2527f4163198..55a2dcb9ebe5 100644 --- a/modules/nw-networkpolicy-audit-configure.adoc +++ b/modules/nw-networkpolicy-audit-configure.adoc @@ -24,7 +24,7 @@ $ oc edit network.operator.openshift.io/cluster + [TIP] ==== -You can alternatively customize and apply the following YAML to configure audit logging: +You can also customize and apply the following YAML to configure audit logging: [source,yaml] ---- @@ -60,11 +60,7 @@ metadata: EOF ---- + -.Example output -[source,text] ----- -namespace/verify-audit-logging created ----- +Successful output lists the namespace with the network policy and the `created` status. .. Create network policies for the namespace: + @@ -150,12 +146,7 @@ EOF done ---- + -.Example output -[source,text] ----- -pod/client created -pod/server created ----- +Successful output lists the two pods, such as `pod/client` and `pod/server`, and the `created` status. . To generate traffic and produce network policy audit log entries, complete the following steps: @@ -166,7 +157,7 @@ pod/server created $ POD_IP=$(oc get pods server -n verify-audit-logging -o jsonpath='{.status.podIP}') ---- -.. Ping the IP address from the previous command from the pod named `client` in the `default` namespace and confirm that all packets are dropped: +.. Ping the IP address from an earlier command from the pod named `client` in the `default` namespace and confirm that all packets are dropped: + [source,terminal] ---- @@ -182,7 +173,7 @@ PING 10.128.2.55 (10.128.2.55) 56(84) bytes of data. 2 packets transmitted, 0 received, 100% packet loss, time 2041ms ---- -.. Ping the IP address saved in the `POD_IP` shell environment variable from the pod named `client` in the `verify-audit-logging` namespace and confirm that all packets are allowed: +.. From the client pod in the `verify-audit-logging` namespace, ping the IP address stored in the `POD_IP` shell environment variable and confirm the system allows all packets. + [source,terminal] ---- diff --git a/modules/nw-networkpolicy-audit-disable.adoc b/modules/nw-networkpolicy-audit-disable.adoc index 5b2cf2ccfe49..6dce0a712a67 100644 --- a/modules/nw-networkpolicy-audit-disable.adoc +++ b/modules/nw-networkpolicy-audit-disable.adoc @@ -30,7 +30,7 @@ where: + [TIP] ==== -You can alternatively apply the following YAML to disable audit logging: +You can also apply the following YAML to disable audit logging: [source,yaml] ---- @@ -43,8 +43,4 @@ metadata: ---- ==== + -.Example output -[source,terminal] ----- -namespace/verify-audit-logging annotated ----- +Successful output lists the namespace name and the `annotated` status.
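The TIP in the audit-configure module above points to a YAML block that is not part of the changed hunks. For orientation, a cluster-level audit-logging configuration for the OVN-Kubernetes network plugin might resemble the following sketch; the `policyAuditConfig` field names come from the OVN-Kubernetes operator configuration, but the values shown are illustrative assumptions rather than content from this patch:

[source,yaml]
----
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    ovnKubernetesConfig:
      policyAuditConfig:
        destination: "null"      # send audit messages to the default node log location
        maxFileSize: 50          # maximum size of the audit log, in MB
        rateLimit: 20            # maximum number of messages logged per second, per node
        syslogFacility: local0   # syslog facility used for the audit messages
----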
diff --git a/modules/nw-networkpolicy-audit-enable.adoc b/modules/nw-networkpolicy-audit-enable.adoc index 5d632718033d..8af944487a11 100644 --- a/modules/nw-networkpolicy-audit-enable.adoc +++ b/modules/nw-networkpolicy-audit-enable.adoc @@ -31,7 +31,7 @@ where: + [TIP] ==== -You can alternatively apply the following YAML to enable audit logging: +You can also apply the following YAML to enable audit logging: [source,yaml] ---- @@ -48,11 +48,7 @@ metadata: ---- ==== + -.Example output -[source,terminal] ----- -namespace/verify-audit-logging annotated ----- +Successful output lists the namespace name and the `annotated` status. .Verification diff --git a/modules/nw-networkpolicy-create-cli.adoc b/modules/nw-networkpolicy-create-cli.adoc index 59d2672b7746..7e7de126f8ca 100644 --- a/modules/nw-networkpolicy-create-cli.adoc +++ b/modules/nw-networkpolicy-create-cli.adoc @@ -32,7 +32,7 @@ ifndef::microshift[] endif::microshift[] * You installed the OpenShift CLI (`oc`). ifndef::microshift[] -* You are logged in to the cluster with a user with `{role}` privileges. +* You logged in to the cluster with a user with `{role}` privileges. endif::microshift[] * You are working in the namespace that the {name} policy applies to. @@ -123,7 +123,7 @@ endif::multi[] + .Allow ingress traffic to one pod from a particular namespace + -This policy allows traffic to pods labelled `pod-a` from pods running in `namespace-y`. +This policy allows traffic to pods that have the `pod-a` label from pods running in `namespace-y`. + [source,yaml] ---- @@ -221,29 +221,20 @@ $ oc apply -f .yaml -n where: ``:: Specifies the {name} policy file name. -``:: Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. +``:: Optional parameter. If you defined the object in a different namespace than the current namespace, the parameter specifies the namespace. -- + -.Example output -[source,terminal] ----- -ifndef::multi[] -networkpolicy.networking.k8s.io/deny-by-default created -endif::multi[] -ifdef::multi[] -multinetworkpolicy.k8s.cni.cncf.io/deny-by-default created -endif::multi[] ----- - -ifdef::multi[] -:!multi: -endif::multi[] -:!name: -:!role: +Successful output lists the name of the policy object and the `created` status. ifndef::microshift[] [NOTE] ==== If you log in to the web console with `cluster-admin` privileges, you have a choice of creating a network policy in any namespace in the cluster directly in YAML or from a form in the web console. ==== -endif::microshift[] \ No newline at end of file +endif::microshift[] + +ifdef::multi[] +:!multi: +endif::multi[] +:!name: +:!role: \ No newline at end of file diff --git a/modules/nw-networkpolicy-delete-cli.adoc b/modules/nw-networkpolicy-delete-cli.adoc index 0672e54747d8..7889e9969192 100644 --- a/modules/nw-networkpolicy-delete-cli.adoc +++ b/modules/nw-networkpolicy-delete-cli.adoc @@ -31,7 +31,7 @@ ifndef::microshift[] endif::microshift[] * You installed the OpenShift CLI (`oc`). ifndef::microshift[] -* You are logged in to the cluster with a user with `{role}` privileges. +* You logged in to the cluster with a user with `{role}` privileges. endif::microshift[] * You are working in the namespace where the {name} policy exists. @@ -48,19 +48,10 @@ $ oc delete {name}policy -n where: ``:: Specifies the name of the {name} policy. -``:: Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. +``:: Optional parameter.
If you defined the object in a different namespace than the current namespace, the parameter specifies the namespace. -- + -.Example output -[source,text] ----- -ifndef::multi[] -networkpolicy.networking.k8s.io/default-deny deleted -endif::multi[] -ifdef::multi[] -multinetworkpolicy.k8s.cni.cncf.io/default-deny deleted -endif::multi[] ----- +Successful output lists the name of the policy object and the `deleted` status. ifdef::multi[] :!multi: diff --git a/modules/nw-networkpolicy-deny-all-allowed.adoc b/modules/nw-networkpolicy-deny-all-allowed.adoc index a03a0ae89f8b..b377f8ff1592 100644 --- a/modules/nw-networkpolicy-deny-all-allowed.adoc +++ b/modules/nw-networkpolicy-deny-all-allowed.adoc @@ -29,7 +29,7 @@ ifndef::microshift[] endif::microshift[] * You installed the OpenShift CLI (`oc`). ifndef::microshift[] -* You are logged in to the cluster with a user with `{role}` privileges. +* You logged in to the cluster with a user with `{role}` privileges. endif::microshift[] * You are working in the namespace that the {name} policy applies to. @@ -85,16 +85,7 @@ endif::multi[] $ oc apply -f deny-by-default.yaml ---- + -.Example output -[source,terminal] ----- -ifndef::multi[] -networkpolicy.networking.k8s.io/deny-by-default created -endif::multi[] -ifdef::multi[] -multinetworkpolicy.k8s.cni.cncf.io/deny-by-default created -endif::multi[] ----- +Successful output lists the name of the policy object and the `created` status. ifdef::multi[] :!multi: diff --git a/modules/nw-networkpolicy-project-defaults.adoc b/modules/nw-networkpolicy-project-defaults.adoc index 51dca1d9064f..ec73acea9af5 100644 --- a/modules/nw-networkpolicy-project-defaults.adoc +++ b/modules/nw-networkpolicy-project-defaults.adoc @@ -13,7 +13,7 @@ As a cluster administrator, you can add network policies to the default template .Prerequisites -* Your cluster uses a default CNI network plugin that supports `NetworkPolicy` objects, such as the OVN-Kubernetes. +* Your cluster uses a default container network interface (CNI) network plugin that supports `NetworkPolicy` objects, such as OVN-Kubernetes. * You installed the OpenShift CLI (`oc`). * You must log in to the cluster with a user with `cluster-admin` privileges. * You must have created a custom default project template for new projects. @@ -27,8 +27,7 @@ As a cluster administrator, you can add network policies to the default template $ oc edit template -n openshift-config ---- + -Replace `` with the name of the default template that you -configured for your cluster. The default template name is `project-request`. +Replace `` with the name of the default template that you configured for your cluster. The default template name is `project-request`. . In the template, add each `NetworkPolicy` object as an element to the `objects` parameter. The `objects` parameter accepts a collection of one or more objects. + @@ -77,7 +76,7 @@ objects: ... ---- -. Optional: Create a new project to confirm that your network policy objects are created successfully by running the following commands: +. Optional: Create a new project and confirm the successful creation of your network policy objects. ..
Create a new project: + @@ -92,6 +91,12 @@ $ oc new-project <1> [source,terminal] ---- $ oc get networkpolicy +---- ++ +Expected output: ++ +[source,terminal] +---- NAME POD-SELECTOR AGE allow-from-openshift-ingress 7s allow-from-same-namespace 7s diff --git a/modules/nw-ovn-ipsec-enable.adoc b/modules/nw-ovn-ipsec-enable.adoc index 4a57a920e02d..1a5a56d3f21c 100644 --- a/modules/nw-ovn-ipsec-enable.adoc +++ b/modules/nw-ovn-ipsec-enable.adoc @@ -65,21 +65,20 @@ ovnkube-node-zfvcl 8/8 Running 0 34m ... ---- -. Verify that IPsec is enabled on your cluster by running the following command: +. Verify that you enabled IPsec on your cluster by running the following command: + [NOTE] ==== -As a cluster administrator, you can verify that IPsec is enabled between pods on your cluster when IPsec is configured in `Full` mode. This step does not verify whether IPsec is working between your cluster and external hosts. +As a cluster administrator, you can verify that you enabled IPsec between pods on your cluster when you configured IPsec in `Full` mode. This step does not verify whether IPsec is working between your cluster and external hosts. ==== + [source,terminal] ---- $ oc -n openshift-ovn-kubernetes rsh ovnkube-node- ovn-nbctl --no-leader-only get nb_global . ipsec <1> ---- -<1> Where `` specifies the random sequence of letters for a pod from the previous step. + -.Example output -[source,text] ----- -true ----- \ No newline at end of file +-- +where: `` specifies the random sequence of letters for a pod from an earlier step. +-- ++ +Successful output from the command shows the status as `true`. \ No newline at end of file diff --git a/modules/nw-ovn-ipsec-north-south-enable.adoc b/modules/nw-ovn-ipsec-north-south-enable.adoc index 2e3a1c6b05ef..47c515a0fe1a 100644 --- a/modules/nw-ovn-ipsec-north-south-enable.adoc +++ b/modules/nw-ovn-ipsec-north-south-enable.adoc @@ -18,7 +18,7 @@ After you apply the machine config, the Machine Config Operator reboots affected * Install the {oc-first}. * You have installed the `butane` utility on your local computer. * You have installed the NMState Operator on the cluster. -* You are logged in to the cluster as a user with `cluster-admin` privileges. +* You logged in to the cluster as a user with `cluster-admin` privileges. * You have an existing PKCS#12 certificate for the IPsec endpoint and a CA cert in PEM format. * You enabled IPsec in either `Full` or `External` mode on your cluster. * The OVN-Kubernetes network plugin must be configured in local gateway mode, where `ovnKubernetesConfig.gatewayConfig.routingViaHost=true`. @@ -110,7 +110,7 @@ spec: $ oc create -f ipsec-config.yaml ---- -. Provide the following certificate files to add to the Network Security Services (NSS) database on each host. These files are imported as part of the Butane configuration in subsequent steps. +. Provide the following certificate files to add to the Network Security Services (NSS) database on each host. These files are imported as part of the Butane configuration in later steps. + -- * `left_server.p12`: The certificate bundle for the IPsec endpoints @@ -179,7 +179,7 @@ EOF done ---- -.. To transform the Butane files that you created in the previous step into machine configs, enter the following command: +.. 
To transform the Butane files that you created in an earlier step into machine configs, enter the following command: + [source,terminal] ---- @@ -199,7 +199,7 @@ done + [IMPORTANT] ==== -As the Machine Config Operator (MCO) updates machines in each machine config pool, it reboots each node one by one. You must wait until all the nodes are updated before external IPsec connectivity is available. +As the Machine Config Operator (MCO) updates machines in each machine config pool, it reboots each node one by one. You must wait for all the nodes to update before external IPsec connectivity is available. ==== . Check the machine config pool status by entering the following command: @@ -217,7 +217,7 @@ By default, the MCO updates one machine per pool at a time, causing the total ti ==== . To confirm that IPsec machine configs rolled out successfully, enter the following commands: -.. Confirm that the IPsec machine configs were created: +.. Confirm the creation of the IPsec machine configs: + [source,terminal] ---- @@ -231,31 +231,19 @@ $ oc get mc | grep ipsec 80-ipsec-worker-extensions 3.2.0 6d15h ---- -.. Confirm that the that the IPsec extension are applied to control plane nodes: +.. Confirm the application of the IPsec extension to control plane nodes. For example, the expected output shows `2`. + [source,terminal] ---- $ oc get mcp master -o yaml | grep 80-ipsec-master-extensions -c ---- -+ -.Expected output -[source,text] ----- -2 ----- -.. Confirm that the that the IPsec extension are applied to worker nodes: +.. Confirm the application of the IPsec extension to compute nodes. For example, the expected output shows `2`. + [source,terminal] ---- $ oc get mcp worker -o yaml | grep 80-ipsec-worker-extensions -c ---- -+ -.Expected output -[source,text] ----- -2 ----- [role="_additional-resources"] .Additional resources
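The body of the Butane transform loop referenced earlier in the north-south IPsec module lies outside the changed hunks, so it does not appear in this patch. A minimal sketch of such a loop, assuming Butane files named `99-ipsec-master-endpoint-config.bu` and `99-ipsec-worker-endpoint-config.bu` in the current working directory, might look like this:

[source,terminal]
----
$ for role in master worker; do
    butane -d . 99-ipsec-${role}-endpoint-config.bu -o ./99-ipsec-${role}-endpoint-config.yaml
  done
----

The `-d .` flag assumes that the certificate files embedded by the Butane configuration are also present in the current working directory.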