diff --git a/_topic_maps/_topic_map.yml b/_topic_maps/_topic_map.yml index 48f4d41c3ac2..25e04e82f3c2 100644 --- a/_topic_maps/_topic_map.yml +++ b/_topic_maps/_topic_map.yml @@ -1606,8 +1606,8 @@ Topics: File: converting-to-dual-stack - Name: Configuring internal subnets File: configure-ovn-kubernetes-subnets - - Name: Configuring gateway mode - File: configuring-gateway-mode + - Name: Configuring a gateway + File: configuring-gateway - Name: Configure an external gateway on the default network File: configuring-secondary-external-gateway - Name: Configuring an egress IP address diff --git a/modules/nw-egressnetworkpolicy-about.adoc b/modules/nw-egressnetworkpolicy-about.adoc index 65d69b91db41..956e81f836b8 100644 --- a/modules/nw-egressnetworkpolicy-about.adoc +++ b/modules/nw-egressnetworkpolicy-about.adoc @@ -8,16 +8,15 @@ ifeval::["{context}" == "configuring-egress-firewall-ovn"] :api: k8s.ovn.org/v1 endif::[] +:_mod-docs-content-type: CONCEPT [id="nw-egressnetworkpolicy-about_{context}"] = How an egress firewall works in a project -As a cluster administrator, you can use an _egress firewall_ to -limit the external hosts that some or all pods can access from within the -cluster. An egress firewall supports the following scenarios: +As a cluster administrator, you can use an _egress firewall_ to limit the external hosts that some or all pods can access from within the cluster. An egress firewall supports the following scenarios: -- A pod can only connect to internal hosts and cannot initiate connections to +- A pod can only connect to internal hosts and cannot start connections to the public internet. -- A pod can only connect to the public internet and cannot initiate connections +- A pod can only connect to the public internet and cannot start connections to internal hosts that are outside the {product-title} cluster. - A pod cannot reach specified internal subnets or hosts outside the {product-title} cluster. - A pod can connect to only specific external hosts. @@ -26,7 +25,7 @@ For example, you can allow one project access to a specified IP range but deny t [NOTE] ==== -Egress firewall does not apply to the host network namespace. Pods with host networking enabled are unaffected by egress firewall rules. +Egress firewall does not apply to the host network namespace. Egress firewall rules do not impact any pods that have host networking enabled. ==== You configure an egress firewall policy by creating an {kind} custom resource (CR) object. The egress firewall matches network traffic that meets any of the following criteria: @@ -40,7 +39,7 @@ endif::ovn[] [IMPORTANT] ==== -If your egress firewall includes a deny rule for `0.0.0.0/0`, access to your {product-title} API servers is blocked. You must either add allow rules for each IP address or use the `nodeSelector` type allow rule in your egress policy rules to connect to API servers. +If your egress firewall includes a deny rule for `0.0.0.0/0`, the rule blocks access to your {product-title} API servers. You must either add allow rules for each IP address or use the `nodeSelector` type allow rule in your egress policy rules to connect to API servers. The following example illustrates the order of the egress firewall rules necessary to ensure API server access: @@ -85,20 +84,20 @@ An egress firewall has the following limitations: ifdef::ovn[] * A maximum of one {kind} object with a maximum of 8,000 rules can be defined per project. 
-* If you are using the OVN-Kubernetes network plugin with shared gateway mode in Red Hat OpenShift Networking, return ingress replies are affected by egress firewall rules. If the egress firewall rules drop the ingress reply destination IP, the traffic is dropped. +* If you use the OVN-Kubernetes network plugin and you configured `false` for the `routingViaHost` parameter in the `Network` custom resource for your cluster, egress firewall rules impact the return ingress replies. If the egress firewall rules drop the ingress reply destination IP, the traffic is dropped. endif::ovn[] -Violating any of these restrictions results in a broken egress firewall for the project. Consequently, all external network traffic is dropped, which can cause security risks for your organization. +Violating any of these restrictions results in a broken egress firewall for the project. As a result, the cluster drops all external network traffic, which can cause security risks for your organization. -An Egress Firewall resource can be created in the `kube-node-lease`, `kube-public`, `kube-system`, `openshift` and `openshift-` projects. +You can create an Egress Firewall resource in the `kube-node-lease`, `kube-public`, `kube-system`, `openshift`, and `openshift-` projects. [id="policy-rule-order_{context}"] == Matching order for egress firewall policy rules -The egress firewall policy rules are evaluated in the order that they are defined, from first to last. The first rule that matches an egress connection from a pod applies. Any subsequent rules are ignored for that connection. +The OVN-Kubernetes network plugin evaluates egress firewall policy rules in the order that you define them, from first to last. The first rule that matches an egress connection from a pod applies. The plugin ignores any subsequent rules for that connection. [id="domain-name-server-resolution_{context}"] -== How Domain Name Server (DNS) resolution works +== Domain Name System (DNS) resolution If you use DNS names in any of your egress firewall policy rules, proper resolution of the domain names is subject to the following restrictions: @@ -106,7 +105,7 @@ ifdef::ovn[] * Domain name updates are polled based on a time-to-live (TTL) duration. By default, the duration is 30 minutes. When the egress firewall controller queries the local name servers for a domain name, if the response includes a TTL and the TTL is less than 30 minutes, the controller sets the duration for that DNS name to the returned value. Each DNS name is queried after the TTL for the DNS record expires. endif::ovn[] -* The pod must resolve the domain from the same local name servers when necessary. Otherwise the IP addresses for the domain known by the egress firewall controller and the pod can be different. If the IP addresses for a hostname differ, the egress firewall might not be enforced consistently. +* The pod must resolve the domain from the same local name servers when necessary. Otherwise, the IP addresses for the domain known by the egress firewall controller and the pod can be different. If the IP addresses for a hostname differ, the egress firewall might not enforce the rules consistently. * Because the egress firewall controller and pods asynchronously poll the same local name server, the pod might obtain the updated IP address before the egress controller does, which causes a race condition. Due to this current limitation, domain name usage in {kind} objects is only recommended for domains with infrequent IP address changes. 
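Editorial note: to make the rule-ordering and DNS behavior above concrete, the following is a minimal sketch of an egress firewall CR for the OVN-Kubernetes plugin. The namespace, domain, and subnet values are hypothetical placeholders, not values from this PR:

[source,yaml]
----
apiVersion: k8s.ovn.org/v1
kind: EgressFirewall
metadata:
  name: default                       # OVN-Kubernetes expects the name "default"
  namespace: project1                 # hypothetical project
spec:
  egress:
  - type: Allow
    to:
      dnsName: updates.example.com    # resolved by the egress firewall controller; re-polled per TTL
  - type: Allow
    to:
      cidrSelector: 192.168.1.0/24    # hypothetical internal subnet
  - type: Deny
    to:
      cidrSelector: 0.0.0.0/0         # broad deny comes last; rules match from first to last
----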
@@ -114,7 +113,7 @@ endif::ovn[] ==== Using DNS names in your egress firewall policy does not affect local DNS resolution through CoreDNS. -However, if your egress firewall policy uses domain names, and an external DNS server handles DNS resolution for an affected pod, you must include egress firewall rules that permit access to the IP addresses of your DNS server. +If your egress firewall policy uses domain names, and an external DNS server handles DNS resolution for an affected pod, you must include egress firewall rules that allow access to the IP addresses of your DNS server. ==== ifdef::ovn[] diff --git a/modules/nw-ovn-ipsec-north-south-enable.adoc b/modules/nw-ovn-ipsec-north-south-enable.adoc index 47c515a0fe1a..d8eea0759bb3 100644 --- a/modules/nw-ovn-ipsec-north-south-enable.adoc +++ b/modules/nw-ovn-ipsec-north-south-enable.adoc @@ -21,7 +21,7 @@ After you apply the machine config, the Machine Config Operator reboots affected * You logged in to the cluster as a user with `cluster-admin` privileges. * You have an existing PKCS#12 certificate for the IPsec endpoint and a CA cert in PEM format. * You enabled IPsec in either `Full` or `External` mode on your cluster. -* The OVN-Kubernetes network plugin must be configured in local gateway mode, where `ovnKubernetesConfig.gatewayConfig.routingViaHost=true`. +* You must set the `routingViaHost` parameter to `true` in the `ovnKubernetesConfig.gatewayConfig` specification of the OVN-Kubernetes network plugin. .Procedure diff --git a/modules/nwt-gateway-mode.adoc b/modules/nwt-configure-egress-routing-policies.adoc similarity index 60% rename from modules/nwt-gateway-mode.adoc rename to modules/nwt-configure-egress-routing-policies.adoc index b2f7694075d7..e525df9b2717 100644 --- a/modules/nwt-gateway-mode.adoc +++ b/modules/nwt-configure-egress-routing-policies.adoc @@ -2,12 +2,12 @@ // :_mod-docs-content-type: PROCEDURE -[id="nwt-gateway-mode_{context}"] -= Setting local and shared gateway modes +[id="nwt-configure-egress-routing-policies_{context}"] += Configuring egress routing policies -As a cluster administrator you can configure the gateway mode using the `gatewayConfig` spec in the Cluster Network Operator. The following procedure can be used to set the `routingViaHost` field to `true` for local mode or `false` for shared mode. +As a cluster administrator you can configure egress routing policies by using the `gatewayConfig` specification in the Cluster Network Operator (CNO). You can use the following procedure to set the `routingViaHost` field to `true` or `false`. -You can follow the optional step 4 to enable IP forwarding alongside local gateway mode if you need the host network of the node to act as a router for traffic not related to OVN-Kubernetes. For example, possible use cases for combining local gateway mode with IP forwarding include: +You can follow the optional step in the procedure to enable IP forwarding alongside the `routingViaHost=true` configuration if you need the host network of the node to act as a router for traffic not related to OVN-Kubernetes. For example, possible use cases for combining local gateway with IP forwarding include: * Configuring all pod egress traffic to be forwarded via the node's IP @@ -28,14 +28,14 @@ You can follow the optional step 4 to enable IP forwarding alongside local gatew $ oc get network.operator cluster -o yaml > network-config-backup.yaml ---- -. Set the `routingViaHost` paramemter to `true` for local gateway mode by running the following command: +. 
Set the `routingViaHost` parameter to `true` by entering the following command. Egress traffic is then routed through a specific gateway according to the routes that you configured on the node. + [source,terminal] ---- $ oc patch networks.operator.openshift.io cluster --type=merge -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"gatewayConfig":{"routingViaHost": true}}}}}' ---- -. Verify that local gateway mode has been set by running the following command: +. Verify that the `routingViaHost=true` configuration is applied by running the following command: + [source,terminal] ---- @@ -58,7 +58,7 @@ gatewayConfig: ipsecConfig: # ... ---- -<1> A value of `true` sets local gateway mode and a value of `false` sets shared gateway mode. In local gateway mode, traffic is routed through the host. In shared gateway mode, traffic is not routed through the host. +<1> A value of `true` means that egress traffic is routed through a specific local gateway on the node that hosts the pod. A value of `false` means that a group of nodes share a single gateway, so traffic is not routed through the host. . Optional: Enable IP forwarding globally by running the following command: + [source,terminal] ---- $ oc patch network.operator cluster --type=merge -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"gatewayConfig":{"ipForwarding": "Global"}}}}}' ---- ++ .. Verify that the `ipForwarding` spec has been set to `Global` by running the following command: + [source,terminal] ---- diff --git a/modules/telco-core-cluster-network-operator.adoc b/modules/telco-core-cluster-network-operator.adoc index e03ebc26a5d4..e15902eea0c4 100644 --- a/modules/telco-core-cluster-network-operator.adoc +++ b/modules/telco-core-cluster-network-operator.adoc @@ -12,13 +12,10 @@ New in this release:: Description:: + -- -The Cluster Network Operator (CNO) deploys and manages the cluster network components including the default OVN-Kubernetes network plugin during cluster installation. -The CNO allows for configuring primary interface MTU settings, OVN gateway modes to use node routing tables for pod egress, and additional secondary networks such as MACVLAN. +The Cluster Network Operator (CNO) deploys and manages the cluster network components, including the default OVN-Kubernetes network plugin, during cluster installation. The CNO allows for configuring primary interface MTU settings, OVN gateway configurations to use node routing tables for pod egress, and additional secondary networks such as MACVLAN. In support of network traffic separation, multiple network interfaces are configured through the CNO. -Traffic steering to these interfaces is configured through static routes applied by using the NMState Operator. -To ensure that pod traffic is properly routed, OVN-K is configured with the `routingViaHost` option enabled. -This setting uses the kernel routing table and the applied static routes rather than OVN for pod egress traffic. +Traffic steering to these interfaces is configured through static routes applied by using the NMState Operator. To ensure that pod traffic is properly routed, OVN-K is configured with the `routingViaHost` option enabled. This setting uses the kernel routing table and the applied static routes rather than OVN for pod egress traffic. The Whereabouts CNI plugin is used to provide dynamic IPv4 and IPv6 addressing for additional pod network interfaces without the use of a DHCP server. 
-- diff --git a/modules/telco-core-load-balancer.adoc b/modules/telco-core-load-balancer.adoc index 50d50c0097f4..189c0838b409 100644 --- a/modules/telco-core-load-balancer.adoc +++ b/modules/telco-core-load-balancer.adoc @@ -31,7 +31,7 @@ An alternate load balancer implementation must be used if this is a requirement Engineering considerations:: * MetalLB is used in BGP mode only for telco core use models. -* For telco core use models, MetalLB is supported only with the OVN-Kubernetes network provider used in local gateway mode. +* For telco core use models, MetalLB is supported only when you set `routingViaHost=true` in the `ovnKubernetesConfig.gatewayConfig` specification of the OVN-Kubernetes network plugin. See `routingViaHost` in "Cluster Network Operator". * BGP configuration in MetalLB is expected to vary depending on the requirements of the network and peers. ** You can configure address pools with variations in addresses, aggregation length, auto assignment, and so on. diff --git a/networking/network_security/configuring-ipsec-ovn.adoc b/networking/network_security/configuring-ipsec-ovn.adoc index 3b713166e04e..1a7e2ce8f78c 100644 --- a/networking/network_security/configuring-ipsec-ovn.adoc +++ b/networking/network_security/configuring-ipsec-ovn.adoc @@ -43,10 +43,10 @@ include::modules/nw-own-ipsec-modes.adoc[leveloffset=+1] [id="{context}-prerequisites"] == Prerequisites -For IPsec support for encrypting traffic to external hosts, ensure that the following prerequisites are met: +To support IPsec encryption of traffic to external hosts, ensure that you meet the following prerequisites: -* The OVN-Kubernetes network plugin must be configured in local gateway mode, where `ovnKubernetesConfig.gatewayConfig.routingViaHost=true`. -* The NMState Operator is installed. This Operator is required for specifying the IPsec configuration. For more information, see xref:../../networking/networking_operators/k8s-nmstate-about-the-k8s-nmstate-operator.adoc#k8s-nmstate-about-the-k8s-nmstate-operator[Kubernetes NMState Operator]. +* Set `routingViaHost=true` in the `ovnKubernetesConfig.gatewayConfig` specification of the OVN-Kubernetes network plugin. +* Install the NMState Operator. This Operator is required for specifying the IPsec configuration. For more information, see xref:../../networking/networking_operators/k8s-nmstate-about-the-k8s-nmstate-operator.adoc#k8s-nmstate-about-the-k8s-nmstate-operator[Kubernetes NMState Operator]. + -- [NOTE] diff --git a/networking/ovn_kubernetes_network_provider/configuring-gateway-mode.adoc b/networking/ovn_kubernetes_network_provider/configuring-gateway-mode.adoc deleted file mode 100644 index 483fb215b43e..000000000000 --- a/networking/ovn_kubernetes_network_provider/configuring-gateway-mode.adoc +++ /dev/null @@ -1,13 +0,0 @@ -:_mod-docs-content-type: ASSEMBLY -[id="configuring-gateway-mode"] -= Configuring gateway mode -include::_attributes/common-attributes.adoc[] -:context: configuring-gateway-mode - -toc::[] - -As a cluster administrator you can configure the `gatewayConfig` object to manage how external traffic leaves the cluster. You do so by setting the `routingViaHost` spec to `true` for local mode or `false` for shared mode. - -In local gateway mode, traffic is routed through the host and is consequently applied to the routing table of the host. In shared gateway mode, traffic is not routed through the host. Instead, traffic the Open vSwitch (OVS) outputs traffic directly to the node IP interface. 
- -include::modules/nwt-gateway-mode.adoc[leveloffset=+1] \ No newline at end of file diff --git a/networking/ovn_kubernetes_network_provider/configuring-gateway.adoc b/networking/ovn_kubernetes_network_provider/configuring-gateway.adoc new file mode 100644 index 000000000000..1dd3f3a0727e --- /dev/null +++ b/networking/ovn_kubernetes_network_provider/configuring-gateway.adoc @@ -0,0 +1,14 @@ +:_mod-docs-content-type: ASSEMBLY +[id="configuring-gateway"] += Configuring a gateway +include::_attributes/common-attributes.adoc[] +:context: configuring-gateway-mode + +toc::[] + +As a cluster administrator, you can configure the `gatewayConfig` object to manage how external traffic leaves the cluster. You do so by setting the `routingViaHost` parameter to one of the following values: + +* `true` means that egress traffic routes through a specific local gateway on the node that hosts the pod. Egress traffic routes through the host, where it is subject to the routing table of the host. +* `false` means that a group of nodes share the same gateway and egress traffic does not route through the host. Open vSwitch (OVS) outputs traffic directly to the node IP interface. + +include::modules/nwt-configure-egress-routing-policies.adoc[leveloffset=+1] \ No newline at end of file
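Editorial note: for reference, the following is a minimal sketch of the `Network` CR that the `oc patch` commands in this PR modify. The `gatewayConfig` values shown are illustrative choices, not required defaults:

[source,yaml]
----
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    type: OVNKubernetes
    ovnKubernetesConfig:
      gatewayConfig:
        routingViaHost: true   # true: egress uses the host routing table; false: OVS outputs egress directly to the node IP interface
        ipForwarding: Global   # optional: lets the host forward traffic that is not related to OVN-Kubernetes
----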