diff --git a/_topic_maps/_topic_map.yml b/_topic_maps/_topic_map.yml index 8125604c0953..323d3b026f3b 100644 --- a/_topic_maps/_topic_map.yml +++ b/_topic_maps/_topic_map.yml @@ -1513,8 +1513,8 @@ Topics: File: converting-to-dual-stack - Name: Configuring internal subnets File: configure-ovn-kubernetes-subnets - - Name: Configuring gateway mode - File: configuring-gateway-mode + - Name: Configuring a gateway + File: configuring-gateway - Name: Configure an external gateway on the default network File: configuring-secondary-external-gateway - Name: Configuring an egress IP address diff --git a/modules/nw-egressnetworkpolicy-about.adoc b/modules/nw-egressnetworkpolicy-about.adoc index 58aeb4d21928..efe8886db04e 100644 --- a/modules/nw-egressnetworkpolicy-about.adoc +++ b/modules/nw-egressnetworkpolicy-about.adoc @@ -14,16 +14,15 @@ ifeval::["{context}" == "openshift-sdn-egress-firewall"] :api: network.openshift.io/v1 endif::[] +:_mod-docs-content-type: CONCEPT [id="nw-egressnetworkpolicy-about_{context}"] = How an egress firewall works in a project -As a cluster administrator, you can use an _egress firewall_ to -limit the external hosts that some or all pods can access from within the -cluster. An egress firewall supports the following scenarios: +As a cluster administrator, you can use an _egress firewall_ to limit the external hosts that some or all pods can access from within the cluster. An egress firewall supports the following scenarios: -- A pod can only connect to internal hosts and cannot initiate connections to +- A pod can only connect to internal hosts and cannot start connections to the public internet. -- A pod can only connect to the public internet and cannot initiate connections +- A pod can only connect to the public internet and cannot start connections to internal hosts that are outside the {product-title} cluster. - A pod cannot reach specified internal subnets or hosts outside the {product-title} cluster. - A pod can connect to only specific external hosts. @@ -32,7 +31,7 @@ For example, you can allow one project access to a specified IP range but deny t [NOTE] ==== -Egress firewall does not apply to the host network namespace. Pods with host networking enabled are unaffected by egress firewall rules. +Egress firewall does not apply to the host network namespace. Egress firewall rules do not impact any pods that have host networking enabled. ==== You configure an egress firewall policy by creating an {kind} custom resource (CR) object. The egress firewall matches network traffic that meets any of the following criteria: @@ -46,7 +45,7 @@ ifdef::ovn[] [IMPORTANT] ==== -If your egress firewall includes a deny rule for `0.0.0.0/0`, access to your {product-title} API servers is blocked. You must either add allow rules for each IP address or use the `nodeSelector` type allow rule in your egress policy rules to connect to API servers. +If your egress firewall includes a deny rule for `0.0.0.0/0`, the rule blocks access to your {product-title} API servers. You must either add allow rules for each IP address or use the `nodeSelector` type allow rule in your egress policy rules to connect to API servers. The following example illustrates the order of the egress firewall rules necessary to ensure API server access: @@ -108,7 +107,7 @@ endif::openshift-sdn[] ifdef::ovn[] * A maximum of one {kind} object with a maximum of 8,000 rules can be defined per project. 
-* If you are using the OVN-Kubernetes network plugin with shared gateway mode in Red Hat OpenShift Networking, return ingress replies are affected by egress firewall rules. If the egress firewall rules drop the ingress reply destination IP, the traffic is dropped. +* If you use the OVN-Kubernetes network plugin and you configured `false` for the `routingViaHost` parameter in the `Network` custom resource for your cluster, egress firewall rules impact the return ingress replies. If the egress firewall rules drop the ingress reply destination IP, the traffic is dropped. endif::ovn[] ifdef::openshift-sdn[] * A maximum of one {kind} object with a maximum of 1,000 rules can be defined per project. @@ -124,17 +123,17 @@ ifdef::openshift-sdn[] * If you create a selectorless service and manually define endpoints or `EndpointSlices` that point to external IPs, traffic to the service IP might still be allowed, even if your `EgressNetworkPolicy` is configured to deny all egress traffic. This occurs because OpenShift SDN does not fully enforce egress network policies for these external endpoints. Consequently, this might result in unexpected access to external services. endif::openshift-sdn[] -Violating any of these restrictions results in a broken egress firewall for the project. Consequently, all external network traffic is dropped, which can cause security risks for your organization. +Violating any of these restrictions results in a broken egress firewall for the project. As a result, all external network traffic is dropped, which can cause security risks for your organization. -An Egress Firewall resource can be created in the `kube-node-lease`, `kube-public`, `kube-system`, `openshift` and `openshift-` projects. +You can create an Egress Firewall resource in the `kube-node-lease`, `kube-public`, `kube-system`, `openshift`, and `openshift-` projects. [id="policy-rule-order_{context}"] == Matching order for egress firewall policy rules -The egress firewall policy rules are evaluated in the order that they are defined, from first to last. The first rule that matches an egress connection from a pod applies. Any subsequent rules are ignored for that connection. +The OVN-Kubernetes network plugin evaluates egress firewall policy rules in the order that you define them, from first to last. The first rule that matches an egress connection from a pod applies. The plugin ignores any subsequent rules for that connection. [id="domain-name-server-resolution_{context}"] -== How Domain Name Server (DNS) resolution works +== Domain Name Server (DNS) resolution If you use DNS names in any of your egress firewall policy rules, proper resolution of the domain names is subject to the following restrictions: @@ -145,7 +144,7 @@ ifdef::ovn[] * Domain name updates are polled based on a time-to-live (TTL) duration. By default, the duration is 30 minutes. When the egress firewall controller queries the local name servers for a domain name, if the response includes a TTL and the TTL is less than 30 minutes, the controller sets the duration for that DNS name to the returned value. Each DNS name is queried after the TTL for the DNS record expires. endif::ovn[] -* The pod must resolve the domain from the same local name servers when necessary. Otherwise the IP addresses for the domain known by the egress firewall controller and the pod can be different. If the IP addresses for a hostname differ, the egress firewall might not be enforced consistently.
+* The pod must resolve the domain from the same local name servers when necessary. Otherwise the IP addresses for the domain known by the egress firewall controller and the pod can be different. If the IP addresses for a hostname differ, the egress firewall is not enforced consistently. * Because the egress firewall controller and pods asynchronously poll the same local name server, the pod might obtain the updated IP address before the egress controller does, which causes a race condition. Due to this current limitation, domain name usage in {kind} objects is recommended only for domains with infrequent IP address changes. endif::ovn[] ==== Using DNS names in your egress firewall policy does not affect local DNS resolution through CoreDNS. -However, if your egress firewall policy uses domain names, and an external DNS server handles DNS resolution for an affected pod, you must include egress firewall rules that permit access to the IP addresses of your DNS server. +If your egress firewall policy uses domain names, and an external DNS server handles DNS resolution for an affected pod, you must include egress firewall rules that allow access to the IP addresses of your DNS server. ==== ifdef::ovn[] diff --git a/modules/nw-ovn-ipsec-north-south-enable.adoc b/modules/nw-ovn-ipsec-north-south-enable.adoc index 47c515a0fe1a..d8eea0759bb3 100644 --- a/modules/nw-ovn-ipsec-north-south-enable.adoc +++ b/modules/nw-ovn-ipsec-north-south-enable.adoc @@ -21,7 +21,7 @@ After you apply the machine config, the Machine Config Operator reboots affected * You logged in to the cluster as a user with `cluster-admin` privileges. * You have an existing PKCS#12 certificate for the IPsec endpoint and a CA cert in PEM format. * You enabled IPsec in either `Full` or `External` mode on your cluster. -* The OVN-Kubernetes network plugin must be configured in local gateway mode, where `ovnKubernetesConfig.gatewayConfig.routingViaHost=true`. +* You set the `routingViaHost` parameter to `true` in the `ovnKubernetesConfig.gatewayConfig` specification of the OVN-Kubernetes network plugin. .Procedure diff --git a/modules/nw-routeadvertisements-about.adoc b/modules/nw-routeadvertisements-about.adoc new file mode 100644 index 000000000000..032de021ad7b --- /dev/null +++ b/modules/nw-routeadvertisements-about.adoc @@ -0,0 +1,105 @@ +// Module included in the following assemblies: +// +// * networking/route_advertisements/about-route-advertisements.adoc + +//// +Terminology - +Cluster network routes: Both pod network routes and/or EgressIP routes +Pod network routes: Routes to pod IPs +EgressIP routes: Routes to EgressIPs +//// + +:_mod-docs-content-type: CONCEPT +[id="nw-routeadvertisements-about_{context}"] += Advertise cluster network routes with Border Gateway Protocol + +With route advertisements enabled, the OVN-Kubernetes network plugin supports advertising network routes, including EgressIPs, for the default pod network and cluster user-defined networks (CUDNs) to the provider network, and importing routes from the provider network into the default pod network and CUDNs. From the provider network, IP addresses advertised from the default pod network and CUDNs can be reached directly. + +For example, you can import routes to the default pod network so that you no longer need to manually configure routes on each node. Previously, you might have set the `routingViaHost` parameter to `true` in the `Network` custom resource (CR) for your cluster and manually configured routes on each node to approximate a similar configuration. With route advertisements enabled, you can set the `routingViaHost` parameter to `false` in the `Network` CR without having to manually configure routes on each node. + +Route reflectors on the provider network are supported and can reduce the number of BGP connections required to advertise routes on large networks. + +If you use EgressIPs with route advertisements enabled, the layer 3 provider network is aware of EgressIP failovers. This means that you can locate cluster nodes that host EgressIPs on different layer 2 segments. Previously, only the layer 2 provider network was aware of EgressIP failovers, which required all the egress nodes to be on the same layer 2 segment. + +[id="supported-platforms_{context}"] +== Supported platforms + +Advertising routes with Border Gateway Protocol (BGP) is supported on the bare-metal infrastructure type. + +[id="infrastructure-requirements_{context}"] +== Infrastructure requirements + +To use route advertisements, you must have configured BGP for your network infrastructure. Outages or misconfigurations of your network infrastructure might cause disruptions to your cluster network. + +[id="compatibility-with-other-networking-features_{context}"] +== Compatibility with other networking features + +Route advertisements have the following compatibility with {product-title} Networking features: + +Multiple external gateways (MEG):: +MEG is not supported with this feature. + +EgressIPs:: +-- +Supports the use and advertisement of EgressIPs. The node where an egress IP address resides advertises the EgressIP. An egress IP address must be on the same layer 2 network subnet as the egress node. The following limitations apply: + +- Advertising EgressIPs from a cluster user-defined network (CUDN) operating in layer 2 mode is not supported. +- Advertising EgressIPs for a network that has both egress IP addresses assigned to the primary network interface and egress IP addresses assigned to additional network interfaces is impractical. All EgressIPs are advertised on all of the BGP sessions of the selected `FRRConfiguration` instances, regardless of whether these sessions are established over the same interface that the EgressIP is assigned to or not, potentially leading to unwanted advertisements. + +-- + +Services:: +Works with the MetalLB Operator to advertise services to the provider network. + +Egress service:: +Full support. + +Egress firewall:: +Full support. + +Egress QoS:: +Full support. + +Network policies:: +Full support. + +Direct pod ingress:: +Full support for the default cluster network and cluster user-defined networks (CUDNs). + +[id="considerations-for-use-with-the-metallb-operator_{context}"] +== Considerations for use with the MetalLB Operator + +The MetalLB Operator is installed as an add-on to the cluster. Deployment of the MetalLB Operator automatically enables FRR-K8s as an additional routing capability provider. This feature and the MetalLB Operator use the same FRR-K8s deployment.
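+
+For example, the following minimal `Network` CR snippet sketches the two cluster-level settings, described later in this module, that route advertisements and the shared FRR-K8s deployment rely on. The snippet is illustrative only: the `Enabled` value shown for `routeAdvertisements` is an assumption, so verify the exact field values against the Cluster Network Operator API reference for your release.
+
+[source,yaml]
+----
+apiVersion: operator.openshift.io/v1
+kind: Network
+metadata:
+  name: cluster
+spec:
+  additionalRoutingCapabilities:
+    providers:
+    - FRR # enables the FRR-K8s deployment that route advertisements and the MetalLB Operator share
+  defaultNetwork:
+    ovnKubernetesConfig:
+      routeAdvertisements: Enabled # assumed value; check the API reference for your release
+----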
+ +[id="considerations-for-naming-cluster-user-defined-networks_{context}"] +== Considerations for naming cluster user-defined networks (CUDNs) + +When referencing a VRF device in a `FRRConfiguration` CR, the VRF name is the same as the CUDN name for VRF names that are less than or equal to 15 characters. It is recommended to use a VRF name no longer than 15 characters so that the VRF name can be inferred from the CUDN name. + +[id="bgp-routing-custom-resources_{context}"] +== BGP routing custom resources + +The following custom resources (CRs) are used to configure route advertisements with BGP: + +`RouteAdvertisements`:: +This CR defines the advertisements for the BGP routing. From this CR, the OVN-Kubernetes controller generates a `FRRConfiguration` object that configures the FRR daemon to advertise cluster network routes. This CR is cluster scoped. + +`FRRConfiguration`:: +This CR is used to define BGP peers and to configure route imports from the provider network into the cluster network. Before applying `RouteAdvertisements` objects, at least one FRRConfiguration object must be initially defined to configure the BGP peers. This CR is namespaced. + +[id="ovn-kubernetes-controller-generation-of-frrconfiguration-objects_{context}"] +== OVN-Kubernetes controller generation of `FRRConfiguration` objects + +An `FRRConfiguration` object is generated for each network and node selected by a `RouteAdvertisements` CR with the appropriate advertised prefixes that apply to each node. The OVN-Kubernetes controller checks whether the `RouteAdvertisements`-CR-selected nodes are a subset of the nodes that are selected by the `RouteAdvertisements`-CR-selected FRR configurations. + +Any filtering or selection of prefixes to receive are not considered in `FRRConfiguration` objects that are generated from the `RouteAdvertisement` CRs. Configure any prefixes to receive on other `FRRConfiguration` objects. OVN-Kubernetes imports routes from the VRF into the appropriate network. + +[id="cluster-network-operator_{context}"] +== Cluster Network Operator configuration + +The Cluster Network Operator (CNO) API exposes several fields to configure route advertisements: + +- `spec.additionalRoutingCapabilities.providers`: Specifies an additional routing provider, which is required to advertise routes. The only supported value is `FRR`, which enables deployment of the FRR-K8S daemon for the cluster. When enabled, the FRR-K8S daemon is deployed on all nodes. +- `spec.defaultNetwork.ovnKubernetesConfig.routeAdvertisements`: Enables route advertisements for the default cluster network and CUDN networks. The `spec.additionalRoutingCapabilities` field must be set to `FRR` to enable this feature. diff --git a/modules/nwt-gateway-mode.adoc b/modules/nwt-configure-egress-routing-policies.adoc similarity index 60% rename from modules/nwt-gateway-mode.adoc rename to modules/nwt-configure-egress-routing-policies.adoc index b2f7694075d7..e525df9b2717 100644 --- a/modules/nwt-gateway-mode.adoc +++ b/modules/nwt-configure-egress-routing-policies.adoc @@ -2,12 +2,12 @@ // :_mod-docs-content-type: PROCEDURE -[id="nwt-gateway-mode_{context}"] -= Setting local and shared gateway modes +[id="nwt-configure-egress-routing-policies_{context}"] += Configuring egress routing policies -As a cluster administrator you can configure the gateway mode using the `gatewayConfig` spec in the Cluster Network Operator. The following procedure can be used to set the `routingViaHost` field to `true` for local mode or `false` for shared mode. 
+As a cluster administrator, you can configure egress routing policies by using the `gatewayConfig` specification in the Cluster Network Operator (CNO). You can use the following procedure to set the `routingViaHost` field to `true` or `false`. -You can follow the optional step 4 to enable IP forwarding alongside local gateway mode if you need the host network of the node to act as a router for traffic not related to OVN-Kubernetes. For example, possible use cases for combining local gateway mode with IP forwarding include: +You can follow the optional step in the procedure to enable IP forwarding alongside the `routingViaHost=true` configuration if you need the host network of the node to act as a router for traffic not related to OVN-Kubernetes. For example, possible use cases for combining the `routingViaHost=true` configuration with IP forwarding include: * Configuring all pod egress traffic to be forwarded via the node's IP @@ -28,14 +28,14 @@ You can follow the optional step 4 to enable IP forwarding alongside local gatew $ oc get network.operator cluster -o yaml > network-config-backup.yaml ---- -. Set the `routingViaHost` paramemter to `true` for local gateway mode by running the following command: +. Set the `routingViaHost` parameter to `true` by entering the following command. Egress traffic is then routed through the host according to the routes that you configured on the node. + [source,terminal] ---- $ oc patch networks.operator.openshift.io cluster --type=merge -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"gatewayConfig":{"routingViaHost": true}}}}}' ---- -. Verify that local gateway mode has been set by running the following command: +. Verify that the `routingViaHost` parameter is set to `true` by running the following command: + [source,terminal] ---- @@ -58,7 +58,7 @@ gatewayConfig: ipsecConfig: # ... ---- -<1> A value of `true` sets local gateway mode and a value of `false` sets shared gateway mode. In local gateway mode, traffic is routed through the host. In shared gateway mode, traffic is not routed through the host. +<1> A value of `true` means that egress traffic is routed through the local gateway on the node that hosts the pod and uses the routing table of the host. A value of `false` means that the nodes share a gateway and egress traffic is not routed through the host; instead, Open vSwitch (OVS) outputs the traffic directly to the node IP interface. . Optional: Enable IP forwarding globally by running the following command: + [source,terminal] ---- $ oc patch network.operator cluster --type=merge -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"gatewayConfig":{"ipForwarding": "Global"}}}}}' ---- ++ ..
Verify that the `ipForwarding` spec has been set to `Global` by running the following command: + [source,terminal] diff --git a/modules/telco-core-cluster-network-operator.adoc b/modules/telco-core-cluster-network-operator.adoc index 036ecbc6df1b..26c544cdf07d 100644 --- a/modules/telco-core-cluster-network-operator.adoc +++ b/modules/telco-core-cluster-network-operator.adoc @@ -1,27 +1,44 @@ // Module included in the following assemblies: // -// * telco_ref_design_specs/core/telco-core-ref-design-components.adoc +// * scalability_and_performance/telco_ref_design_specs/core/telco-core-ref-design-components.adoc :_mod-docs-content-type: REFERENCE [id="telco-core-cluster-network-operator_{context}"] -= Cluster Network Operator (CNO) += Cluster Network Operator New in this release:: - * No reference design updates in this release Description:: - -The CNO deploys and manages the cluster network components including the default OVN-Kubernetes network plugin during {product-title} cluster installation. It allows configuring primary interface MTU settings, OVN gateway modes to use node routing tables for pod egress, and additional secondary networks such as MACVLAN. -+ -In support of network traffic separation, multiple network interfaces are configured through the CNO. Traffic steering to these interfaces is configured through static routes applied by using the NMState Operator. To ensure that pod traffic is properly routed, OVN-K is configured with the `routingViaHost` option enabled. This setting uses the kernel routing table and the applied static routes rather than OVN for pod egress traffic. + +-- +The Cluster Network Operator (CNO) deploys and manages the cluster network components including the default OVN-Kubernetes network plugin during cluster installation. The CNO allows for configuring primary interface MTU settings, OVN gateway configurations to use node routing tables for pod egress, and additional secondary networks such as MACVLAN. + +In support of network traffic separation, multiple network interfaces are configured through the CNO. +Traffic steering to these interfaces is configured through static routes applied by using the NMState Operator. To ensure that pod traffic is properly routed, OVN-K is configured with the `routingViaHost` option enabled. This setting uses the kernel routing table and the applied static routes rather than OVN for pod egress traffic. + The Whereabouts CNI plugin is used to provide dynamic IPv4 and IPv6 addressing for additional pod network interfaces without the use of a DHCP server. +-- Limits and requirements:: - * OVN-Kubernetes is required for IPv6 support. + * Large MTU cluster support requires connected network equipment to be set to the same or larger value. +* MACVLAN and IPVLAN cannot co-locate on the same main interface due to their reliance on the same underlying kernel mechanism, specifically the `rx_handler`. +This handler allows a third-party module to process incoming packets before the host processes them, and only one such handler can be registered per network interface. +Since both MACVLAN and IPVLAN need to register their own `rx_handler` to function, they conflict and cannot coexist on the same interface. +See link:https://elixir.bootlin.com/linux/v6.10.2/source/drivers/net/ipvlan/ipvlan_main.c#L82[ipvlan/ipvlan_main.c#L82] and link:https://elixir.bootlin.com/linux/v6.10.2/source/drivers/net/macvlan.c#L1260[net/macvlan.c#L1260] for details. 
+ +* Alternative NIC configurations include splitting the shared NIC into multiple NICs or using a single dual-port NIC. ++ +[IMPORTANT] +==== +Splitting the shared NIC into multiple NICs or using a single dual-port NIC has not been validated with the telco core reference design. +==== + +* Single-stack IP clusters are not validated. + + Engineering considerations:: * Pod egress traffic is handled by kernel routing table with the `routingViaHost` option. Appropriate static routes must be configured in the host. diff --git a/modules/telco-core-load-balancer.adoc b/modules/telco-core-load-balancer.adoc index c991136014bc..cc98bfd21614 100644 --- a/modules/telco-core-load-balancer.adoc +++ b/modules/telco-core-load-balancer.adoc @@ -1,20 +1,27 @@ // Module included in the following assemblies: // -// * telco_ref_design_specs/core/telco-core-ref-design-components.adoc +// * scalability_and_performance/telco_ref_design_specs/core/telco-core-ref-design-components.adoc :_mod-docs-content-type: REFERENCE [id="telco-core-load-balancer_{context}"] -= Load Balancer += Load balancer New in this release:: - -* No reference design updates in this release +//CNF-11914 +* In {product-title} 4.17, `frr-k8s` is now the default and fully supported Border Gateway Protocol (BGP) backend. +The deprecated `frr` BGP mode is still available. +You should upgrade clusters to use the `frr-k8s` backend. Description:: - -MetalLB is a load-balancer implementation for bare metal Kubernetes clusters using standard routing protocols. It enables a Kubernetes service to get an external IP address which is also added to the host network for the cluster. +MetalLB is a load-balancer implementation that uses standard routing protocols for bare-metal clusters. It enables a Kubernetes service to get an external IP address, which is also added to the host network for the cluster. + -Some use cases might require features not available in MetalLB, for example stateful load balancing. Where necessary, you can use an external third party load balancer. Selection and configuration of an external load balancer is outside the scope of this specification. When an external third party load balancer is used, the integration effort must include enough analysis to ensure all performance and resource utilization requirements are met. +[NOTE] +==== +Some use cases might require features not available in MetalLB, for example stateful load balancing. +Where necessary, use an external third-party load balancer. +Selection and configuration of an external load balancer is outside the scope of this document. +When you use an external third-party load balancer, ensure that it meets all performance and resource utilization requirements. +==== Limits and requirements:: @@ -23,7 +30,9 @@ Limits and requirements:: Engineering considerations:: * MetalLB is used in BGP mode only for core use case models. -* For core use models, MetalLB is supported with only the OVN-Kubernetes network provider used in local gateway mode. See `routingViaHost` in the "Cluster Network Operator" section. +* For core use models, MetalLB is supported only when you set `routingViaHost=true` in the `ovnKubernetesConfig.gatewayConfig` specification of the OVN-Kubernetes network plugin. * BGP configuration in MetalLB varies depending on the requirements of the network and peers. * Address pools can be configured as needed, allowing variation in addresses, aggregation length, auto assignment, and other relevant parameters.
-* The values of parameters in the Bi-Directional Forwarding Detection (BFD) profile should remain close to the defaults. Shorter values might lead to false negatives and impact performance. +* MetalLB uses BGP for announcing routes only. +Only the `transmitInterval` and `minimumTtl` parameters are relevant in this mode. +Other parameters in the BFD profile should remain close to the default settings. Shorter values might lead to errors and impact performance. diff --git a/networking/network_security/configuring-ipsec-ovn.adoc b/networking/network_security/configuring-ipsec-ovn.adoc index b4d5765f6d15..a92441957fbc 100644 --- a/networking/network_security/configuring-ipsec-ovn.adoc +++ b/networking/network_security/configuring-ipsec-ovn.adoc @@ -43,10 +43,10 @@ include::modules/nw-own-ipsec-modes.adoc[leveloffset=+1] [id="{context}-prerequisites"] == Prerequisites -For IPsec support for encrypting traffic to external hosts, ensure that the following prerequisites are met: +For IPsec support for encrypting traffic to external hosts, ensure that you meet the following prerequisites: -* The OVN-Kubernetes network plugin must be configured in local gateway mode, where `ovnKubernetesConfig.gatewayConfig.routingViaHost=true`. -* The NMState Operator is installed. This Operator is required for specifying the IPsec configuration. For more information, see xref:../../networking/networking_operators/k8s-nmstate-about-the-k8s-nmstate-operator.adoc#k8s-nmstate-about-the-k8s-nmstate-operator[Kubernetes NMState Operator]. +* Set `routingViaHost=true` in the `ovnKubernetesConfig.gatewayConfig` specification of the OVN-Kubernetes network plugin. +* Install the NMState Operator. This Operator is required for specifying the IPsec configuration. For more information, see xref:../../networking/networking_operators/k8s-nmstate-about-the-k8s-nmstate-operator.adoc#k8s-nmstate-about-the-k8s-nmstate-operator[Kubernetes NMState Operator]. + -- [NOTE] diff --git a/networking/ovn_kubernetes_network_provider/configuring-gateway-mode.adoc b/networking/ovn_kubernetes_network_provider/configuring-gateway-mode.adoc deleted file mode 100644 index 483fb215b43e..000000000000 --- a/networking/ovn_kubernetes_network_provider/configuring-gateway-mode.adoc +++ /dev/null @@ -1,13 +0,0 @@ -:_mod-docs-content-type: ASSEMBLY -[id="configuring-gateway-mode"] -= Configuring gateway mode -include::_attributes/common-attributes.adoc[] -:context: configuring-gateway-mode - -toc::[] - -As a cluster administrator you can configure the `gatewayConfig` object to manage how external traffic leaves the cluster. You do so by setting the `routingViaHost` spec to `true` for local mode or `false` for shared mode. - -In local gateway mode, traffic is routed through the host and is consequently applied to the routing table of the host. In shared gateway mode, traffic is not routed through the host. Instead, traffic the Open vSwitch (OVS) outputs traffic directly to the node IP interface. 
- -include::modules/nwt-gateway-mode.adoc[leveloffset=+1] \ No newline at end of file diff --git a/networking/ovn_kubernetes_network_provider/configuring-gateway.adoc b/networking/ovn_kubernetes_network_provider/configuring-gateway.adoc new file mode 100644 index 000000000000..1dd3f3a0727e --- /dev/null +++ b/networking/ovn_kubernetes_network_provider/configuring-gateway.adoc @@ -0,0 +1,14 @@ +:_mod-docs-content-type: ASSEMBLY +[id="configuring-gateway"] += Configuring a gateway +include::_attributes/common-attributes.adoc[] +:context: configuring-gateway-mode + +toc::[] + +As a cluster administrator, you can configure the `gatewayConfig` object to manage how external traffic leaves the cluster. You do so by setting the `routingViaHost` parameter to one of the following values: + +* `true` means that egress traffic is routed through a local gateway on the node that hosts the pod. The traffic is routed through the host and uses the routing table of the host. + +* `false` means that the nodes share a gateway and egress traffic is not routed through the host. Instead, Open vSwitch (OVS) outputs traffic directly to the node IP interface. + +include::modules/nwt-configure-egress-routing-policies.adoc[leveloffset=+1] \ No newline at end of file