4 changes: 2 additions & 2 deletions _topic_maps/_topic_map.yml
@@ -1620,8 +1620,8 @@ Topics:
File: converting-to-dual-stack
- Name: Configuring internal subnets
File: configure-ovn-kubernetes-subnets
- Name: Configuring gateway mode
File: configuring-gateway-mode
- Name: Configuring a gateway
File: configuring-gateway
- Name: Configure an external gateway on the default network
File: configuring-secondary-external-gateway
- Name: Configuring an egress IP address
27 changes: 13 additions & 14 deletions modules/nw-egressnetworkpolicy-about.adoc
@@ -8,16 +8,15 @@ ifeval::["{context}" == "configuring-egress-firewall-ovn"]
:api: k8s.ovn.org/v1
endif::[]

:_mod-docs-content-type: CONCEPT
[id="nw-egressnetworkpolicy-about_{context}"]
= How an egress firewall works in a project

As a cluster administrator, you can use an _egress firewall_ to
limit the external hosts that some or all pods can access from within the
cluster. An egress firewall supports the following scenarios:
As a cluster administrator, you can use an _egress firewall_ to limit the external hosts that some or all pods can access from within the cluster. An egress firewall supports the following scenarios:

- A pod can only connect to internal hosts and cannot initiate connections to
- A pod can only connect to internal hosts and cannot start connections to
the public internet.
- A pod can only connect to the public internet and cannot initiate connections
- A pod can only connect to the public internet and cannot start connections
to internal hosts that are outside the {product-title} cluster.
- A pod cannot reach specified internal subnets or hosts outside the {product-title} cluster.
- A pod can connect to only specific external hosts.
@@ -26,7 +25,7 @@ For example, you can allow one project access to a specified IP range but deny the same access to a different project.

[NOTE]
====
Egress firewall does not apply to the host network namespace. Pods with host networking enabled are unaffected by egress firewall rules.
Egress firewall does not apply to the host network namespace. Egress firewall rules do not impact any pods that have host networking enabled.
====
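For illustration, the following is a minimal sketch of an {kind} CR that matches these scenarios, assuming the OVN-Kubernetes `EgressFirewall` kind and hypothetical CIDR values. It allows traffic to one external subnet and denies all other external traffic:

[source,yaml]
----
apiVersion: k8s.ovn.org/v1
kind: EgressFirewall
metadata:
  name: default # the resource must be named default
spec:
  egress:
  - type: Allow
    to:
      cidrSelector: 192.0.2.0/24 # hypothetical allowed external subnet
  - type: Deny
    to:
      cidrSelector: 0.0.0.0/0 # denies all other external traffic
----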

You configure an egress firewall policy by creating an {kind} custom resource (CR) object. The egress firewall matches network traffic that meets any of the following criteria:
@@ -40,7 +39,7 @@ endif::ovn[]

[IMPORTANT]
====
If your egress firewall includes a deny rule for `0.0.0.0/0`, access to your {product-title} API servers is blocked. You must either add allow rules for each IP address or use the `nodeSelector` type allow rule in your egress policy rules to connect to API servers.
If your egress firewall includes a deny rule for `0.0.0.0/0`, the rule blocks access to your {product-title} API servers. You must either add allow rules for each IP address or use the `nodeSelector` type allow rule in your egress policy rules to connect to API servers.

The following example illustrates the order of the egress firewall rules necessary to ensure API server access:

@@ -85,36 +84,36 @@ An egress firewall has the following limitations:
ifdef::ovn[]
* A maximum of one {kind} object with a maximum of 8,000 rules can be defined per project.

* If you are using the OVN-Kubernetes network plugin with shared gateway mode in Red Hat OpenShift Networking, return ingress replies are affected by egress firewall rules. If the egress firewall rules drop the ingress reply destination IP, the traffic is dropped.
* If you use the OVN-Kubernetes network plugin and you set the `routingViaHost` parameter to `false` in the `Network` custom resource for your cluster, egress firewall rules affect return traffic for ingress replies. If the egress firewall rules drop the ingress reply destination IP, the traffic is dropped.
endif::ovn[]

Violating any of these restrictions results in a broken egress firewall for the project. Consequently, all external network traffic is dropped, which can cause security risks for your organization.
Violating any of these restrictions results in a broken egress firewall for the project. As a result, all external network traffic drops, which can cause security risks for your organization.

An Egress Firewall resource can be created in the `kube-node-lease`, `kube-public`, `kube-system`, `openshift` and `openshift-` projects.
You can create an Egress Firewall resource in the `kube-node-lease`, `kube-public`, `kube-system`, `openshift`, and `openshift-` projects.

[id="policy-rule-order_{context}"]
== Matching order for egress firewall policy rules

The egress firewall policy rules are evaluated in the order that they are defined, from first to last. The first rule that matches an egress connection from a pod applies. Any subsequent rules are ignored for that connection.
The OVN-Kubernetes network plugin evaluates egress firewall policy rules in the order that you define them, from first to last. The first rule that matches an egress connection from a pod applies. The plugin ignores any subsequent rules for that connection.
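For example, in the following hedged sketch of an `egress` stanza, with placeholder CIDR values, traffic to `198.51.100.0/24` matches the first rule and is allowed, and the plugin never evaluates the `Deny` rule for that connection:

[source,yaml]
----
egress:
- type: Allow
  to:
    cidrSelector: 198.51.100.0/24 # first match wins for this subnet
- type: Deny
  to:
    cidrSelector: 0.0.0.0/0 # applies only to connections that no earlier rule matched
----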

[id="domain-name-server-resolution_{context}"]
== How Domain Name Server (DNS) resolution works
== Domain Name System (DNS) resolution

If you use DNS names in any of your egress firewall policy rules, proper resolution of the domain names is subject to the following restrictions:

ifdef::ovn[]
* Domain name updates are polled based on a time-to-live (TTL) duration. By default, the duration is 30 minutes. When the egress firewall controller queries the local name servers for a domain name, if the response includes a TTL and the TTL is less than 30 minutes, the controller sets the duration for that DNS name to the returned value. Each DNS name is queried after the TTL for the DNS record expires.
endif::ovn[]

* The pod must resolve the domain from the same local name servers when necessary. Otherwise the IP addresses for the domain known by the egress firewall controller and the pod can be different. If the IP addresses for a hostname differ, the egress firewall might not be enforced consistently.
* The pod must resolve the domain from the same local name servers when necessary. Otherwise, the IP addresses for the domain known by the egress firewall controller and the pod can be different. If the IP addresses for a hostname differ, the egress firewall is not enforced consistently.

* Because the egress firewall controller and pods asynchronously poll the same local name server, the pod might obtain the updated IP address before the egress controller does, which causes a race condition. Due to this current limitation, domain name usage in {kind} objects is only recommended for domains with infrequent IP address changes.
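The following sketch shows the shape of a DNS-based rule, using a placeholder domain; the IP addresses that the rule matches are refreshed according to the TTL behavior described in the first restriction:

[source,yaml]
----
egress:
- type: Allow
  to:
    dnsName: www.example.com # placeholder domain with infrequent IP address changes
----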

[NOTE]
====
Using DNS names in your egress firewall policy does not affect local DNS resolution through CoreDNS.

However, if your egress firewall policy uses domain names, and an external DNS server handles DNS resolution for an affected pod, you must include egress firewall rules that permit access to the IP addresses of your DNS server.
If your egress firewall policy uses domain names, and an external DNS server handles DNS resolution for an affected pod, you must include egress firewall rules that allow access to the IP addresses of your DNS server.
====

ifdef::ovn[]
2 changes: 1 addition & 1 deletion modules/nw-ovn-ipsec-north-south-enable.adoc
@@ -21,7 +21,7 @@ After you apply the machine config, the Machine Config Operator reboots affected nodes
* You logged in to the cluster as a user with `cluster-admin` privileges.
* You have an existing PKCS#12 certificate for the IPsec endpoint and a CA cert in PEM format.
* You enabled IPsec in either `Full` or `External` mode on your cluster.
* The OVN-Kubernetes network plugin must be configured in local gateway mode, where `ovnKubernetesConfig.gatewayConfig.routingViaHost=true`.
* You must set the `routingViaHost` parameter to `true` in the `ovnKubernetesConfig.gatewayConfig` specification of the OVN-Kubernetes network plugin.
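For example, you can set the parameter with the following patch command, which also appears in the gateway configuration procedure:

[source,terminal]
----
$ oc patch networks.operator.openshift.io cluster --type=merge -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"gatewayConfig":{"routingViaHost": true}}}}}'
----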

.Procedure

11 changes: 5 additions & 6 deletions modules/nw-routeadvertisements-about.adoc
@@ -15,19 +15,18 @@ EgressIP routes: Routes to EgressIPs

With route advertisements enabled, the OVN-Kubernetes network plugin supports advertising network routes for the default pod network and cluster user-defined (CUDN) networks to the provider network, including EgressIPs, and importing routes from the provider network to the default pod network and CUDNs. From the provider network, IP addresses advertised from the default pod network and CUDNs can be reached directly.

For example, you can import routes to the default pod network so you no longer need to manually configure routes on each node. Previously, you might have been using local gateway mode (`RoutingViaHost=true`) and manually configuring routes on each node to approximate a similar configuration. With route advertisements you can accomplish this seamlessly and you can use shared gateway mode (`RoutingViaHost=false`) as well.
For example, you can import routes to the default pod network so you no longer need to manually configure routes on each node. Previously, you might have been setting the `routingViaHost` parameter to `true` and manually configuring routes on each node to approximate a similar configuration. With route advertisements, you can accomplish this task seamlessly with the `routingViaHost` parameter set to `false`.

You could also set the `routingViaHost` parameter to `true` in the `Network` custom resource (CR) for your cluster, but you must then manually configure routes on each node to simulate a similar configuration. When you enable route advertisements, you can set `routingViaHost=false` in the `Network` CR without having to manually configure routes on each node.
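A hedged sketch of enabling route advertisements in the `Network` CR follows, assuming the `additionalRoutingCapabilities` and `routeAdvertisements` fields that recent {product-title} releases expose:

[source,terminal]
----
$ oc patch network.operator cluster --type=merge -p '{"spec":{"additionalRoutingCapabilities":{"providers":["FRR"]},"defaultNetwork":{"ovnKubernetesConfig":{"routeAdvertisements":"Enabled"}}}}'
----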

Route reflectors on the provider network are supported and can reduce the number of BGP connections required to advertise routes on large networks.

If you use EgressIPs with route advertisements enabled, the layer 3 provider network is aware of EgressIP failovers. This allows you to locate cluster nodes that host EgressIPs on different layer 2 segments whereas before only the layer 2 provider network was aware so that required all the egress nodes to be on the same layer 2 segment.
If you use EgressIPs with route advertisements enabled, the layer 3 provider network is aware of EgressIP failovers. This means that you can locate cluster nodes that host EgressIPs on different layer 2 segments. Previously, only the layer 2 provider network was aware of failovers, which required all the egress nodes to be on the same layer 2 segment.

[id="supported-platforms_{context}"]
== Supported platforms

Advertising routes with border gateway protocol (BGP) is supported on the following infrastructure types:

- Bare-metal
//- {vmw-full} on-premise
Advertising routes with border gateway protocol (BGP) is supported on the bare-metal infrastructure type.

[id="infrastructure-requirements_{context}"]
== Infrastructure requirements
4 changes: 2 additions & 2 deletions modules/nw-routeadvertisements-example.adoc
@@ -144,7 +144,7 @@ spec:
nodeSelector: {}
----

When the OVN-Kubernetes controller sees this `RouteAdvertisements` CR, it generates generates further `FRRConfiguration` objects based on the selected ones that configure the FRR daemon to advertise the routes. The following example is of one such configuration object, with the number of `FRRConfiguration` objects created depending on the node and networks selected.
When the OVN-Kubernetes controller sees this `RouteAdvertisements` CR, it generates further `FRRConfiguration` objects, based on the selected ones, that configure the FRR daemon to advertise the routes. The following example shows one such configuration object; the number of `FRRConfiguration` objects created depends on the selected nodes and networks.

.An example of a `FRRConfiguration` CR generated by OVN-Kubernetes
[source,yaml]
@@ -211,7 +211,7 @@ Blue CUDN::

[NOTE]
====
This approach is available only when you use OVN-Kubernetes in local gateway mode by setting `routingViaHost=true`.
This approach is available only when you set `routingViaHost=true` in the `ovnKubernetesConfig.gatewayConfig` specification of the OVN-Kubernetes network plugin.
====

In the following configuration, an additional `FRRConfiguration` CR configures peering with the PE router on the blue and red VLANs:
@@ -2,12 +2,12 @@
//

:_mod-docs-content-type: PROCEDURE
[id="nwt-gateway-mode_{context}"]
= Setting local and shared gateway modes
[id="nwt-configure-egress-routing-policies_{context}"]
= Configuring egress routing policies

As a cluster administrator you can configure the gateway mode using the `gatewayConfig` spec in the Cluster Network Operator. The following procedure can be used to set the `routingViaHost` field to `true` for local mode or `false` for shared mode.
As a cluster administrator, you can configure egress routing policies by using the `gatewayConfig` specification in the Cluster Network Operator (CNO). You can use the following procedure to set the `routingViaHost` field to `true` or `false`.

You can follow the optional step 4 to enable IP forwarding alongside local gateway mode if you need the host network of the node to act as a router for traffic not related to OVN-Kubernetes. For example, possible use cases for combining local gateway mode with IP forwarding include:
You can follow the optional step in the procedure to enable IP forwarding alongside the `routingViaHost=true` configuration if you need the host network of the node to act as a router for traffic not related to OVN-Kubernetes. For example, possible use cases for combining the `routingViaHost=true` setting with IP forwarding include:

* Configuring all pod egress traffic to be forwarded via the node's IP

@@ -28,14 +28,14 @@ You can follow the optional step 4 to enable IP forwarding alongside local gateway mode
$ oc get network.operator cluster -o yaml > network-config-backup.yaml
----

. Set the `routingViaHost` parameter to `true` for local gateway mode by running the following command:
. Set the `routingViaHost` parameter to `true` by entering the following command. Egress traffic is then routed through the host according to the routes that you configured on the node.
+
[source,terminal]
----
$ oc patch networks.operator.openshift.io cluster --type=merge -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"gatewayConfig":{"routingViaHost": true}}}}}'
----

. Verify that local gateway mode has been set by running the following command:
. Verify that the `routingViaHost` parameter is set to `true` by running the following command:
+
[source,terminal]
----
@@ -58,14 +58,15 @@ gatewayConfig:
ipsecConfig:
# ...
----
<1> A value of `true` sets local gateway mode and a value of `false` sets shared gateway mode. In local gateway mode, traffic is routed through the host. In shared gateway mode, traffic is not routed through the host.
<1> A value of `true` means that egress traffic is routed through the host networking stack of the node that hosts the pod, by using the routes in the kernel routing table. A value of `false` means that the OVN-Kubernetes networking stack routes egress traffic directly, without passing through the host routing table.

. Optional: Enable IP forwarding globally by running the following command:
+
[source,terminal]
----
$ oc patch network.operator cluster --type=merge -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"gatewayConfig":{"ipForwarding": "Global"}}}}}'
----
+
.. Verify that the `ipForwarding` spec has been set to `Global` by running the following command:
+
[source,terminal]
7 changes: 2 additions & 5 deletions modules/telco-core-cluster-network-operator.adoc
@@ -12,13 +12,10 @@ New in this release::
Description::
+
--
The Cluster Network Operator (CNO) deploys and manages the cluster network components including the default OVN-Kubernetes network plugin during cluster installation.
The CNO allows for configuring primary interface MTU settings, OVN gateway modes to use node routing tables for pod egress, and additional secondary networks such as MACVLAN.
The Cluster Network Operator (CNO) deploys and manages the cluster network components, including the default OVN-Kubernetes network plugin, during cluster installation. The CNO allows for configuring primary interface MTU settings, OVN gateway configurations that use node routing tables for pod egress, and additional secondary networks such as MACVLAN.

In support of network traffic separation, multiple network interfaces are configured through the CNO.
Traffic steering to these interfaces is configured through static routes applied by using the NMState Operator.
To ensure that pod traffic is properly routed, OVN-K is configured with the `routingViaHost` option enabled.
This setting uses the kernel routing table and the applied static routes rather than OVN for pod egress traffic.
Traffic steering to these interfaces is configured through static routes applied by using the NMState Operator. To ensure that pod traffic is properly routed, OVN-Kubernetes is configured with the `routingViaHost` option enabled. This setting uses the kernel routing table and the applied static routes rather than OVN for pod egress traffic.
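As an illustration, the following is a minimal `NodeNetworkConfigurationPolicy` sketch that applies one static route through a secondary interface; the destination, next hop, and interface name are hypothetical values:

[source,yaml]
----
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: static-route-example # hypothetical policy name
spec:
  desiredState:
    routes:
      config:
      - destination: 198.51.100.0/24 # example external subnet
        next-hop-address: 192.0.2.1 # example gateway on the secondary network
        next-hop-interface: eth1 # assumed secondary interface name
----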

The Whereabouts CNI plugin is used to provide dynamic IPv4 and IPv6 addressing for additional pod network interfaces without the use of a DHCP server.
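For example, a sketch of a MACVLAN `NetworkAttachmentDefinition` that delegates IP assignment to Whereabouts; the master interface and address range are placeholders:

[source,yaml]
----
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-whereabouts-example # hypothetical name
spec:
  config: |-
    {
      "cniVersion": "0.3.1",
      "name": "macvlan-whereabouts-example",
      "type": "macvlan",
      "master": "eth1",
      "ipam": {
        "type": "whereabouts",
        "range": "192.0.2.0/24"
      }
    }
----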
--
2 changes: 1 addition & 1 deletion modules/telco-core-load-balancer.adoc
@@ -30,7 +30,7 @@ An alternate load balancer implementation must be used if this is a requirement.

Engineering considerations::
* MetalLB is used in BGP mode only for telco core use models.
* For telco core use models, MetalLB is supported only with the OVN-Kubernetes network provider used in local gateway mode.
* For telco core use models, MetalLB is supported only when you set `routingViaHost=true` in the `ovnKubernetesConfig.gatewayConfig` specification of the OVN-Kubernetes network plugin.
See `routingViaHost` in "Cluster Network Operator".
* BGP configuration in MetalLB is expected to vary depending on the requirements of the network and peers.
** You can configure address pools with variations in addresses, aggregation length, auto assignment, and so on.
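For example, a hedged sketch of an address pool with auto assignment disabled, paired with a BGP advertisement that sets an explicit aggregation length; the names and address range are placeholders:

[source,yaml]
----
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: example-pool # hypothetical pool name
  namespace: metallb-system
spec:
  addresses:
  - 203.0.113.0/24 # example address range
  autoAssign: false
---
apiVersion: metallb.io/v1beta1
kind: BGPAdvertisement
metadata:
  name: example-adv # hypothetical advertisement name
  namespace: metallb-system
spec:
  ipAddressPools:
  - example-pool
  aggregationLength: 32
----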
6 changes: 3 additions & 3 deletions networking/network_security/configuring-ipsec-ovn.adoc
@@ -42,10 +42,10 @@ include::modules/nw-own-ipsec-modes.adoc[leveloffset=+1]
[id="{context}-prerequisites"]
== Prerequisites

For IPsec support for encrypting traffic to external hosts, ensure that the following prerequisites are met:
For IPsec support for encrypting traffic to external hosts, ensure that you meet the following prerequisites:

* The OVN-Kubernetes network plugin must be configured in local gateway mode, where `ovnKubernetesConfig.gatewayConfig.routingViaHost=true`.
* The NMState Operator is installed. This Operator is required for specifying the IPsec configuration. For more information, see xref:../../networking/networking_operators/k8s-nmstate-about-the-k8s-nmstate-operator.adoc#k8s-nmstate-about-the-k8s-nmstate-operator[Kubernetes NMState Operator].
* Set `routingViaHost=true` in the `ovnKubernetesConfig.gatewayConfig` specification of the OVN-Kubernetes network plugin.
* Install the NMState Operator. This Operator is required for specifying the IPsec configuration. For more information, see xref:../../networking/networking_operators/k8s-nmstate-about-the-k8s-nmstate-operator.adoc#k8s-nmstate-about-the-k8s-nmstate-operator[Kubernetes NMState Operator].
+
--
[NOTE]

This file was deleted.
