4 changes: 2 additions & 2 deletions _topic_maps/_topic_map.yml
@@ -1513,8 +1513,8 @@ Topics:
File: converting-to-dual-stack
- Name: Configuring internal subnets
File: configure-ovn-kubernetes-subnets
- Name: Configuring a gateway
File: configuring-gateway
- Name: Configure an external gateway on the default network
File: configuring-secondary-external-gateway
- Name: Configuring an egress IP address
27 changes: 13 additions & 14 deletions modules/nw-egressnetworkpolicy-about.adoc
@@ -14,16 +14,15 @@ ifeval::["{context}" == "openshift-sdn-egress-firewall"]
:api: network.openshift.io/v1
endif::[]

:_mod-docs-content-type: CONCEPT
[id="nw-egressnetworkpolicy-about_{context}"]
= How an egress firewall works in a project

As a cluster administrator, you can use an _egress firewall_ to limit the external hosts that some or all pods can access from within the cluster. An egress firewall supports the following scenarios:

- A pod can only connect to internal hosts and cannot start connections to
the public internet.
- A pod can only connect to the public internet and cannot start connections
to internal hosts that are outside the {product-title} cluster.
- A pod cannot reach specified internal subnets or hosts outside the {product-title} cluster.
- A pod can connect to only specific external hosts.
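
For illustration, the following is a minimal sketch of such a policy, assuming the OVN-Kubernetes variant where `{kind}` is `EgressFirewall` and using a hypothetical project name. It allows traffic to one external host and denies all other egress traffic:

[source,yaml]
----
apiVersion: k8s.ovn.org/v1
kind: EgressFirewall
metadata:
  name: default        # OVN-Kubernetes requires the name "default"
  namespace: project1  # hypothetical project
spec:
  egress:
  - type: Allow        # rule 1: permit egress to one external host
    to:
      dnsName: www.example.com
  - type: Deny         # rule 2: deny all other egress traffic
    to:
      cidrSelector: 0.0.0.0/0
----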
@@ -32,7 +31,7 @@ For example, you can allow one project access to a specified IP range but deny t

[NOTE]
====
Egress firewall does not apply to the host network namespace. Egress firewall rules do not impact any pods that have host networking enabled.
====

You configure an egress firewall policy by creating an {kind} custom resource (CR) object. The egress firewall matches network traffic that meets any of the following criteria:
@@ -46,7 +45,7 @@ ifdef::ovn[]

[IMPORTANT]
====
If your egress firewall includes a deny rule for `0.0.0.0/0`, the rule blocks access to your {product-title} API servers. You must either add allow rules for each IP address or use the `nodeSelector` type allow rule in your egress policy rules to connect to API servers.

The following example illustrates the order of the egress firewall rules necessary to ensure API server access:
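
A minimal sketch of that ordering, assuming the OVN-Kubernetes `EgressFirewall` kind and the control-plane node label as a stand-in for the nodes that host your API servers:

[source,yaml]
----
apiVersion: k8s.ovn.org/v1
kind: EgressFirewall
metadata:
  name: default
  namespace: project1  # hypothetical project
spec:
  egress:
  - type: Allow        # the allow rule for the API servers must come first
    to:
      nodeSelector:
        matchLabels:
          node-role.kubernetes.io/control-plane: ""
  - type: Deny         # the broad deny rule follows the allow rule
    to:
      cidrSelector: 0.0.0.0/0
----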

@@ -108,7 +107,7 @@ endif::openshift-sdn[]
ifdef::ovn[]
* A maximum of one {kind} object with a maximum of 8,000 rules can be defined per project.

* If you use the OVN-Kubernetes network plugin and you configured `false` for the `routingViaHost` parameter in the `Network` custom resource for your cluster, egress firewall rules impact the return ingress replies. If the egress firewall rules drop the ingress reply destination IP, the traffic is dropped.
endif::ovn[]
ifdef::openshift-sdn[]
* A maximum of one {kind} object with a maximum of 1,000 rules can be defined per project.
@@ -124,17 +123,17 @@ ifdef::openshift-sdn[]
* If you create a selectorless service and manually define endpoints or `EndpointSlices` that point to external IPs, traffic to the service IP might still be allowed, even if your `EgressNetworkPolicy` is configured to deny all egress traffic. This occurs because OpenShift SDN does not fully enforce egress network policies for these external endpoints. Consequently, this might result in unexpected access to external services.
endif::openshift-sdn[]

Violating any of these restrictions results in a broken egress firewall for the project. As a result, all external network traffic drops, which can cause security risks for your organization.

You can create an Egress Firewall resource in the `kube-node-lease`, `kube-public`, `kube-system`, `openshift`, and `openshift-` projects.

[id="policy-rule-order_{context}"]
== Matching order for egress firewall policy rules

The OVN-Kubernetes network plugin evaluates egress firewall policy rules based on the first-to-last order of how you defined the rules. The first rule that matches an egress connection from a pod applies. The plugin ignores any subsequent rules for that connection.

[id="domain-name-server-resolution_{context}"]
== Domain Name Server (DNS) resolution

If you use DNS names in any of your egress firewall policy rules, proper resolution of the domain names is subject to the following restrictions:

@@ -145,15 +144,15 @@ ifdef::ovn[]
* Domain name updates are polled based on a time-to-live (TTL) duration. By default, the duration is 30 minutes. When the egress firewall controller queries the local name servers for a domain name, if the response includes a TTL and the TTL is less than 30 minutes, the controller sets the duration for that DNS name to the returned value. Each DNS name is queried after the TTL for the DNS record expires.
endif::ovn[]

* The pod must resolve the domain from the same local name servers when necessary. Otherwise the IP addresses for the domain known by the egress firewall controller and the pod can be different. If the IP addresses for a hostname differ, enforcement of the egress firewall might not be consistent.

* Because the egress firewall controller and pods asynchronously poll the same local name server, the pod might obtain the updated IP address before the egress controller does, which causes a race condition. Due to this current limitation, domain name usage in {kind} objects is only recommended for domains with infrequent IP address changes.

[NOTE]
====
Using DNS names in your egress firewall policy does not affect local DNS resolution through CoreDNS.

If your egress firewall policy uses domain names, and an external DNS server handles DNS resolution for an affected pod, you must include egress firewall rules that allow access to the IP addresses of your DNS server.
====

ifdef::ovn[]
2 changes: 1 addition & 1 deletion modules/nw-ovn-ipsec-north-south-enable.adoc
@@ -21,7 +21,7 @@ After you apply the machine config, the Machine Config Operator reboots affected
* You logged in to the cluster as a user with `cluster-admin` privileges.
* You have an existing PKCS#12 certificate for the IPsec endpoint and a CA certificate in PEM format.
* You enabled IPsec in either `Full` or `External` mode on your cluster.
* You must set the `routingViaHost` parameter to `true` in the `ovnKubernetesConfig.gatewayConfig` specification of the OVN-Kubernetes network plugin.
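+
As a sketch (not a step from this module), a patch like the following sets the parameter; it follows the same pattern as the gateway configuration procedure later in this PR:
+
[source,terminal]
----
$ oc patch networks.operator.openshift.io cluster --type=merge -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"gatewayConfig":{"routingViaHost": true}}}}}'
----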

.Procedure

105 changes: 105 additions & 0 deletions modules/nw-routeadvertisements-about.adoc
@@ -0,0 +1,105 @@
// Module included in the following assemblies:
//
// * networking/route_advertisements/about-route-advertisements.adoc

////
Terminology -
Cluster network routes: Both pod network routes and/or EgressIP routes
Pod network routes: Routes to pod IPs
EgressIP routes: Routes to EgressIPs
////

:_mod-docs-content-type: CONCEPT
[id="nw-routeadvertisements-about_{context}"]
= Advertising cluster network routes with Border Gateway Protocol

With route advertisements enabled, the OVN-Kubernetes network plugin supports advertising network routes for the default pod network and cluster user-defined networks (CUDNs), including EgressIPs, to the provider network, and importing routes from the provider network into the default pod network and CUDNs. IP addresses that are advertised from the default pod network and CUDNs can be reached directly from the provider network.

For example, you can import routes to the default pod network so that you no longer need to manually configure routes on each node. Previously, you could set the `routingViaHost` parameter to `true` in the `Network` custom resource (CR) for your cluster and then manually configure routes on each node to approximate a similar configuration. With route advertisements enabled, you can accomplish this task with the `routingViaHost` parameter set to `false`, without manually configuring routes on each node.

Route reflectors on the provider network are supported and can reduce the number of BGP connections required to advertise routes on large networks.

If you use EgressIPs with route advertisements enabled, the layer 3 provider network is aware of EgressIP failovers. As a result, you can place cluster nodes that host EgressIPs on different layer 2 segments. Without route advertisements, only the layer 2 provider network is aware of failovers, which requires all egress nodes to be on the same layer 2 segment.

[id="supported-platforms_{context}"]
== Supported platforms

Advertising routes with Border Gateway Protocol (BGP) is supported on the bare-metal infrastructure type.

[id="infrastructure-requirements_{context}"]
== Infrastructure requirements

To use route advertisements, you must have configured BGP for your network infrastructure. Outages or misconfigurations of your network infrastructure might cause disruptions to your cluster network.

[id="compatibility-with-other-networking-features_{context}"]
== Compatibility with other networking features

Route advertisements have the following compatibility with {product-title} Networking features:

Multiple external gateways (MEG)::
MEG is not supported with this feature.

EgressIPs::
--
Supports the use and advertisement of EgressIPs. The node where an egress IP address resides advertises the EgressIP. An egress IP address must be on the same layer 2 network subnet as the egress node. The following limitations apply:

- Advertising EgressIPs from a cluster user-defined network (CUDN) operating in layer 2 mode is not supported.
- Advertising EgressIPs for a network that has egress IP addresses assigned to both the primary network interface and additional network interfaces is impractical. All EgressIPs are advertised on all BGP sessions of the selected `FRRConfiguration` instances, regardless of whether a session is established over the interface that the EgressIP is assigned to, which can lead to unwanted advertisements.

--

Services::
Works with the MetalLB Operator to advertise services to the provider network.

Egress service::
Full support.

Egress firewall::
Full support.

Egress QoS::
Full support.

Network policies::
Full support.

Direct pod ingress::
Full support for the default cluster network and cluster user-defined (CUDN) networks.

[id="considerations-for-use-with-the-metallb-operator_{context}"]
== Considerations for use with the MetalLB Operator

The MetalLB Operator is installed as an add-on to the cluster. Deployment of the MetalLB Operator automatically enables FRR-K8s as an additional routing capability provider. This feature and the MetalLB Operator use the same FRR-K8s deployment.

[id="considerations-for-naming-cluster-user-defined-networks_{context}"]
== Considerations for naming cluster user-defined networks (CUDNs)

When referencing a VRF device in a `FRRConfiguration` CR, the VRF name is the same as the CUDN name when the name is 15 characters or fewer. Use a CUDN name no longer than 15 characters so that the VRF name can be inferred from the CUDN name.

[id="bgp-routing-custom-resources_{context}"]
== BGP routing custom resources

The following custom resources (CRs) are used to configure route advertisements with BGP:

`RouteAdvertisements`::
This CR defines the advertisements for BGP routing. From this CR, the OVN-Kubernetes controller generates an `FRRConfiguration` object that configures the FRR daemon to advertise cluster network routes. This CR is cluster scoped.

`FRRConfiguration`::
This CR is used to define BGP peers and to configure route imports from the provider network into the cluster network. Before you apply `RouteAdvertisements` objects, you must define at least one `FRRConfiguration` object to configure the BGP peers. This CR is namespaced.
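
The following sketches show minimal forms of both objects. The API groups, the namespace, and all ASN and address values are assumptions for illustration, not values from this module; check the API reference for your release:

[source,yaml]
----
apiVersion: frrk8s.metallb.io/v1beta1
kind: FRRConfiguration
metadata:
  name: receive-all
  namespace: openshift-frr-k8s   # assumed namespace for the FRR-K8s deployment
spec:
  bgp:
    routers:
    - asn: 64512                 # placeholder local ASN
      neighbors:
      - address: 172.18.0.5      # placeholder BGP peer address
        asn: 64512               # placeholder peer ASN
        toReceive:
          allowed:
            mode: all            # import all routes that this peer advertises
----

[source,yaml]
----
apiVersion: k8s.ovn.org/v1
kind: RouteAdvertisements
metadata:
  name: default
spec:
  advertisements:
  - PodNetwork                   # advertise pod network routes; EgressIP is another option
----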

[id="ovn-kubernetes-controller-generation-of-frrconfiguration-objects_{context}"]
== OVN-Kubernetes controller generation of `FRRConfiguration` objects

The OVN-Kubernetes controller generates an `FRRConfiguration` object for each network and node that a `RouteAdvertisements` CR selects, with the advertised prefixes that apply to each node. The controller also checks whether the nodes that the `RouteAdvertisements` CR selects are a subset of the nodes that the FRR configurations selected by the CR apply to.

`FRRConfiguration` objects that are generated from `RouteAdvertisements` CRs do not consider any filtering or selection of prefixes to receive. Configure any prefixes to receive in other `FRRConfiguration` objects. OVN-Kubernetes imports routes from the VRF into the appropriate network.

[id="cluster-network-operator_{context}"]
== Cluster Network Operator configuration

The Cluster Network Operator (CNO) API exposes several fields to configure route advertisements:

- `spec.additionalRoutingCapabilities.providers`: Specifies an additional routing provider, which is required to advertise routes. The only supported value is `FRR`, which enables deployment of the FRR-K8s daemon for the cluster. When enabled, the FRR-K8s daemon is deployed on all nodes.
- `spec.defaultNetwork.ovnKubernetesConfig.routeAdvertisements`: Enables route advertisements for the default cluster network and CUDN networks. The `spec.additionalRoutingCapabilities.providers` field must include `FRR` to enable this feature.
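
A sketch of both fields set together in the `Network` CR; the `Enabled` value is an assumption for illustration:

[source,yaml]
----
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  additionalRoutingCapabilities:
    providers:
    - FRR                          # deploys the FRR-K8s daemon on all nodes
  defaultNetwork:
    ovnKubernetesConfig:
      routeAdvertisements: Enabled # assumed value; enables route advertisements
----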
@@ -2,12 +2,12 @@
//

:_mod-docs-content-type: PROCEDURE
[id="nwt-gateway-mode_{context}"]
= Setting local and shared gateway modes
[id="nwt-configure-egress-routing-policies_{context}"]
= Configuring egress routing policies

As a cluster administrator, you can configure egress routing policies by using the `gatewayConfig` specification in the Cluster Network Operator (CNO). You can use the following procedure to set the `routingViaHost` field to `true` or `false`.

You can follow the optional step in the procedure to enable IP forwarding alongside the `routingViaHost=true` configuration if you need the host network of the node to act as a router for traffic that is not related to OVN-Kubernetes. For example, possible use cases for combining `routingViaHost=true` with IP forwarding include:

* Configuring all pod egress traffic to be forwarded via the node's IP

@@ -28,14 +28,14 @@ You can follow the optional step 4 to enable IP forwarding alongside local gatew
$ oc get network.operator cluster -o yaml > network-config-backup.yaml
----

. Set the `routingViaHost` parameter to `true` by entering the following command. Egress traffic is then routed through a specific gateway according to the routes that you configured on the node.
+
[source,terminal]
----
$ oc patch networks.operator.openshift.io cluster --type=merge -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"gatewayConfig":{"routingViaHost": true}}}}}'
----

. Verify that the `routingViaHost=true` configuration was applied by running the following command:
+
[source,terminal]
----
Expand All @@ -58,14 +58,15 @@ gatewayConfig:
ipsecConfig:
# ...
----
<1> A value of `true` sets local gateway mode and a value of `false` sets shared gateway mode. In local gateway mode, traffic is routed through the host. In shared gateway mode, traffic is not routed through the host.
<1> A value of `true` means that egress traffic gets routed through a specific local gateway on the node that hosts the pod. A value of `false` for the parameter means that a group of nodes share a single gateway so traffic does not get routed through a single host.

. Optional: Enable IP forwarding globally by running the following command:
+
[source,terminal]
----
$ oc patch network.operator cluster --type=merge -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"gatewayConfig":{"ipForwarding": "Global"}}}}}'
----
+
.. Verify that the `ipForwarding` spec has been set to `Global` by running the following command:
+
[source,terminal]
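----
# Sketch: one way to check the field; the expected output is "Global"
$ oc get networks.operator.openshift.io cluster -o jsonpath='{.spec.defaultNetwork.ovnKubernetesConfig.gatewayConfig.ipForwarding}'
----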