37 changes: 11 additions & 26 deletions modules/accessing-hosts-on-aws.adoc
@@ -6,46 +6,31 @@
[id="accessing-hosts-on-aws_{context}"]
= Accessing hosts on Amazon Web Services in an installer-provisioned infrastructure cluster

The {product-title} installer does not create any public IP addresses for any of
the Amazon Elastic Compute Cloud (Amazon EC2) instances that it provisions for
your {product-title} cluster. To be able to SSH to your {product-title}
hosts, you must follow this procedure.
[role="_abstract"]
To establish Secure Shell (SSH) access to {product-title} hosts on Amazon EC2 instances that lack public IP addresses, configure a bastion host or secure gateway. Defining this access path ensures that you can safely manage and troubleshoot your private infrastructure within an installer-provisioned environment.

.Procedure

. Create a security group that allows SSH access into the virtual private cloud
(VPC) created by the `openshift-install` command.
. Create a security group that allows SSH access into the virtual private cloud (VPC) that the `openshift-install` command-line interface creates.

. Create an Amazon EC2 instance on one of the public subnets the installer
created.
. Create an Amazon EC2 instance on one of the public subnets the installation program created.

. Associate a public IP address with the Amazon EC2 instance that you created.
+
Unlike with the {product-title} installation, you should associate the Amazon
EC2 instance you created with an SSH keypair. It does not matter what operating
system you choose for this instance, as it will simply serve as an SSH bastion
to bridge the internet into your {product-title} cluster's VPC. The Amazon
Machine Image (AMI) you use does matter. With {op-system-first},
for example, you can provide keys via Ignition, like the installer does.

. After you provisioned your Amazon EC2 instance and can SSH into it, you must add
the SSH key that you associated with your {product-title} installation. This key
can be different from the key for the bastion instance, but does not have to be.
Unlike with the {product-title} installation, associate the Amazon EC2 instance you created with an SSH keypair. The operating system selection is not important for this instance, because the instance serves as an SSH bastion to bridge the internet into the VPC of your {product-title} cluster. The Amazon Machine Image (AMI) you use does matter. With {op-system-first}, for example, you can provide keys through Ignition in a similar way to the installation program.

. After you provision your Amazon EC2 instance and can SSH into the instance, add the SSH key that you associated with your {product-title} installation. This key can be the same as, or different from, the key that you used for the bastion instance.
+
[NOTE]
====
Direct SSH access is only recommended for disaster recovery. When the Kubernetes
API is responsive, run privileged pods instead.
Use direct SSH access only for disaster recovery. When the Kubernetes API is responsive, run privileged pods instead.
====

. Run `oc get nodes`, inspect the output, and choose one of the nodes that is a
master. The hostname looks similar to `ip-10-0-1-163.ec2.internal`.
. Run `oc get nodes`, inspect the output, and choose one of the control plane nodes. The hostname looks similar to `ip-10-0-1-163.ec2.internal`.

. From the bastion SSH host you manually deployed into Amazon EC2, SSH into that
control plane host. Ensure that you use the same SSH key you specified during the
installation:
. From the bastion SSH host that you manually deployed into Amazon EC2, SSH into that control plane host by entering the following command. Ensure that you use the same SSH key that you specified during installation:
+
[source,terminal]
----
$ ssh -i <ssh-key-path> core@<master-hostname>
$ ssh -i <ssh-key-path> core@<control_plane_hostname>
----
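The first three steps of this procedure can also be scripted. The following is a minimal sketch that uses the AWS CLI, assuming placeholder values such as `<vpc_id>`, `<public_subnet_id>`, `<bastion_ami_id>`, `<keypair_name>`, and `<your_ip_range>`; none of these values are produced by this procedure, so substitute your own:

[source,terminal]
----
# Create a security group in the installer-created VPC that allows inbound SSH.
$ aws ec2 create-security-group --group-name bastion-ssh --description "SSH access to bastion" --vpc-id <vpc_id>

# Allow SSH (TCP port 22) from your address range into the security group.
$ aws ec2 authorize-security-group-ingress --group-id <security_group_id> --protocol tcp --port 22 --cidr <your_ip_range>

# Launch the bastion on a public subnet with an SSH keypair and request a public IP address at launch.
$ aws ec2 run-instances --image-id <bastion_ami_id> --instance-type t3.micro --key-name <keypair_name> --subnet-id <public_subnet_id> --security-group-ids <security_group_id> --associate-public-ip-address
----

In this sketch, the `--associate-public-ip-address` flag requests a public IP address at launch instead of associating an Elastic IP address afterward; either approach satisfies the public IP requirement of this procedure.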
7 changes: 4 additions & 3 deletions modules/hcp-cidr-ranges.adoc
@@ -1,16 +1,17 @@
// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-prepare/hcp-requirements.adoc
// * /networking/networking_overview/cidr-range-definitions.adoc

:_mod-docs-content-type: REFERENCE
[id="hcp-cidr-ranges_{context}"]
= CIDR ranges for {hcp}

[role="_abstract"]
To successfully deploy {hcp} on {product-title}, define the network environment by using specific Classless Inter-Domain Routing (CIDR) subnet ranges. Establishing these nonoverlapping ranges ensures reliable communication between cluster components and prevents internal IP address conflicts.

For deploying {hcp} on {product-title}, use the following required Classless Inter-Domain Routing (CIDR) subnet ranges:

* `v4InternalSubnet`: 100.65.0.0/16 (OVN-Kubernetes)
* `clusterNetwork`: 10.132.0.0/14 (pod network)
* `serviceNetwork`: 172.31.0.0/16


For more information about {product-title} CIDR range definitions, see "CIDR range definitions".
27 changes: 27 additions & 0 deletions modules/host-prefix-description.adoc
@@ -0,0 +1,27 @@
// Module included in the following assemblies:
//
// * /networking/networking_overview/cidr-range-definitions.adoc

:_mod-docs-content-type: CONCEPT
[id="host-prefix-description_{context}"]
= Host prefix

[role="_abstract"]
To allocate a dedicated pool of IP addresses for pods on each node in {product-title}, specify the subnet prefix length in the `hostPrefix` parameter. Defining an appropriate prefix ensures that every machine has sufficient unique addresses to support its scheduled workloads without exhausting the cluster's network resources.

ifdef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
For example, if you set the `hostPrefix` parameter to `/23`, each machine is assigned a `/23` subnet from the pod CIDR address range. The default is `/23`, allowing 512 cluster nodes and 512 pods per node. Note that 512 cluster nodes and 512 pods per node are both beyond the supported maximum.
endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]

ifdef::openshift-enterprise,openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
For example, if the host prefix is set to `/23`, each machine is assigned a `/23` subnet from the pod CIDR address range. The default is `/23`, allowing 510 cluster nodes and 510 pod IP addresses per node.

Consider another example in which you set the `clusterNetwork.cidr` parameter to `10.128.0.0/16` to define the complete address space for the cluster. This assigns a pool of 65,536 IP addresses to your cluster. If you then set the `hostPrefix` parameter to `/23`, you assign a subnet slice to each node in the cluster, where the `/23` slice becomes a subnet of the `/16` network. This assigns 512 IP addresses to each node, of which 2 IP addresses are reserved for the network and broadcast addresses. The following example calculation uses these IP address figures to determine the maximum number of nodes that you can create for your cluster:

[source,text]
----
65536 / 512 = 128
----

You can use the link:https://access.redhat.com/labs/ocpnc/[Red Hat OpenShift Network Calculator] to calculate the maximum number of nodes for your cluster.
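As an illustrative check of this math only, the following shell arithmetic reproduces the figures from the preceding example, assuming a `/23` host prefix and a `/16` cluster network:

[source,terminal]
----
# Addresses assigned to each node with a host prefix of /23
$ echo $(( 2 ** (32 - 23) ))
512

# Maximum number of nodes when the cluster network is a /16
$ echo $(( 2 ** (32 - 16) / 2 ** (32 - 23) ))
128
----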
endif::openshift-enterprise,openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
28 changes: 28 additions & 0 deletions modules/machine-cidr-description.adoc
@@ -0,0 +1,28 @@
// Module included in the following assemblies:
//
// * /networking/networking_overview/cidr-range-definitions.adoc

:_mod-docs-content-type: CONCEPT
[id="machine-cidr-description_{context}"]
= Machine CIDR

[role="_abstract"]
To establish the network scope for cluster nodes in {product-title}, specify an IP address range in the Machine Classless Inter-Domain Routing (CIDR) parameter. Defining this range ensures that all machines within the environment have valid, routable addresses for internal cluster communication.

[NOTE]
====
You cannot change Machine CIDR ranges after you create your cluster.
====

ifdef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
This range must encompass all CIDR address ranges for your virtual private cloud (VPC) subnets. Subnets must be contiguous. A minimum range of 128 IP addresses, using the subnet prefix `/25`, is supported for single availability zone deployments. A minimum address range of 256 addresses, using the subnet prefix `/24`, is supported for deployments that use multiple availability zones.
endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]

The default is `10.0.0.0/16`. This range must not conflict with any connected networks.

ifdef::openshift-rosa-hcp[]
[NOTE]
====
When using {product-title}, the static IP address `172.20.0.1` is reserved for the internal Kubernetes API address. The machine, pod, and service CIDR ranges must not conflict with this IP address.
====
endif::openshift-rosa-hcp[]
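As a sketch of the subnet-size math only, the following shell arithmetic shows the number of addresses in a `/25` subnet and a `/24` subnet, which correspond to the minimum supported ranges for single and multiple availability zone deployments:

[source,terminal]
----
# Addresses in a /25 subnet (single availability zone minimum)
$ echo $(( 2 ** (32 - 25) ))
128

# Addresses in a /24 subnet (multiple availability zone minimum)
$ echo $(( 2 ** (32 - 24) ))
256
----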
modules/nw-understanding-networking-core-layers-and-components.adoc
@@ -6,18 +6,17 @@
[id="nw-understanding-networking-core-layers-and-components_{context}"]
= Core network layers and components

{openshift-networking} is built on two fundamental layers: the `pod network` and the `service network`. The pod network is where your applications live. The service network makes your applications reliably accessible.
[role="_abstract"]
To build and expose resilient applications in {product-title}, configure the pod and service network layers. Defining these foundational layers ensures that your application workloads have a secure environment to run and remain reliably accessible to other services.

[id="the-pod-network_{context}"]
== The pod network
The pod network::

The pod network is a flat network space where every pod in the cluster receives its own unique IP address. This network is managed by the Container Network Interface (CNI) plugin. The CNI plugin is responsible for wiring each pod into the cluster network.
+
This design allows pods to communicate directly with each other using their IP addresses, regardless of which node they are running on. However, these pod IP addresses are ephemeral. This means the IP addresses are destroyed when the pod is destroyed and a new IP address is assigned when a new pod is created. Because of this, you should never rely on pod IP addresses directly for long-lived communication.
Review comment: if you want this content to align with the paragraph above it, you'll want to add a + between them


This design allows pods to communicate directly with each other using their IP addresses, regardless of which node they are running on. However, these pod IPs are ephemeral. This means the IPs are destroyed when the pod is destroyed and a new IP address is assigned when a new pod is created. Because of this, you should never rely on pod IP addresses directly for long-lived communication.

[id="the-service-network_{context}"]
== The service network
The service network::

A service is a networking object that provides a single, stable virtual IP address, called a ClusterIP, and a DNS name for a logical group of pods.

When a request is sent to a service's ClusterIP, {product-title} automatically load-balances the traffic to one of the healthy pods backing that service. It uses Kubernetes labels and selectors to keep track of which pods belong to which service. This abstraction makes your applications resilient because individual pods can be created or destroyed without affecting the applications trying to reach them.
+
When a request is sent to the ClusterIP of the service, {product-title} automatically load balances the traffic to one of the healthy pods backing that service. {product-title} uses Kubernetes labels and selectors to keep track of which pods belong to which service. This abstraction makes your applications resilient because individual pods can be created or destroyed without affecting the applications trying to reach them.
Review comment: Same here: if you want this content to align with the paragraph above it, you'll want to add a + between them
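To make the pod network and service network concrete, the following hedged sketch assumes a Deployment named `my-app` (a hypothetical name that is not part of this module) and shows how exposing it as a service provides a stable ClusterIP in front of ephemeral pod IP addresses:

[source,terminal]
----
# Pod IP addresses come from the pod network and are ephemeral.
$ oc get pods -o wide

# Create a service for the deployment; the service receives a stable ClusterIP.
$ oc expose deployment/my-app --port=8080

# Inspect the service to see its ClusterIP and the selector that groups its pods.
$ oc get service my-app -o wide
----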

modules/nw-understanding-networking-managing-traffic-entering-leaving.adoc
@@ -6,31 +6,27 @@
[id="nw-understanding-networking-managing-traffic-entering-leaving_{context}"]
= Managing traffic entering and leaving the cluster

You need a way for external users to access your applications and for your applications to securely access external services. {product-title} provides several tools to manage this flow of traffic into and out of your cluster.
[role="_abstract"]
To enable external access and securely manage traffic flow into and out of your {product-title} cluster, configure ingress and egress mechanisms. Establishing these traffic rules ensures that external users can reach your applications reliably while maintaining secure communication with external services.

[id="exposing-applications-with-ingress-and-route-objects_{context}"]
== Exposing applications with Ingress and Route objects
Exposing applications with Ingress and Route objects::

To allow external traffic to reach services inside your cluster, you use an Ingress Controller. This component acts as the front door that directs incoming requests to the correct application. You define the traffic rules using one of two primary resources:
To allow external traffic to reach services inside your cluster, you use an Ingress Controller. The Ingress Controller acts as the front door that directs incoming requests to the correct application. You define the traffic rules using one of two primary resources:

* Ingress: The standard Kubernetes resource for managing external access to services, typically for HTTP and HTTPS traffic.

* `Route` object: A resource that provides the same functionality as Ingress but includes additional features like more advanced TLS termination options and traffic splitting. `Route` objects are specific to {product-title}.

[id="distributing-traffic-with-load-balancers_{context}"]
== Distributing traffic with Load Balancers
Distributing traffic with load balancers::

A Load Balancer provides a single, highly available IP address for directing traffic to your cluster. It typically runs outside the cluster on a cloud provider or using MetalLB on bare-metal infrastructure and distributes incoming requests across multiple nodes that are running the Ingress Controller.
A load balancer provides a single, highly available IP address for directing traffic to your cluster. A load balancer typically runs outside the cluster on a cloud provider or can use MetalLB on bare-metal infrastructure to distribute incoming requests across multiple nodes that are running the Ingress Controller. This prevents any single node from becoming a bottleneck or a point of failure, which ensures that your applications remain accessible.

This prevents any single node from becoming a bottleneck or a point of failure to ensure that your applications remain accessible.

[id="controlling-egress-traffic_{context}"]
== Controlling Egress traffic
Controlling egress traffic::

Egress refers to outbound traffic that originates from a pod inside the cluster and is destined for an external system. {product-title} provides several mechanisms to manage this:

* EgressIP: You can assign a specific, predictable source IP address to all outbound traffic from a given project. This is useful when you need to access an external service like a database that has a firewall requiring you to allow specific source IPs.
* EgressIP: You can assign a specific, predictable source IP address to all outbound traffic from a given project. Consider this configuration when you need to access an external service, such as a database, that sits behind a firewall that allows only specific source IP addresses.

* Egress Router: This is a dedicated pod that acts as a gateway for outbound traffic. It allows you to route connections through a single, controlled exit point.
* Egress Router: This is a dedicated pod that acts as a gateway for outbound traffic. By using an Egress Router, you can route connections through a single, controlled exit point.

* Egress Firewall: This acts as a cluster-level firewall for all outbound traffic. It enhances your security posture by allowing you to create rules that explicitly allow or deny connections from pods to specific external destinations.
* Egress Firewall: This acts as a cluster-level firewall for all outbound traffic. The Egress Firewall enhances your security posture by letting you create rules that explicitly allow or deny connections from pods to specific external destinations.
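As a hedged illustration of the Ingress and `Route` objects described earlier in this module, the following commands assume that a service named `my-app` already exists in the current project; the hostname is a placeholder rather than a value that {product-title} defines:

[source,terminal]
----
# Create a route that exposes the service outside the cluster.
$ oc expose service my-app --hostname=my-app.apps.example.com

# Verify the route and the host that the Ingress Controller serves.
$ oc get route my-app
----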
15 changes: 7 additions & 8 deletions modules/nw-understanding-networking-managing-traffic-within.adoc
@@ -6,16 +6,15 @@
[id="nw-understanding-networking-managing-traffic-within_{context}"]
= Managing traffic within the cluster

Your applications need to communicate with each other inside the cluster. {product-title} provides two primary mechanisms for internal traffic: direct pod-to-pod communication for simple exchanges and robust service discovery for reliable connections.
[role="_abstract"]
To ensure reliable communication between applications in {product-title}, configure pod-to-pod traffic and service discovery mechanisms. Implementing these mechanisms allows cluster workloads to exchange data efficiently through either direct connections or robust discovery rules.

[id="pod-to-pod-communication_{context}"]
== Pod-to-pod communication
Pod-to-pod communication::

Pods communicate directly using the unique IP addresses assigned by the pod network. A pod on one node can send traffic directly to a pod on another node without any network address translation (NAT). This direct communication model is efficient for services that need to exchange data quickly. Applications can simply target another pod’s IP address to establish a connection.
Pods communicate directly by using the unique IP addresses assigned by the pod network. A pod on one node can send traffic directly to a pod on another node without any network address translation (NAT). This direct communication model is efficient for services that need to exchange data quickly. Applications can simply target the IP address of another pod to establish a connection.

[id="service-discovery-with-dns_{context}"]
== Service discovery with DNS
Service discovery with DNS::

Pods need a reliable way to find each other because pod IP addresses are ephemeral. {product-title} uses `CoreDNS`, a built-in DNS server, to provide this service discovery.

Every service you create automatically receives a stable DNS name. A pod can use this DNS name to connect to the service. The DNS system resolves the name to the service's stable `ClusterIP` address. This process ensures reliable communication even when individual pod IPs change.
+
Every service you create automatically receives a stable DNS name. A pod can use this DNS name to connect to the service. The DNS system resolves the name to the service's stable `ClusterIP` address. This process ensures reliable communication even when individual pod IPs change.
Review comment: Same here
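The following is an illustrative sketch of service discovery with DNS. It assumes a service named `my-app` in the `my-namespace` project and a pod that contains standard name-resolution tools; neither name comes from this module:

[source,terminal]
----
# Resolve the stable DNS name of the service from inside a pod.
$ oc exec <pod_name> -- getent hosts my-app.my-namespace.svc.cluster.local

# The name resolves to the ClusterIP of the service, even as individual pod IP addresses change.
----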

21 changes: 21 additions & 0 deletions modules/pod-cidr-description.adoc
@@ -0,0 +1,21 @@
// Module included in the following assemblies:
//
// * /networking/networking_overview/cidr-range-definitions.adoc

:_mod-docs-content-type: CONCEPT
[id="pod-cidr-description_{context}"]
= Pod CIDR

[role="_abstract"]
To allocate internal network addresses for cluster workloads in {product-title}, specify an IP address range in the pod Classless Inter-Domain Routing (CIDR) field. Defining this range ensures that pods can communicate with each other reliably without overlapping with the node or service networks.

ifdef::openshift-enterprise[]
The pod CIDR is the same as the `clusterNetwork` CIDR and the cluster CIDR.
endif::openshift-enterprise[]
ifdef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
Red{nbsp}Hat recommends, although it is not mandatory, that the address block is the same between clusters. Using the same address block across clusters does not create IP address conflicts.
endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
The range must be large enough to accommodate your workload. The address block must not overlap with any external service accessed from within the cluster. The default is `10.128.0.0/14`.
ifdef::openshift-enterprise[]
You can expand the range after cluster installation.
endif::openshift-enterprise[]
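As a rough sizing illustration only, the following shell arithmetic shows how many addresses the default pod CIDR of `10.128.0.0/14` provides:

[source,terminal]
----
# Total addresses in the default pod CIDR (10.128.0.0/14)
$ echo $(( 2 ** (32 - 14) ))
262144
----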