2 changes: 2 additions & 0 deletions modules/nw-ne-comparing-ingress-route.adoc
@@ -9,3 +9,5 @@ The Kubernetes Ingress resource in {product-title} implements the Ingress Contro
The {product-title} route provides Ingress traffic to services in the cluster. Routes provide advanced features that might not be supported by standard Kubernetes Ingress Controllers, such as TLS re-encryption, TLS passthrough, and split traffic for blue-green deployments.

Ingress traffic accesses services in the cluster through a route. Routes and Ingress are the main resources for handling Ingress traffic. Ingress provides features similar to a route, such as accepting external requests and delegating them based on the route. However, with Ingress you can only allow certain types of connections: HTTP/2, HTTPS and Server Name Indication (SNI), and TLS with certificates. In {product-title}, routes are generated to meet the conditions specified by the Ingress resource.

Ingress provides more flexibility for complex scenarios that require advanced routing and SSL termination, while routes are simpler to set up and use, making them the common choice for straightforward external access needs.
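
As an illustration of the route-specific TLS features mentioned above, here is a minimal sketch of a route that uses TLS passthrough, so that TLS terminates at the pod rather than at the router; the host name and the `hello-app` service are hypothetical placeholders:

[source,yaml]
----
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: hello-app-passthrough
spec:
  host: hello.example.com
  to:
    kind: Service
    name: hello-app           # hypothetical back-end service
  port:
    targetPort: 8443          # the service port that carries TLS
  tls:
    termination: passthrough  # the router forwards the TLS stream unmodified
----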
13 changes: 2 additions & 11 deletions modules/nw-ne-openshift-dns.adoc
@@ -5,15 +5,6 @@
[id="nw-ne-openshift-dns_{context}"]
= {product-title} DNS

If you are running multiple services, such as front-end and back-end services for
use with multiple pods, environment variables are created for user names,
service IPs, and more so the front-end pods can communicate with the back-end
services. If the service is deleted and recreated, a new IP address can be
assigned to the service, and requires the front-end pods to be recreated to pick
up the updated values for the service IP environment variable. Additionally, the
back-end service must be created before any of the front-end pods to ensure that
the service IP is generated properly, and that it can be provided to the
front-end pods as an environment variable.
Domain Name System (DNS) is a hierarchical and decentralized naming system that translates human-friendly domain names, such as `www.example.com`, into the IP addresses that computers use to identify each other on the network. Essentially, DNS acts as the phonebook of the internet. DNS plays a crucial role in service discovery and name resolution.

For this reason, {product-title} has a built-in DNS so that the services can be
reached by the service DNS as well as the service IP/port.
OpenShift provides a built-in DNS to ensure that services can be reached by their DNS names. This helps maintain stable communication even if the underlying IP addresses change. When a pod is started, environment variables for service names, IPs, and ports are created automatically, enabling the pod to communicate with other services.
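
For example, the automatically created variables can be inspected from inside a pod; the pod name `frontend-pod` and the service name `backend-service` are hypothetical placeholders:

[source,terminal]
----
$ oc exec frontend-pod -- env | grep BACKEND_SERVICE
----

For a service named `backend-service`, this prints variables such as `BACKEND_SERVICE_SERVICE_HOST` and `BACKEND_SERVICE_SERVICE_PORT`.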
34 changes: 34 additions & 0 deletions modules/nw-ne-openshift-improves.adoc
@@ -0,0 +1,34 @@
// Module included in the following assemblies:
// * understanding-networking.adoc


[id="nw-ne-openshift-improves_{context}"]
= How networking in OpenShift improves upon networking in Kubernetes

Red Hat OpenShift Container Platform builds on Kubernetes by adding several unique features and enhancements, especially in the area of networking. Here are some key differences and unique aspects:

Integrated Networking Solutions::
OpenShift SDN: OpenShift uses its own implementation of Software-Defined Networking (SDN) called OpenShift SDN. It provides a unified cluster network that enables communication between pods across the OpenShift cluster.
Open vSwitch (OVS): OpenShift SDN uses Open vSwitch to create an overlay network, which simplifies network management and provides high-performance throughput.

Built-in DNS::
OpenShift has a built-in DNS service that allows pods to resolve service names to IP addresses. This ensures that pods can communicate with services using stable DNS names, even if the underlying pod IPs change.

Ingress Operator::
OpenShift includes an Ingress Operator that implements the IngressController API. This component enables external access to cluster services by deploying and managing HAProxy-based Ingress Controllers. It allows for advanced routing configurations and load balancing.

Enhanced Security::
OpenShift provides advanced network security features, such as Network Policies and Security Context Constraints (SCCs), which help secure communication between pods and enforce access controls; a minimal network policy sketch follows at the end of this list.

Role-Based Access Control (RBAC)::
OpenShift extends Kubernetes RBAC to provide more granular control over who can access and manage network resources. This helps in maintaining security and compliance within the cluster.

Multi-Tenancy Support::
OpenShift offers robust multi-tenancy support, allowing multiple users or teams to share the same cluster while keeping their resources isolated and secure.

Hybrid and Multi-Cloud Capabilities::
OpenShift is designed to work seamlessly across on-premise, cloud, and multi-cloud environments. This flexibility allows organizations to deploy and manage containerized applications across different infrastructures.

Observability and Monitoring::
OpenShift provides integrated observability and monitoring tools that help in managing and troubleshooting network issues, including role-based access to network metrics and logs.

These features make Red Hat OpenShift Container Platform a powerful and flexible platform for managing containerized applications, providing enhanced networking capabilities compared to standard Kubernetes setups.
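
To illustrate the network security item above, here is a minimal sketch of a network policy that admits traffic to back-end pods only from front-end pods; the `app` labels are hypothetical placeholders:

[source,yaml]
----
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  podSelector:
    matchLabels:
      app: backend        # the policy applies to back-end pods
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend   # only front-end pods in the same namespace may connect
    ports:
    - protocol: TCP
      port: 8080
----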
18 changes: 18 additions & 0 deletions modules/nw-ne-openshift-kubernetes-openshift.adoc
@@ -0,0 +1,18 @@
// Module included in the following assemblies:
// * understanding-networking.adoc


[id="nw-ne-openshift-kubernetes-openshift_{context}"]
= Networking in Kubernetes and OpenShift

Networking in Kubernetes and Red Hat OpenShift Container Platform ensures seamless communication between various components within the cluster and between external clients and the cluster. Both platforms rely on several core concepts and components:

* Pod-to-Pod Communications
* Services
* DNS
* Ingress Controllers
* Network Policies
* Load Balancing

For more information about these concepts and components, see Networking concepts and components <insert link>.

In OpenShift (and Kubernetes in general), services expose pods internally and externally, allowing for seamless communication within the cluster and with external clients. Network policies control and secure traffic flow within the cluster, and load balancers distribute traffic to maintain service availability.
30 changes: 0 additions & 30 deletions networking/about-networking.adoc

This file was deleted.

@@ -0,0 +1,107 @@
// Module included in the following assemblies:
//
// * networking/understanding-networking.adoc

[id="nw-ne-openshift-example-ingress-route_{context}"]
= Example: Configuring routes and ingress to expose a web application

Imagine you have a web application running in your OpenShift cluster and want to make it accessible to external users. The application should be reachable at a specific domain name, and the traffic should be encrypted with TLS. The following example shows you how to configure both routes and Ingress to expose your web application to external traffic securely. Routes offer a straightforward way to expose applications in OpenShift, while Ingress provides more advanced routing and TLS termination features.

== Configuring routes

1. Create a new project:
+
[source,terminal]
----
$ oc new-project webapp-project
----

2. Deploy the web application:
+
[source,terminal]
----
$ oc new-app nodejs:12~https://github.com/sclorg/nodejs-ex.git --name=webapp
----

3. Expose the service with a route:
+
[source,terminal]
----
$ oc expose svc/webapp --hostname=webapp.example.com
----

4. Secure the route with TLS.
+
Unlike Ingress, a route does not reference a TLS secret: the route's `spec.tls.certificate` and `spec.tls.key` fields hold the PEM-encoded file contents directly, so patching them with file paths does not work. The simplest correct approach is to recreate the route with edge termination, supplying the certificate and key files:
+
[source,terminal]
----
$ oc delete route webapp
$ oc create route edge webapp --service=webapp \
    --hostname=webapp.example.com \
    --cert=path/to/tls.crt --key=path/to/tls.key
----
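
As a quick check, assuming external DNS for `webapp.example.com` resolves to the router, the secured route can be verified from outside the cluster (`-k` skips certificate verification for self-signed certificates):

[source,terminal]
----
$ curl -k https://webapp.example.com
----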

== Configuring ingress

1. Ensure that an Ingress Controller is installed.
+
Before creating the Ingress resource, ensure that an Ingress Controller (for example, NGINX) is installed and running in the cluster.

2. Create a service for the web application.
+
If not already created, expose the application as a service:
+
[source,yaml]
----
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
  namespace: webapp-project
spec:
  selector:
    app: webapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
----

3. Create the Ingress resource:
+
[source,yaml]
----
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp-ingress
  namespace: webapp-project
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: webapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: webapp-service
            port:
              number: 80
----
+
[NOTE]
====
The `kubernetes.io/ingress.class` annotation is deprecated in newer Kubernetes versions; `spec.ingressClassName: nginx` is the current equivalent.
====

4. Secure the Ingress with TLS.
+
Create a TLS secret with your certificate and key:
+
[source,terminal]
----
$ oc create secret tls webapp-tls --cert=path/to/tls.crt --key=path/to/tls.key -n webapp-project
----
+
Update the Ingress resource to use the TLS secret:
+
[source,yaml]
----
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp-ingress
  namespace: webapp-project
spec:
  tls:
  - hosts:
    - webapp.example.com
    secretName: webapp-tls
  rules:
  - host: webapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: webapp-service
            port:
              number: 80
----
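
To confirm that the Ingress Controller has admitted the resource and assigned it an address, check the ADDRESS column:

[source,terminal]
----
$ oc get ingress webapp-ingress -n webapp-project
----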
92 changes: 92 additions & 0 deletions networking/include::modules/nw-ne-openshift-example-dns.adoc
@@ -0,0 +1,92 @@
// Module included in the following assemblies:
// * understanding-networking.adoc


[id="nw-ne-openshift-example-dns_{context}"]
= Example: DNS use case

Imagine you have a front-end application running in one set of pods and a back-end service running in another set of pods. The front-end application needs to communicate with the back-end service. You create a Kubernetes service for the back-end pods, giving it a stable IP and DNS name. The front-end pods use this DNS name to access the back-end service, regardless of changes to individual pod IP addresses.

By creating a Kubernetes service for the back-end pods, you provide a stable IP and DNS name (`backend-service.default.svc.cluster.local`) that the front-end pods can use to communicate with the back-end service. This setup ensures that even if individual pod IP addresses change, the communication remains consistent and reliable. To do so, complete the following steps:

1. Create the back-end service.
a. Deploy the back-end pods by creating a deployment for the back-end application:
+
[source,yaml]
----
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-deployment
  labels:
    app: backend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: backend-container
        image: your-backend-image
        ports:
        - containerPort: 8080
----

b. Create the back-end service by defining a service to expose the back-end pods:
+
[source,yaml]
----
apiVersion: v1
kind: Service
metadata:
  name: backend-service
spec:
  selector:
    app: backend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
----

2. Create the front-end deployment.
+
Deploy the front-end pods by creating a deployment for the front-end application:
+
[source,yaml]
----
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-deployment
  labels:
    app: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend-container
        image: your-frontend-image
        ports:
        - containerPort: 80
----

3. Configure the front-end to communicate with the back-end.
+
In your front-end application code, use the DNS name of the back-end service to send requests. For example, if your front-end application needs to fetch data from the back-end, you might have code like this:
+
[source,javascript]
----
fetch('http://backend-service.default.svc.cluster.local/api/data')
  .then(response => response.json())
  .then(data => console.log(data));
----
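
As a quick check, the DNS name can be resolved from inside any front-end pod; the pod name below is a hypothetical placeholder for a pod created by `frontend-deployment`:

[source,terminal]
----
$ oc exec frontend-deployment-<pod-suffix> -- nslookup backend-service.default.svc.cluster.local
----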
@@ -0,0 +1,9 @@
// Module included in the following assemblies:
//
// * networking/understanding-networking.adoc

[id="nw-ne-openshift-exposing-applications_{context}"]
= Exposing applications

In Kubernetes, `ClusterIP` is the default service type. It exposes the service on an internal IP within the cluster, making it accessible only to other services within the cluster. The `NodePort` service type exposes the service on a static port on each node's IP address, which allows external traffic to access the service. The `LoadBalancer` service type is typically used in cloud environments; it provisions an external load balancer that routes external traffic to the service.
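
The difference between the types is only the `spec.type` field of the service. As a minimal sketch, here is a hypothetical `webapp` service exposed through a cloud load balancer:

[source,yaml]
----
apiVersion: v1
kind: Service
metadata:
  name: webapp-external
spec:
  type: LoadBalancer   # ClusterIP is the default if type is omitted; NodePort is the third option
  selector:
    app: webapp
  ports:
  - port: 80
    targetPort: 8080
----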

Ingress is an API object that manages external access to services, usually HTTP and HTTPS. It provides load balancing, SSL termination, and name-based virtual hosting. An Ingress Controller is a controller that implements the Ingress API, such as NGINX or HAProxy, and handles the actual routing of traffic based on defined rules.
@@ -0,0 +1,9 @@
// Module included in the following assemblies:
//
// * networking/understanding-networking.adoc

[id="nw-ne-openshift-security-traffic_{context}"]
= Security and traffic management

Cluster administrators can expose applications to external traffic and secure network connections by using service types, such as node ports and load balancers, and API resources, such as Ingress and Route. The Ingress Operator and the Cluster Network Operator play crucial roles in configuring and managing these aspects. The Ingress Operator deploys and manages one or more Ingress Controllers, which route external HTTP and HTTPS traffic to services within the cluster. The Cluster Network Operator deploys and manages the cluster network components, such as the pod network, the service network, and DNS.
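
As an illustration, the default Ingress Controller that the Ingress Operator manages can be inspected directly:

[source,terminal]
----
$ oc get ingresscontroller default -n openshift-ingress-operator -o yaml
----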
10 changes: 10 additions & 0 deletions networking/nw-ne-openshift-clients.adoc
@@ -0,0 +1,10 @@
// Module included in the following assemblies:
// * understanding-networking.adoc


[id="nw-ne-openshift-nodes_{context}"]
= What is an external client?

An external client refers to any entity outside of the cluster that interacts with the services and applications running within the cluster. This can include end users, external services, and external devices. End users are people who access a web application hosted in the cluster through their browsers or mobile devices. External services are other software systems or applications that interact with the services in the cluster, often through APIs. External devices are any hardware outside the cluster network that needs to communicate with the cluster services, like IoT devices.

External clients typically access the cluster services through well-defined interfaces and endpoints, often managed by Ingress Controllers and load balancers, to ensure secure and efficient traffic routing.
8 changes: 8 additions & 0 deletions networking/nw-ne-openshift-clusters.adoc
@@ -0,0 +1,8 @@
// Module included in the following assemblies:
// * understanding-networking.adoc


[id="nw-ne-openshift-nodes_{context}"]
= What is a cluster?

A cluster in Red Hat OpenShift Container Platform (and Kubernetes in general) is a collection of nodes, which are virtual or physical machines, that work together to run containerized applications. These nodes include control plane (master) nodes and worker nodes. Control plane nodes manage the cluster, maintaining the desired state and handling cluster-wide operations. Worker nodes run the containers (pods) that house the application workloads. Together, these nodes form a cohesive environment where resources can be efficiently managed, scaled, and distributed to meet the needs of the applications running within the cluster.
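
For example, listing the nodes of a running cluster shows the role of each node in the ROLES column:

[source,terminal]
----
$ oc get nodes
----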