OCPBUGS-9215: Updated port 1936 in install vSphere docs
dfitzmau committed Jun 28, 2023
1 parent 24758af commit 30fad49
Showing 2 changed files with 36 additions and 57 deletions.
7 changes: 7 additions & 0 deletions modules/installation-infrastructure-user-infra.adoc
@@ -74,6 +74,13 @@ endif::ibm-z-kvm[]
. Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the _Networking requirements for user-provisioned infrastructure_ section for details about the requirements.

. Configure your firewall to enable the ports required for the {product-title} cluster components to communicate. See _Networking requirements for user-provisioned infrastructure_ section for details about the ports that are required.
+
[IMPORTANT]
====
By default, port `1936` is accessible for an {product-title} cluster, because each control plane node needs access to this port.

Avoid using the Ingress load balancer to expose this port, because doing so might result in the exposure of sensitive information, such as statistics and metrics, related to Ingress Controllers.
====
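
One way to honor this guidance is to allow port `1936` only from the control plane machines at the firewall instead of exposing it through the load balancer. The following is a sketch using a `firewalld` rich rule; the subnet `10.0.0.0/24` is a hypothetical example for the control plane network, so substitute your own CIDR:

[source,terminal]
----
# Allow TCP port 1936 only from the example control plane subnet (hypothetical CIDR)
$ firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="10.0.0.0/24" port port="1936" protocol="tcp" accept'
$ firewall-cmd --reload
----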

. Setup the required DNS infrastructure for your cluster.
.. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines.
@@ -55,7 +55,7 @@ endif::[]
= Load balancing requirements for user-provisioned infrastructure

ifndef::user-managed-lb[]
-Before you install {product-title}, you must provision the API and application ingress load balancing infrastructure. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation.
+Before you install {product-title}, you must provision the API and application Ingress load balancing infrastructure. In production scenarios, you can deploy the API and application Ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation.
endif::user-managed-lb[]

ifdef::user-managed-lb[]
@@ -72,12 +72,12 @@ the development process.
For more information about the support scope of Red Hat Technology Preview features, see link:https://access.redhat.com/support/offerings/techpreview/[Technology Preview Features Support Scope].
====

-Before you install {product-title}, you can provision your own API and application ingress load balancing infrastructure to use in place of the default, internal load balancing solution. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation.
+Before you install {product-title}, you can provision your own API and application ingress load balancing infrastructure to use in place of the default, internal load balancing solution. In production scenarios, you can deploy the API and application Ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation.
endif::user-managed-lb[]

[NOTE]
====
-If you want to deploy the API and application ingress load balancers with a {op-system-base-full} instance, you must purchase the {op-system-base} subscription separately.
+If you want to deploy the API and application Ingress load balancers with a {op-system-base-full} instance, you must purchase the {op-system-base} subscription separately.
====

The load balancing infrastructure must meet the following requirements:
@@ -132,8 +132,10 @@ error or becomes healthy, the endpoint must have been removed or added. Probing
every 5 or 10 seconds, with two successful requests to become healthy and three
to become unhealthy, are well-tested values.
====
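
The probing values described above map directly onto HAProxy server options. The following is a sketch, not part of the sample configuration in this module; the listen block and host name are hypothetical:

[source,text]
----
# Probe every 10 seconds; 2 consecutive successes mark a server healthy,
# 3 consecutive failures mark it unhealthy.
listen api-server-6443
    bind *:6443
    mode tcp
    server master0 master0.example.com:6443 check inter 10s rise 2 fall 3
----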

-. *Application ingress load balancer*: Provides an ingress point for application traffic flowing in from outside the cluster. Configure the following conditions:
-+
+. *Application Ingress load balancer*: Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an {product-title} cluster.
++
+Configure the following conditions:
++
--
** Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the ingress routes.
@@ -142,12 +144,12 @@ to become unhealthy, are well-tested values.
+
[TIP]
====
-If the true IP address of the client can be seen by the application ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption.
+If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption.
====
+
Configure the following ports on both the front and back of the load balancers:
+
-.Application ingress load balancer
+.Application Ingress load balancer
[cols="2,5,^2,^2,2",options="header"]
|===

@@ -169,45 +171,34 @@ Configure the following ports on both the front and back of the load balancers:
|X
|HTTP traffic

-|`1936`
-|The worker nodes that run the Ingress Controller pods, by default. You must configure the `/healthz/ready` endpoint for the ingress health check probe.
-|X
-|X
-|HTTP traffic

|===

-[NOTE]
-====
-If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes.
-====
-
++
[NOTE]
====
-A working configuration for the Ingress router is required for an
-{product-title} cluster. You must configure the Ingress router after the control
-plane initializes.
+If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes.
====
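
For a three-node cluster, the routing that this note describes can be sketched in HAProxy as follows. The block reuses the `ocp4.example.com` example domain from the sample configuration in this module; only the HTTPS listener is shown, and an equivalent block on port `80` would handle HTTP traffic:

[source,text]
----
# Three-node cluster: route application HTTPS traffic to the control plane
# nodes, which also run the Ingress Controller pods.
listen ingress-router-443
    bind *:443
    mode tcp
    balance source
    server master0 master0.ocp4.example.com:443 check inter 1s
    server master1 master1.ocp4.example.com:443 check inter 1s
    server master2 master2.ocp4.example.com:443 check inter 1s
----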

[id="installation-load-balancing-user-infra-example_{context}"]
ifndef::user-managed-lb[]
== Example load balancer configuration for user-provisioned clusters

-This section provides an example API and application ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an `/etc/haproxy/haproxy.cfg` configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another.
+This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an `/etc/haproxy/haproxy.cfg` configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another.
endif::user-managed-lb[]

ifdef::user-managed-lb[]
== Example load balancer configuration for clusters that are deployed with user-managed load balancers

-This section provides an example API and application ingress load balancer configuration that meets the load balancing requirements for clusters that are deployed with user-managed load balancers. The sample is an `/etc/haproxy/haproxy.cfg` configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another.
+This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for clusters that are deployed with user-managed load balancers. The sample is an `/etc/haproxy/haproxy.cfg` configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another.
endif::user-managed-lb[]

+In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation.

[NOTE]
====
-In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation.
+If you are using HAProxy as a load balancer and SELinux is set to `enforcing`, you must ensure that the HAProxy service can bind to the configured TCP port by running `setsebool -P haproxy_connect_any=1`.
====

-.Sample API and application ingress load balancer configuration
+.Sample API and application Ingress load balancer configuration
[%collapsible]
====
[source,text]
@@ -232,69 +223,50 @@ defaults
timeout http-keep-alive 10s
timeout check 10s
maxconn 3000
-frontend stats
-bind *:1936
-mode http
-log global
-maxconn 10
-stats enable
-stats hide-version
-stats refresh 30s
-stats show-node
-stats show-desc Stats for ocp4 cluster <1>
-stats auth admin:ocp4
-stats uri /stats
-listen api-server-6443 <2>
+listen api-server-6443 <1>
bind *:6443
mode tcp
-server bootstrap bootstrap.ocp4.example.com:6443 check inter 1s backup <3>
+server bootstrap bootstrap.ocp4.example.com:6443 check inter 1s backup <2>
server master0 master0.ocp4.example.com:6443 check inter 1s
server master1 master1.ocp4.example.com:6443 check inter 1s
server master2 master2.ocp4.example.com:6443 check inter 1s
-listen machine-config-server-22623 <4>
+listen machine-config-server-22623 <3>
bind *:22623
mode tcp
-server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup <3>
+server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup <2>
server master0 master0.ocp4.example.com:22623 check inter 1s
server master1 master1.ocp4.example.com:22623 check inter 1s
server master2 master2.ocp4.example.com:22623 check inter 1s
-listen ingress-router-443 <5>
+listen ingress-router-443 <4>
bind *:443
mode tcp
balance source
server worker0 worker0.ocp4.example.com:443 check inter 1s
server worker1 worker1.ocp4.example.com:443 check inter 1s
-listen ingress-router-80 <6>
+listen ingress-router-80 <5>
bind *:80
mode tcp
balance source
server worker0 worker0.ocp4.example.com:80 check inter 1s
server worker1 worker1.ocp4.example.com:80 check inter 1s
----

-<1> In the example, the cluster name is `ocp4`.
-<2> Port `6443` handles the Kubernetes API traffic and points to the control plane machines.
-<3> The bootstrap entries must be in place before the {product-title} cluster installation and they must be removed after the bootstrap process is complete.
-<4> Port `22623` handles the machine config server traffic and points to the control plane machines.
-<5> Port `443` handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default.
-<6> Port `80` handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default.
+<1> Port `6443` handles the Kubernetes API traffic and points to the control plane machines.
+<2> The bootstrap entries must be in place before the {product-title} cluster installation and they must be removed after the bootstrap process is complete.
+<3> Port `22623` handles the machine config server traffic and points to the control plane machines.
+<4> Port `443` handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default.
+<5> Port `80` handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default.
+
[NOTE]
=====
-If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes.
+If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes.
=====
====
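
Before reloading HAProxy with a configuration like the sample above, you can check the file for syntax errors. This assumes the configuration is at the default path shown in this module:

[source,terminal]
----
$ haproxy -c -f /etc/haproxy/haproxy.cfg
----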

[TIP]
====
If you are using HAProxy as a load balancer, you can check that the `haproxy` process is listening on ports `6443`, `22623`, `443`, and `80` by running `netstat -nltupe` on the HAProxy node.
====

-[NOTE]
-====
-If you are using HAProxy as a load balancer and SELinux is set to `enforcing`, you must ensure that the HAProxy service can bind to the configured TCP port by running `setsebool -P haproxy_connect_any=1`.
-====

ifeval::["{context}" == "installing-ibm-z"]
:!ibm-z:
endif::[]
