Remove support for Kuryr
In 4.15 Kuryr is no longer a supported NetworkType, following its
deprecation in 4.12. This commit removes mentions of Kuryr from the
documentation and code, and adds validation to prevent installations
from being executed when `networkType` is set to `Kuryr`.
dulek authored and jhixson74 committed Nov 30, 2023
1 parent 5cd7ee7 commit d5e2150
Showing 27 changed files with 22 additions and 1,109 deletions.
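
The commit message above mentions a new install-time check that rejects `networkType: Kuryr`. The snippet below is only an illustration of that kind of check, written in Python rather than the installer's actual Go validation code; the field path follows the `install-config.yaml` snippets further down in this diff.

```python
import sys
import yaml

# Illustration only: the real check lives in the installer's Go validation code.
config = yaml.safe_load(open("install-config.yaml"))
network_type = config.get("networking", {}).get("networkType", "OVNKubernetes")

if network_type == "Kuryr":
    sys.exit("networkType 'Kuryr' is no longer supported; use 'OVNKubernetes' instead.")
```
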
15 changes: 3 additions & 12 deletions docs/user/openstack/README.md
@@ -1,7 +1,7 @@
# OpenStack Platform Support

This document discusses the requirements, current expected behavior, and how to try out what exists so far.
In addition, it covers the installation with the default CNI (OVNKubernetes), as well as with the Kuryr SDN.
It covers the installation with the default CNI (OVNKubernetes).

## Table of Contents

@@ -57,7 +57,6 @@ In addition, it covers the installation with the default CNI (OVNKubernetes), as
- [Privileges](privileges.md)
- [Control plane machine set](control-plane-machine-set.md)
- [Known Issues and Workarounds](known-issues.md)
- [Using the OSP 4 installer with Kuryr](kuryr.md)
- [Troubleshooting your cluster](troubleshooting.md)
- [Customizing your install](customization.md)
- [Installing OpenShift on OpenStack User-Provisioned Infrastructure](install_upi.md)
@@ -88,10 +87,7 @@ services being available:
- Swift
- Cinder

When deploying with the [Kuryr SDN](kuryr.md), the Octavia Load Balancing
service becomes a hard requirement.

In order to run the latest version of the installer in OpenStack, at a bare minimum you need the following quota to run a *default* cluster. While it is possible to run the cluster with fewer resources than this, it is not recommended. Certain cases, such as deploying [without FIPs](#without-floating-ips), or deploying with an [external load balancer](#using-an-external-load-balancer) are documented below, and are not included in the scope of this recommendation. If you are planning on using Kuryr, or want to learn more about it, please read through the [Kuryr documentation](kuryr.md).
In order to run the latest version of the installer in OpenStack, at a bare minimum you need the following quota to run a *default* cluster. While it is possible to run the cluster with fewer resources than this, it is not recommended. Certain cases, such as deploying [without FIPs](#without-floating-ips), or deploying with an [external load balancer](#using-an-external-load-balancer) are documented below, and are not included in the scope of this recommendation.

For a successful installation it is required:

@@ -432,7 +428,7 @@ Even if the installer times out, the OpenShift cluster should still come up. Onc
### Running a Deployment
To run the installer, you have the option of using the interactive wizard, or providing your own `install-config.yaml` file for it. The wizard is the easier way to run the installer, but passing your own `install-config.yaml` enables you to use more fine grained customizations. If you are going to create your own `install-config.yaml`, read through the available [OpenStack customizations](customization.md). For information on running the installer with Kuryr, see the [Kuryr docs](kuryr.md).
To run the installer, you have the option of using the interactive wizard, or providing your own `install-config.yaml` file for it. The wizard is the easier way to run the installer, but passing your own `install-config.yaml` enables you to use more fine grained customizations. If you are going to create your own `install-config.yaml`, read through the available [OpenStack customizations](customization.md).
```sh
./openshift-install create cluster --dir ostest
@@ -603,11 +599,6 @@ In order to use Availability Zones, create one MachineSet per target
Availability Zone, and set the Availability Zone in the `availabilityZone`
property of the MachineSet.
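
As an illustration, the `availabilityZone` could be set with the same kind of snippet this guide uses for `install-config.yaml`; the manifest file name and field path below are assumptions, not taken from this repository.

```python
import yaml

# Hypothetical MachineSet manifest name; adjust to the manifests generated for your cluster.
path = "openshift/99_openshift-cluster-api_worker-machineset-0.yaml"
data = yaml.safe_load(open(path))

# Assumed location of the OpenStack provider spec inside a MachineSet.
data["spec"]["template"]["spec"]["providerSpec"]["value"]["availabilityZone"] = "az1"

open(path, "w").write(yaml.dump(data, default_flow_style=False))
```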

> **Note**
> When deploying with `Kuryr` there is an Octavia API loadbalancer VM that will not fulfill the Availability Zones restrictions due to Octavia lack of support for it.
> In addition, if Octavia only has the amphora provider instead of also the OVN-Octavia provider,
> all the OpenShift services will be backed up by Octavia Load Balancer VMs which will not fulfill the Availability Zone restrictions either.

[server-group-docs]: https://docs.openstack.org/api-ref/compute/?expanded=create-server-group-detail#create-server-group

### Using a Custom External Load Balancer
3 changes: 0 additions & 3 deletions docs/user/openstack/customization.md
@@ -58,9 +58,6 @@ Beyond the [platform-agnostic `install-config.yaml` properties](../customization
> **Note**
> The bootstrap node follows the `type`, `rootVolume`, `additionalNetworkIDs`, and `additionalSecurityGroupIDs` parameters from the `controlPlane` machine pool.
> **Note**
> Note when deploying with `Kuryr` there is an Octavia API loadbalancer VM that will not fulfill the Availability Zones restrictions due to Octavia lack of support for it. In addition, if Octavia only has the amphora provider instead of also the OVN-Octavia provider, all the OpenShift services will be backed up by Octavia Load Balancer VMs which will not fulfill the Availability Zone restrictions either.
> **Note**
> When deploying the control-plane machines with `rootVolume`, it is highly suggested to use an [additional ephemeral disk dedicated to etcd](./etcd-ephemeral-disk.md).
5 changes: 0 additions & 5 deletions docs/user/openstack/deploy_baremetal_workers.md
@@ -221,11 +221,6 @@ Cluster is initially deployed with VM workers. BM workers are added to the clust
- Once the cluster is deployed and running, [create and deploy a new infrastructure MachineSet][6] using the bare-metal server flavor.

## Known issues

Bare metal nodes are not supported on clusters that use Kuryr.


[1]: <https://docs.openstack.org/nova/latest/user/flavors.html> "In OpenStack, flavors define the compute, memory, and storage capacity of nova computing instances"
[2]: <https://docs.openstack.org/ironic/latest/>
[3]: <https://docs.openstack.org/api-ref/compute/>
14 changes: 0 additions & 14 deletions docs/user/openstack/deploy_sriov_workers.md
@@ -427,20 +427,6 @@ Next, create a file called `compute-nodes.yaml` with this Ansible script:
cmd: "openstack port set --tag {{ cluster_id_tag }} {{ item.1 }}-{{ item.0 }}"
with_indexed_items: "{{ [os_port_worker] * os_compute_nodes_number }}"

- name: 'List the Compute Trunks'
command:
cmd: "openstack network trunk list"
when: os_networking_type == "Kuryr"
register: compute_trunks

- name: 'Create the Compute trunks'
command:
cmd: "openstack network trunk create --parent-port {{ item.1.id }} {{ os_compute_trunk_name }}-{{ item.0 }}"
with_indexed_items: "{{ ports.results }}"
when:
- os_networking_type == "Kuryr"
- "os_compute_trunk_name|string not in compute_trunks.stdout"

- name: 'Call additional-port processing'
include_tasks: additional-ports.yaml

34 changes: 3 additions & 31 deletions docs/user/openstack/install_upi.md
@@ -30,7 +30,6 @@ of this method of installation.
- [Install Config](#install-config)
- [Configure the machineNetwork.CIDR apiVIP and ingressVIP](#configure-the-machinenetworkcidr-apivip-and-ingressvip)
- [Empty Compute Pools](#empty-compute-pools)
- [Modify NetworkType (Required for Kuryr SDN)](#modify-networktype-required-for-kuryr-sdn)
- [Edit Manifests](#edit-manifests)
- [Remove Machines and MachineSets](#remove-machines-and-machinesets)
- [Set control-plane nodes to desired schedulable state](#set-control-plane-nodes-to-desired-schedulable-state)
@@ -49,12 +48,10 @@ of this method of installation.
- [Subnet DNS (optional)](#subnet-dns-optional)
- [Bootstrap](#bootstrap)
- [Control Plane](#control-plane)
- [Control Plane Trunks (Kuryr SDN)](#control-plane-trunks-kuryr-sdn)
- [Wait for the Control Plane to Complete](#wait-for-the-control-plane-to-complete)
- [Access the OpenShift API](#access-the-openshift-api)
- [Delete the Bootstrap Resources](#delete-the-bootstrap-resources)
- [Compute Nodes](#compute-nodes)
- [Compute Nodes Trunks (Kuryr SDN)](#compute-nodes-trunks-kuryr-sdn)
- [Approve the worker CSRs](#approve-the-worker-csrs)
- [Wait for the OpenShift Installation to Complete](#wait-for-the-openshift-installation-to-complete)
- [Compute Nodes with SR-IOV NICs](#compute-nodes-with-sr-iov-nics)
@@ -90,13 +87,6 @@ The requirements for UPI are broadly similar to the [ones for OpenStack IPI][ipi
- it must be the resolver for the base domain, for the installer and for the end-user machines
- it will host two records: for API and apps access

For an installation with Kuryr SDN on UPI, you should also check the requirements which are the same
needed for [OpenStack IPI with Kuryr][ipi-reqs-kuryr]. Please also note that **RHEL 7 nodes are not
supported on deployments configured with Kuryr**. This is because Kuryr container images are based on
RHEL 8 and may not work properly when run on RHEL 7.

[ipi-reqs-kuryr]: ./kuryr.md#requirements-when-enabling-kuryr

## Install Ansible

This repository contains [Ansible playbooks][ansible-upi] to deploy OpenShift on OpenStack.
@@ -112,7 +102,6 @@ RELEASE="release-4.14"; xargs -n 1 curl -O <<< "
https://raw.githubusercontent.com/openshift/installer/${RELEASE}/upi/openstack/down-bootstrap.yaml
https://raw.githubusercontent.com/openshift/installer/${RELEASE}/upi/openstack/down-compute-nodes.yaml
https://raw.githubusercontent.com/openshift/installer/${RELEASE}/upi/openstack/down-control-plane.yaml
https://raw.githubusercontent.com/openshift/installer/${RELEASE}/upi/openstack/down-load-balancers.yaml
https://raw.githubusercontent.com/openshift/installer/${RELEASE}/upi/openstack/down-network.yaml
https://raw.githubusercontent.com/openshift/installer/${RELEASE}/upi/openstack/down-security-groups.yaml
https://raw.githubusercontent.com/openshift/installer/${RELEASE}/upi/openstack/down-containers.yaml
@@ -405,11 +394,11 @@ open(path, "w").write(yaml.dump(data, default_flow_style=False))'
```
<!--- e2e-openstack-upi: INCLUDE END --->

### Modify NetworkType (Required for Kuryr SDN)
### Modify NetworkType (Required for OpenShift SDN)

By default the `networkType` is set to `OVNKubernetes` in the `install-config.yaml`.

If an installation with Kuryr is desired, you must modify the `networkType` field.
If an installation with OpenShift SDN is desired, you must modify the `networkType` field.

This command will do it for you:

@@ -418,12 +407,10 @@ $ python -c '
import yaml
path = "install-config.yaml"
data = yaml.safe_load(open(path))
data["networking"]["networkType"] = "Kuryr"
data["networking"]["networkType"] = "OpenShiftSDN"
open(path, "w").write(yaml.dump(data, default_flow_style=False))'
```
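
A quick way to confirm the resulting value, under the same assumption that `install-config.yaml` is in the current directory:

```python
import yaml

# Prints the configured network plugin, e.g. OVNKubernetes or OpenShiftSDN.
print(yaml.safe_load(open("install-config.yaml"))["networking"]["networkType"])
```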

Also set `os_networking_type` to `Kuryr` in `inventory.yaml`.

## Edit Manifests

We are not relying on the Machine API so we can delete the control plane Machines and compute MachineSets from the manifests.
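
As a sketch of that step (the manifest name patterns are assumptions modeled on the worker-machineset files mentioned in the known-issues change below; the full UPI guide lists the exact files to delete):

```python
import glob
import os

# Hypothetical manifest patterns; verify against your generated manifests.
for pattern in (
    "openshift/99_openshift-cluster-api_master-machines-*.yaml",
    "openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
):
    for manifest in glob.glob(pattern):
        os.remove(manifest)
```
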
@@ -903,10 +890,6 @@ The playbook places the Control Plane in a Server Group with "soft anti-affinity"

The master nodes should load the initial Ignition and then keep waiting until the bootstrap node stands up the Machine Config Server which will provide the rest of the configuration.

### Control Plane Trunks (Kuryr SDN)

If `os_networking_type` is set to `Kuryr` in the Ansible inventory, the playbook creates the Trunks for Kuryr to plug the containers into the OpenStack SDN.

### Wait for the Control Plane to Complete

When that happens, the masters will start running their own pods, run etcd and join the "bootstrap" cluster. Eventually, they will form a fully operational control plane.
@@ -981,10 +964,6 @@ This process is similar to the masters, but the workers need to be approved befo

The workers need no ignition override.

### Compute Nodes Trunks (Kuryr SDN)

If `os_networking_type` is set to `Kuryr` in the Ansible inventory, the playbook creates the Trunks for Kuryr to plug the containers into the OpenStack SDN.

### Compute Nodes with SR-IOV NICs

Using single root I/O virtualization (SR-IOV) networking as an additional network in OpenShift can be beneficial for applications that require high bandwidth and low latency. To enable this in your cluster, you will need to install the [SR-IOV Network Operator](https://docs.openshift.com/container-platform/4.6/networking/hardware_networks/installing-sriov-operator.html). If you are not sure whether your cluster supports this feature, please refer to the [SR-IOV hardware networks documentation](https://docs.openshift.com/container-platform/4.6/networking/hardware_networks/about-sriov.html). If you are planning an OpenStack deployment with SR-IOV networks and need additional resources, check the [OpenStack SR-IOV deployment docs](https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.1/html-single/network_functions_virtualization_planning_and_configuration_guide/index#assembly_sriov_parameters). Once you meet these requirements, you can start provisioning an SR-IOV network and subnet in OpenStack.
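
For illustration, such a network and subnet could be created with the openstacksdk Python bindings; the cloud name, physical network, VLAN ID, and CIDR below are placeholders to adapt to your environment, and this is a sketch rather than the procedure from the full UPI guide.

```python
import openstack

# Cloud name as defined in clouds.yaml (placeholder).
conn = openstack.connect(cloud="openstack")

# Provider attributes must match the SR-IOV configuration of your deployment.
network = conn.network.create_network(
    name="sriov",
    provider_network_type="vlan",
    provider_physical_network="datacentre",
    provider_segmentation_id=100,
)

conn.network.create_subnet(
    name="sriov-subnet",
    network_id=network.id,
    ip_version=4,
    cidr="10.0.10.0/24",
)
```
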
@@ -1111,19 +1090,12 @@ $ ansible-playbook -i inventory.yaml \
down-bootstrap.yaml \
down-control-plane.yaml \
down-compute-nodes.yaml \
down-load-balancers.yaml \
down-containers.yaml \
down-network.yaml \
down-security-groups.yaml
```
<!--- e2e-openstack-upi(deprovision): INCLUDE END --->

The playbook `down-load-balancers.yaml` idempotently deletes the load balancers created by the Kuryr installation, if any.

> **Note**
> The deletion of load balancers with `provisioning_status` `PENDING-*` is skipped.
> Make sure to retry the `down-load-balancers.yaml` playbook once the load balancers have transitioned to `ACTIVE`.
Delete the RHCOS image if it's no longer useful.
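
An illustrative sketch of that cleanup with the openstacksdk bindings; the cloud and image names below are placeholders, not values from this guide.

```python
import openstack

# Cloud name as defined in clouds.yaml (placeholder).
conn = openstack.connect(cloud="openstack")

# "rhcos" is a placeholder; use the name the image was uploaded under.
image = conn.image.find_image("rhcos")
if image:
    conn.image.delete_image(image)
```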

<!--- e2e-openstack-upi(deprovision): INCLUDE START --->
17 changes: 0 additions & 17 deletions docs/user/openstack/known-issues.md
@@ -146,23 +146,6 @@ The teardown playbooks provided for UPI installation will not delete:

These objects have to be manually removed after running the teardown playbooks.

## Requirement to create Control Plane Machines manifests (Kuryr SDN)

Installations with Kuryr SDN can timeout due to changes in the way Kuryr detects
the OpenStack Subnet used by the cluster's nodes. Kuryr relied on the Network of
the cluster's nodes Subnet having a specific tag, but the tag was removed for IPI
Installations causing the need to discover it from the OpenShift Machine objects,
which the creation is removed on one of the UPI steps. Until the fix for
[the issue][bugzilla-upi] is available, as a workaround, only the compute machine
manifests should be removed in the [Remove machines and machinesets][manifests-removal]
section of the UPI guide. The command to run is:

```console
$ rm -f openshift/99_openshift-cluster-api_worker-machineset-*.yaml
```
[bugzilla-upi]: https://bugzilla.redhat.com/show_bug.cgi?id=1927244
[manifests-removal]:../openstack/install_upi.md#remove-machines-and-machinesets

## Limitations of creating external load balancers using pre-defined FIPs

On most clouds, the default policy prevents non-admin users from creating
