This Ansible project generates a Kubernetes MachineSet object that can be used to provision OpenShift worker nodes running on top of OpenStack. Additional networks can be specified that will be attached to the worker nodes.
Requirements:

- Ansible 2.9+
- A valid OpenStack `clouds.yaml` file in the search path
- The `openstack.cloud` collection: `ansible-galaxy collection install openstack.cloud`
- The `metadata.json` file created by the OpenShift cluster install
- A running OCP 4.9+ cluster
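The `clouds.yaml` file might look like the following sketch. The cloud name `openstack` matches the `openstack_cloud` inventory parameter; all auth values are placeholders, not values from this project.

```yaml
# Hypothetical clouds.yaml; every auth value below is a placeholder
clouds:
  openstack:
    auth:
      auth_url: "https://openstack.example.com:13000/v3"
      username: "admin"
      password: "changeme"
      project_name: "ocp"
      user_domain_name: "Default"
      project_domain_name: "Default"
    region_name: "regionOne"
```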
The following script deploys the Performance Addon Operator (PAO):

```shell
./deploy/pao-deploy.sh
```

Look inside the script to see the necessary steps.

Wait for the PerformanceProfile to be applied, then check that the fast datapath nodes now have hugepages:

```shell
oc get nodes -l 'node-role.kubernetes.io/performance=fdp' -ojson | jq '.items[].status.allocatable'
```
```
{
  "attachable-volumes-cinder": "256",
  "cpu": "6",
  "ephemeral-storage": "25678828Ki",
  "hugepages-1Gi": "8Gi",
  "hugepages-2Mi": "0",
  "memory": "6893468Ki",
  "pods": "250"
}
{
  "attachable-volumes-cinder": "256",
  "cpu": "6",
  "ephemeral-storage": "25678828Ki",
  "hugepages-1Gi": "8Gi",
  "hugepages-2Mi": "0",
  "memory": "6893468Ki",
  "pods": "250"
}
```
The line `"hugepages-1Gi": "8Gi"` indicates the presence of hugepages on the node.
An Ansible playbook is provided that generates a MachineSet file that can be used to deploy workers with the specified additional networks.
An `inventory.yaml` file provides user-definable parameters related to generating a MachineSet:

```yaml
# Number of worker nodes to create
number_of_replicas: 2

# Location of the metadata.json file generated by the openshift-installer after cluster deployment
cluster_metadata_path: "metadata.json"

# Cloud to use from the clouds.yaml file
openstack_cloud: "openstack"

# OpenStack flavor to use for the creation of the nodes
nova_flavor: "ocp-worker"

# Name of the node role for this MachineSet
node_role: "worker"

# Name of the Glance image to use. Defaults to "{{ infraID }}-rhcos"
glance_image_name_or_location: "path-to-metadata.json"

# Subnet ID for the OpenShift cluster network. Defaults to "{{ metadata.infraID }}-openshift"
machines_subnet_UUID: "6a295ed7-9464-43d1-8523-bcd6f7711370"
```
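The defaulting noted in the comments above could be expressed with a Jinja2 `default` filter, for example (a sketch only; the fact name is an assumption, not the playbook's actual variable):

```yaml
# Hypothetical sketch of how the documented image-name default could be derived
- name: Resolve the Glance image name
  ansible.builtin.set_fact:
    resolved_image: "{{ glance_image_name_or_location | default(metadata.infraID ~ '-rhcos') }}"
```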
The following shows how additional networks are specified in the `inventory.yaml` file. The user can specify either the network UUID or the network name; in either case, OpenStack is queried to confirm that the specified network exists. When a network name is given, the UUID returned by the query is substituted into the MachineSet file.

The `vnic_type` field can be either `normal` or `direct`. When connecting to an OVS-DPDK based network, `vnic_type` should be `normal`; for an SR-IOV connection, specify `vnic_type` as `direct`.

The `driver` field controls whether the interface driver within the VM worker is switched to `vfio-pci` or remains the default driver (virtio). If the `driver` field is not specified, the default driver is kept.

The `tags` field and its associated list allow the user to specify tags that will be added to the OpenStack port object.
Please refer to the documentation for more information.
```yaml
additional_networks:
  - network_UUID: "60c4b0dd-065a-4f70-8eee-5e7c4a1b8b09"
    name_suffix: "uplink1"
    tags:
      - tag1
    vnic_type: "normal"
  - name: "uplink2"
    name_suffix: "uplink2"
    tags:
      - tag1
    vnic_type: "normal"
    driver: "vfio-pci"
  - network_UUID: "489b4c4d-86ea-4fd2-9a58-b8ec878b5a4c"
    name_suffix: "radio_downlink"
    tags:
      - tag1
    vnic_type: "direct"
    driver: "vfio-pci"
  - name: "radio_uplink"
    name_suffix: "radio_uplink"
    tags:
      - tag1
    vnic_type: "direct"
```
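The name-to-UUID resolution described above could be sketched with the `openstack.cloud.networks_info` module. This is an illustrative fragment, not the project's actual task; the register name and result keys are assumptions:

```yaml
# Hypothetical sketch: resolve an additional network's name to its UUID
- name: Look up the network by name
  openstack.cloud.networks_info:
    cloud: "{{ openstack_cloud }}"
    name: "uplink2"
  register: net_query

- name: Fail if the network does not exist
  ansible.builtin.fail:
    msg: "Network uplink2 not found in OpenStack"
  when: net_query.networks | length == 0

- name: Record the queried UUID for use in the MachineSet template
  ansible.builtin.set_fact:
    network_UUID: "{{ net_query.networks[0].id }}"
```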
An example of running the playbook is below:

```shell
ansible-playbook play.yaml -e cluster_metadata_path=/home/kni/sos-fdp/build-artifacts/metadata.json -i ./inventory.yaml
```
The command line references the following metadata.json file:
```json
{
  "clusterName": "fdp",
  "clusterID": "61fa93c1-a958-4ec1-a526-2f0b9272538a",
  "infraID": "fdp-5sd65",
  "openstack": {
    "cloud": "openstack",
    "identifier": {
      "openshiftClusterID": "fdp-5sd65"
    }
  }
}
```
The corresponding MachineSet file will be created in `./build/${infraID}-${worker_role}-machineset.yaml`. Apply it with:

```shell
oc apply -f ./build/${infraID}-${worker_role}-machineset.yaml
```

Wait for completion. At this point, the machine-api will deploy the additional worker nodes as specified in the MachineSet file.
There are two methods for adding a host-device secondary network. First, the Cluster Network Operator's cluster Network configuration can be patched to add `additionalNetworks`. Second, NetworkAttachmentDefinitions can be applied that specify the network name and host-device configuration.
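The first method (patching the Cluster Network Operator) might look like the following sketch; the network name and PCI address are placeholders, not values this project generates:

```yaml
# Hypothetical CNO patch (method one); name and pciBusId are placeholders
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  additionalNetworks:
    - name: uplink1
      namespace: default
      type: Raw
      rawCNIConfig: |-
        {
          "cniVersion": "0.3.1",
          "name": "uplink1",
          "type": "host-device",
          "pciBusId": "0000:00:04.0",
          "ipam": { }
        }
```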
A second Ansible playbook is provided that generates NetworkAttachmentDefinition files for the OVS-DPDK additional networks. The script only generates NetworkAttachmentDefinitions for virtio network attachments; SR-IOV networks are skipped because SR-IOV is handled by the sriov-network-operator. An example generated NetworkAttachmentDefinition is below:
```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: "uplink1"
spec:
  config: |-
    {
      "cniVersion": "0.3.1",
      "name": "uplink1",
      "type": "host-device",
      "pciBusId": "0000:00:04.0",
      "ipam": { }
    }
```
The script operates by querying both OpenStack and OpenShift and matching MAC addresses between the two environments. A matched MAC address allows the PCI address of the connected additional network interface to be determined.
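The MAC-matching step could be sketched as follows. The variable names are hypothetical: `os_ports` would come from something like `openstack.cloud.port_info`, and `node_nics` from inspecting the node; both are assumed to be lists of dicts with a `mac_address` key.

```yaml
# Hypothetical sketch: keep only the node NICs whose MAC address matches
# an OpenStack port on one of the additional networks
- name: Match node NICs to OpenStack ports by MAC address
  ansible.builtin.set_fact:
    matched_nics: >-
      {{ node_nics
         | selectattr('mac_address', 'in',
                      os_ports | map(attribute='mac_address') | list)
         | list }}
```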
The following generates the NetworkAttachmentDefinitions for the OVS-DPDK connected additional networks:

```shell
ansible-playbook gen.yaml -e cluster_metadata_path=/home/kni/sos-fdp/build-artifacts/metadata.json -i ./inventory.yaml
oc apply -f build/<na-file-name.yaml>
```
At this point, the additional networks have been defined using the NetworkAttachmentDefinition.
The following is an example Pod that uses the additional networks.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: testpmd
  namespace: default
  annotations:
    k8s.v1.cni.cncf.io/networks: "uplink1,uplink2"
spec:
  containers:
    - name: testpmd
      command: ["/bin/sh"]
      args: ["-c", "testpmd -l $(taskset -pc 1 | cut -d: -f2) --in-memory -w 00:04.0 -w 00:05.0 --socket-mem 1024 -n 4 -- --nb-cores=1 --auto-start --forward-mode=mac --stats-period 10"]
      image: registry.redhat.io/openshift4/dpdk-base-rhel8:v4.6
      securityContext:
        privileged: true
        runAsUser: 0
      resources:
        requests:
          memory: 1000Mi
          hugepages-1Gi: 3Gi
          cpu: '3'
        limits:
          hugepages-1Gi: 3Gi
          cpu: '3'
          memory: 1000Mi
      volumeMounts:
        - mountPath: /dev/hugepages
          name: hugepage
          readOnly: false
  volumes:
    - name: hugepage
      emptyDir:
        medium: HugePages
```