This repository contains Ansible playbooks and tasks for OpenShift 4 cluster installation on VMware in UPI mode.
The original repository was created by Marco Betti: https://github.com/marbe/ocp4-vmware-upi-installer
The aim of these playbooks is to automate the UPI installation steps described in the Red Hat OpenShift documentation.
These playbooks have been tested with OpenShift Container Platform up to 4.7.z.
The minimum prerequisite is a Red Hat / CentOS server to run this repo's playbooks on, referred to as the bastion or jump host.
DNS, DHCP and load balancers can either be installed with the provided playbooks, or be pre-provisioned as corporate infrastructure services. In the latter case they must be configured as described in the Red Hat OpenShift documentation.
Playbooks have been developed and tested on RHEL 7. While most things should also work on RHEL 8, that version has not been tested yet.
Ansible version required on the bastion host:
- ansible > 2.8 is required
- these playbooks have been developed and tested with ansible 2.9.3
- ansible >= 2.10 may lead to issues
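A quick way to check that an Ansible release falls inside that window is a version-aware comparison with `sort -V`. A minimal sketch (the hard-coded version string is only an example; in practice you would take it from `ansible --version | head -1 | awk '{print $2}'`):

```shell
# Example value; replace with the output of: ansible --version | head -1 | awk '{print $2}'
ANSIBLE_VERSION="2.9.3"

lowest="2.8"; too_high="2.10"
# sort -V orders version strings numerically, so the smaller of two versions comes first
min() { printf '%s\n%s\n' "$1" "$2" | sort -V | head -1; }

# Simplified range check: at least 2.8 (2.8 itself also passes) and strictly below 2.10
if [ "$(min "$ANSIBLE_VERSION" "$lowest")" = "$lowest" ] && \
   [ "$(min "$ANSIBLE_VERSION" "$too_high")" != "$too_high" ]; then
  echo "ansible $ANSIBLE_VERSION is in the supported range"
else
  echo "ansible $ANSIBLE_VERSION is outside the supported range"
fi
```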
Some playbooks are provided to test that the infrastructure prerequisites are met, specifically those related to DNS records.
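For reference, the DNS checks boil down to resolving a small, predictable set of names derived from the cluster name and base domain. A minimal sketch (cluster name `ocp4`, domain `example.com`, and the node name `master0` are hypothetical placeholders; `test.apps` stands in for the `*.apps` wildcard, and the `dig` commands are only printed here, not executed):

```shell
# Hypothetical example values; the real ones come from
# vars/ocp4-vars-vmware-upi-installer.yaml
CLUSTER_NAME="ocp4"
BASE_DOMAIN="example.com"

# Representative records a vSphere UPI install needs to resolve
for record in \
  "api.${CLUSTER_NAME}.${BASE_DOMAIN}" \
  "api-int.${CLUSTER_NAME}.${BASE_DOMAIN}" \
  "test.apps.${CLUSTER_NAME}.${BASE_DOMAIN}" \
  "master0.${CLUSTER_NAME}.${BASE_DOMAIN}"
do
  # Print the check you would run; swap 'echo' for the real command on the bastion
  echo "dig +short ${record}"
done
```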
To provision VMs on the VMware cluster, the pyVmomi Python library is required on the bastion host.
First, you will need to install Python pip on your system. You can do it by:
- enabling the `epel` package repository using `epel-enable-playbook.yaml`
- running:

```
# yum install python-pip
```

Finally, the `pyvmomi` installation can be managed by the dedicated prerequisites playbook (`ocp4-playbook-vmware-prereq.yaml`).
The vSphere permissions required by the OpenShift installer to properly configure the vSphere StorageClass are detailed on the VMware vSphere Storage for Kubernetes page.
The vSphere permissions required by the Ansible `vmware_guest` module are documented in the Notes section of the `vmware_guest` page.
Other useful information is provided in the Ansible VMware Guide.
| Playbook name | Description |
|---|---|
| `ocp4-playbook-vmware-prereq.yaml` | Install the `vmware_guest` Ansible module prerequisites (currently pyVmomi). |
| `ocp4-playbook-cluster-create-1.yaml` | Set up the OpenShift 4 cluster manifests. |
| `ocp4-playbook-cluster-create-2.yaml` | Create VMs on the VMware cluster. All VMs are created powered off. |
| `ocp4-playbook-poweron-vms.yaml` | Power on the OpenShift VMs on the VMware cluster. |
| `ocp4-playbook-erase-vms.yaml` | Power off and erase the OpenShift VMs on the VMware cluster. |
| `ocp4-playbook-test-uri.yaml` | Test HTTPS GET to the URIs required to install and use OpenShift. |
| `ocp4-playbook-test-dns.yaml` | Test for proper DNS record configuration. |
| `ocp4-playbook-boot_delay-vms.yaml` | Configure a boot delay for the VMs so that you have time to press the TAB or E key to edit the kernel command line. |
| `ocp4-playbook-disk-add.yaml` | Add `additional_disks` to `storage_nodes` in order to use the Local Storage Operator to deploy OpenShift Container Storage. |
| `epel-enable-playbook.yaml` | Enable the EPEL repository if needed (e.g. to install Ansible from EPEL or `nagios-plugins-dhcp` to debug DHCP configuration). |
| `infrastructure-services-setup.yaml` | Install and configure the infrastructure services (DNS, LB, DHCP) on the bastion host in order to completely automate the UPI prerequisites. |
- Clone/download this repo on the bastion host.
- Provide an SSH key pair that will be configured to access the CoreOS OCP nodes as the `core` user.
- Customize the `vars/ocp4-vars-vmware-upi-installer.yaml` var file with information specific to your infrastructure.
- OPTIONAL: create the infrastructure services (DNS, LB, DHCP) with playbook `infrastructure-services-setup.yaml`.
- Test the HTTPS GET to the required URIs with playbook `ocp4-playbook-test-uri.yaml`.
- Test the DNS record configuration with playbook `ocp4-playbook-test-dns.yaml`.
- Install the Ansible `vmware_guest` prerequisites with playbook `ocp4-playbook-vmware-prereq.yaml`.
- Create the OpenShift cluster on VMware with playbook `ocp4-playbook-cluster-create-1.yaml`.
- Copy the NSX-T configuration YAML to the /tmp/ install directory.
- Continue the OpenShift cluster installation with playbook `ocp4-playbook-cluster-create-2.yaml`.
- Power on the VMs with playbook `ocp4-playbook-poweron-vms.yaml`.
All playbooks run on localhost. To run them, simply type:

```
ansible-playbook <path_to_cloned_repo>/playbooks/<playbook-name>
```

To build the OpenShift installation manifests and run the OpenShift installer, this directory is created as the working directory:

```
/tmp/openshift-install-<date +%Y%m%d>
```
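The directory name can be reproduced in shell exactly as the pattern above describes (a sketch; only the `date` format comes from the text):

```shell
# Same pattern as the working directory: /tmp/openshift-install-YYYYMMDD
WORKDIR="/tmp/openshift-install-$(date +%Y%m%d)"
echo "${WORKDIR}"
```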
Also, the `openshift-install` and `oc` binaries are installed under the /tmp directory so that, after powering on the VMs, you can follow and complete the installation with:
```
$ ansible-playbook ocp4-vmware-upi-installer/playbooks/ocp4-playbook-cluster-create-2.yaml
$ ansible-playbook ocp4-vmware-upi-installer/playbooks/ocp4-playbook-poweron-vms.yaml
$ /tmp/openshift-install --dir=/tmp/openshift-install-<date +%Y%m%d> wait-for bootstrap-complete
$ export KUBECONFIG=/tmp/openshift-install-<date +%Y%m%d>/auth/kubeconfig
$ /tmp/oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs /tmp/oc adm certificate approve
$ /tmp/oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}'
$ /tmp/oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"managementState": "Managed"}}'
$ /tmp/openshift-install --dir=/tmp/openshift-install-<date +%Y%m%d> wait-for install-complete
```
When this project was first written, static IP address configuration was not documented specifically for the vSphere UPI installation. The supported procedure was the same as in the bare metal scenario.
That procedure consisted of modifying the first-boot kernel parameters by editing the kernel command line, as described here.
To leave yourself time to press the TAB or E key after powering up a VM in the vCenter console, you can use the `ocp4-playbook-boot_delay-vms.yaml` playbook, which configures the VM Boot Delay parameter (default 10 s).
UPDATE with the OpenShift Container Platform 4.6 release
Starting with OpenShift Container Platform 4.6, you can override the default Dynamic Host Configuration Protocol (DHCP) networking in vSphere. This requires setting the static IP configuration and then setting a guestinfo property before booting a VM from an OVA in vSphere.
To use a static IP with these playbooks, simply set `static_ip: true` in the var file `vars/ocp4-vars-vmware-upi-installer.yaml` and configure the related variables accordingly.
NOTE: properly configured DNS records for each node, for both forward and reverse (PTR record) resolution, are a prerequisite.
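A sketch of what the static IP override looks like in practice: RHCOS reads dracut-style network kernel arguments from a VM guestinfo property (`guestinfo.afterburn.initrd.network-kargs` is the property name used by RHCOS/afterburn since OCP 4.6). All addresses and names below are hypothetical placeholders; the real values come from the var file:

```shell
# Hypothetical addresses; take the real ones from
# vars/ocp4-vars-vmware-upi-installer.yaml
IP="192.168.1.10"; GATEWAY="192.168.1.1"; NETMASK="255.255.255.0"
HOSTNAME="master0.ocp4.example.com"; DNS="192.168.1.2"

# dracut ip= syntax: ip=<client-ip>:<server-ip>:<gateway>:<netmask>:<hostname>:<iface>:<autoconf>
# (the empty second field skips server-ip; "none" disables autoconfiguration)
KARGS="ip=${IP}::${GATEWAY}:${NETMASK}:${HOSTNAME}:ens192:none nameserver=${DNS}"
echo "${KARGS}"

# On the bastion you would then attach it to the VM before power-on, e.g. with govc:
#   govc vm.change -vm /DC1/vm/ocp4/master0 \
#     -e "guestinfo.afterburn.initrd.network-kargs=${KARGS}"
```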
To gather information about the VMware infrastructure, the `govc` tool can be used:
```
$ mkdir ~/bin
$ export GOVC_URL=https://github.com/vmware/govmomi/releases/download/v0.22.1/govc_linux_amd64.gz
$ curl -L ${GOVC_URL} | gunzip > ~/bin/govc
$ chmod +x ~/bin/govc
$ govc version
govc 0.22.1
$ export GOVC_URL="vcenter.ocplab.net"
$ export GOVC_USERNAME="<...>"
$ export GOVC_PASSWORD="<...>"
$ export GOVC_INSECURE=1
$ export GOVC_DATASTORE="<...>"
$ govc about
Name: VMware vCenter Server
Vendor: VMware, Inc.
Version: 6.7.0
Build: 8170161
OS type: linux-x64
API type: VirtualCenter
API version: 6.7
Product ID: vpx
$ govc ls
/DC1/vm
/DC1/network
/DC1/host
/DC1/datastore
$ govc ls /DC1/network
/DC1/network/LAN2
/DC1/network/LAN1
/DC1/network/WAN1
$ govc ls /DC1/vm
/DC1/vm/Discovered virtual machine
/DC1/vm/ocp4
$ govc ls /DC1/vm/ocp4
/DC1/vm/ocp4/rhcos-4.3.0-x86_64
```
Pull requests are welcome!
Please provide your contributions by branching from the master branch.