rszmigiel/openshift-bm-upi-ansible

An Ansible playbook to spin up a baremetal UPI OpenShift installation environment.

The purpose of this playbook is to reduce the amount of time required to install Red Hat OpenShift on baremetal servers using the User Provisioned Infrastructure (UPI) method. It follows the official OpenShift installation guide available at: https://docs.openshift.com/container-platform/4.8/installing/installing_bare_metal/installing-bare-metal.html

Environment

The following diagram shows the environment configuration in its initial state, before the installation starts. For further details and requirements please read the next section.

Initial setup diagram

Requirements

  1. Valid Red Hat OpenShift subscriptions

  2. Valid Red Hat Enterprise Linux subscriptions

  3. Access to https://cloud.redhat.com

  4. An activation key and organisation ID pair to register servers with Red Hat (https://access.redhat.com/articles/1378093) - see the example after this list

  5. Provisioning host

    1. Preinstalled RHEL 8

    2. 8 CPUs

    3. 16GB of memory

    4. 100GB of free disk space

    5. Connection to the baremetal network through a bridge interface, as shown in the diagram above

    6. Valid RHEL subscription with access to Red Hat packages

  6. Provisioning VM

    1. Valid RHEL subscription with access to Red Hat packages

  7. Three master nodes

    1. PXE boot configured on NIC connected to baremetal network

  8. Two worker nodes (optional)

    1. PXE boot configured on NIC connected to baremetal network
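
For reference, the activation key and organisation ID from requirement 4 are what subscription-manager uses to register a host with your Red Hat account; the playbook reads the same values from the rhn section of config/vars.yaml (described in the installation procedure below). The values here are placeholders:

# subscription-manager register --org=<organisation-id> --activationkey=<activation-key>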

Extra customization

For additional customisation options please review and edit the following templates:

  1. roles/provisioning_vm/templates/install_config.yaml.j2

  2. roles/provisioning_vm/templates/autoboot.ipxe.j2

Installation procedure

  1. Install RHEL8 on the provisioning host

  2. Collect a list of MAC addresses from the NICs configured for PXE boot on all cluster nodes (masters and workers)

  3. Download a RHEL8 qcow2 image from https://access.redhat.com/downloads and store it as /var/lib/libvirt/images/rhel-8-x86_64-kvm.qcow2, for instance:

    curl -L 'https://access.cdn.redhat.com/content/origin/files/(...)' -o \
    /var/lib/libvirt/images/rhel-8-x86_64-kvm.qcow2
  4. Configure network bridge connected to baremetal network

    1. If you're using the same physical NIC for the SSH connection to the provisioning host, this will interrupt your connection. You may have to perform the following operation using an out-of-band management interface/console

    2. You have to remove the IP configuration from the NIC before you add it to the bridge; the IP address then has to be configured on top of the bridge interface

    3. Update the NIC name (eth0) and IP address in the snippet below according to your configuration

    # nmcli connection add ifname baremetal type bridge con-name baremetal \
    ipv4.method manual ipv4.address 192.168.234.2/24 ipv6.method disabled \
    ipv4.dns 8.8.8.8 ipv4.gateway 192.168.234.254
    # nmcli con add type bridge-slave ifname eth0 master baremetal
  5. Copy this repository to a system which can access the provisioning host and has the ansible-playbook binary available. It can be the provisioning host itself.
    The following extra packages are also required on the Ansible controller (an install example is shown after the list):

    1. python3-netaddr.noarch

    2. python3-dns
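
    On RHEL 8 these can be installed with dnf, for example as shown below; the ansible-core package name is an assumption and may differ depending on your RHEL release and enabled repositories:

    # dnf install -y ansible-core python3-netaddr python3-dns  # package names may vary by release/repositories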

  6. Download your pull-secret.txt file from https://console.redhat.com/openshift/create/local and save it as files/pull-secret.txt

  7. Create the configuration file config/vars.yaml - please see config/vars.yaml.example for the blueprint - it should be pretty self-explanatory (a starting point is shown after the key list below)

    1. rhn - put your Red Hat access details (activation key, organisation ID) and the pool name. For more details please see https://access.redhat.com/articles/1378093

    2. cluster - configure the cluster version, name, domain, ingress and API IP addresses, and NTP server IP addresses.

    3. cluster.networkType - configure which SDN solution should be used, OpenShiftSDN or OVNKubernetes.

    4. nodes.ipxe_image - name of the iPXE image; please see https://ipxe.org/appnote/buildtargets for details on how to find the right iPXE image to boot from

    5. nodes.masters - a list of MAC addresses of the PXE-booting NICs on the master nodes - the order matters

    6. nodes.workers - a list of MAC addresses of the PXE-booting NICs on the worker nodes - the order matters. If you're not deploying any worker nodes, please set it to workers: {}

    7. baremetal_network - the "main" network configuration, including IP, DNS and DHCP range settings - adjust it to your environment

    8. provisioning_vm - configuration of the provisioning VM the installation tool will be run from. It also hosts the DHCP server for the baremetal network (dnsmasq) and haproxy as the load balancer for the API and ingress endpoints. Please ensure it has the right IP address configured and a valid path to its qcow2 image. You may want to change the default root password.

    9. bootstrap_vm - configuration of the temporary bootstrap VM; ensure its qcow2 image path is valid, the other defaults should be fine.
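
    A simple way to start, assuming you work from the shipped blueprint, is to copy it and then edit the keys described above:

    $ cp config/vars.yaml.example config/vars.yaml
    $ vi config/vars.yaml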

  8. Update the inventory.yaml file to reflect your IP configuration for the provisioning host and the provisioning VM. It has to be consistent with the IP addresses configured earlier in the config/vars.yaml file
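
    With the inventory updated, you can optionally sanity-check SSH connectivity from the Ansible controller with an ad-hoc ping; note that the provisioning VM will only respond once the playbook has created it:

    $ ansible -i inventory.yaml all -m ping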

  9. For additional customisation options please review and edit the following templates:

    1. roles/provisioning_vm/templates/install_config.yaml.j2

    2. roles/provisioning_vm/templates/autoboot.ipxe.j2

  10. Run the installation playbook

    $ ansible-playbook -i inventory.yaml main.yaml
  11. Once successfully finished, the environment should look like the following diagram

    End of deployment diagram
  12. Now you can power on the baremetal nodes. Assuming they boot via PXE and you provided the right MAC addresses in the config file, the nodes should be bootstrapped with the right roles assigned.

  13. You can continue at https://docs.openshift.com/container-platform/4.8/installing/installing_bare_metal/installing-bare-metal.html#installation-installing-bare-metal_installing-bare-metal
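
    For reference, the follow-up steps in that guide are typically run from the provisioning VM and look roughly like the snippet below; the installation directory path is illustrative:

    $ openshift-install --dir=<install-dir> wait-for bootstrap-complete
    $ export KUBECONFIG=<install-dir>/auth/kubeconfig
    $ oc get csr                        # approve pending node CSRs with: oc adm certificate approve <csr-name>
    $ openshift-install --dir=<install-dir> wait-for install-complete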

Establishing connection to OpenShift UI (Virtualized environment)

If you deploy OpenShift using this method in a virtualized environment, with only a single powerful host and VMs as master/worker nodes, you can create a SOCKS5 proxy with your SSH client in order to reach OpenShift resources via a web browser.

ssh root@your_host -D localhost:12345

Then configure your browser to use the SOCKS5 proxy at localhost:12345.
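
To verify the proxy without a browser, you can also point curl at the console route through the SOCKS5 tunnel; the hostname below follows the default console route pattern, so substitute your own cluster name and base domain:

curl -k --socks5-hostname localhost:12345 https://console-openshift-console.apps.<cluster_name>.<base_domain>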

Cleaning up

To clean up the environment please use the cleanup.yaml playbook:

ansible-playbook -i inventory.yaml cleanup.yaml

Credits

  1. Rhys Oxenham, Ben Schmaus and August Simonelli for their work on openshift-aio (OpenShift All-in-One), https://github.com/RHFieldProductManagement/openshift-aio, which has partially been reused here

  2. Mohammed Salih for sharing his autoboot.ipxe script, which I've used and modified here.
