This guide demonstrates how to perform an OpenShift installation using the IPI method on Microsoft Azure Government. In addition, it walks through performing this installation on an existing disconnected network; in other words, a network that does not allow access to or from the internet. Finally, this installation will not store administrative credentials, and it will disable the Cloud Credential Operator from automatically creating new service principal accounts for use by other services. Instead, those accounts will be created manually.
A video that walks through this guide is available here: https://youtu.be/cAdGCLQ15zI
In this guide, we will install OpenShift onto an existing virtual network. This virtual network contains two private subnets that are firewalled off from access to and from the internet. Since we need a way to reach those subnets, a third, public subnet hosts the bastion node, which we will use to access the private network. The following section entitled Example MAG configuration details the network configuration used in this guide. While the internet is firewalled off from the private network, we still need to allow access to the Azure and Azure Government cloud APIs; without that, we will not be able to install a cloud-aware OpenShift cluster. Please note the firewall rules created below that allow this access to the Azure cloud APIs.
This guide assumes that the user has valid accounts and subscriptions for both Red Hat OpenShift and Microsoft Azure Government. It also assumes that an SSH keypair has been created and that the files azure-key.pem and azure-key.pub both exist.
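If the keypair does not already exist, it can be generated up front. A minimal sketch, assuming RSA keys and the exact file names used throughout this guide:

```shell
# Sketch: generate the SSH keypair assumed by this guide
# (azure-key.pem / azure-key.pub). -m PEM writes the private key in
# PEM format; -N "" sets an empty passphrase for simplicity.
ssh-keygen -t rsa -b 4096 -m PEM -f azure-key.pem -N "" -q

# ssh-keygen names the public key azure-key.pem.pub; rename it to
# match the file name used in the rest of this guide.
mv azure-key.pem.pub azure-key.pub
chmod 600 azure-key.pem
```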
The following section may be used to create the required infrastructure, consisting of the following components:
- Service Principal Account
- Azure Virtual Network
- Private DNS zone
- Firewall
- Public and Private subnets
- Bastion Host
- Registry Host
For the purpose of this demo, it is assumed that these components will be provided by the user or the cloud administrator. However, IPI can also create these components for you, if desired. Please create or provide components according to the examples in this section.
If you have already created and validated these resources, please skip to the Installing OpenShift section.
Use the link below and follow the instructions to install the Azure CLI.
Log in to Azure and set the cloud provider:
az login
az cloud set --name AzureUSGovernment
In order to perform the install, one or more Service Principal accounts will need to be created. The installation itself requires a Service Principal with the 'Contributor' and 'User Access Administrator' roles. In addition, there are a few services that require a Service Principal in order to provide cloud-aware functionality. This guide will use two Service Principal accounts: one for the installation and one shared by those services. However, the user may opt to create additional Service Principal accounts as needed. The following document details how to determine which credentials are needed.
The following commands may be used to create the Service Principal for the installation. Make note of the subscription ID, tenant ID, client ID, and password token; these will be used later in this guide.
az ad sp create-for-rbac --role Contributor --name <service_principal>
az role assignment create --role "User Access Administrator" \
--assignee-object-id $(az ad sp list --filter "appId eq '<appId>'" \
| jq '.[0].objectId' -r)
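The create-for-rbac command prints its credentials as JSON. A sketch of capturing the relevant fields with jq, using hypothetical placeholder values in place of the real command output:

```shell
# Hypothetical sample of the JSON printed by 'az ad sp create-for-rbac';
# in practice, capture the real command output instead:
#   sp_json=$(az ad sp create-for-rbac --role Contributor --name <service_principal>)
sp_json='{"appId":"11111111-2222-3333-4444-555555555555","displayName":"ocp-install-sp","password":"example-secret","tenant":"99999999-8888-7777-6666-555555555555"}'

# The client ID is the "appId" field; the client secret is "password".
AZURE_CLIENT_ID=$(echo "$sp_json" | jq -r '.appId')
AZURE_CLIENT_SECRET=$(echo "$sp_json" | jq -r '.password')
AZURE_TENANT_ID=$(echo "$sp_json" | jq -r '.tenant')
echo "$AZURE_CLIENT_ID"
```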
Create a resource group, where AZURE_REGION is either usgovtexas or usgovvirginia:
az group create -l <AZURE_REGION> -n <RESOURCE_GROUP>
az network vnet create -g <RESOURCE_GROUP> -n <VNET_NAME> --address-prefixes 10.1.0.0/16
The Firewall will block traffic to and from the internet. In order for the OpenShift cluster to be cloud aware and to be able to run the IPI method of install, we need to allow access to the Azure and Azure for Government APIs.
az extension add -n azure-firewall
az network firewall create -g <RESOURCE_GROUP> -n <FW>
az network vnet subnet create \
-g <RESOURCE_GROUP> \
--vnet-name <VNET_NAME> \
-n AzureFirewallSubnet \
--address-prefixes 10.1.10.0/24
az network public-ip create \
--name fw-pip \
--resource-group <RESOURCE_GROUP> \
--allocation-method static \
--sku standard
az network firewall ip-config create \
--firewall-name <FW> \
--name FW-config \
--public-ip-address fw-pip \
--resource-group <RESOURCE_GROUP> \
--vnet-name <VNET_NAME>
fwprivaddr=$( \
az network firewall ip-config list \
-g <RESOURCE_GROUP> \
-f <FW> \
--query "[?name=='FW-config'].privateIpAddress" \
--output tsv)
az network route-table create \
--name Firewall-rt-table \
--resource-group <RESOURCE_GROUP> \
--disable-bgp-route-propagation true
az network route-table route create \
--resource-group <RESOURCE_GROUP> \
--name fw-route \
--route-table-name Firewall-rt-table \
--address-prefix 0.0.0.0/0 \
--next-hop-type VirtualAppliance \
--next-hop-ip-address $fwprivaddr
az network firewall application-rule create \
--collection-name azure_gov \
--firewall-name <FW> \
--name azure \
--protocols Http=80 Https=443 \
--resource-group <RESOURCE_GROUP> \
--target-fqdns \
*microsoftonline.us \
*graph.windows.net \
*usgovcloudapi.net \
*applicationinsights.us \
*microsoft.us \
--source-addresses 10.1.1.0/24 10.1.2.0/24 \
--priority 100 \
--action Allow
az network firewall application-rule create \
--collection-name azure_ms \
--firewall-name <FW> \
--name azure \
--protocols Http=80 Https=443 \
--resource-group <RESOURCE_GROUP> \
--target-fqdns \
*azure.com *microsoft.com \
*microsoftonline.com \
*windows.net \
--source-addresses 10.1.1.0/24 10.1.2.0/24 \
--priority 200 \
--action Allow
az network vnet subnet create \
-g <RESOURCE_GROUP> \
--vnet-name <VNET_NAME> \
-n <PUBLIC_SUBNET> \
--address-prefixes 10.1.0.0/24
az network vnet subnet create \
-g <RESOURCE_GROUP> \
--vnet-name <VNET_NAME> \
-n <CONTROL_SUBNET> \
--address-prefixes 10.1.1.0/24 \
--route-table Firewall-rt-table
az network vnet subnet create \
-g <RESOURCE_GROUP> \
--vnet-name <VNET_NAME> \
-n <COMPUTE_SUBNET> \
--address-prefixes 10.1.2.0/24 \
--route-table Firewall-rt-table
Note: Ensure that the file azure-key.pub exists in the current working directory. Also, if the operator catalog will also be downloaded and copied over, please adjust the os-disk-size-gb value accordingly.
az vm create -n <BASTION> -g <RESOURCE_GROUP> \
--image RedHat:RHEL:8.2:latest \
--size Standard_D2s_v3 \
--os-disk-size-gb 150 \
--public-ip-address bastion-pub-ip \
--vnet-name <VNET_NAME> --subnet <PUBLIC_SUBNET> \
--admin-username azureuser \
--ssh-key-values azure-key.pub
Note: Ensure that the file azure-key.pub exists in the current working directory. Also, if the operator catalog will also be downloaded and copied over, please adjust the os-disk-size-gb value accordingly.
az vm create -n <REGISTRY> -g <RESOURCE_GROUP> \
--image RedHat:RHEL:8.2:latest \
--size Standard_D2s_v3 \
--os-disk-size-gb 150 \
--public-ip-address '' \
--vnet-name <VNET_NAME> --subnet <CONTROL_SUBNET> \
--admin-username azureuser \
--ssh-key-values azure-key.pub
The REGISTRY_IP is the private IP address assigned to the registry host in the previous step.
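The address can also be looked up from the CLI. A sketch using jq against a trimmed, hypothetical sample of the `az vm show -d` output; with a live subscription, query the real command instead:

```shell
# Hypothetical trimmed sample of 'az vm show -d' output. Against a real
# subscription you could run:
#   REGISTRY_IP=$(az vm show -d -g <RESOURCE_GROUP> -n <REGISTRY> --query privateIps -o tsv)
vm_json='{"name":"registry","powerState":"VM running","privateIps":"10.1.1.4"}'
REGISTRY_IP=$(echo "$vm_json" | jq -r '.privateIps')
echo "$REGISTRY_IP"
```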
az network private-dns zone create -g <RESOURCE_GROUP> -n <DOMAIN>
az network private-dns link vnet create \
-g <RESOURCE_GROUP> -n private-dnslink \
-z <DOMAIN> -v <VNET_NAME> -e true
az network private-dns record-set a add-record \
-g <RESOURCE_GROUP> \
-z <DOMAIN> \
-n registry \
-a <REGISTRY_IP>
scp -i azure-key.pem azure-key.pem azureuser@${BASTION_PUBLIC_IP}:~/.ssh/azure-key.pem
ssh -i azure-key.pem azureuser@${BASTION_PUBLIC_IP}
sudo lsblk #identify blk dev where home is mapped to (ex /dev/sda2)
sudo parted -l #when prompted type 'fix'
sudo growpart /dev/sda 2
sudo pvresize /dev/sda2
sudo pvscan
sudo lvresize -r -L +125G /dev/mapper/rootvg-homelv
#From bastion
ssh -i ~/.ssh/azure-key.pem azureuser@registry.<DOMAIN>
sudo lsblk # identify blk dev where home is mapped to (ex /dev/sda2)
sudo parted -l #when prompted type 'fix'
sudo growpart /dev/sda 2
sudo pvresize /dev/sda2
sudo pvscan
sudo lvresize -r -L +125G /dev/mapper/rootvg-homelv
In order to capture all the artifacts needed to install OpenShift, this guide will use a tool called openshift4_mirror. Please see https://repo1.dso.mil/platform-one/distros/red-hat/ocp4/openshift4-mirror for more information about this tool. In addition, the pull secret will need to be obtained from https://cloud.redhat.com/openshift/install/pull-secret. If the operator catalogs are also needed, ensure that there is enough disk space and remove the --skip-catalogs flag.
The following steps require OpenShift 4.6 and above. Replace <OCP_VERSION> with the specific target install version, such as 4.7.0.
#From Bastion
sudo dnf install podman
mkdir mirror && cd mirror
podman run -it -v "$(pwd)":/app/bundle:Z quay.io/redhatgov/openshift4_mirror:latest
./openshift_mirror bundle \
--openshift-version <OCP_VERSION> \
--platform azure \
--skip-existing --skip-catalogs \
--pull-secret '<PULL_SECRET>'
#exit by using ctrl-d
tar czf OpenShiftBundle-<OCP_VERSION>.tgz <OCP_VERSION>/
#From Bastion
scp -i ~/.ssh/azure-key.pem OpenShiftBundle-<OCP_VERSION>.tgz azureuser@registry.<DOMAIN>:~
ssh -i ~/.ssh/azure-key.pem azureuser@registry.<DOMAIN>
For the purpose of this demo, we will use a temporary registry to serve the OpenShift install media. PLEASE NOTE: you should replace this step with a registry of your choice.
#From Registry
tar xzf OpenShiftBundle-<OCP_VERSION>.tgz
cd <OCP_VERSION>
openssl req -newkey rsa:4096 -nodes -sha256 -keyout domain.key -x509 -days 365 -out domain.crt -subj "/CN=registry.<DOMAIN>/O=Red Hat/L=Default City/ST=TX/C=US"
sudo firewall-cmd --zone=public --permanent --add-port=5000/tcp
sudo firewall-cmd --reload
bin/oc image serve --dir=$PWD/release/ --tls-crt=domain.crt --tls-key=domain.key
#Test from Bastion
curl -k https://registry.<DOMAIN>:5000/v2/openshift/
#From Registry
cd && mkdir ocp_install && cd ocp_install
vi install-config.yaml # copy and paste install-config.template from below
#Edit template as needed
The template below has defined the parameters for this use case. Please supply the user-specific content. Note that if FIPS cryptography is required, it must be set in the install-config.yaml prior to installation; it cannot be changed post-installation.
apiVersion: v1
baseDomain: <DOMAIN>
compute:
- hyperthreading: Enabled
  name: worker
  platform:
    azure:
      osDisk:
        diskSizeGB: 512
      type: Standard_D2s_v3
  replicas: 4
controlPlane:
  hyperthreading: Enabled
  name: master
  platform:
    azure:
      osDisk:
        diskSizeGB: 512
      type: Standard_D8s_v3
  replicas: 3
metadata:
  creationTimestamp: null
  name: <CLUSTER_NAME>
networking:
  clusterNetwork:
  - cidr: 10.11.0.0/16
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.1.1.0/24
  - cidr: 10.1.2.0/24
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  azure:
    baseDomainResourceGroupName: <RESOURCE_GROUP>
    cloudName: AzureUSGovernmentCloud
    computeSubnet: <COMPUTE_SUBNET>
    controlPlaneSubnet: <CONTROL_SUBNET>
    networkResourceGroupName: <RESOURCE_GROUP>
    outboundType: UserDefinedRouting
    region: <AZURE_REGION>
    virtualNetwork: <VNET_NAME>
publish: Internal
fips: <true/false>
pullSecret: |
  { "auths": { "<REGISTRY_DNS>": { "auth": "", "email": "example@redhat.com" } } }
additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  MIIFozCCA4ugAwIBAgIUKcifYaM+d4mCC6RNgnKUpFFARfswDQYJKoZIhvcNAQEL
  ...
  -----END CERTIFICATE-----
imageContentSources:
- mirrors:
  - <REGISTRY_DNS>:5000/openshift/release
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - <REGISTRY_DNS>:5000/openshift/release
  source: registry.svc.ci.openshift.org/ocp/release
- mirrors:
  - <REGISTRY_DNS>:5000/openshift/release
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
sshKey: |
  ssh-rsa AAAAB3Nza...
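The pullSecret entry above uses an empty auth token because the temporary registry serves the mirror unauthenticated. If your mirror registry does require authentication, the auth value is the base64 encoding of user:password. A sketch with hypothetical credentials:

```shell
# Hypothetical registry credentials; substitute your own user:password pair
# and place the resulting token in the "auth" field of the pull secret.
printf 'myuser:mypassword' | base64
# → bXl1c2VyOm15cGFzc3dvcmQ=
```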
The first time the openshift-install binary is run, it will prompt for the Azure subscription ID, tenant ID, client ID, and client secret/password. These values must correspond to the service principal account created for the installation. It will then save them to $HOME/.azure/osServicePrincipal.json and will reference that file on future runs.
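To skip the interactive prompt, the credentials file can be created ahead of time. A sketch, assuming the field names below match the installer's osServicePrincipal.json format (all values shown are placeholders):

```shell
mkdir -p "$HOME/.azure"
# Placeholder values; use the subscription, client, secret, and tenant IDs
# from the installation service principal created earlier in this guide.
cat <<'EOF' > "$HOME/.azure/osServicePrincipal.json"
{
  "subscriptionId": "00000000-0000-0000-0000-000000000000",
  "clientId": "11111111-1111-1111-1111-111111111111",
  "clientSecret": "example-secret",
  "tenantId": "22222222-2222-2222-2222-222222222222"
}
EOF
```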
#From Registry
cd ~/<OCP_VERSION>
bin/openshift-install create manifests --dir=/home/azureuser/ocp_install/ --log-level=debug
Update the Cloud Credential Operator to Manual mode instead of the default Mint mode, and remove the cloud credential secret.
cat <<EOF > /home/azureuser/ocp_install/manifests/cco-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cloud-credential-operator-config
  namespace: openshift-cloud-credential-operator
  annotations:
    release.openshift.io/create-only: "true"
data:
  disabled: "true"
EOF
rm /home/azureuser/ocp_install/openshift/99_cloud-creds-secret.yaml
New secrets need to be created for each credential request that is created during the install. Please refer to the section above regarding the creation of the service principals for details on identifying which credentials are needed. For each credential request, we need to create a secret using the name and namespace defined in the request. In addition, we need to identify the ClusterID for this install: view the file /home/azureuser/ocp_install/.openshift_install_state.json, look for the ClusterID definition, and copy and save the InfraID value.
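The lookup can be scripted with jq. A sketch against a hypothetical excerpt of the state file; the key name is an assumption based on the 4.6-era file layout, so verify it against your own file:

```shell
# Hypothetical excerpt of .openshift_install_state.json; for a real install,
# point jq at /home/azureuser/ocp_install/.openshift_install_state.json.
cat <<'EOF' > /tmp/sample_install_state.json
{ "*installconfig.ClusterID": { "UserID": "abc123", "InfraID": "mycluster-x7k2p" } }
EOF
CLUSTER_ID=$(jq -r '."*installconfig.ClusterID".InfraID' /tmp/sample_install_state.json)
echo "$CLUSTER_ID"
```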
For each credential request, create a new file under /home/azureuser/ocp_install/openshift/ with a unique name and set the contents of the file as follows:
kind: Secret
apiVersion: v1
metadata:
  name: <CCO_CR_NAME>
  namespace: <CCO_CR_NAMESPACE>
stringData:
  azure_subscription_id: "<SUBSCRIPTION_ID>"
  azure_client_id: "<CLIENT_ID>"
  azure_client_secret: "<CLIENT_SECRET>"
  azure_tenant_id: "<TENANT_ID>"
  azure_resource_prefix: "<CLUSTER_ID>"
  azure_resourcegroup: "<CLUSTER_ID>-rg"
  azure_region: "<REGION>"
#From Registry
bin/openshift-install create cluster --dir=/home/azureuser/ocp_install/ --log-level=debug
Once the installation completes successfully, the logs will print the URL of the OpenShift console along with the password for the kubeadmin account. Please note that you will need to establish a VPN connection, or a similar method of access, in order to reach the web console. Additionally, it will print the path to the kubeconfig file, which may be used with the OpenShift CLI (oc) to connect to the OpenShift API service. The following is an example of the logs.
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/azureuser/ocp_install/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.openshift.dbasparta.io
INFO Login to the console with user: "kubeadmin", and password: "XXXXX-XXXXX-XXXXX-XXXXX"
DEBUG Time elapsed per stage:
DEBUG Infrastructure: 13m57s
DEBUG Bootstrap Complete: 9m25s
DEBUG Bootstrap Destroy: 5m57s
DEBUG Cluster Operators: 12m39s
INFO Time elapsed: 42m7s