rh-mobb/terraform-aro

Using Terraform to build an ARO cluster

Azure Red Hat OpenShift (ARO) is a fully-managed turnkey application platform.

Supports both public and private ARO clusters.

Setup

Using the code in this repo requires the following tools:

  • The Terraform CLI
  • The OC CLI
  • The Azure CLI (az) and jq (used in the verification steps below)

Create the ARO cluster and required infrastructure

Public ARO cluster

  1. Create a local variables file

    make tfvars
  2. Modify the terraform.tfvars file. See variables.tf for the full list of variables that can be set.

    NOTE: You can also set the subscription_id needed for authentication with export TF_VAR_subscription_id="xxx".
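    A sketch of the environment-variable option from the NOTE above; the GUID is a placeholder, not a real subscription ID:

    ```shell
    # Terraform maps TF_VAR_<name> environment variables onto the input
    # variable <name>, so this sets the "subscription_id" variable.
    export TF_VAR_subscription_id="00000000-0000-0000-0000-000000000000"

    # Or read it from your current Azure CLI session (assumes `az login` was run):
    # export TF_VAR_subscription_id=$(az account show --query id -o tsv)

    echo "$TF_VAR_subscription_id"
    ```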

  3. Deploy your cluster

    make create

    NOTE: By default both the ingress_profile and the api_server_profile are Public, but they can be changed using the Terraform variables.
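    For example, both profiles could be switched to Private in terraform.tfvars. This is a sketch based on the variable names in the NOTE above; double-check names and accepted values against variables.tf:

    ```shell
    # Append hypothetical overrides to terraform.tfvars; verify the variable
    # names and the "Public"/"Private" values against variables.tf first.
    cat >> terraform.tfvars <<'EOF'
    api_server_profile = "Private"
    ingress_profile    = "Private"
    EOF
    ```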

Private ARO cluster

  1. Modify the terraform.tfvars file. See variables.tf for the full list of variables that can be set.

  2. Deploy your cluster

    make create-private

    NOTE: Setting restrict_egress_traffic=true secures the ARO cluster by routing egress traffic through an Azure Firewall.

    NOTE 2: Private clusters can be created without a public IP by setting the outbound type variable to UserDefinedRouting. By default, LoadBalancer is used for egress.
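    The two NOTEs above might be combined in terraform.tfvars as follows. This is a sketch: the exact variable names (in particular outbound_type) are assumptions, so confirm them against variables.tf:

    ```shell
    # Hypothetical terraform.tfvars fragment for a locked-down private cluster;
    # restrict_egress_traffic routes egress through Azure Firewall, and
    # UserDefinedRouting avoids the public IP used by the default LoadBalancer.
    cat >> terraform.tfvars <<'EOF'
    restrict_egress_traffic = true
    outbound_type           = "UserDefinedRouting"
    EOF
    ```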

Test Connectivity

  1. Get the ARO cluster's API server URL.

    ARO_URL=$(az aro show -n $AZR_CLUSTER -g $AZR_RESOURCE_GROUP -o json | jq -r '.apiserverProfile.url')
    echo $ARO_URL
  2. Get the ARO cluster's console URL.

    CONSOLE_URL=$(az aro show -n $AZR_CLUSTER -g $AZR_RESOURCE_GROUP -o json | jq -r '.consoleProfile.url')
    echo $CONSOLE_URL
  3. Get the ARO cluster's credentials.

    ARO_USERNAME=$(az aro list-credentials -n $AZR_CLUSTER -g $AZR_RESOURCE_GROUP -o json | jq -r '.kubeadminUsername')
    ARO_PASSWORD=$(az aro list-credentials -n $AZR_CLUSTER -g $AZR_RESOURCE_GROUP -o json | jq -r '.kubeadminPassword')
    echo $ARO_PASSWORD
    echo $ARO_USERNAME

Public Test Connectivity

  1. Log in to the cluster with oc login, using the API server URL and credentials collected above:

    oc login $ARO_URL -u $ARO_USERNAME -p $ARO_PASSWORD
  2. Check that you can access the console by opening the console URL in your browser.

Private Test Connectivity

  1. Save the jump host public IP address

    JUMP_IP=$(az vm list-ip-addresses -g $AZR_RESOURCE_GROUP -n $AZR_CLUSTER-jumphost -o tsv \
    --query '[].virtualMachine.network.publicIpAddresses[0].ipAddress')
    echo $JUMP_IP
  2. Update /etc/hosts to point the OpenShift domains to localhost. Use your OpenShift cluster's DNS, as described in the previous step, in place of $YOUR_OPENSHIFT_DNS below:

    127.0.0.1 api.$YOUR_OPENSHIFT_DNS
    127.0.0.1 console-openshift-console.apps.$YOUR_OPENSHIFT_DNS
    127.0.0.1 oauth-openshift.apps.$YOUR_OPENSHIFT_DNS
  3. SSH to that instance, tunneling traffic for the appropriate hostnames. Be sure to use your new/existing private key, your OpenShift DNS in place of $YOUR_OPENSHIFT_DNS, and your jump host IP:

    sudo ssh -L 6443:api.$YOUR_OPENSHIFT_DNS:6443 \
    -L 443:console-openshift-console.apps.$YOUR_OPENSHIFT_DNS:443 \
    -L 80:console-openshift-console.apps.$YOUR_OPENSHIFT_DNS:80 \
    aro@$JUMP_IP
  4. Log in using oc login

    oc login $ARO_URL -u $ARO_USERNAME -p $ARO_PASSWORD

NOTE: Another option for connecting to a private ARO cluster through the jump host is sshuttle. Assuming the ARO VNet was deployed with the 10.0.0.0/20 CIDR, you can reach the cluster (both API and console) with:

sshuttle --dns -NHr aro@$JUMP_IP 10.0.0.0/20 --daemon

Then api.$YOUR_OPENSHIFT_DNS and console-openshift-console.apps.$YOUR_OPENSHIFT_DNS will be reachable from your browser.

Cleanup

  1. Delete the cluster and its resources

    make destroy-force