Deploying IBM Cloud Private on Azure using Terraform
These Terraform example templates use the Terraform AzureRM Provider to provision servers in Azure and the Terraform Module ICP Deploy to deploy IBM Cloud Private on them.
- Working copy of Terraform
- Basic understanding of IBM Cloud Private
- Azure account
- Access to ICP Images tarball if deploying ICP Enterprise Edition templates
All templates have been tested on Ubuntu 16.04 and RHEL. Details on running RHEL in production here
Each template example is highly customizable, but all are configured with sensible defaults, so they provide a starting point for the most common use cases.
Basic template which deploys a single master node on an Azure VM. Both Master and Proxy are assigned public IP addresses so they can be easily accessed over the internet. IBM Cloud Private Community Edition is installed directly from Docker Hub, so this template does not require access to ICP Enterprise Edition licenses and the image tarball. Suitable for initial tests and validations.
Deploy ICP Enterprise Edition in a highly available configuration, with cluster deployed across 3 Azure availability zones
Deploy ICP Enterprise Edition in a highly available configuration, with cluster availability managed using Azure Availability Sets
Using the templates
- Select the appropriate template for your use case
- Adjust it as required, or use one of the samples
- Run terraform init in the selected template directory
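The steps above can be sketched as the following workflow (the template directory name here is hypothetical; use the directory of the template you selected):

```shell
# Hypothetical template directory -- substitute the template you chose.
cd templates/icp-ce-minimal

# Download the AzureRM provider plugin and the ICP deploy module.
terraform init

# Review the planned changes before provisioning anything.
terraform plan

# Provision the Azure resources and deploy ICP.
terraform apply
```

Running `terraform plan` first is optional but recommended, since it shows exactly which Azure resources the template will create before any cost is incurred.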
Note: If you are using Terraform v0.12.x you may get "Unsupported Argument" errors. To fix this, install Terraform v0.11.x (https://releases.hashicorp.com/terraform/0.11.14/) and use that instead.
You will be prompted by the Azure provider to log in and create a temporary token. To create a permanent service principal which does not time out and require re-authentication, follow the steps outlined in the Terraform docs
Note: For ICP to work on Azure, the Kubernetes controller manager needs to dynamically update the Azure routing table. It is therefore essential that the variable aadClientSecret is populated with a service principal that has permission to update the Azure routing table.
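Once the service principal is created, its credentials can be supplied to the AzureRM provider via the standard `ARM_*` environment variables, so Terraform does not prompt for interactive login. A minimal sketch (all values below are hypothetical placeholders; substitute your own subscription, tenant, and service principal details):

```shell
# Hypothetical placeholder values -- replace with your own service principal.
export ARM_SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
export ARM_TENANT_ID="11111111-1111-1111-1111-111111111111"
export ARM_CLIENT_ID="22222222-2222-2222-2222-222222222222"
export ARM_CLIENT_SECRET="my-service-principal-secret"
```

The same client ID and secret can then be passed to the template variables so the Kubernetes controller manager is able to update the Azure routing table.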
Using the environments
When the template creation completes successfully, Terraform will produce output similar to this:
```
Outputs:

ICP Admin Password = 2f052b35d7cdc3c87b5d6b49009fe972
ICP Admin Username = admin
ICP Boot node = 220.127.116.11
ICP Console URL = https://hktestas-f4c95db9-control.westeurope.cloudapp.azure.com:8443
ICP Kubernetes API URL = https://hktestas-f4c95db9-control.westeurope.cloudapp.azure.com:8001
cloudctl = cloudctl login --skip-ssl-validation -a https://hktestas-f4c95db9-control.westeurope.cloudapp.azure.com:8443 -u admin -p 2f052b35d7cdc3c87b5d6b49009fe972 -n default -c id-myicp-account
```
You can use cloudctl to configure your local helm command line client to use this environment, and access the Web Console with the provided username and password
For instructions on how to install cloudctl, see the IBM Knowledge Center
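Logging in with the values from the Terraform output above can be sketched as follows (the hostname, password, and account ID are the sample values from the output; substitute your own):

```shell
# Log in using the credentials printed by Terraform (sample values shown).
cloudctl login --skip-ssl-validation \
  -a https://hktestas-f4c95db9-control.westeurope.cloudapp.azure.com:8443 \
  -u admin -p 2f052b35d7cdc3c87b5d6b49009fe972 \
  -n default -c id-myicp-account

# cloudctl configures kubectl and the helm TLS certificates for you;
# verify connectivity with the helm v2 client shipped with ICP.
helm version --tls
```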
Azure Network Options and information
You can read more about the Azure network and options in docs/azure-networking.md
Using integrated Azure functionality
Depending on your configuration, you can use integrated functionality from the Azure cloud provider for Kubernetes.
Once you have logged in to the environment using cloudctl, or configured kubectl with authentication information from the dashboard, you can create Azure Load Balancers and Persistent Volumes using standard Kubernetes resources.
Using the Azure Loadbalancer
See details and examples for exposing your workloads with Azure LoadBalancer in azure-loadbalancer.md
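As a quick illustration of the idea, a workload can be exposed through an Azure Load Balancer simply by creating a Service of type LoadBalancer; the Azure cloud provider then provisions the load balancer and assigns a public IP. A minimal sketch, assuming a hypothetical deployment named my-app listening on port 8080:

```shell
# Hypothetical deployment name and port -- adjust to your workload.
kubectl expose deployment my-app --type=LoadBalancer --port=80 --target-port=8080

# The EXTERNAL-IP column is <pending> until Azure finishes provisioning.
kubectl get service my-app --watch
```

See azure-loadbalancer.md for complete examples, including annotations for internal load balancers.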
Dynamic Volume Provisioning
To be able to dynamically create and attach volumes, we need to create the necessary cluster role and cluster role binding for the persistent-volume-binder service account:
```
kubectl create clusterrole system:azure-cloud-provider --verb=get,create --resource=secrets
kubectl create clusterrolebinding system:azure-cloud-provider --clusterrole=system:azure-cloud-provider --serviceaccount=kube-system:persistent-volume-binder
```
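With the role binding in place, dynamic provisioning needs a StorageClass backed by the azure-disk provisioner. A minimal sketch (the StorageClass name is hypothetical; storage account type and disk kind are choices you may want to adjust):

```shell
# Hypothetical StorageClass for dynamically provisioned Azure managed disks.
kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azure-managed-disk
provisioner: kubernetes.io/azure-disk
parameters:
  storageaccounttype: Standard_LRS
  kind: Managed
EOF
```

Any PersistentVolumeClaim that references this StorageClass will then have a managed disk created and attached automatically.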