Azure/terraform-azapi-hybridcontainerservice

**Note: This project is in the preview stage; the API might change.**


This Terraform module deploys a hybrid Kubernetes cluster on ASZ using Hybrid Container Service and adds support for adding node pools.

Usage in Terraform 1.2.0

Note: Currently, an ARB (Arc Resource Bridge) and a vnet cannot be provisioned through the portal, so we assume the customer has a resource group containing a pre-provisioned ARB and vnet. This module only creates a hybrid AKS cluster with an existing ARB and vnet.

There are some examples in the examples folder. You can execute the terraform apply command in any of its subfolders to try out the module. These examples are tested against every PR with the E2E test.
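The examples folder is the authoritative reference, but a minimal invocation of this module might look like the following sketch. All values here are placeholders, and the `source` path is an assumption; copy the tested configuration from the examples instead of this snippet.

```hcl
module "hybrid_aks" {
  # Source path is an assumption; use the reference shown in the examples folder.
  source = "../.."

  # Required inputs (placeholder values). The ARB and vnet must already
  # exist in this resource group, since the module does not create them.
  resource_group_name = "my-pre-provisioned-rg"
  cluster_name        = "my-hybrid-aks"
  customLocation_id   = "/subscriptions/<subscription_id>/resourceGroups/my-pre-provisioned-rg/providers/Microsoft.ExtendedLocation/customLocations/my-custom-location"
  vnet_name           = "my-existing-vnet"
  public_key          = var.public_key # Base64 encoded public certificate

  # Optional overrides (defaults shown in the Inputs table below)
  kubernetes_version = "v1.22.11"
  pod_cidr           = "10.245.0.0/16"
}
```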

Pre-Commit & PR-Check & Test

Configurations

We assume that you have set up your service principal's credentials in environment variables, as shown below:

export ARM_SUBSCRIPTION_ID="<azure_subscription_id>"
export ARM_TENANT_ID="<azure_subscription_tenant_id>"
export ARM_CLIENT_ID="<service_principal_appid>"
export ARM_CLIENT_SECRET="<service_principal_password>"

On Windows Powershell:

$env:ARM_SUBSCRIPTION_ID="<azure_subscription_id>"
$env:ARM_TENANT_ID="<azure_subscription_tenant_id>"
$env:ARM_CLIENT_ID="<service_principal_appid>"
$env:ARM_CLIENT_SECRET="<service_principal_password>"

We provide a docker image to run the pre-commit checks and tests for you: mcr.microsoft.com/azterraform:latest

To run the pre-commit task, run the following command:

$ docker run --rm -v $(pwd):/src -w /src mcr.microsoft.com/azterraform:latest make pre-commit

On Windows Powershell:

$ docker run --rm -v ${pwd}:/src -w /src mcr.microsoft.com/azterraform:latest make pre-commit

The pre-commit task will:

  1. Run terraform fmt -recursive command for your Terraform code.
  2. Run terrafmt fmt -f command for markdown files and Go code files to ensure that the Terraform code embedded in these files is well formatted.
  3. Run go mod tidy and go mod vendor for the test folder to ensure that all the dependencies have been synced.
  4. Run gofmt for all Go code files.
  5. Run gofumpt for all Go code files.
  6. Run terraform-docs on README.md file, then run markdown-table-formatter to format markdown tables in README.md.

Then we can run the pr-check task to check whether our code meets our pipeline's requirements (we strongly recommend you run the following command before you commit):

$ docker run --rm -v $(pwd):/src -w /src mcr.microsoft.com/azterraform:latest make pr-check

On Windows Powershell:

$ docker run --rm -v ${pwd}:/src -w /src mcr.microsoft.com/azterraform:latest make pr-check

To run the e2e-test, run the following command:

docker run --rm -v $(pwd):/src -w /src -e ARM_SUBSCRIPTION_ID -e ARM_TENANT_ID -e ARM_CLIENT_ID -e ARM_CLIENT_SECRET mcr.microsoft.com/azterraform:latest make e2e-test

On Windows Powershell:

docker run --rm -v ${pwd}:/src -w /src -e ARM_SUBSCRIPTION_ID -e ARM_TENANT_ID -e ARM_CLIENT_ID -e ARM_CLIENT_SECRET mcr.microsoft.com/azterraform:latest make e2e-test

To follow the "Ensure AKS uses disk encryption set" policy we use azurerm_key_vault in the example code, and to follow the "Key vault does not allow firewall rules settings" policy we limit the IP CIDR in its network_acls. By default we use the IP returned by the https://api.ipify.org?format=json API as your public IP, but if you need to use a different CIDR, you can assign one by passing an environment variable:

docker run --rm -v $(pwd):/src -w /src -e TF_VAR_key_vault_firewall_bypass_ip_cidr="<your_cidr>" -e ARM_SUBSCRIPTION_ID -e ARM_TENANT_ID -e ARM_CLIENT_ID -e ARM_CLIENT_SECRET mcr.microsoft.com/azterraform:latest make e2e-test

On Windows Powershell:

docker run --rm -v ${pwd}:/src -w /src -e TF_VAR_key_vault_firewall_bypass_ip_cidr="<your_cidr>" -e ARM_SUBSCRIPTION_ID -e ARM_TENANT_ID -e ARM_CLIENT_ID -e ARM_CLIENT_SECRET mcr.microsoft.com/azterraform:latest make e2e-test

Prerequisites

License

MIT

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party's policies.

Document generation

terraform-docs markdown table --output-file README.md --output-mode insert  --sort-by required .

Module Spec

The following sections are generated by terraform-docs and markdown-table-formatter; please DO NOT MODIFY THEM MANUALLY!

Requirements

| Name | Version |
|------|---------|
| terraform | >= 1.3 |
| azurerm | ~>3.0 |

Providers

| Name | Version |
|------|---------|
| azapi | n/a |
| azurerm | ~>3.0 |

Modules

No modules.

Resources

| Name | Type |
|------|------|
| azapi_resource.provisionedCluster | resource |
| azapi_resource.clusterVnet | data source |
| azurerm_resource_group.rg | data source |

Inputs

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|----------|
| cluster_name | (Required) The name of the cluster | string | n/a | yes |
| customLocation_id | (Required) The id of the custom location that the resources will run in | string | n/a | yes |
| public_key | (Required) Base64 encoded public certificate used by the agent to do the initial handshake to the backend services in Azure. | string | n/a | yes |
| resource_group_name | (Required) The name of the resource group that the resources will run in | string | n/a | yes |
| vnet_name | (Required) The name of the vnet that the resources will run in | string | n/a | yes |
| admin_group_object_IDs | (Optional) If you want to use Azure AD for authentication and Kubernetes native RBAC for authorization, specify the AAD groups here. Assign Azure Active Directory groups that will have admin access within the cluster. Make sure you are part of the assigned groups to ensure cluster access after deployment, regardless of whether you are an Owner or a Contributor. | list(string) | [] | no |
| controlplane_VM_size | (Optional) VM SKU of the control plane | string | "Standard_A4_v2" | no |
| controlplane_count | (Optional) VM count of the control plane | number | 1 | no |
| default_agent_VM_size | (Optional) VM SKU of the default node pool | string | "Standard_A4_v2" | no |
| default_agent_count | (Optional) VM count of the default node pool | number | 1 | no |
| environment | (Optional) The environment of the site, with possible values like test/prod | string | "" | no |
| kubernetes_version | (Optional) Kubernetes version of this hybrid AKS cluster | string | "v1.22.11" | no |
| loadbalancer_VM_size | (Optional) VM SKU of the load balancer | string | "Standard_K8S3_v1" | no |
| loadbalancer_count | (Optional) VM count of the load balancer | number | 1 | no |
| loadbalancer_sku | (Optional) SKU of the load balancer | string | "unstacked-haproxy" | no |
| network_policy | (Optional) Network CNI of this hybrid AKS cluster | string | "calico" | no |
| pod_cidr | (Optional) CIDR of the pods in this hybrid AKS cluster | string | "10.245.0.0/16" | no |
| site_name | (Optional) The name of the site | string | "" | no |

Outputs

| Name | Description |
|------|-------------|
| cluster_id | The id of the created hybrid AKS cluster |
| resource_group_id | The id of the resource group |
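These outputs can be consumed from the calling configuration. As a sketch (the module block name `hybrid_aks` here is hypothetical, not defined by this README):

```hcl
# Re-export the cluster id from a hypothetical module block named "hybrid_aks"
output "cluster_id" {
  value = module.hybrid_aks.cluster_id
}
```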
