A Terraform module to deploy a container app in Azure with the following characteristics:
- Ability to specify all the parameters of the Log Analytics Workspace resource.
- Specify the container app image using the `image` parameter in the `template` block under the `container_apps` variable (see the sketch below this list).
- For multiple apps, specify the container parameters under `containers`. It's a set of objects with the following parameters:
  - `name` - (Required) The name of the container.
  - `image` - (Required) The container image.
  - `resources` - (Optional) The resource requirements for the container.
  - `ports` - (Optional) The ports exposed by the container.
  - `environment_variables` - (Optional) The environment variables for the container.
  - `command` - (Optional) The command to run within the container in exec form.
  - `args` - (Optional) The arguments to the command in the `command` field.
  - `liveness_probe` - (Optional) The liveness probe for the container.
  - `readiness_probe` - (Optional) The readiness probe for the container.
  - `volume_mounts` - (Optional) The volume mounts for the container.
  - `volumes` - (Optional) The volumes for the container.
  - `secrets` - (Optional) The secrets for the container.
  - `image_pull_secrets` - (Optional) The image pull secrets for the container.
  - `security_context` - (Optional) The security context for the container.
Please view the folders in `examples`.
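Below is a minimal sketch of how this module might be called. The module source path, the `app` map key, and `revision_mode` are illustrative assumptions; only the `template`/`containers`/`image` structure and the required variables follow from the documentation above.

```hcl
module "container_apps" {
  # Hypothetical source; replace with the path or registry address you actually use.
  source = "./modules/container-apps"

  resource_group_name            = "rg-example"
  location                       = "westeurope"
  container_app_environment_name = "cae-example"

  container_apps = {
    app = {
      name          = "helloworld"
      revision_mode = "Single" # assumption; adjust to your revision strategy

      template = {
        containers = [
          {
            name  = "helloworld"
            image = "mcr.microsoft.com/azuredocs/containerapps-helloworld:latest"
            # Optional settings such as resources, ports, environment_variables,
            # liveness_probe, etc. (see the list above) go here.
          }
        ]
      }
    }
  }
}
```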
This module uses terraform-provider-modtm to collect telemetry data. This provider is designed to assist with tracking the usage of Terraform modules. It creates a custom `modtm_telemetry` resource that gathers and sends telemetry data to a specified endpoint. The aim is to provide visibility into the lifecycle of your Terraform modules - whether they are being created, updated, or deleted. This data can be invaluable in understanding the usage patterns of your modules, identifying popular modules, and recognizing those that are no longer in use.
The ModTM provider is designed with respect for data privacy and control. The only data collected and transmitted are the tags you define in the module's `modtm_telemetry` resource, a UUID that identifies the module instance, and the operation the module's caller is executing (Create/Update/Delete/Read). No other data from your Terraform modules or your environment is collected or transmitted.
One of the primary design principles of the ModTM provider is its non-blocking nature. The provider is designed so that any network disconnection or error during the telemetry data sending process will not cause a Terraform error or interrupt your Terraform operations. This makes the ModTM provider safe to use even in network-restricted or air-gapped environments.
If the telemetry data cannot be sent due to network issues, the failure will be logged, but it will not affect the Terraform operation in progress (it might delay your operations by no more than 5 seconds). This ensures that your Terraform operations always run smoothly and without interruptions, regardless of the network conditions.
You can turn off the telemetry collection by declaring the following `provider` block in your root module:
provider "modtm" {
enabled = false
}
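For Terraform to resolve this `provider "modtm"` block, the root module also needs the provider declared in `required_providers`; otherwise Terraform would look for it in the default `hashicorp/` namespace. The source address and version constraint below are assumptions to verify against the provider's registry page:

```hcl
terraform {
  required_providers {
    modtm = {
      # Assumed registry address and version for terraform-provider-modtm; confirm before use.
      source  = "Azure/modtm"
      version = ">= 0.3"
    }
  }
}
```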
We assume that you have set up the service principal's credentials in your environment variables, like below:

```shell
export ARM_SUBSCRIPTION_ID="<azure_subscription_id>"
export ARM_TENANT_ID="<azure_subscription_tenant_id>"
export ARM_CLIENT_ID="<service_principal_appid>"
export ARM_CLIENT_SECRET="<service_principal_password>"
```
On Windows PowerShell:

```powershell
$env:ARM_SUBSCRIPTION_ID="<azure_subscription_id>"
$env:ARM_TENANT_ID="<azure_subscription_tenant_id>"
$env:ARM_CLIENT_ID="<service_principal_appid>"
$env:ARM_CLIENT_SECRET="<service_principal_password>"
```
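With these variables exported, the azurerm provider reads the service principal credentials automatically, so a minimal provider configuration in your root module is enough. This snippet is illustrative and not part of the module:

```hcl
provider "azurerm" {
  # Credentials are picked up from the ARM_* environment variables set above.
  features {}
}
```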
We provide a Docker image to run the pre-commit checks and tests for you: `mcr.microsoft.com/azterraform:latest`.

To run the pre-commit task, we can run the following command:

```shell
$ docker run --rm -v $(pwd):/src -w /src mcr.microsoft.com/azterraform:latest make pre-commit
```
On Windows PowerShell:

```powershell
$ docker run --rm -v ${pwd}:/src -w /src mcr.microsoft.com/azterraform:latest make pre-commit
```
In the pre-commit task, we will:

- Run `terraform fmt -recursive` for your Terraform code.
- Run `terrafmt fmt -f` for markdown files and Go code files to ensure that the Terraform code embedded in these files is well formatted.
- Run `go mod tidy` and `go mod vendor` for the test folder to ensure that all the dependencies have been synced.
- Run `gofmt` for all Go code files.
- Run `gofumpt` for all Go code files.
- Run `terraform-docs` on `README.md`, then run `markdown-table-formatter` to format the markdown tables in `README.md`.
Then we can run the pr-check task to check whether our code meets our pipeline's requirements (we strongly recommend running the following command before you commit):

```shell
$ docker run --rm -v $(pwd):/src -w /src mcr.microsoft.com/azterraform:latest make pr-check
```
On Windows PowerShell:

```powershell
$ docker run --rm -v ${pwd}:/src -w /src mcr.microsoft.com/azterraform:latest make pr-check
```
To run the e2e-test, we can run the following command:

```shell
docker run --rm -v $(pwd):/src -w /src -e ARM_SUBSCRIPTION_ID -e ARM_TENANT_ID -e ARM_CLIENT_ID -e ARM_CLIENT_SECRET mcr.microsoft.com/azterraform:latest make e2e-test
```
On Windows PowerShell:

```powershell
docker run --rm -v ${pwd}:/src -w /src -e ARM_SUBSCRIPTION_ID -e ARM_TENANT_ID -e ARM_CLIENT_ID -e ARM_CLIENT_SECRET mcr.microsoft.com/azterraform:latest make e2e-test
```
Name | Version |
---|---|
terraform | >= 1.2 |
azurerm | >= 3.87, < 4.0 |
Name | Version |
---|---|
azurerm | >= 3.87, < 4.0 |
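If you want to pin these constraints in your root module, a `terraform` block mirroring the table above looks like this (a sketch, not part of the module):

```hcl
terraform {
  required_version = ">= 1.2"

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">= 3.87, < 4.0"
    }
  }
}
```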
No modules.
Name | Type |
---|---|
azurerm_container_app.container_app | resource |
azurerm_container_app_environment.container_env | resource |
azurerm_container_app_environment_dapr_component.dapr | resource |
azurerm_container_app_environment_storage.storage | resource |
azurerm_log_analytics_workspace.laws | resource |
azurerm_container_app_environment.container_env | data source |
Name | Description | Type | Default | Required |
---|---|---|---|---|
container_app_environment | Reference to an existing Container Apps Environment to use. | object({ | null | no |
container_app_environment_infrastructure_subnet_id | (Optional) The existing subnet to use for the container apps control plane. Changing this forces a new resource to be created. | string | null | no |
container_app_environment_internal_load_balancer_enabled | (Optional) Should the Container App Environment operate in Internal Load Balancing Mode? Defaults to `false`. Changing this forces a new resource to be created. | bool | null | no |
container_app_environment_name | (Required) The name of the Container Apps managed environment. Changing this forces a new resource to be created. | string | n/a | yes |
container_app_environment_tags | A map of the tags to use on the resources that are deployed with this module. | map(string) | {} | no |
container_app_secrets | (Optional) The secrets of the container apps. The key of the map should be aligned with the corresponding container app. | map(list(object({ | {} | no |
container_apps | The container apps to deploy. | map(object({ | n/a | yes |
dapr_component | (Optional) The Dapr component to deploy. | map(object({ | {} | no |
dapr_component_secrets | (Optional) The secrets of the Dapr components. The key of the map should be aligned with the corresponding Dapr component. | map(list(object({ | {} | no |
env_storage | (Optional) Manages a Container App Environment Storage, writing files to this file share to make data accessible by other systems. | map(object({ | {} | no |
environment_storage_access_key | (Optional) The Storage Account Access Key. The key of the map should be aligned with the corresponding environment storage. | map(string) | null | no |
location | (Required) The location this container app is deployed in. This should be the same as the environment in which it is deployed. | string | n/a | yes |
log_analytics_workspace | (Optional) An existing Log Analytics Workspace to use. | object({ | null | no |
log_analytics_workspace_allow_resource_only_permissions | (Optional) Specifies whether the Log Analytics Workspace allows users to access data associated with resources they have permission to view, without permission to the workspace. Defaults to `true`. | bool | true | no |
log_analytics_workspace_cmk_for_query_forced | (Optional) Is Customer Managed Storage mandatory for query management? Defaults to `false`. | bool | false | no |
log_analytics_workspace_daily_quota_gb | (Optional) The workspace daily quota for ingestion in GB. Defaults to `-1`, which means unlimited. | number | -1 | no |
log_analytics_workspace_internet_ingestion_enabled | (Optional) Should the Log Analytics Workspace support ingestion over the Public Internet? Defaults to `true`. | bool | true | no |
log_analytics_workspace_internet_query_enabled | (Optional) Should the Log Analytics Workspace support query over the Public Internet? Defaults to `true`. | bool | true | no |
log_analytics_workspace_local_authentication_disabled | (Optional) Specifies if the Log Analytics Workspace should enforce authentication using Azure Active Directory. Defaults to `false`. | bool | false | no |
log_analytics_workspace_name | (Optional) Specifies the name of the Log Analytics Workspace. You must set this variable if `var.log_analytics_workspace` is `null`. Changing this forces a new resource to be created. | string | null | no |
log_analytics_workspace_reservation_capacity_in_gb_per_day | (Optional) The capacity reservation level in GB for this workspace. Must be in increments of 100 between 100 and 5000. `reservation_capacity_in_gb_per_day` can only be used when the `sku` is set to `CapacityReservation`. | number | null | no |
log_analytics_workspace_retention_in_days | (Optional) The workspace data retention in days. Possible values are either 7 (Free Tier only) or a range between 30 and 730. | number | null | no |
log_analytics_workspace_sku | (Optional) Specifies the SKU of the Log Analytics Workspace. Possible values are `Free`, `PerNode`, `Premium`, `Standard`, `Standalone`, `Unlimited`, `CapacityReservation`, and `PerGB2018` (new SKU as of `2018-04-03`). Defaults to `PerGB2018`. | string | "PerGB2018" | no |
log_analytics_workspace_tags | (Optional) A mapping of tags to assign to the resource. | map(string) | null | no |
resource_group_name | (Required) The name of the resource group in which the resources will be created. | string | n/a | yes |
tracing_tags_enabled | Whether to enable tracing tags generated by BridgeCrew Yor. | bool | false | no |
tracing_tags_prefix | Default prefix for generated tracing tags. | string | "avm_" | no |

Complex object types are truncated in the table above; see the module's variable definitions for the full schemas.
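As an illustration of the Log Analytics Workspace inputs listed above, the sketch below sets a few of them. The values are examples only, and the required arguments are the same as in the earlier minimal sketch:

```hcl
module "container_apps" {
  # Hypothetical source, as in the earlier sketch.
  source = "./modules/container-apps"

  # ... required arguments (resource_group_name, location,
  # container_app_environment_name, container_apps) as shown earlier ...

  # Customize the Log Analytics Workspace created by the module.
  log_analytics_workspace_name              = "law-example"
  log_analytics_workspace_sku               = "PerGB2018"
  log_analytics_workspace_retention_in_days = 30
  log_analytics_workspace_daily_quota_gb    = 10

  log_analytics_workspace_tags = {
    environment = "dev"
  }
}
```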
Name | Description |
---|---|
container_app_environment_id | The ID of the Container App Environment within which this Container App should exist. |
container_app_fqdn | The FQDN of the Container App's ingress. |
container_app_identities | The identities of the Container App, keyed by the Container App's name. |
container_app_ips | The IPs of the Latest Revision of the Container App. |