Description
Terraform Version
terraform -version
Terraform v1.4.2
on darwin_arm64
+ provider registry.terraform.io/hashicorp/aws v6.15.0
+ provider registry.terraform.io/hashicorp/http v3.5.0
Terraform Configuration Files
## Module code
variable "message" {
type = string
}
output "message" {
value = var.message
}
## Test code
locals {
  # Parsing config HTTP response body
  raw_config = jsondecode(data.http.config.response_body)
  config     = local.raw_config.config

  # An earlier attempt used for_each instead of count (see the sketch after this
  # configuration); it did not work either, so it is left commented out:
  # deployment_check = length(try(local.config.is_primary == true ? { primary = true } : {})) > 0 ? 1 : 0
  deployment_check = local.config.is_primary ? 1 : 0
}
# Secret that stores api key path
data "aws_secretsmanager_secret" "api_key" {
name = "abc/service/market/
}
# Retrieve the actual API key value from Secrets Manager
data "aws_secretsmanager_secret_version" "api_key" {
secret_id = data.aws_secretsmanager_secret.api_key.id
}
# Fetch DynamoDB configurations
data "http" "config" {
url = "https://localhost:8080/api/v1/service"
request_headers = {
X-API-Key = jsondecode(data.aws_secretsmanager_secret_version.api_key.secret_string)["terraform"]["key"]
Accept = "application/json"
}
}
# DDB module
module "test_message" {
depends_on = [data.http.config]
count = local.deployment_check
#count = try(jsondecode(data.http.config.response_body).config.is_primary ? 1 : 0, 0)
source = "./modules/test_message"
message = "DDB modules running"
}
output "test" {
value = module.test_message[*].message
}
Debug Output
## terraform plan
data.aws_secretsmanager_secret.api_key: Reading...
data.aws_secretsmanager_secret.api_key: Read complete after 1s [id=xxxxxxx]
data.aws_secretsmanager_secret_version.api_key: Reading...
data.aws_secretsmanager_secret_version.api_key: Read complete after 0s [id=xxxx]
data.http.config: Reading...
data.http.config: Read complete after 0s [id=abc]
Changes to Outputs:
+ test = [
+ "DDB modules running",
]
You can apply this plan to save these new output values to the Terraform state, without changing any real infrastructure.
## terraform apply
data.aws_secretsmanager_secret.api_key: Reading...
data.aws_secretsmanager_secret.api_key: Read complete after 1s [id=xxxx]
data.aws_secretsmanager_secret_version.api_key: Reading...
data.aws_secretsmanager_secret_version.api_key: Read complete after 0s [id=xxxx]
data.http.config: Reading...
data.http.config: Read complete after 0s [id=abc]
Changes to Outputs:
+ test = [
+ "DDB modules running",
]
You can apply this plan to save these new output values to the Terraform state, without changing any real infrastructure.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Outputs:
test = [
"DDB modules running",
]
## terraform destroy
data.aws_secretsmanager_secret.api_key: Reading...
data.aws_secretsmanager_secret.api_key: Read complete after 1s [id=xxxx]
data.aws_secretsmanager_secret_version.api_key: Reading...
data.aws_secretsmanager_secret_version.api_key: Read complete after 0s [id=xxxxx]
data.http.config: Reading...
data.http.config: Read complete after 1s [id=abc]
Changes to Outputs:
- test = [
- "DDB modules running",
] -> null
You can apply this plan to save these new output values to the Terraform state, without changing any real infrastructure.
Do you really want to destroy all resources?
Terraform will destroy all your managed infrastructure, as shown above.
There is no undo. Only 'yes' will be accepted to confirm.
Enter a value: yes
## terraform destroy (re-run)
╷
│ Error: Invalid count argument
│
│ on test.tf line 31, in module "test_message":
│ 31: count = local.deployment_check
│
│ The "count" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work around this, use the -target argument to first apply only the resources that
│ the count depends on.
Expected Behavior
If I re-run terraform destroy, it should not complain about the count value; it should honor the try block. I wrapped count in try() because the dependent HTTP data source gets destroyed, so the fallback value should be used in that case.
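For reference, this is the try-wrapped form of the module, taken from the commented-out count line in the configuration above; the fallback of 0 was meant to apply when the HTTP response cannot be read:

```hcl
module "test_message" {
  # Fall back to 0 when the HTTP response body is unavailable
  count   = try(jsondecode(data.http.config.response_body).config.is_primary ? 1 : 0, 0)
  source  = "./modules/test_message"
  message = "DDB modules running"
}
```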
Actual Behavior
With or without the try wrapper around count, Terraform still throws the same error when re-running destroy. I expected the try fallback to kick in and default the count to 0 when the HTTP response cannot be found:
The "count" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work around this, use the -target argument to first apply only the resources that the count depends on.
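As an illustrative workaround sketch (the variable name here is made up, not part of my real configuration), the count could be driven from a plain input variable instead of the HTTP data source, so it never depends on a value Terraform cannot know during a destroy re-run; but that defeats the purpose of fetching the deployment flag from the config service:

```hcl
# Workaround sketch: drive the count from an input variable with a default,
# so its value is always known at plan time, even on destroy re-runs.
variable "is_primary" {
  type    = bool
  default = false
}

module "test_message" {
  count   = var.is_primary ? 1 : 0
  source  = "./modules/test_message"
  message = "DDB modules running"
}
```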
Steps to Reproduce
1. terraform init
2. terraform apply
3. terraform destroy
4. terraform destroy (re-run)
Additional Context
To clarify why destroy runs more than once: my CI/CD pipelines run on GitHub Actions workflows, and the runners have recently been unreliable, often failing after some jobs have already completed. Because of this I re-run the failed jobs, and those re-runs go through the destroy steps again. The re-runs fail on this count issue, and the pipeline never moves on to the next environment because of the fail-safe condition we set up to prevent higher environments from running when lower ones fail.
References
No response
Generative AI / LLM assisted development?
No response