
Terraform wants to change azurerm_monitor_diagnostic_setting log category settings #5673

Closed
Tommy-Ten opened this issue Feb 11, 2020 · 15 comments


Tommy-Ten commented Feb 11, 2020

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform (and AzureRM Provider) Version

Terraform v0.12.20

  • provider.azurerm v1.41.0

Affected Resource(s)

  • azurerm_monitor_diagnostic_setting

Terraform Configuration Files

resource "azurerm_monitor_diagnostic_setting" "export_activity_logs" {
  name                           = var.actLogName
  target_resource_id             = data.azurerm_subscription.current.id
  storage_account_id             = data.azurerm_storage_account.logging_activity_logs_archive_storage_account.id
  eventhub_authorization_rule_id = azurerm_eventhub_namespace_authorization_rule.eventhub_auth_rule.id

  log {
    category = "Administrative"
    enabled  = true

    retention_policy {
      enabled = true
      days    = 365
    }
  }

  log {
    category = "Security"
    enabled  = false

    retention_policy {
      enabled = false
    }
  }

  log {
    category = "ServiceHealth"
    enabled  = false

    retention_policy {
      enabled = false
    }
  }

  log {
    category = "Alert"
    enabled  = false

    retention_policy {
      enabled = false
    }
  }

  log {
    category = "Recommendation"
    enabled  = false

    retention_policy {
      enabled = false
    }
  }

  log {
    category = "Policy"
    enabled  = false

    retention_policy {
      enabled = false
    }
  }

  log {
    category = "Autoscale"
    enabled  = false

    retention_policy {
      enabled = false
    }
  }

  log {
    category = "ResourceHealth"
    enabled  = false

    retention_policy {
      enabled = false
    }
  }
}

Expected Behavior

After an initial terraform apply, when I run terraform plan or another terraform apply, I should see no changes.

Actual Behavior

After an initial terraform apply, when I run terraform plan or another terraform apply, I see settings for log categories that I defined in my configuration being changed.

Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # module.event_hub_activity_logs.azurerm_monitor_diagnostic_setting.export_activity_logs will be updated in-place
  ~ resource "azurerm_monitor_diagnostic_setting" "export_activity_logs" {
        eventhub_authorization_rule_id = "XXXXX"
        id                             = "XXXXX"
        name                           = "XXXXX"
        storage_account_id             = "XXXXX"
        target_resource_id             = "XXXXX"

      + log {
          + category = "Administrative"
          + enabled  = true

          + retention_policy {
              + days    = 365
              + enabled = true
            }
        }
      - log {
          - category = "Administrative" -> null
          - enabled  = true -> null
        }
      + log {
          + category = "Alert"
          + enabled  = false

          + retention_policy {
              + enabled = false
            }
        }
      - log {
          - category = "Alert" -> null
          - enabled  = false -> null
        }
      + log {
          + category = "Autoscale"
          + enabled  = false

          + retention_policy {
              + enabled = false
            }
        }
      - log {
          - category = "Autoscale" -> null
          - enabled  = false -> null
        }
      + log {
          + category = "Policy"
          + enabled  = false

          + retention_policy {
              + enabled = false
            }
        }
      - log {
          - category = "Policy" -> null
          - enabled  = false -> null
        }
      + log {
          + category = "Recommendation"
          + enabled  = false

          + retention_policy {
              + enabled = false
            }
        }
      - log {
          - category = "Recommendation" -> null
          - enabled  = false -> null
        }
      + log {
          + category = "ResourceHealth"
          + enabled  = false

          + retention_policy {
              + enabled = false
            }
        }
      - log {
          - category = "ResourceHealth" -> null
          - enabled  = false -> null
        }
      + log {
          + category = "Security"
          + enabled  = false

          + retention_policy {
              + enabled = false
            }
        }
      - log {
          - category = "Security" -> null
          - enabled  = false -> null
        }
      + log {
          + category = "ServiceHealth"
          + enabled  = false

          + retention_policy {
              + enabled = false
            }
        }
      - log {
          - category = "ServiceHealth" -> null
          - enabled  = false -> null
        }
    }

Plan: 0 to add, 1 to change, 0 to destroy.

Steps to Reproduce

  1. Add the configuration block above
  2. Run terraform apply to set the initial diagnostic settings
  3. Run terraform apply (or terraform plan) again to observe the planned changes


@jonmaestas

Are these issues related? #2466


rinjohn commented Apr 14, 2020

Is there any solution for this? I am struggling with the same issue while setting up diagnostic settings for a Recovery Services Vault and an Azure SQL Database.
Terraform version: 0.12.13
Provider version: v1.27.1
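
For context, the workaround that surfaces later in this thread, declaring an explicit retention_policy on every log and metric block, applies to these resources as well. A minimal sketch for an Azure SQL Database follows; the variable names and the "Errors"/"Basic" categories are illustrative assumptions, not values taken from this issue:

variable "sql_database_id" {
  description = "ID of the Azure SQL Database to attach diagnostics to (placeholder)"
}

variable "log_storage_account_id" {
  description = "ID of the storage account receiving the logs (placeholder)"
}

resource "azurerm_monitor_diagnostic_setting" "sql_db_diagnostics" {
  name               = "sqldb-diagnostics"
  target_resource_id = var.sql_database_id
  storage_account_id = var.log_storage_account_id

  log {
    # "Errors" is an illustrative category; check which categories the
    # target database actually exposes.
    category = "Errors"
    enabled  = true

    # Declaring retention_policy explicitly keeps subsequent plans empty.
    retention_policy {
      days    = 0
      enabled = false
    }
  }

  metric {
    category = "Basic"
    enabled  = true

    retention_policy {
      days    = 0
      enabled = false
    }
  }
}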

@Tommy-Ten (Author)

Are these issues related? #2466

Yes, it seems so.

@batesandy

Seeing the same for diagnostic logs on a subscription resource.

nyuen (Contributor) commented Apr 23, 2020

The issue is that activity logs do not support a retention policy, which is mandatory in the Terraform provider.

This field should probably be optional in the provider's code, since it is also optional (omitempty) in the Azure SDK:

"retention_policy": {
Type: schema.TypeList,
Required: true,
MaxItems: 1,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"enabled": {
Type: schema.TypeBool,
Required: true,
},

  "days": {
    Type:         schema.TypeInt,
    Optional:     true,
    ValidateFunc: validation.IntAtLeast(0),
  },
},

},
},


@rudolphjacksonm (Contributor)

I'm not sure I understand why my comment was marked as off-topic. This issue hasn't been fixed by @nyuen's PR: it still appears in provider version v2.13.0 and above, so whatever the underlying bug is, it is still present in recent versions of the provider.

nyuen (Contributor) commented Jul 22, 2020

Hi @rudolphjacksonm, the intent of my change was to make the retention policy optional, since the new activity log experience no longer seems to offer a retention policy option (as per the portal UI).

To make the workflow idempotent, I looked at what Terraform stores in state: if you look at the diff reported on a subsequent terraform apply, you will see that the retention_policy is not stored at all, which is what causes the diff.

Below is the Terraform code I'm now using to create activity log diagnostic settings, based on the changes I made to the azurerm provider.

Sample code

provider "azurerm" {
  version = "=2.19.0"
  features {}
}

data "azurerm_subscription" "current" {
}

resource "azurerm_resource_group" "test_diag_rg" {
  name     = "rg-bug-terraform5673"
  location = "southeastasia"
}


resource "azurerm_storage_account" "tf_diag" {
  name                     = "tfbug5673stnyuen"
  resource_group_name      = azurerm_resource_group.test_diag_rg.name
  location                 = azurerm_resource_group.test_diag_rg.location
  account_tier             = "Standard"
  account_replication_type = "GRS"
}


resource "azurerm_monitor_diagnostic_setting" "export_activity_logs" {
  name                           = "demo_log_tf"
  target_resource_id             = data.azurerm_subscription.current.id
  storage_account_id             = azurerm_storage_account.tf_diag.id

  log {
    category = "Administrative"
    enabled  = true
  }

  log {
    category = "Security"
    enabled  = false
  }

  log {
    category = "ServiceHealth"
    enabled  = false
  }

  log {
    category = "Alert"
    enabled  = false
  }

  log {
    category = "Recommendation"
    enabled  = false
  }

  log {
    category = "Policy"
    enabled  = false
  }

  log {
    category = "Autoscale"
    enabled  = false
  }

  log {
    category = "ResourceHealth"
    enabled  = false
  }
}

rudolphjacksonm (Contributor) commented Jul 22, 2020

Hi @nyuen, I've tried the same on my end, but Terraform still wants to change the category for each entry. I've applied this several times and inspected the tfstate, which shows the retention_policy value set to an empty array. Let me know if I'm doing something wrong here:

Sample Code

resource "azurerm_monitor_diagnostic_setting" "aks_cluster_diagnostics" {
  count                          = var.aks_enable_diagnostics == "true" && var.aks_diagnostic_event_hub_name != "" ? 1 : 0
  name                           = "aks-cluster-to-eventhub"
  target_resource_id             = azurerm_kubernetes_cluster.aks_with_aad_parameters.id
  eventhub_name                  = "aks-cluster-diagnostics"
  eventhub_authorization_rule_id = "${data.azurerm_subscription.current.id}/resourceGroups/${var.aks_rg_name}/providers/Microsoft.EventHub/namespaces/${var.aks_diagnostic_event_hub_name}/AuthorizationRules/RootManageSharedAccessKey"
  log {
    category = "kube-apiserver"
    enabled  = true
  }
  log {
    category = "kube-controller-manager"
    enabled  = true
  }
  log {
    category = "kube-scheduler"
    enabled  = true
  }
  log {
    category = "kube-audit"
    enabled  = true
  }
  log {
    category = "cluster-autoscaler"
    enabled  = true
  }
  metric {
    category = "AllMetrics"
    enabled  = true
  }
  depends_on = [azurerm_kubernetes_cluster.aks_with_aad_parameters]
}

resource "azurerm_monitor_diagnostic_setting" "aks_nsg_diagnostics" {
  count                          = var.aks_enable_diagnostics == "true" && var.aks_diagnostic_event_hub_name != "" ? 1 : 0
  name                           = "aks-nsg-to-eventhub"
  target_resource_id             = data.azurerm_resources.aks_cluster_managed_nsg.resources[0].id
  eventhub_name                  = "aks-nsg-diagnostics"
  eventhub_authorization_rule_id = "${data.azurerm_subscription.current.id}/resourceGroups/${var.aks_rg_name}/providers/Microsoft.EventHub/namespaces/${var.aks_diagnostic_event_hub_name}/AuthorizationRules/RootManageSharedAccessKey"
  log {
    category = "NetworkSecurityGroupEvent"
    enabled  = true
  }
  log {
    category = "NetworkSecurityGroupRuleCounter"
    enabled  = true
  }
  depends_on = [
    azurerm_kubernetes_cluster.aks_with_aad_parameters
  ]
}

Plan Output

# module.aks-cluster.azurerm_monitor_diagnostic_setting.aks_cluster_diagnostics[0] will be updated in-place
  ~ resource "azurerm_monitor_diagnostic_setting" "aks_cluster_diagnostics" {
        eventhub_authorization_rule_id = "/subscriptions/000000-00000-00000-00000/resourceGroups/devuks1/providers/Microsoft.EventHub/namespaces/devuks1-logging-ns-primary/AuthorizationRules/RootManageSharedAccessKey"
        eventhub_name                  = "aks-cluster-diagnostics"
        id                             = "/subscriptions/000000-00000-00000-00000/resourcegroups/devuks1/providers/Microsoft.ContainerService/managedClusters/devuks1|aks-cluster-to-eventhub"
        name                           = "aks-cluster-to-eventhub"
        target_resource_id             = "/subscriptions/000000-00000-00000-00000/resourcegroups/devuks1/providers/Microsoft.ContainerService/managedClusters/devuks1"

      - log {
          - category = "cluster-autoscaler" -> null
          - enabled  = true -> null

          - retention_policy {
              - days    = 0 -> null
              - enabled = false -> null
            }
        }
      + log {
          + category = "cluster-autoscaler"
          + enabled  = true
        }
      - log {
          - category = "kube-apiserver" -> null
          - enabled  = true -> null

          - retention_policy {
              - days    = 0 -> null
              - enabled = false -> null
            }
        }
      + log {
          + category = "kube-apiserver"
          + enabled  = true
        }
      - log {
          - category = "kube-audit" -> null
          - enabled  = true -> null

          - retention_policy {
              - days    = 0 -> null
              - enabled = false -> null
            }
        }
      + log {
          + category = "kube-audit"
          + enabled  = true
        }
      - log {
          - category = "kube-controller-manager" -> null
          - enabled  = true -> null

          - retention_policy {
              - days    = 0 -> null
              - enabled = false -> null
            }
        }
      + log {
          + category = "kube-controller-manager"
          + enabled  = true
        }
      - log {
          - category = "kube-scheduler" -> null
          - enabled  = true -> null

          - retention_policy {
              - days    = 0 -> null
              - enabled = false -> null
            }
        }
      + log {
          + category = "kube-scheduler"
          + enabled  = true
        }

      - metric {
          - category = "AllMetrics" -> null
          - enabled  = true -> null

          - retention_policy {
              - days    = 0 -> null
              - enabled = false -> null
            }
        }
      + metric {
          + category = "AllMetrics"
          + enabled  = true
        }
    }

  # module.aks-cluster.azurerm_monitor_diagnostic_setting.aks_nsg_diagnostics[0] must be replaced
-/+ resource "azurerm_monitor_diagnostic_setting" "aks_nsg_diagnostics" {
        eventhub_authorization_rule_id = "/subscriptions/000000-00000-00000-000009/resourceGroups/devuks1/providers/Microsoft.EventHub/namespaces/devuks1-logging-ns-primary/AuthorizationRules/RootManageSharedAccessKey"
        eventhub_name                  = "aks-nsg-diagnostics"
      ~ id                             = "/subscriptions/000000-00000-00000-00000/resourceGroups/mc_devuks1_uksouth/providers/Microsoft.Network/networkSecurityGroups/aks-agentpool-28835032-nsg|aks-nsg-to-eventhub" -> (known after apply)
        name                           = "aks-nsg-to-eventhub"
      ~ target_resource_id             = "/subscriptions/000000-00000-00000-00000/resourceGroups/mc_devuks1_uksouth/providers/Microsoft.Network/networkSecurityGroups/aks-agentpool-28835032-nsg" -> (known after apply) # forces replacement

      - log {
          - category = "NetworkSecurityGroupEvent" -> null
          - enabled  = true -> null

          - retention_policy {
              - days    = 0 -> null
              - enabled = false -> null
            }
        }
      + log {
          + category = "NetworkSecurityGroupEvent"
          + enabled  = true
        }
      - log {
          - category = "NetworkSecurityGroupRuleCounter" -> null
          - enabled  = true -> null

          - retention_policy {
              - days    = 0 -> null
              - enabled = false -> null
            }
        }
      + log {
          + category = "NetworkSecurityGroupRuleCounter"
          + enabled  = true
        }
    }
"attributes": {
            "eventhub_authorization_rule_id": "/subscriptions/00000-00000-00000-00000/resourceGroups/devuks1/providers/Microsoft.EventHub/namespaces/devuks1-logging-ns-primary/AuthorizationRules/RootManageSharedAccessKey",
            "eventhub_name": "aks-cluster-diagnostics",
            "id": "/subscriptions/00000-00000-00000-00000/resourcegroups/devuks1/providers/Microsoft.ContainerService/managedClusters/devuks1|aks-cluster-to-eventhub",
            "log": [
              {
                "category": "cluster-autoscaler",
                "enabled": true,
                "retention_policy": []
              },
              {
                "category": "kube-apiserver",
                "enabled": true,
                "retention_policy": []
              },
              {
                "category": "kube-audit",
                "enabled": true,
                "retention_policy": []
              },
              {
                "category": "kube-controller-manager",
                "enabled": true,
                "retention_policy": []
              },
              {
                "category": "kube-scheduler",
                "enabled": true,
                "retention_policy": []
              }
            ],

nyuen (Contributor) commented Jul 22, 2020

My fix specifically addresses the activity log, which doesn't support retention_policy even when exporting to a storage account. For the Kubernetes-related diagnostic settings it seems the retention policy shouldn't be empty (even though you're not archiving the logs to a storage account).

I would try:

log {
  category = "kube-apiserver"
  enabled  = true

  retention_policy {
    days    = 0
    enabled = false
  }
}

@rudolphjacksonm (Contributor)

@nyuen that worked! I've applied the same change to our Event Hub diagnostic settings, which were being recreated on every apply due to the same issue. Thanks so much for your help; this has been bothering me for ages!
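
For reference, a sketch of what that looks like for an Event Hub namespace; the variable names and the "OperationalLogs" category are illustrative assumptions rather than values from this thread:

variable "eventhub_namespace_id" {
  description = "ID of the Event Hub namespace to attach diagnostics to (placeholder)"
}

variable "diagnostics_storage_account_id" {
  description = "ID of the storage account receiving the logs (placeholder)"
}

resource "azurerm_monitor_diagnostic_setting" "eventhub_ns_diagnostics" {
  name               = "eventhub-namespace-diagnostics"
  target_resource_id = var.eventhub_namespace_id
  storage_account_id = var.diagnostics_storage_account_id

  log {
    category = "OperationalLogs"
    enabled  = true

    # Explicit retention_policy (even when unused) keeps the plan idempotent.
    retention_policy {
      days    = 0
      enabled = false
    }
  }

  metric {
    category = "AllMetrics"
    enabled  = true

    retention_policy {
      days    = 0
      enabled = false
    }
  }
}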

magodo (Collaborator) commented Aug 4, 2020

The issue discussed here is that even though the user has specified all of the available diagnostic settings, Terraform still reports a diff; that has been addressed by #6603, so I'm going to close this issue for now.

For anyone who gets a diff because they have not specified all of the available diagnostic settings, you can subscribe to #7235 for updates on that issue.
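
As a side note for anyone following #7235: one way to cover every category the target resource exposes, rather than maintaining the list by hand, is to enumerate them with the azurerm_monitor_diagnostic_categories data source and dynamic blocks. A minimal sketch, assuming the logs and metrics attributes that data source exposed in provider versions of this era, with placeholder variable names:

variable "target_resource_id" {
  description = "ID of the resource whose diagnostics are exported (placeholder)"
}

variable "diagnostics_storage_account_id" {
  description = "ID of the storage account receiving the logs (placeholder)"
}

data "azurerm_monitor_diagnostic_categories" "target" {
  resource_id = var.target_resource_id
}

resource "azurerm_monitor_diagnostic_setting" "all_categories" {
  name               = "all-categories"
  target_resource_id = var.target_resource_id
  storage_account_id = var.diagnostics_storage_account_id

  # One log block per category the resource supports.
  dynamic "log" {
    for_each = data.azurerm_monitor_diagnostic_categories.target.logs
    content {
      category = log.value
      enabled  = true

      retention_policy {
        days    = 0
        enabled = false
      }
    }
  }

  # Likewise, one metric block per supported metric category.
  dynamic "metric" {
    for_each = data.azurerm_monitor_diagnostic_categories.target.metrics
    content {
      category = metric.value
      enabled  = true

      retention_policy {
        days    = 0
        enabled = false
      }
    }
  }
}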

magodo closed this as completed on Aug 4, 2020

ghost commented Sep 3, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 hashibot-feedback@hashicorp.com. Thanks!

hashicorp locked and limited conversation to collaborators on Sep 3, 2020