
Error recovering Azure key from the Key Vault: Terraform recovers the key but does not import it into the state file #12299

Open
Divya1388 opened this issue Jun 21, 2021 · 2 comments


@Divya1388

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform (and AzureRM Provider) Version

Terraform: v0.15.4
AzureRM Provider: v2.64.0

Affected Resource(s)

azurerm_key_vault_key

Terraform Configuration Files

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">=2.35.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 1.13.2, < 2.0.0" # Breaking changes in 2.0
    }
    local = {
      source  = "hashicorp/local"
      version = ">=2.0.0"
    }
    null = {
      source  = "hashicorp/null"
      version = ">= 3.0.0"
    }
    random = {
      source  = "hashicorp/random"
      version = ">= 3.0.0"
    }
  }
}

provider "azurerm" {
  features {
    key_vault {
      recover_soft_deleted_key_vaults = true
      purge_soft_delete_on_destroy    = false
    }
  }
}

#
# CUSTOMER MANAGED KEYS
#

# The cluster owns its customer managed key since there are so many constraints
# around geographic region and key vault/resource location when using CMK.
resource "azurerm_key_vault_key" "cmk" {
  name         = "cmk-${var.name}"
  key_vault_id = var.cmk_key_vault_id
  key_type     = "RSA"
  key_size     = 2048
  key_opts = [
    "decrypt",
    "encrypt",
    "sign",
    "unwrapKey",
    "verify",
    "wrapKey",
  ]
  depends_on = [
    # Can't depend on the access policy that allows the cluster to get to the
    # CMK or there's a cycle:
    # - Access policy needs the disk encryption set principal
    # - Disk encryption set needs the key created
    #
    # However, when testing we want to add a dependency to the access policy on
    # the _provisioning_ CI/CD account because we want the key to be created
    # after we grant access to the provisioning account; and we want the key to
    # be deleted before we remove that access.
    var.cmk_dependencies
  ]
  tags = local.all_tags

  lifecycle {
    ignore_changes = [
      tags["date_created"]
    ]
  }
}

# The disk encryption set is what ties the node OS disks and persistent volume
# claims to the CMK.
resource "azurerm_disk_encryption_set" "cluster" {
  name                = "des-${var.name}"
  resource_group_name = var.resource_group_name
  location            = var.location
  key_vault_key_id    = azurerm_key_vault_key.cmk.id
  tags                = local.all_tags

  identity {
    type = "SystemAssigned"
  }

  lifecycle {
    ignore_changes = [
      tags["date_created"]
    ]
  }
}

# The disk encryption set needs permissions on the key vault so it can access
# the CMK and do its work.
resource "azurerm_key_vault_access_policy" "cmk" {
  key_vault_id = var.cmk_key_vault_id
  tenant_id    = azurerm_disk_encryption_set.cluster.identity.0.tenant_id
  object_id    = azurerm_disk_encryption_set.cluster.identity.0.principal_id
  key_permissions = [
    "decrypt",
    "encrypt",
    "get",
    "sign",
    "unwrapKey",
    "verify",
    "wrapKey",
  ]
}

resource "azurerm_kubernetes_cluster" "cluster" {
  name                   = var.name
  location               = var.location
  resource_group_name    = var.resource_group_name
  dns_prefix             = var.name
  disk_encryption_set_id = azurerm_disk_encryption_set.cluster.id
  tags                   = local.all_tags
  kubernetes_version = var.kubernetes_version
  private_cluster_enabled = true

  default_node_pool {
    name                = "nodepool01"
    vm_size             = "Standard_DS2_v2"
    availability_zones  = ["1", "2", "3"]
    enable_auto_scaling = true
    type                = "VirtualMachineScaleSets"
    min_count           = var.node_min_count
    max_count           = var.node_max_count
    vnet_subnet_id      = var.node_vnet_subnet_id
  }
  
  service_principal {
    client_id     = data.azuread_service_principal.cluster_service_principal.application_id
    client_secret = var.cluster_service_principal_secret
  }

  network_profile {
    network_plugin = "kubenet"
    outbound_type = "userDefinedRouting"
  }

  role_based_access_control {
    enabled = true
    azure_active_directory {
      managed = true
    }
  }
}

Expected Behaviour

The first terraform apply works fine. After a terraform destroy followed by another terraform apply, the soft-deleted key should be recovered and used.

Actual Behaviour

The first time we apply the configuration, it works fine. When we destroy and apply again, Terraform throws the following error:

Error: keyvault.BaseClient#GetKey: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="KeyNotFound" Message="A key with (name/id) cmk-aks-testmodule-001 was not found in this key vault. If you recently deleted this key you may be able to recover it using the correct recovery command. For help resolving this issue, please see https://go.microsoft.com/fwlink/?linkid=2125182"

If we re-run terraform apply, the key is visible in the portal (Terraform recovers the key), but the apply fails with the following error:

Error: A resource with the ID "https://kv-tmptest-abc2.vault.azure.net/keys/cmk-aks-testmodule-001/3dfe35fdae724cbb9c69f8f09655c680" already exists - to be managed via Terraform this resource needs to be imported into the State. Please see the resource documentation for "azurerm_key_vault_key" for more information.
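
As a manual workaround, the recovered key can be brought back under Terraform's management with terraform import, using the versioned key ID from the error above. This is only a sketch: the resource address azurerm_key_vault_key.cmk assumes the key is declared at the root module, so adjust the address (e.g. module.<name>.azurerm_key_vault_key.cmk) if it actually lives inside a module.

# Hypothetical workaround: import the recovered key using the versioned key ID
# reported in the "already exists" error above. Adjust the resource address if
# the key is declared inside a module.
terraform import azurerm_key_vault_key.cmk "https://kv-tmptest-abc2.vault.azure.net/keys/cmk-aks-testmodule-001/3dfe35fdae724cbb9c69f8f09655c680"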

Steps to Reproduce

  1. terraform apply
  2. terraform destroy
  3. terraform apply (fails with the "KeyNotFound" error above)
  4. terraform apply (the key is recovered, but the apply fails with the "already exists" error; the full sequence is sketched below)
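
For reference, the sequence above as a minimal shell sketch (assuming this configuration is in the working directory and Azure credentials are already configured):

terraform init
terraform apply -auto-approve    # 1. first apply succeeds
terraform destroy -auto-approve  # 2. resources destroyed, key is soft-deleted
terraform apply -auto-approve    # 3. fails with the "KeyNotFound" error
terraform apply -auto-approve    # 4. key shows up in the portal (recovered), but apply fails with "already exists"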

Important Factoids

The key vault provided as input is a private-link key vault. The most confusing part is that Terraform recovers the key but does not import it into the state file, and instead throws an error that the key already exists, even though that key was in fact recovered by Terraform itself.
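
Note: newer releases of the AzureRM provider expose per-item recovery flags in the key_vault features block alongside recover_soft_deleted_key_vaults. The sketch below assumes a provider version that supports recover_soft_deleted_keys; it has not been verified against v2.64.0, where this flag may not be available.

provider "azurerm" {
  features {
    key_vault {
      recover_soft_deleted_key_vaults = true
      # Assumption: requires a provider version that supports this flag
      recover_soft_deleted_keys       = true
      purge_soft_delete_on_destroy    = false
    }
  }
}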

Any help on this will be awesome.

Thanks and Regards,
Divya

@Sur3n00

Sur3n00 commented Jun 22, 2021

Duplicate of #12285

@katbyte added the service/key-vault label Jun 22, 2021
@mjatkin-azzo

This doesn't appear to be a duplicate of the above-mentioned issue. Was there any update on whether this behaviour will be changed?
