
Sanitize kube_admin_config in terraform plan/apply output #4105

Closed
rudolphjacksonm opened this issue Aug 16, 2019 · 7 comments · Fixed by #15800
Labels
bug service/kubernetes-cluster upstream/terraform This issue is blocked on an upstream issue within Terraform (Terraform Core/CLI, The Plugin SDK etc)
@rudolphjacksonm
Contributor

rudolphjacksonm commented Aug 16, 2019

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform (and AzureRM Provider) Version

terraform: 0.12.6
azurerm: 1.32.1

Affected Resource(s)

  • azurerm_kubernetes_cluster

Terraform Configuration Files

resource "azurerm_kubernetes_cluster" "aks_with_aad_parameters" {
  count               = var.aks_aad_enabled == "true" ? 1 : 0
  name                = var.aks_cluster_name
  resource_group_name = var.aks_rg_name
  location            = var.aks_location
  dns_prefix          = var.aks_dns_prefix
  kubernetes_version  = var.aks_kubernetes_version

  agent_pool_profile {
    name            = var.aks_agentpool_name
    max_pods        = var.aks_max_pods
    count           = var.aks_node_count
    os_disk_size_gb = var.aks_node_os_disk_size_gb
    vm_size         = var.aks_agent_vm_sku
    vnet_subnet_id  = var.aks_subnet_id
  }

  linux_profile {
    admin_username = var.aks_agent_admin_user
    ssh_key {
      key_data = var.aks_public_key_data
    }
  }

  network_profile {
    network_plugin     = var.aks_network_plugin
    network_policy     = var.aks_network_policy
    dns_service_ip     = var.aks_dnsServiceIP
    docker_bridge_cidr = var.aks_dockerBridgeCidr
    service_cidr       = var.aks_serviceCidr
  }

  service_principal {
    client_id     = data.azurerm_key_vault_secret.cluster_sp_id.value
    client_secret = data.azurerm_key_vault_secret.cluster_sp_secret.value
  }

  role_based_access_control {
    enabled = true
    azure_active_directory {
      client_app_id     = var.aks_aad_clientapp_id
      server_app_id     = var.aks_aad_serverapp_id
      server_app_secret = var.aks_aad_serverapp_secret
      tenant_id         = data.azurerm_client_config.current.tenant_id
    }
  }
}

Expected Behavior

When running terraform plan or terraform apply I should not be able to see the attributes for kube_admin_config, much like kube_admin_config_raw is sanitized.

Actual Behavior

If I run terraform plan to create an AKS cluster with Terraform v0.12.6, the attributes of kube_admin_config are shown unsanitized in the plan output. Oddly, kube_admin_config_raw is sanitized. Both should be sanitized: as of this moment, someone with only clusterUser access could easily view the job output in our CI platform and grab the admin credentials, thus bypassing RBAC.

Steps to Reproduce

  1. terraform plan
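
As a user-side mitigation (not part of the original report), sensitive values can at least be kept out of any declared outputs while the resource diff remains affected by this bug. A minimal sketch, assuming the cluster resource above and a hypothetical output name:

```hcl
# Hypothetical output block: `sensitive = true` keeps the value out of
# plan/apply output for this output, though the resource diff shown by
# the provider is unaffected until the fix lands.
output "kube_admin_config" {
  value     = azurerm_kubernetes_cluster.aks_with_aad_parameters[0].kube_admin_config
  sensitive = true
}
```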
@rudolphjacksonm rudolphjacksonm changed the title Sanitize kube_admin_confiig in terraform plan/apply output Sanitize kube_admin_config in terraform plan/apply output Aug 16, 2019
@nexxai
Contributor

nexxai commented Aug 16, 2019

Submitted a PR for this: #4107

@favoretti
Collaborator

Since this issue was reported a long time ago and relates to a version of the provider we no longer support, I'm going to close it. Please open a new, updated bug report against current versions of Terraform and the provider if this is still relevant. Thank you.

@github-actions

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 18, 2021
@katbyte katbyte reopened this Oct 14, 2021
@katbyte katbyte modified the milestones: v2.81.0, v3.0.0 Oct 14, 2021
@katbyte
Collaborator

katbyte commented Oct 14, 2021

This is still a valid issue in the latest version of the provider. As a stop-gap measure, we've opened PR #13732 to add a flag that marks the entire block as sensitive without breaking any existing users; for 3.0 we'll remove the flag and make this the default behaviour, unless the underlying bug in the plugin-sdk is resolved first.

@hashicorp hashicorp unlocked this conversation Oct 14, 2021
@katbyte katbyte added the upstream/terraform This issue is blocked on an upstream issue within Terraform (Terraform Core/CLI, The Plugin SDK etc) label Oct 14, 2021
katbyte pushed a commit that referenced this issue Oct 14, 2021
…blocks can now be marked entirely as `Sensitive` via environment variable. (#13732)

The whole of kube_config and kube_admin_config can now be marked as sensitive by setting the environment variable ARM_AKS_KUBE_CONFIGS_SENSITIVE to true.

(workaround for #4105)
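
A sketch of applying the stop-gap from #13732 (assuming provider v2.81.0 or later, where the flag exists), using the environment variable named in the commit above:

```shell
# Stop-gap from #13732: set the flag before running terraform plan/apply
# so the provider marks kube_config and kube_admin_config as sensitive.
export ARM_AKS_KUBE_CONFIGS_SENSITIVE=true

# Confirm the variable is visible to child processes such as terraform:
echo "ARM_AKS_KUBE_CONFIGS_SENSITIVE=$ARM_AKS_KUBE_CONFIGS_SENSITIVE"
```

In Terraform Cloud or a CI pipeline, the same variable would instead be set in the workspace or job environment configuration rather than via `export`.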
@cailyoung

cailyoung commented Feb 1, 2022

Hey folks. #13732 doesn't appear to work for printing of state values during data reads. Example (redacted by me, fully readable in the GHA logs):

We are using provider 2.82.0 and running in Terraform Cloud

  env:
    TEST_ENV_TF_WORKSPACE_POSTFIX: test
    PROD_ENV_TF_WORKSPACE_POSTFIX: production
    ARM_AKS_KUBE_CONFIGS_SENSITIVE: true
<snip>
  # module.aks.data.azurerm_kubernetes_cluster.clustername will be read during apply
  # (config refers to values not yet known)
 <= data "azurerm_kubernetes_cluster" "clustername" {
     <snip>
      ~ identity                        = [] -> (known after apply)
      ~ kube_admin_config               = [
          - {
              - client_certificate     = <redacted>
              - client_key             = <redacted>
              - cluster_ca_certificate = <redacted>
              - host                   = <redacted>
              - password               = <redacted>
              - username               = <redacted>
            },
        ] -> (known after apply)

EDIT: We think a 'final apply' run with the env var set everywhere updated the Cloud state, so plans no longer print these values. Is there anything we could have done to prevent the 'first-time' logging of these values?

@github-actions

This functionality has been released in v3.0.0 of the Terraform Provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading.

For further feature requests or bug reports with this functionality, please create a new GitHub issue following the template. Thank you!

@github-actions

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Apr 24, 2022