Description
Terraform Version
Terraform v1.11.2
on darwin_arm64
+ provider registry.terraform.io/gavinbunney/kubectl v1.19.0
+ provider registry.terraform.io/hashicorp/azurerm v3.117.1
+ provider registry.terraform.io/hashicorp/google v6.25.0
+ provider registry.terraform.io/hashicorp/helm v2.17.0
+ provider registry.terraform.io/hashicorp/kubernetes v2.36.0
Terraform Configuration Files
provider "kubernetes" {
config_path = "~/.kube/config"
config_context = "cluster01"
config_context_cluster = "cluster01"
insecure = true
}
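For reference, both contexts exist in that kubeconfig. A quick check from the shell (output trimmed; the context names are the real ones from my setup):

kubectl config get-contexts -o name
# cluster01
# cluster02

kubectl config current-context
# cluster02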
Debug Output
2025-03-25T13:15:25.703-0700 [DEBUG] provider.terraform-provider-kubernetes_v2.36.0_x5: 2025/03/25 13:15:25 [DEBUG] Using custom current context: "cluster01"
2025-03-25T13:15:25.703-0700 [DEBUG] provider.terraform-provider-kubernetes_v2.36.0_x5: 2025/03/25 13:15:25 [DEBUG] Using overridden context: api.Context{LocationOfOrigin:"", Cluster:"cluster01", AuthInfo:"", Namespace:"", Extensions:map[string]runtime.Object(nil)}
Expected Behavior
As far as I can tell, the config_context setting in the kubernetes provider is not being respected. The debug output above was captured with my kubectl context set to cluster02; terraform apply told me it was going to attempt 25 changes, which is not correct. The debug output says the provider is going to use the cluster01 context, but that does not actually happen. If I manually set my kubectl context to cluster01 before running terraform apply, it correctly detects that no changes are needed. Initially I was not setting config_context_cluster; I added it to see if it would help, but it made no difference.
I expected these settings to select the kubectl context and ensure that terraform apply runs against the correct cluster, but that does not seem to happen. Is something missing here to make that work?
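The only stopgap that reliably works for me is forcing the context outside of Terraform before every run, which defeats the purpose of the provider settings:

# Work around the ignored config_context by switching kubectl's
# active context manually before each apply:
kubectl config use-context cluster01 && terraform apply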
Actual Behavior
terraform apply runs against whichever cluster is currently set as the default context in the kubectl config, regardless of the provider's config_context setting.
Steps to Reproduce
- Use kubectl to set the context to a cluster other than the one specified in the kubernetes provider block in providers.tf.
- Run terraform apply. You get no errors, but Terraform reports that it needs to make many changes because it is looking at the wrong cluster (see the shell sketch below). This could cause serious problems.
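A minimal end-to-end repro from a shell, assuming the two contexts above and a providers.tf that pins config_context = "cluster01":

# kubectl points at the "wrong" cluster:
kubectl config use-context cluster02
terraform apply   # no errors, but plans many changes against cluster02

# switching the context manually is the only thing that fixes it:
kubectl config use-context cluster01
terraform apply   # now correctly reports no changes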
Additional Context
No response
References
No response
Generative AI / LLM assisted development?
No response