
Kubernetes Provider tries to reach localhost:80/api when targeting azurerm resources #405

Closed
damnedOperator opened this issue Apr 18, 2019 · 11 comments

Comments


Terraform Version

0.11.13 (via the Azure DevOps extension)

Affected Resource(s)

  • kubernetes_role_binding.ingressRoleBinding
  • kubernetes_namespace.environment
  • kubernetes_role.ingressRole
    ...and every other Kubernetes resource I try to use.

Terraform Configuration Files

https://ptv2box.ptvgroup.com/index.php/s/OBZRxYIp0k9qIuT

Debug Output

https://gist.github.com/damnedOperator/f7aa5fcffb49ed12cd24d5fa58f362c1

Expected Behavior

Terraform apply should complete without errors, and the Kubernetes provider should configure the cluster created on AKS.

Actual Behavior

Terraform apply fails and the provider tries to connect to either localhost or ".visualstudio.com".

Steps to Reproduce

  1. Set up an Azure DevOps Server 2019
  2. Set up a Linux agent
  3. Configure a release pipeline that runs the Terraform job

Important Factoids

The Terraform job runs in a release pipeline on Azure DevOps Server 2019. What puzzles me is that the azurerm provider does not fail.
We deploy the tfstate file from source control, so Terraform always knows its current state.

References

#382 reports similar behaviour, but in our case, the provider is configured with attributes from the AKS creation

pdecat (Contributor) commented Apr 18, 2019

Could you share your provider configuration?

Provider initialization is done at startup, which may explain what's going on.

@damnedOperator (Author)

provider "kubernetes" {
  host                   = "${azurerm_kubernetes_cluster.cluster.kube_config.0.host}"
  username               = "${azurerm_kubernetes_cluster.cluster.kube_config.0.username}"
  password               = "${azurerm_kubernetes_cluster.cluster.kube_config.0.password}"
  client_certificate     = "${base64decode(azurerm_kubernetes_cluster.cluster.kube_config.0.client_certificate)}"
  client_key             = "${base64decode(azurerm_kubernetes_cluster.cluster.kube_config.0.client_key)}"
  cluster_ca_certificate = "${base64decode(azurerm_kubernetes_cluster.cluster.kube_config.0.cluster_ca_certificate)}"
}

pdecat (Contributor) commented Apr 18, 2019

As the cluster is in the same terraform stack, these values are not available during initialization.

A second apply should work in that case.

Personally, with GKE, I do not manage the cluster and the Kubernetes configuration in the same stack; instead I use remote state or data sources to pass the credentials from my GKE stack to the Kubernetes stack.
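
With remote state, that translates to something roughly like this (untested sketch; the backend settings and output names are placeholders, and the client certificate/key would be exported the same way):

# In the cluster stack: expose the credentials the provider needs as outputs.
output "kube_host" {
  value = "${azurerm_kubernetes_cluster.cluster.kube_config.0.host}"
}

output "kube_cluster_ca_certificate" {
  value = "${azurerm_kubernetes_cluster.cluster.kube_config.0.cluster_ca_certificate}"
}

# In the Kubernetes stack: read the cluster stack's state (Terraform 0.11 syntax).
data "terraform_remote_state" "cluster" {
  backend = "azurerm"

  config {
    storage_account_name = "mystatestorage"             # placeholder
    container_name       = "tfstate"                    # placeholder
    key                  = "cluster.terraform.tfstate"  # placeholder
  }
}

provider "kubernetes" {
  host                   = "${data.terraform_remote_state.cluster.kube_host}"
  cluster_ca_certificate = "${base64decode(data.terraform_remote_state.cluster.kube_cluster_ca_certificate)}"
  # plus client_certificate / client_key, exported and referenced the same way
}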

Some people have had success by generating kubeconfig files and setting load_config_file = true.
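
For example (untested sketch; the file path is arbitrary):

# Write the raw kubeconfig returned by AKS to a local file.
resource "local_file" "kubeconfig" {
  content  = "${azurerm_kubernetes_cluster.cluster.kube_config_raw}"
  filename = "${path.module}/kubeconfig"
}

# Point the provider at that file instead of passing credentials inline.
# Note: the file still has to exist before the provider initializes, so this
# does not remove the need for a two-step or targeted first apply.
provider "kubernetes" {
  load_config_file = true
  config_path      = "${path.module}/kubeconfig"
}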

Note: using data sources instead of remote state to pass the credentials is the preferred choice when available, as it avoids storing the Kubernetes credentials in the Terraform state.
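
For AKS in a separate stack, a minimal sketch of the data-source approach could look like this (assuming the azurerm provider version in use already exposes the azurerm_kubernetes_cluster data source; the names are placeholders):

# Look up the AKS cluster that the other stack created.
data "azurerm_kubernetes_cluster" "cluster" {
  name                = "my-aks-cluster"     # placeholder
  resource_group_name = "my-resource-group"  # placeholder
}

# Configure the Kubernetes provider from the data source instead of the resource.
provider "kubernetes" {
  host                   = "${data.azurerm_kubernetes_cluster.cluster.kube_config.0.host}"
  client_certificate     = "${base64decode(data.azurerm_kubernetes_cluster.cluster.kube_config.0.client_certificate)}"
  client_key             = "${base64decode(data.azurerm_kubernetes_cluster.cluster.kube_config.0.client_key)}"
  cluster_ca_certificate = "${base64decode(data.azurerm_kubernetes_cluster.cluster.kube_config.0.cluster_ca_certificate)}"
}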

damnedOperator (Author) commented Apr 18, 2019

As the cluster is in the same terraform stack, these values are not available during initialization.

OK, then I was confused by the documentation on terraform.io, which, as I remember it, says that you can use this configuration. But how? That would be pretty useful for us, so that we don't produce more blocking traffic on our agent cluster...

pdecat (Contributor) commented Apr 18, 2019

Regarding the usage of datasources with separate stacks: #161 (comment) (that's for EKS, not AKS)

@damnedOperator (Author)

Regarding the usage of datasources with separate stacks: #161 (comment)

Thank you! I think I will be able to take it from here. I just have to find out whether the same is possible with azurerm.

@bcastilho90

Apparently this issue doesn't happen if you specify provider version "1.10.0" for kubernetes. My code is very similar to this, except that I omit the username and password attributes shown after the block:

provider "kubernetes" {
  version = "1.10.0"
  host                   = "${azurerm_kubernetes_cluster.cluster.kube_config.0.host}"
  client_certificate     = "${base64decode(azurerm_kubernetes_cluster.cluster.kube_config.0.client_certificate)}"
  client_key             = "${base64decode(azurerm_kubernetes_cluster.cluster.kube_config.0.client_key)}"
  cluster_ca_certificate = "${base64decode(azurerm_kubernetes_cluster.cluster.kube_config.0.cluster_ca_certificate)}"
}

username = "${azurerm_kubernetes_cluster.cluster.kube_config.0.username}"
password = "${azurerm_kubernetes_cluster.cluster.kube_config.0.password}"

ghost commented Apr 21, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 hashibot-feedback@hashicorp.com. Thanks!

ghost locked and limited the conversation to collaborators on Apr 21, 2020