refactored modules #23
Force-pushed 3da0da2 to 51db39c
I'd really like to have the example in this repo reflect use of the Helm Terraform provider, but I am currently running into hashicorp/terraform-provider-helm#77 (or something that presents as that).
The external data errors appear to be hashicorp/terraform#17862 (and related issues).

```console
$ TF_WARN_OUTPUT_ERRORS=0 terraform destroy -force
data.linode_instance_type.type: Refreshing state...
data.linode_instance_type.type: Refreshing state...
linode_instance.instance: Refreshing state... (ID: 12451578)
module.masters.module.master_instance.linode_instance.instance: Destroying... (ID: 12451578)
module.masters.module.master_instance.linode_instance.instance: Still destroying... (ID: 12451578, 10s elapsed)
module.masters.module.master_instance.linode_instance.instance: Still destroying... (ID: 12451578, 20s elapsed)
module.masters.module.master_instance.linode_instance.instance: Still destroying... (ID: 12451578, 30s elapsed)
module.masters.module.master_instance.linode_instance.instance: Still destroying... (ID: 12451578, 40s elapsed)
module.masters.module.master_instance.linode_instance.instance: Destruction complete after 42s
Destroy complete! Resources: 1 destroyed.
```
I've been going through this off and on for a bit. It's quite a large diff, but I think everything looks okay. I was able to spin up a cluster locally from this branch, so there's that.
That said, as all of the resources being deployed have changed, re-running terraform apply after this change will destroy and recreate the entire cluster. That's kind of Terraform's M.O., but... I wonder if there's a way to prevent this for clusters deployed with a previous version of the provider. (Is this worth worrying about? Should we expect users of this module to lock to a specific version? We should probably update the example in the README to do that if so.)
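One way to address the version-locking question above would be to pin the module to a released version in the example, so existing clusters don't pick up the breaking refactor on their next `terraform init`. A sketch (the version value shown is illustrative, not taken from this PR):

```hcl
module "linode_k8s" {
  source  = "linode/k8s/linode"
  version = "0.0.6" # lock existing clusters to the last pre-refactor release
}
```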
example/main.tf (Outdated)

```hcl
module "linode_k8s" {
  # source  = "linode/k8s/linode"
  # version = "0.0.6"
  source = "git::https://github.com/displague/terraform-linode-k8s?ref=separate_modules"
```
This should ref master.
other than the ref change, this LGTM!
@jfrederickson I think that the performance and flexibility enhancement of this change outweighs the inconvenience of needing to re-provision. That said, we should provide a warning of some sort to existing users.
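For users who would rather not re-provision, `terraform state mv` can migrate existing resources to their new module addresses so the next apply doesn't destroy and recreate them. A sketch only; the exact new addresses depend on the refactored module layout (the target address below is taken from the destroy log above):

```shell
# Move the existing instance into its new module path instead of
# letting Terraform destroy and recreate it on the next apply.
terraform state mv \
  linode_instance.instance \
  module.masters.module.master_instance.linode_instance.instance
```

This has to be repeated for each resource whose address changed, so it is only worth documenting if we expect many users to upgrade in place.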
Addressed both concerns -- thanks @asauber and @jfrederickson
@jfrederickson this is way out of date, but
This work (and the changes that followed) seem to lend themselves towards a Linode example in https://github.com/hashicorp/cluster-api-provider-terraform-cloud/tree/main/examples/. That example could leverage these modules rather than duplicating infra provisioning in the CAPTC provider. (An LKE example could also be added, but that would be a more straightforward copy of the GKE pattern.)
The purpose is to refactor the TF config into separate master, node, and instance modules so that the basic/common and more opinionated parts of the K8s provisioning can be reused outside of this project. Another benefit is more parallelization of instance provisioning.
TODO:
Indentation and field order changes are due to `terraform fmt`.
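The module split described above could look roughly like this from the caller's side (a hypothetical sketch; the actual module paths, names, and variables in this PR may differ):

```hcl
# Hypothetical layout after the refactor: a shared instance module
# reused by both the masters and nodes modules, so each pool's
# instances can be provisioned in parallel.
module "masters" {
  source     = "./modules/masters" # internally calls the shared instance module
  node_count = 1
}

module "nodes" {
  source     = "./modules/nodes"   # reuses the same instance module
  node_count = 3
}
```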