This repository was archived by the owner on Jul 19, 2024. It is now read-only.

Conversation

@displague (Contributor) commented Dec 11, 2018

The purpose is to refactor the Terraform config into separate master, node, and instance modules so that the basic/common (and more opinionated) K8s provisioning can be reused outside of this project. Another benefit would be more parallelism in instance provisioning.

TODO:

  • add descriptions to all variables
  • verify all variables are making it through to their ancestor modules
  • add a readme for each sub-module
  • update the base readme to describe the modules
  • verify master and nodes provision in parallel

Indentation and field-order changes are due to terraform fmt.
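
For illustration, a minimal sketch of how the split modules might be composed from a root config. The module paths, variable names, and arguments here are assumptions for illustration only; the kubeadm_join_command output is the one discussed in a later comment.

module "masters" {
  source = "./modules/masters"   # illustrative path

  region    = "${var.region}"
  node_type = "${var.server_type_master}"
}

module "nodes" {
  source = "./modules/nodes"     # illustrative path

  region     = "${var.region}"
  node_type  = "${var.server_type_node}"
  node_count = "${var.nodes}"

  # Nodes join the cluster using the join command emitted by the masters module.
  kubeadm_join_command = "${module.masters.kubeadm_join_command}"
}

In a layout like this the join step still depends on the masters output, but instance creation inside each module could proceed in parallel, which is the parallelization benefit mentioned above.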

@displague changed the title from "WIP refactored modules (untested)" to "refactored modules" on Dec 18, 2018
@displague (Contributor, Author)

I'd really like to have the example in this repo reflect use of the Helm Terraform provider, but I am currently running into hashicorp/terraform-provider-helm#77 (or something that presents as that).
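
For context, the intended usage would look roughly like the following. The chart and variable names are placeholders, not necessarily what the example would ship with.

provider "helm" {
  kubernetes {
    config_path = "${var.kubeconfig_path}"   # hypothetical variable pointing at the generated kubeconfig
  }
}

resource "helm_release" "ingress" {
  name  = "nginx-ingress"
  chart = "stable/nginx-ingress"   # placeholder chart
}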

@displague (Contributor, Author) commented Jan 21, 2019

The external data source kubeadm_join and the output kubeadm_join_command that consumes it run into this problem on destroy:

Error: Error applying plan:

1 error(s) occurred:

* module.masters.output.kubeadm_join_command: Resource 'data.external.kubeadm_join' does not have attribute 'result' for variable 'data.external.kubeadm_join.result'

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.

$ terraform destroy -force
data.linode_instance_type.type: Refreshing state...
data.linode_instance_type.type: Refreshing state...
linode_instance.instance: Refreshing state... (ID: 12451578)

Error: Error applying plan:

1 error(s) occurred:

* module.masters.output.kubeadm_join_command: variable "kubeadm_join" is nil, but no error was reported

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.

This appears to be hashicorp/terraform#17862 (and related issues).
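
For reference, the configuration involved is shaped roughly like this. The script path and result key are illustrative; only the data source and output names are taken from the module.

data "external" "kubeadm_join" {
  # Runs a script that prints a JSON object such as {"command": "kubeadm join ..."}.
  program = ["${path.module}/scripts/kubeadm-join.sh"]   # hypothetical path
}

output "kubeadm_join_command" {
  # On destroy the data source can be absent from state, so this reference
  # fails with the errors above (hashicorp/terraform#17862).
  value = "${lookup(data.external.kubeadm_join.result, "command")}"
}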

$ TF_WARN_OUTPUT_ERRORS=0 terraform destroy -force
data.linode_instance_type.type: Refreshing state...
data.linode_instance_type.type: Refreshing state...
linode_instance.instance: Refreshing state... (ID: 12451578)
module.masters.module.master_instance.linode_instance.instance: Destroying... (ID: 12451578)
module.masters.master_instance.linode_instance.instance: Still destroying... (ID: 12451578, 10s elapsed)
module.masters.master_instance.linode_instance.instance: Still destroying... (ID: 12451578, 20s elapsed)
module.masters.master_instance.linode_instance.instance: Still destroying... (ID: 12451578, 30s elapsed)
module.masters.master_instance.linode_instance.instance: Still destroying... (ID: 12451578, 40s elapsed)
module.masters.module.master_instance.linode_instance.instance: Destruction complete after 42s

Destroy complete! Resources: 1 destroyed.

@displague changed the base branch from separate_modules to master on January 21, 2019 04:15
@jfrederickson left a comment

I've been going through this off and on for a bit. It's quite a large diff, but I think everything looks okay. I was able to spin up a cluster locally from this branch, so there's that.

That said, as all of the resources being deployed have changed, re-running terraform apply after this change will destroy and recreate the entire cluster. That's kind of Terraform's M.O., but... I wonder if there's a way to prevent this for clusters deployed with a previous version of this module. (Is this worth worrying about? Should we expect users of this module to lock to a specific version? We should probably update the example in the README to do that if so.)
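
If we do recommend pinning, the README example could carry something like the following (the version shown is only a placeholder):

module "linode_k8s" {
  source  = "linode/k8s/linode"
  version = "0.0.6"   # pin to the release the cluster was created with; bump deliberately

  linode_token = "${var.linode_token}"   # illustrative variable
}

With a pinned version, terraform init keeps resolving the old module layout until the user explicitly bumps the version and accepts the re-provision.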

example/main.tf Outdated
module "linode_k8s" {
# source = "linode/k8s/linode"
# version = "0.0.6"
source = "git::https://github.com/displague/terraform-linode-k8s?ref=separate_modules"

this should ref master

@asauber (Contributor) left a comment

other than the ref change, this LGTM!

@asauber (Contributor) commented Jan 28, 2019

@jfrederickson I think that the performance and flexibility enhancement of this change outweighs the inconvenience of needing to re-provision. That said, we should provide a warning of some sort to existing users.

@displague (Contributor, Author) commented Jan 28, 2019

Addressed both concerns -- thanks @asauber and @jfrederickson

@displague merged commit 50d76c5 into linode:master on Jan 28, 2019
@displague (Contributor, Author)

@jfrederickson this is way out of date, but terraform state mv provides the means to transition from the old resource names to the new ones.
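
For anyone landing on this later, the migration looks roughly like this. The old address below is hypothetical; the new address matches the layout introduced in this PR.

# Inspect the current addresses so you know what to move.
$ terraform state list

# Move each old resource address onto its new module path (repeat per resource).
$ terraform state mv linode_instance.k8s_master module.masters.module.master_instance.linode_instance.instance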

@displague (Contributor, Author)

This work (and the changes that followed) seems to lend itself to a Linode example in https://github.com/hashicorp/cluster-api-provider-terraform-cloud/tree/main/examples/. That example could leverage these modules rather than duplicating infra provisioning in the CAPTC provider.

(An LKE example could also be added, but that would be a more straightforward copy of the GKE pattern.)

