
Changing operating_system on existing cluster does not trigger a destroy and create #5501

Open
ocofaigh opened this issue Jul 11, 2024 · 2 comments
Labels
service/Kubernetes Service: Issues related to Kubernetes Service

Comments

@ocofaigh

Changing the value of operating_system on an existing cluster that was created with ibm_container_vpc_cluster does not trigger a destroy and recreate of the cluster. This is inconsistent with common Terraform practice: the provider should attempt to destroy and recreate the resource, and it should be up to the user of the provider, not the provider itself, to opt out of that behavior. The https://developer.hashicorp.com/terraform/language/meta-arguments/lifecycle#ignore_changes meta-argument exists for exactly this reason.
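To illustrate the requested behavior: if the provider marked operating_system as forcing replacement, a user who wanted to keep an existing cluster could still opt out via the lifecycle meta-argument. A minimal sketch, reusing the resource from the reproduction below (the argument values are only examples):

```hcl
resource "ibm_container_vpc_cluster" "cluster" {
  # ... other arguments as in the reproduction config below ...
  operating_system = "REDHAT_8_64"

  lifecycle {
    # Opt out of recreation: Terraform ignores in-config changes
    # to this attribute after the resource is created.
    ignore_changes = [operating_system]
  }
}
```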

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions; they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform CLI and Terraform IBM Provider Version

Affected Resource(s)

  • ibm_container_vpc_cluster

Terraform Configuration Files

Please include all Terraform configurations required to reproduce the bug. Bug reports without a functional reproduction may be closed without investigation.

resource "ibm_container_vpc_cluster" "cluster" {
  name              = "my_vpc_cluster"
  vpc_id            = "r006-abb7c7ea-aadf-41bd-94c5-b8521736fadf"
  kube_version      = "4.15_openshift"
  flavor            = "bx2.16x64"
  worker_count      = "2"
  entitlement       = "cloud_pak"
  operating_system  = "REDHAT_8_64"
  cos_instance_crn  = ibm_resource_instance.cos_instance.id
  resource_group_id = data.ibm_resource_group.resource_group.id
  zones {
      subnet_id = "0717-0c0899ce-48ac-4eb6-892d-4e2e1ff8c9478"
      name      = "us-south-1"
    }
}

Debug Output

Panic Output

Expected Behavior

The cluster should be destroyed and recreated if the value of operating_system is changed

Actual Behavior

No changes identified

Steps to Reproduce

  1. terraform apply with operating_system value set to REDHAT_8_64
  2. terraform apply with operating_system value now set to RHCOS

Important Factoids

References

  • #0000
@hkantare
Collaborator

@z0za
Can you look into this issue?

@z0za
Contributor

z0za commented Jul 12, 2024

Hey @ocofaigh, this is expected. There has been a change in the behaviour of the cluster resource, and the worker-pool-related fields now only affect cluster creation.
https://registry.terraform.io/providers/IBM-Cloud/ibm/latest/docs/resources/container_vpc_cluster#operating_system

@attilatabori has written a blog post describing the change:
https://community.ibm.com/community/user/blogs/attila-lszl-tbori/2024/01/22/upcoming-changes-to-ibm-cloud-terraform-provider

After the cluster and its default worker pool have been created, the user needs to create a worker pool resource with import_on_create set to true in order to manage the default worker pool.
https://github.com/IBM-Cloud/terraform-provider-ibm/blob/master/ibm/service/kubernetes/resource_ibm_container_worker_pool.go#L195
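A hedged sketch of that workflow, reusing values from the reproduction config above. The resource type and attribute names here are assumptions based on the linked provider code and should be verified against the provider documentation for your version:

```hcl
# Sketch: manage the default worker pool as its own resource.
# import_on_create is assumed to import the existing default pool
# into state instead of creating a new one.
resource "ibm_container_vpc_worker_pool" "default_pool" {
  cluster          = ibm_container_vpc_cluster.cluster.id
  worker_pool_name = "default"
  flavor           = "bx2.16x64"
  vpc_id           = "r006-abb7c7ea-aadf-41bd-94c5-b8521736fadf"
  worker_count     = 2
  operating_system = "RHCOS"
  import_on_create = true

  zones {
    subnet_id = "0717-0c0899ce-48ac-4eb6-892d-4e2e1ff8c9478"
    name      = "us-south-1"
  }
}
```

Changes to operating_system would then be applied through this worker pool resource rather than through the cluster resource.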
Unfortunately, the import_on_create flag is missing from the documentation, so we will need to fix that.
[edit] created a PR for this: #5506
