
Terraform apply after changing max_pods_per_node does not take effect #876

Open
bettaps opened this issue Nov 28, 2023 · 1 comment
Labels: bug Something isn't working

Comments
bettaps commented Nov 28, 2023

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform Version and Provider Version

Terraform v1.4.2

Affected Resource(s)

oci_containerengine_node_pool

Terraform Configuration Files

# Copy-paste your Terraform configurations here - for large Terraform configs,
# please use a service like Dropbox and share a link to the ZIP file. 
# Please remove any sensitive information from configuration files before sharing them. 
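
No configuration was attached to the report. Purely as a hypothetical sketch (not the reporter's files — every resource name, variable, and the OCI_VCN_IP_NATIVE CNI choice below is an assumption), a node pool where max_pods_per_node is changed might look like:

# Hypothetical sketch; max_pods_per_node lives under the pod network
# options and only applies when the CNI type is OCI_VCN_IP_NATIVE.
resource "oci_containerengine_node_pool" "workers" {
  cluster_id         = var.cluster_id
  compartment_id     = var.compartment_id
  name               = "oke-vm-standard"
  kubernetes_version = "v1.26.2"
  node_shape         = "VM.Standard.E4.Flex"

  node_shape_config {
    ocpus         = 2
    memory_in_gbs = 16
  }

  node_source_details {
    source_type = "IMAGE"
    image_id    = var.image_id
  }

  node_config_details {
    size = 3

    node_pool_pod_network_option_details {
      cni_type          = "OCI_VCN_IP_NATIVE"
      pod_subnet_ids    = [var.pod_subnet_id]
      max_pods_per_node = 15 # changed from 110; the change never took effect
    }

    placement_configs {
      availability_domain = var.availability_domain
      subnet_id           = var.worker_subnet_id
    }
  }
}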

Debug Output

Panic Output

Expected Behavior

Using the OKE module, we changed the value of max_pods_per_node from 110 to 15. After terraform apply, this value should be updated on the node pool.

Actual Behavior

terraform plan shows that the node pool needs to be updated, but after the apply completes, the node pool configuration in OCI is not actually changed. Running terraform plan again reports no pending changes, as if the parameter had already been applied.
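
Roughly, the cycle looks like this (illustrative output only; the resource address is taken from the error further down, and the diff format is standard Terraform):

$ terraform plan
  # module.oke.module.workers[0].oci_containerengine_node_pool.workers["oke-vm-standard"]
  # will be updated in-place
  ~ node_config_details {
      ~ node_pool_pod_network_option_details {
          ~ max_pods_per_node = 110 -> 15
        }
    }

$ terraform apply    # completes without error, but the node pool in the
                     # OCI console still reports 110 pods per node

$ terraform plan     # now prints "No changes." even though the update
                     # never took effect on the service side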

Steps to Reproduce

  1. Change max_pods_per_node on an existing node pool (e.g. from 110 to 15)
  2. terraform apply

Important Factoids

References

@bettaps bettaps added the bug Something isn't working label Nov 28, 2023
syedthameem85 (Member) commented Jan 24, 2024

@bettaps / @hyder / @devoncrouse - This looks like a provider bug. There's a similar issue open here: oracle/terraform-provider-oci#2011. The terraform apply for the autoscaler pool completes without actually changing the value of max_pods_per_node, but fails for the oke-vm-standard pool:

Error: 409-Conflict, Cannot perform nodepool cycling and nodepool Placement Configuration change simultaneously.
│ Suggestion: The resource is in a conflicted state. Please retry again or contact support for help with service: Containerengine Node Pool
│ Documentation: https://registry.terraform.io/providers/oracle/oci/latest/docs/resources/containerengine_node_pool
│ API Reference: https://docs.oracle.com/iaas/api/#/en/containerengine/20180222/NodePool/UpdateNodePool
│ Request Target: PUT https://containerengine.us-ashburn-1.oci.oraclecloud.com/20180222/nodePools/ocid1.nodepool.oc1.iad.aaaaaaaaxxxxxx
│ Provider version: 5.25.0, released on . This provider is 15250 Update(s) behind to current.
│ Service: Containerengine Node Pool
│ Operation Name: UpdateNodePool
│ OPC request ID:


│ with module.oke.module.workers[0].oci_containerengine_node_pool.workers["oke-vm-standard"],
│ on .terraform/modules/oke/modules/workers/nodepools.tf line 5, in resource "oci_containerengine_node_pool" "workers":
│ 5: resource "oci_containerengine_node_pool" "workers" {

@bettaps / @hyder / @devoncrouse - Any feedback?
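
The 409 suggests the provider is sending a node-cycling request and a placement configuration change in the same UpdateNodePool call. One untested way to sequence the two operations, sketched here purely as an assumption rather than a confirmed fix, is to keep node cycling disabled for the apply that carries the pod-network or placement change, then cycle the nodes in a separate apply:

# Hypothetical, unverified workaround sketch — not a confirmed fix.
resource "oci_containerengine_node_pool" "workers" {
  # ... other arguments unchanged ...

  node_pool_cycling_details {
    # Step 1: apply the configuration change with cycling disabled.
    # Step 2: flip this to true in a follow-up apply to roll the nodes.
    is_node_cycling_enabled = false
    maximum_surge           = "1"
    maximum_unavailable     = "0"
  }
}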
