initial_node_count and node_count both set when node pool autoscaling is disabled #311

Closed
thecodejunkie opened this issue Nov 6, 2019 · 6 comments · Fixed by #313
Labels: bug, good first issue, triaged

thecodejunkie commented Nov 6, 2019

Hi

I've noticed that if autoscaling = false is set on a node pool, then initial_node_count is set to 1 (and not 0 as documented in #297), and so is node_count, which causes an error since both can't be set. What is the intended behavior here?

Thanks

thecodejunkie (Author) commented

Actually, even for a node pool with autoscaling enabled, initial_node_count seems to be set to 1, so it doesn't appear to default to 0 or to be optional.

morgante (Contributor) commented Nov 6, 2019

That PR is not yet merged because it is incorrect, so I wouldn't rely on it for documentation.

As such, the best reference is still the actual code: https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/blob/master/cluster.tf#L140

From this, you can see that if autoscaling is disabled we will set both initial_node_count and node_count to the min_count provided.
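
For reference, that logic looks roughly like the sketch below (paraphrased, not the module's code verbatim; the exact lookups and defaults may differ slightly):

    resource "google_container_node_pool" "pools" {
      # ... other arguments omitted ...

      # initial_node_count is always set, falling back to min_count (default 1)
      initial_node_count = lookup(
        var.node_pools[count.index],
        "initial_node_count",
        lookup(var.node_pools[count.index], "min_count", 1)
      )

      # node_count is only set when autoscaling is disabled
      node_count = lookup(var.node_pools[count.index], "autoscaling", true) ? null : lookup(var.node_pools[count.index], "min_count", 1)
    }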

Could you share the error message you see and your config?

thecodejunkie (Author) commented

Looking at the code, this line https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/blob/master/cluster.tf#L146 will set node_count to min_count if autoscaling is false

thecodejunkie (Author) commented

Yep, just tried re-running and got this:

Error: Cannot set both initial_node_count and node_count on node pool default-pool

Looking at the plan, I can see both being set:

      + initial_node_count  = 1
      + instance_group_urls = (known after apply)
      + location            = "europe-west4"
      + max_pods_per_node   = (known after apply)
      + name                = "default-pool"
      + name_prefix         = (known after apply)
      + node_count          = 1

For my node pool:

    {
      autoscaling        = false
      auto_upgrade       = false
      name               = local.default_pool_name
      machine_type       = var.default_node_pool.machine_type
      service_account    = (var.default_node_pool.service_account != "") ? var.default_node_pool.service_account : ""
      # initial_node_count = 0
    }

If I uncomment initial_node_count = 0, then it works.
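
i.e. the pool definition above starts working once the count is given explicitly:

    {
      autoscaling        = false
      auto_upgrade       = false
      # ... other attributes as above ...
      initial_node_count = 0
    }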

morgante (Contributor) commented Nov 6, 2019

Yup, looks like a bug!

Specifically, we need to update this line to not set initial_node_count if autoscaling is false.
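
A hypothetical sketch of that change (not the actual patch), leaving initial_node_count unset for pools with autoscaling disabled so that only node_count carries the size:

    # Only set initial_node_count when autoscaling is enabled; when it is
    # disabled, leave it null so that node_count alone controls the pool size.
    initial_node_count = lookup(var.node_pools[count.index], "autoscaling", true) ? lookup(
      var.node_pools[count.index],
      "initial_node_count",
      lookup(var.node_pools[count.index], "min_count", 1)
    ) : null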

morgante added the bug, good first issue, and triaged labels on Nov 7, 2019
taylorludwig added a commit to taylorludwig/terraform-google-kubernetes-engine that referenced this issue Nov 9, 2019
…m-google-modules#311

Don't set initial_node_count when autoscaling is disabled on node pool.
Use new node pool var when setting desired size of pool - matches provider var
aaron-lane pushed a commit that referenced this issue Nov 13, 2019