Placement group contains already 10 servers #1157
@CroutonDigital You are in luck: when you have nodepools with count 0 at the end of your nodepool list, you can remove those. Please do so and try again. Here is the agent nodepools setup I propose. Try this:

```hcl
agent_nodepools = [
  {
    name        = "agent-small",
    server_type = "cpx11",
    location    = "fsn1",
    labels      = [],
    taints      = [],
    count       = 0
  },
  {
    name        = "agent-large",
    server_type = "ccx23",
    location    = "fsn1",
    labels = [
      "nodetype=core-ccx23"
    ],
    taints = [],
    count  = 6
  },
  {
    name        = "bots-large",
    server_type = "ccx23",
    location    = "fsn1",
    labels = [
      "nodetype=bots-node"
    ],
    taints = [],
    count  = 6
  }
]
```
If that does not work, please give me the relevant output. Note that this is how the placement groups are created and allocated:

```hcl
resource "hcloud_placement_group" "agent" {
  count  = ceil(local.agent_count / 10)
  name   = "${var.cluster_name}-agent-${count.index + 1}"
  labels = local.labels
  type   = "spread"
}
```

```hcl
placement_group_id = var.placement_group_disable ? null : hcloud_placement_group.agent[floor(index(keys(local.agent_nodes), each.key) / 10)].id
```

since you can have a maximum of 10 nodes per placement group.
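To make that arithmetic concrete, here is a minimal standalone Terraform sketch (the 12-node total is a made-up number for illustration, not this cluster's state) of how node positions map to placement groups:

```hcl
locals {
  agent_count = 12 # hypothetical total number of agent nodes

  # ceil(12 / 10) = 2, so two placement groups get created.
  placement_group_count = ceil(local.agent_count / 10)

  # The node at position i in keys(local.agent_nodes) lands in group floor(i / 10):
  # positions 0..9 go to group 0, positions 10..11 go to group 1.
  group_for_position = [for i in range(local.agent_count) : floor(i / 10)]
  # => [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
}
```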
The above PR should fix your issue @CroutonDigital, but the trouble is that it's probably not backward compatible. We will reserve it for our next major release, v3. In the meantime, please follow the guidance laid out previously: debug with the hcloud CLI and adjust the nodepool definitions carefully. Note that even though your first agent nodepool cannot be deleted, since it already has a count of 0 you can change its node kind and name.
@CroutonDigital If you clone the repo locally, check out the fix/placement-group-logic branch.
When I try to comment out the block, terraform plan then rebuilds the full cluster:
Let me clone the branch with the fix and try it on my test cluster.
I commented out `placement_group_disable = true`; now I see a new, second placement group, which is empty.
@CroutonDigital Thanks for sharing, now try with the stable branch (your current version):

```hcl
agent_nodepools = [
  {
    name        = "agent-large-0",
    server_type = "ccx23",
    location    = "fsn1",
    labels      = [],
    taints      = [],
    count       = 2
  },
  {
    name        = "agent-large",
    server_type = "ccx23",
    location    = "fsn1",
    labels = [
      "nodetype=core-ccx23"
    ],
    taints = [],
    count  = 4
  },
  {
    name        = "bots-large",
    server_type = "ccx23",
    location    = "fsn1",
    labels = [
      "nodetype=bots-node"
    ],
    taints = [],
    count  = 6
  }
]
```
Ultimately, we will need to implement a one-placement-group-per-nodepool policy, but if the above temporarily fixes it for you, that would be great. Otherwise, just add more nodepools at the end, after removing the empty ones.
@Silvest89 FYI, see the above. If you have ideas on how to temporarily solve his issue, they are more than welcome; I'm running out of ideas.
One placement group per node pool seems like a good idea.
@CroutonDigital Please at least post your terraform plan with the new branch.
A similar problem occurred when I tried to increase the count of agents |
I tried again, but got the same issue: `placement group 211529 contains already 10 servers (service_error)`
@CroutonDigital Thank you for trying. @mnencia cornered the issue; I will work on a fix ASAP. I'll keep you and @maximen39 posted, give me 48h tops.
@mysticaltech when can I try testing your fix?
@CroutonDigital I will see if I can finish this weekend, sorry for the delay 🤞
Hi @mysticaltech, today I tried the fix on branch fix/placement-group-logic.
I changed the rounding function from floor to ceil and applied it; now I have 2 placement groups with distributed servers.
This is not a correct fix, though, because it generates the group indices incorrectly.
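(For illustration, a standalone sketch of why swapping floor for ceil shifts the index mapping; the 12-node pool is hypothetical, not this cluster's state:)

```hcl
locals {
  with_floor = [for i in range(12) : floor(i / 10)] # [0,0,0,0,0,0,0,0,0,0,1,1]
  with_ceil  = [for i in range(12) : ceil(i / 10)]  # [0,1,1,1,1,1,1,1,1,1,1,2]

  # With ceil, position 0 sits alone in group 0 and position 11 points at
  # group index 2, but only ceil(12 / 10) = 2 groups (indices 0 and 1) exist.
}
```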
@CroutonDigital Yes. If you want to try to fix it, please go to the PR and look for @mnencia's explanations; he cornered the issue. Sorry for the delay on my part, I did not find the time. If you push a PR, please point it to the open PR branch. Otherwise I will try to address the issue this week.
Hey all, I'm coming here through this comment: #1185 (comment). I've been reading this issue (and the closed PR #1161), and if I understand correctly, the issue is this:
Would it make sense to scale the number of placement groups by the number of nodepools? Let each nodepool have its own placement groups, where the number of placement groups == ceil(number of nodes in the pool / 10), roughly as in the sketch below. I don't know the limit on the number of placement groups, or whether this is a bad idea from a cluster design perspective.
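Something along these lines could express that policy. This is only a rough sketch against assumed `var.agent_nodepools`, `var.cluster_name`, and `local.labels` definitions, not the module's actual code:

```hcl
# One set of spread groups per nodepool, sized ceil(pool count / 10).
resource "hcloud_placement_group" "agent" {
  for_each = {
    for pg in flatten([
      for pool in var.agent_nodepools : [
        for i in range(ceil(pool.count / 10)) : {
          key  = "${pool.name}-${i + 1}"
          name = "${var.cluster_name}-${pool.name}-${i + 1}"
        }
      ]
    ]) : pg.key => pg
  }

  name   = each.value.name
  labels = local.labels
  type   = "spread"
}
```

A pool with count 0 gets no placement groups at all, since range(ceil(0 / 10)) is empty.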
OK, found the limits: https://docs.hetzner.com/cloud/placement-groups/overview#limits
@CroutonDigital As the old saying goes, better late than never. Thanks to @valkenburg-prevue-ch for his invaluable help; there is now a way to make it work for you, please see the new placement group customization options. Please let us know!
Description
My k8s cluster has 10 nodes.
I want to add 2 additional nodes. When I apply, I get this error:
I tried enabling `placement_group_disable = true`,
but after apply the placement group was not removed from the existing nodes. The new VMs were added with no placement group.
Maybe we need a parameter to create a placement group for each VM group, like the sketch below:
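For example, something in this shape (the `placement_group` attribute here is purely hypothetical, to illustrate the kind of parameter I mean, not a confirmed module option):

```hcl
agent_nodepools = [
  {
    name            = "agent-large",
    server_type     = "ccx23",
    location        = "fsn1",
    labels          = [],
    taints          = [],
    count           = 6,
    placement_group = "agent-large-pg" # hypothetical per-pool placement group name
  }
]
```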
Kube.tf file
Screenshots
No response
Platform
Linux