
[Bug]: zram_size not passed on #1326

Closed

lennart opened this issue Apr 23, 2024 · 4 comments
Labels: bug Something isn't working

lennart (Contributor) commented Apr 23, 2024

Description

The agent_nodepools / control_plane_nodepools option zram_size is not passed on to the actual host resource, so zram swap is never configured (the corresponding files are in place, they are just not activated).

I guess zram_size = each.value.zram_size has to be appended right after swap_size in the corresponding files for agents and control planes, roughly as sketched below.
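For illustration, a minimal sketch of what that one-line change might look like, assuming agents.tf instantiates a host module per node; local.agent_nodes and the surrounding arguments are placeholders from my reading of the module, not its actual source:

# Hypothetical shape of the host module call in agents.tf; only the
# zram_size line is the proposed change, everything else is a placeholder.
module "agents" {
  source   = "./modules/host"
  for_each = local.agent_nodes

  # ... other host arguments ...
  swap_size = each.value.swap_size
  zram_size = each.value.zram_size # proposed: forward the nodepool setting
}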

Kube.tf file

locals {
  hcloud_token = "xxxxxxxxxxx"
}
variable "cloudflare_api_token" {
  type = string
  sensitive = true
}
module "kube-hetzner" {
  providers = {
    hcloud = hcloud
  }
  hcloud_token = var.hcloud_token != "" ? var.hcloud_token : local.hcloud_token
  source = "kube-hetzner/kube-hetzner/hcloud"
  ssh_port = 23232
  ssh_public_key = file("~/.ssh/landscape_id_ed25519.pub")
  ssh_private_key = file("~/.ssh/landscape_id_ed25519")
  control_plane_nodepools = [
    {
      name        = "control-plane-fsn1",
      server_type = "cax11",
      location    = "fsn1",
      labels      = [],
      taints      = [],
      count       = 1
      kubelet_args = ["runtime-request-timeout=10m0s"]
      zram_size = "2G"
    },
    {
      name        = "control-plane-nbg1",
      server_type = "cax11",
      location    = "nbg1",
      labels      = [],
      taints      = [],
      count       = 1
      kubelet_args = ["runtime-request-timeout=10m0s"]
      zram_size = "2G"
    },
    {
      name        = "control-plane-hel1",
      server_type = "cax11",
      location    = "hel1",
      labels      = [],
      taints      = [],
      count       = 1
      kubelet_args = ["runtime-request-timeout=10m0s"]
      zram_size = "2G"
    }
  ]
  agent_nodepools = [
    {
      name        = "agent-small",
      server_type = "cx21",
      location    = "fsn1",
      labels      = [],
      taints      = [],
      zram_size = "2G"
      nodes = {
        "1" : {
          append_index_to_node_name = false,
          location                  = "nbg1",
          labels = [
          ]
        },
        "20" : {
          append_index_to_node_name = false,
          labels = [
          ]
        }
      }
      longhorn_volume_size = 0
      kubelet_args = ["runtime-request-timeout=10m0s"]
    },
  ]
  enable_wireguard = true
  load_balancer_type     = "lb11"
  load_balancer_location = "fsn1"
  base_domain = "beta.al0.de"
  enable_longhorn = true
  disable_hetzner_csi = true
  cluster_name = "al0"
  cni_plugin = "cilium"
  disable_kube_proxy = true
  disable_network_policy = true
  dns_servers = [
    "1.1.1.1",
    "8.8.8.8",
    "2606:4700:4700::1111",
  ]
  extra_kustomize_parameters = {
    cloudflare_api_token = var.cloudflare_api_token
  }
}
provider "hcloud" {
  token = var.hcloud_token != "" ? var.hcloud_token : local.hcloud_token
}
terraform {
  required_version = ">= 1.5.0"
  required_providers {
    hcloud = {
      source  = "hetznercloud/hcloud"
      version = ">= 1.43.0"
    }
  }
}
output "kubeconfig" {
  value     = module.kube-hetzner.kubeconfig
  sensitive = true
}
variable "hcloud_token" {
  sensitive = true
  default   = ""
}

Screenshots

No response

Platform

Linux

lennart added the bug (Something isn't working) label Apr 23, 2024
lennart (Contributor, Author) commented Apr 23, 2024

Ah, I realize that in current master this is only the case for agents (control planes do pass it on). Is this intentional?

lennart (Contributor, Author) commented Apr 24, 2024

Also, when specifying zram_size on a nodepool that has a nodes map, even with the suggested change one still has to explicitly configure zram_size for every node in the mapping; otherwise the nodepool setting is overridden by the per-node default, which is "":

agent_nodepools = [
    {
      name        = "agent-small",
      server_type = "cx21",
      location    = "fsn1",
      labels      = [],
      taints      = [],
      zram_size = "2G"
      nodes = {
        "1" : {
          append_index_to_node_name = false,
          location                  = "nbg1",
          labels = [
          ]
        },
        "20" : {
          append_index_to_node_name = false,
          labels = [
          ]
        }
      }
      longhorn_volume_size = 0
      kubelet_args = ["runtime-request-timeout=10m0s"]
    },
  ]

I would expect that if I do not specify zram_size for a node in the mapping, it would use the value specified in the pool (maybe a different default, one that counts as unset, could be used for nodes in the mapping; see the sketch below).
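One way that fallback could be expressed, sketched with hypothetical names (node_obj for the per-node map entry, pool for the nodepool object; neither is the module's actual code): treat the current default of "" as unset and fall through to the pool-level value:

# Hypothetical fallback: a node entry without zram_size (or with the ""
# default) inherits the pool-level setting instead of overriding it.
zram_size = try(node_obj.zram_size, "") != "" ? node_obj.zram_size : pool.zram_size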

So currently I ended up with:

agent_nodepools = [
    {
      name        = "agent-small",
      server_type = "cx21",
      location    = "fsn1",
      labels      = [],
      taints      = [],
      nodes = {
        "1" : {
          append_index_to_node_name = false,
          location                  = "nbg1",
          zram_size = "2G"
          labels = [
          ]
        },
        "20" : {
          append_index_to_node_name = false,
          zram_size = "2G"
          labels = [
          ]
        }
      }
      longhorn_volume_size = 0
      kubelet_args = ["runtime-request-timeout=10m0s"]
    },
  ]

plus the change in agents.tf that passes the zram_size value on.

mysticaltech (Collaborator) commented:
@lennart You did well, just merged the PR, we do not want to block this possibility if it's just one line away. Thanks for this.

lennart (Contributor, Author) commented May 1, 2024

@mysticaltech thanks!
