[Feature Request]: Using control planes and nodes together. i.e. make every control plane also an agent with Longhorn #1136

Closed
Taronyuu opened this issue Dec 27, 2023 · 7 comments

@Taronyuu (Sponsor) commented Dec 27, 2023

Description

I've been trying to set up a cluster with 3 nodes where every node is both a control plane and an agent. This is to lower the cost and allow HA for the control planes too. I've run into several issues:

  1. Without any agent node the LB will fail. I solved that by having 3 control planes and 1 agent.
  2. I've set allow_scheduling_on_control_plane to true so the control planes can be used.
  3. This resulted in a deployment of 4 servers (3 control planes and 1 agent) where all 4 can be used. Quite okay.
  4. When setting longhorn_volume_size on every control plane and agent nodepool, it only deploys a Hetzner Volume for the 1 agent instead of for the agent and the 3 control planes.

Is there a way to have control planes and agent nodes be 'shared' so that 3 nodes are enough? I would also be fine with 4 servers as long as Longhorn and Hetzner Volumes can be used on the control planes.

Is this not a current feature or am I missing something?

Here is a snippet of my config:

  control_plane_nodepools = [
    {
      name        = "node1",
      server_type = "cax11",
      location    = "fsn1",
      labels      = [],
      taints      = [],
      count       = 1
      swap_size   = "2G"

      longhorn_volume_size = 150
    },
    {
      name        = "node2",
      server_type = "cax11",
      location    = "nbg1",
      labels      = [],
      taints      = [],
      count       = 1
      swap_size   = "2G"

      longhorn_volume_size = 150
    },
    {
      name        = "node3",
      server_type = "cax11",
      location    = "fsn1",
      labels      = [],
      taints      = [],
      count       = 1
      swap_size   = "2G"

      longhorn_volume_size = 150
    },
  ]

  agent_nodepools = [
    {
      name        = "node4",
      server_type = "cax11",
      location    = "nbg1",
      labels      = [],
      taints      = [],
      count       = 1
      swap_size   = "2G"
      # swap_size   = "2G" # remember to add the suffix, examples: 512M, 1G
      # zram_size   = "2G" # remember to add the suffix, examples: 512M, 1G
      # kubelet_args = ["kube-reserved=cpu=50m,memory=300Mi,ephemeral-storage=1Gi", "system-reserved=cpu=250m,memory=300Mi"]

      # In the case of using Longhorn, you can use Hetzner volumes instead of using the node's own storage by specifying a value from 10 to 10000 (in GB)
      # It will create one volume per node in the nodepool, and configure Longhorn to use them.
      # Something worth noting is that Volume storage is slower than node storage, which is achieved by not mentioning longhorn_volume_size or setting it to 0.
      # So for something like DBs, you definitely want node storage, for other things like backups, volume storage is fine, and cheaper.
      longhorn_volume_size = 150

      # Enable automatic backups via Hetzner (default: false)
      # backups = true
    },
  ]
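For reference, a minimal sketch (illustrative, not taken from the thread) of the 3-node "shared" layout asked about above, assuming allow_scheduling_on_control_plane and the per-nodepool longhorn_volume_size option behave as described; names, locations, counts and sizes are placeholders:

  # Sketch only: every node is a control plane that also runs workloads.
  allow_scheduling_on_control_plane = true

  control_plane_nodepools = [
    {
      name        = "shared",
      server_type = "cax11",
      location    = "fsn1",
      labels      = [],
      taints      = [],
      count       = 3  # or three single-node pools spread across locations, as above
      swap_size   = "2G"

      # What the issue asks for: one Hetzner Volume per node in this pool too,
      # which currently only happens for agent nodepools.
      longhorn_volume_size = 150
    },
  ]

  # No dedicated agents; whether the LB works without them depends on
  # allow_scheduling_on_control_plane (see point 1 above).
  agent_nodepools = []
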
@Taronyuu (Sponsor, Author) commented

Additional questions for the pros: I've been reading up on Longhorn and everyone says that C*X11 servers are too small in combination with Longhorn. I can understand if volumes are being synced between servers, but is this still an issue if everything is stored on Hetzner Volumes?

If this is still an issue, is there a different way to use Hetzner Volumes? All I am trying to do is create a cluster with as few resources as possible for now, until I have to scale further, and where backups are not required (i.e. data is stored somewhere outside of the servers).

@mysticaltech (Collaborator) commented

@aleksasiriski @ifeulner Folks, that's more down your alley, could you help here please? 🙏

@aleksasiriski (Member) commented

Why use Longhorn on top of Hetzner volumes at all? Just use hcloud-volumes for RWO, and if you need RWX, use openebs-nfs-provisioner with hcloud-volumes as the backend.
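To illustrate the RWO case, a minimal sketch using the hashicorp/kubernetes Terraform provider (assumed to be configured against this cluster); "hcloud-volumes" is the storage class the hcloud CSI driver installs by default, and the claim name, namespace and size are hypothetical:

# Hypothetical RWO claim served directly by the hcloud CSI driver
# (storage class "hcloud-volumes"), i.e. without Longhorn in the path.
resource "kubernetes_persistent_volume_claim" "app_data" {
  metadata {
    name      = "app-data"
    namespace = "default"
  }

  spec {
    access_modes       = ["ReadWriteOnce"]
    storage_class_name = "hcloud-volumes"

    resources {
      requests = {
        storage = "10Gi"
      }
    }
  }
}

For RWX, the openebs-nfs-provisioner mentioned above would sit in front of such an hcloud-backed claim and expose it over NFS.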

@Taronyuu (Sponsor, Author) commented

@aleksasiriski I thought that was a requirement, based on this line:

In the case of using Longhorn, you can use Hetzner volumes instead of using the node's own storage by specifying a value from 10 to 10000 (in GB)

The way I read it is 'if you want to use Hetzner Volumes, enable Longhorn', but it seems that my assumption is wrong?

In that situation, if I disable Longhorn, will my Hetzner Volumes keep working?

@mysticaltech (Collaborator) commented

@Taronyuu We indeed need to make the text clearer. Using Hetzner volumes with Longhorn is in fact fully optional; the hcloud CSI driver, which is enabled by default, will use volumes on its own.

@Taronyuu (Sponsor, Author) commented Dec 30, 2023

> @Taronyuu We indeed need to make the text clearer. Using Hetzner volumes with Longhorn is in fact fully optional; the hcloud CSI driver, which is enabled by default, will use volumes on its own.

Awesome! Will it break something if I:

  1. Set the agent node count to 0
  2. Disable Longhorn
  3. Enable scheduling on the control planes

I think disabling Longhorn (even when all of my volumes are provisioned on Hetzner Volumes) has the biggest potential to break something, but I am not sure. I already have volumes provisioned on Hetzner Volumes but did not specifically reference Longhorn.

I'll open a PR soon to update the documentation and make this clearer :)

@mysticaltech (Collaborator) commented

PRs always welcome!

About 1 and 3: of course you have to cordon, drain, and kubectl delete node, and normally it should be OK, since everything can be scheduled on your control planes with the proper flag.

About 2: I do not really know, it could break. If it's a production setup, it's best to go blue-green, i.e. deploy via a new cluster.
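
Putting the three steps together, a hedged sketch of the corresponding kube.tf changes, assuming the module exposes an enable_longhorn toggle alongside the allow_scheduling_on_control_plane flag referenced earlier (the enable_longhorn name is an assumption, not confirmed in this thread):

  # 1. No dedicated agents (or set count = 0 on the existing agent pool,
  #    after cordoning, draining and deleting the node as described above).
  agent_nodepools = []

  # 2. Longhorn off. PVs provisioned through the hcloud CSI driver are
  #    independent of Longhorn, but as noted above this step could break
  #    things, so test it outside production first.
  enable_longhorn = false  # assumed variable name

  # 3. Let workloads run on the control planes.
  allow_scheduling_on_control_plane = true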
