
node agent: allow running out of cluster using $KUBECONFIG #414

Open
milesbxf opened this issue Jun 3, 2022 · 4 comments
Labels
Need community involvement · refactoring · waiting for contributor

Comments

milesbxf commented Jun 3, 2022

Describe the problem/challenge you have

I'm running openebs/zfs-localpv on NixOS. I've been unable to get the node agent running in-cluster, since the node agent requires access to the zfs binary on the node. NixOS stores binaries in very non-standard locations, and I wasn't successful in modifying the zfs-chroot configmap to get it working. I realised that whilst unconventional, it'd be far easier to just run the node agent directly on the node, out of cluster.

However, whilst the kube client config code can load from an external kubeconfig file, there's no way to configure the binary to do so, and it's hardcoded to use in-cluster config or fail.

Describe the solution you'd like

I'd like the node agent to fall back to looking up kubeconfig from the standard KUBECONFIG environment variable if it's unable to get in-cluster config and we've not set the kubeconfig path elsewhere.

I have a branch here with this solution, which is working fine for me - happy to raise this as a PR if you're happy with the approach: https://github.com/openebs/zfs-localpv/compare/develop...milesbxf:run-out-of-cluster?expand=1

Anything else you would like to add:

I realise from the docs that you're only currently intending to support Ubuntu and CentOS as target OSes - that's totally fair, and I'm happy to run this myself. However, the solution above would allow me to do so without maintaining a fork, and shouldn't impact any other usage 🙏

Environment:

  • ZFS-LocalPV version : develop branch
  • Kubernetes version (use kubectl version): v1.23.7
  • Kubernetes installer & version: NixOS (v1.23.7)
  • Cloud provider or hardware configuration: Bare metal
  • OS (e.g. from /etc/os-release): "NixOS 22.11 (Raccoon)"
Abhinandan-Purkait (Member) commented

@milesbxf We would review the changes, although we don't have any infrastructure to test them. cc @tiagolobocastro

Abhinandan-Purkait added the Need community involvement, refactoring, and waiting for contributor labels on Jun 6, 2024
tiagolobocastro commented

This seems reasonable to me, and it's not OS-specific, since it just adds support for running out of cluster, which is sometimes useful for debugging - so we'd be happy to take this :)

I actually run NixOS myself, but I mostly focus on mayastor, so I never hit this previously.
I had thought that we bundled the zfs binary rather than taking it from the host. Would bundling the binary be a better solution?

I also once saw a workaround for calling NixOS host binaries from a pod. I'll see if I can find it.

ncrmro commented Jun 9, 2024

I had ChatGPT generate an example that links the binaries, which I slightly modified.

{
  # Other configurations...

  # Make the ZFS userland tools available on the host.
  environment.systemPackages = with pkgs; [
    zfs
  ];

  # The node agent expects the ZFS binaries at conventional paths such as
  # /usr/bin, which NixOS does not populate. This oneshot service symlinks
  # them into place at boot. Note that defining the unit here via
  # systemd.services is what registers it with systemd; writing raw unit
  # files into /etc/zfs-usr-bin.* would not work.
  systemd.services.zfs-usr-bin = {
    description = "ZFS symlinks in /usr/bin";
    after = [ "network.target" ];
    wantedBy = [ "multi-user.target" ];
    serviceConfig = {
      Type = "oneshot";
      ExecStart = [
        "${pkgs.coreutils}/bin/mkdir -p /usr/bin"
        "${pkgs.coreutils}/bin/ln -sf ${pkgs.zfs}/bin/zfs /usr/bin/zfs"
        "${pkgs.coreutils}/bin/ln -sf ${pkgs.zfs}/bin/zpool /usr/bin/zpool"
        # Repeat for any other ZFS binaries the agent needs.
      ];
      RemainAfterExit = true;
    };
  };
}

orville-wright (Contributor) commented

@tiagolobocastro @Abhinandan-Purkait if we take this community contribution into the project, please ensure that it's attributed as a community-driven contribution, i.e. @ncrmro creates the PR and is clearly shown as the author/creator.

  • It seems that this feature has merit and would be useful to others in the community.

  • We may want to have a quick discussion on this in this week's Maintainers meeting.
