
innernet (open-source, Rust, tailscale-alternative) - needs a nixos module #118005

Open
colemickens opened this issue Mar 30, 2021 · 22 comments

@colemickens
Member

Project description

"A private network system that uses WireGuard under the hood."

https://blog.tonari.no/introducing-innernet

Metadata

@ptman
Contributor

ptman commented Mar 31, 2021

#118007

@MatthewCroughan
Contributor

Cutting edge.

@ptman
Contributor

ptman commented Apr 20, 2021

The package is now in unstable. On first run I get:

[!] updated permissions for /var/lib/innernet to 0700.

This seems like something the package could handle. Otherwise, excellent work.
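
A minimal sketch of how a module could pre-create that directory with the expected mode, so the binary never has to fix it up itself (untested):

# Sketch: create innernet's state directory with the 0700 mode it wants
# before the first run, via systemd-tmpfiles.
systemd.tmpfiles.rules = [
  "d /var/lib/innernet 0700 root root -"
];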

@colemickens
Member Author

Not to be too greedy, but there's not a NixOS module for it yet, is there?

@ptman
Contributor

ptman commented Apr 20, 2021

At least the command given by innernet-server new doesn't work:

    systemctl enable --now innernet-server@$NAME
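
On NixOS that is expected to fail: unit files shipped inside a package aren't visible to systemd unless a module links them in. A minimal sketch, assuming the package actually installs the upstream innernet-server@.service template:

# Sketch: register the package's bundled unit files with systemd.
systemd.packages = [ pkgs.innernet ];

Even then, units on NixOS are normally enabled declaratively (via wantedBy) rather than with systemctl enable.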

@ptman
Contributor

ptman commented Apr 20, 2021

Oh, and:

[!] updated permissions for /etc/innernet to 0700.

@vikanezrimaya
Member

The systemctl bits need to be handled by a NixOS module, not the package. We could try reusing some of the upstream unit files while writing it. Generally, the module would mostly install the unit files in an appropriate place and ensure that an invite is processed before the network connection gets established. The invite should probably be read from a file outside /nix/store, since an invite, even a one-time one, could be misused to get an attacker-controlled machine onto the network.
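
A rough sketch of that last part, assuming a secrets manager provides the invite at a path like /run/secrets/mynet-invite.toml (a placeholder) and that innernet's install subcommand redeems it:

# Sketch: one-shot unit that redeems an invitation before the interface
# exists; skipped once the interface config is already in place.
systemd.services.innernet-install-mynet = {
  wantedBy = [ "multi-user.target" ];
  serviceConfig = {
    Type = "oneshot";
    RemainAfterExit = true;
  };
  script = ''
    # /etc/innernet/<name>.conf only exists after a successful install
    if [ ! -f /etc/innernet/mynet.conf ]; then
      ${pkgs.innernet}/bin/innernet install /run/secrets/mynet-invite.toml
    fi
  '';
};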

@ptman
Contributor

ptman commented Apr 21, 2021

Based on https://github.com/tonarino/innernet/blob/main/server/innernet-server%40.service I managed to get the following working:

  systemd.services.innernet-mynet = {
    description = "innernet server for mynet";
    wantedBy = [ "multi-user.target" ];
    after = [ "network-online.target" "nss-lookup.target" ];
    wants = [ "network-online.target" "nss-lookup.target" ];
    path = with pkgs; [ iproute ];
    environment = { RUST_LOG = "info"; };
    serviceConfig = {
      Restart = "always";
      ExecStart = "${unstable.innernet}/bin/innernet-server serve mynet";
    };
  };

@colemickens
Member Author

@ptman do you plan to send a PR? I want to help get the innernet momentum rolling, and I think a NixOS module would really help.

If you don't want to, I can run with it and get something submitted to start iterating on.

Thanks!

@ptman
Contributor

ptman commented Apr 26, 2021

@colemickens Sorry, I have very little expertise and time to get more familiar with it. Please go ahead.

@ptman
Contributor

ptman commented May 19, 2021

I've had to set networking.wireguard.enable = true; to get the kernel modules needed by innernet. Maybe this is something that could be fixed with a dependency?
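
A sketch of what the module could set itself instead (WireGuard is in-tree from kernel 5.6, so the out-of-tree module is only needed on older kernels):

# Sketch: pull in the wireguard kernel module without the full
# networking.wireguard machinery; needs `config` and `lib` in scope.
boot.extraModulePackages = lib.optional
  (lib.versionOlder config.boot.kernelPackages.kernel.version "5.6")
  config.boot.kernelPackages.wireguard;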

@bjornfor bjornfor changed the title innernet (open-source, Rust, tailscale-alternative) - needs a package and a nixos module innernet (open-source, Rust, tailscale-alternative) - needs a nixos module Jul 17, 2021
@firestack

I wrote this module based on the innernet systemd service config above, combined with the WireGuard enable option.

I was having difficulty implementing the /etc "server configuration|interface" files: I could not get past the error message error: stack overflow (possible infinite recursion).

I don't have a clear enough understanding of how to debug this, so I don't know how to continue developing or improving the module from here.

{ lib, config, pkgs, ... }: with lib; with attrsets; {
  options.services.innernet = {
    enable = mkEnableOption "innernet-server service";

    server = let module = {
        source = mkOption {
          description = ''
            Configuration of the path, text, or symlink which represents
            the configuration for this server, located at
            `/etc/innernet-server/''${serverName}.conf`
            ! CONTAINS PRIVATE KEYS !
          '';
          # to be expanded to essentially be an alias over env.etc.<innernet-server/{}.conf>
          type = types.path;
        };

      }; in mkOption {
        default = {};
        description = ''
          Innernet-server instance configuration
        '';
        type = with lib.types; attrsOf (submodule module);
      };
  };

  config = let cfg = config.services.innernet; in lib.mkIf cfg.enable {
    networking.wireguard.enable = true;

    systemd.services = mapAttrs'
      (server: serverCfg: nameValuePair "innernet-server-${server}" {
        description = "innernet-server for interface ${server}";
        wantedBy = [ "multi-user.target" ];
        after = [ "network-online.target" "nss-lookup.target" ];
        wants = [ "network-online.target" "nss-lookup.target" ];
        path = with pkgs; [ iproute ];
        environment = { RUST_LOG = "info"; };
        serviceConfig = {
          Restart = "always";
          ExecStart = "${pkgs.innernet}/bin/innernet-server serve ${server}";
        };
      })
      (cfg.server);

    environment.etc = mapAttrs'
      (server: serverCfg: nameValuePair "innernet-server/${server}.conf" { text = ""; })
      cfg.server;
  };
}
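
A guess at the missing wiring (untested): reuse each server's `source` option for the /etc entry instead of the empty-text placeholder:

# Sketch: hook the `source` option declared above into environment.etc,
# with a tighter mode since the file contains private keys.
environment.etc = mapAttrs'
  (server: serverCfg: nameValuePair "innernet-server/${server}.conf" {
    source = serverCfg.source;
    mode = "0600";
  })
  cfg.server;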

@PhilTaken
Contributor

PhilTaken commented Jul 30, 2021

I spent some time on this and came up with this:

{ config, lib, pkgs, ... }:

with lib;

let
  cfg = config.services.innernet;
in {
  meta.maintainers = with maintainers; [ PhilTaken ];

  options.services.innernet = with types; {
    enable = mkEnableOption "innernet client daemon";

    port = mkOption {
      type = port;
      default = 51820;
      description = "The port to listen on for tunnel traffic";
    };

    configFile = mkOption {
      type = path;
      description = "Path to the config file for the innernet server interface";
    };

    package = mkOption {
      type = package;
      default = pkgs.innernet;
      defaultText = "pkgs.innernet";
      description = "The package to use for innernet";
    };

    openFirewall = mkOption {
      type = bool;
      default = false;
      description = "Whether to open the firewall port for tunnel traffic";
    };
  };

  config = let
    interfaceName = builtins.head (builtins.match "[a-zA-Z_/-]+/([a-zA-Z_-]+).conf" "${cfg.configFile}");
  in mkIf cfg.enable {
    networking.wireguard.enable = true;
    # WireGuard tunnel traffic is UDP, not TCP
    networking.firewall.allowedUDPPorts = mkIf cfg.openFirewall [ cfg.port ];

    environment.systemPackages = [ cfg.package ]; # for the CLI
    environment.etc = {
      "innernet-server/${interfaceName}.conf" = {
        mode = "0644"; text = fileContents "${cfg.configFile}";
      };
    };

    systemd.packages = [ cfg.package ];
    systemd.services.innernetd = {
      after = [ "network-online.target" "nss-lookup.target" ];
      wantedBy = [ "multi-user.target" ];

      path = [ pkgs.iproute ];
      environment = { RUST_LOG = "info"; };
      serviceConfig =  {
        Restart = "always";
        ExecStart = "${cfg.package}/bin/innernet-server serve ${interfaceName}";
      };
    };
  };
}

It works for me so far, although I have yet to actually test it out in a proper environment; a hypothetical invocation is sketched below.
If anybody's interested, I can make a PR :)

EDIT: fixed the match function missing its second argument
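
For reference, a hypothetical invocation of the options above (the path is illustrative; it must end in <interface>.conf for the interfaceName match to succeed):

# Sketch: example host configuration using the module above.
services.innernet = {
  enable = true;
  configFile = "/var/lib/innernet-secrets/mynet.conf";
  openFirewall = true;
};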

@PhilTaken
Contributor

With a revised version of the above NixOS module, everything works properly now.

I could make a PR at this point, but I'd like to add declarative network configuration first, such that all the CIDRs and peers could be defined declaratively instead of being added imperatively.

If anybody wants to use my module at this point, you can find the latest version here. You will just have to add all the CIDRs and peers by hand using the command-line interface.

@tomberek
Contributor

tomberek commented Sep 2, 2021

It isn't the worst idea to include the module as-is before supporting declarative peers/CIDRs. I gave it a quick test and it seems to function as expected. The peers/CIDRs themselves are stored in a relatively simple SQLite database. An option would be to declare the entire network and its associations and insert them via runCommand. The database would then be read-only unless one selects an option to point it at an impure path outside of the nix store.
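
A very rough sketch of that runCommand idea; the table layout below is a stand-in, not innernet's actual schema, which a real module would have to mirror:

# Sketch: build the interface database in the nix store (read-only).
innernetDb = pkgs.runCommand "innernet-mynet.db" {
  nativeBuildInputs = [ pkgs.sqlite ];
} ''
  sqlite3 "$out" "CREATE TABLE cidrs (id INTEGER PRIMARY KEY, name TEXT, cidr TEXT);"
  sqlite3 "$out" "INSERT INTO cidrs (name, cidr) VALUES ('humans', '10.42.1.0/24');"
'';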

@vikanezrimaya
Member

I'd say that if I wanted a fully declarative VPN, I wouldn't need innernet; I would simply use the declarative WireGuard support built right into NixOS. So having CIDRs and peers added by hand looks fine to me as a PR to make the module at least exist.

@PhilTaken
Contributor

I'd say that if I wanted a fully declarative VPN, I wouldn't need innernet; I would simply use the declarative WireGuard support built right into NixOS. So having CIDRs and peers added by hand looks fine to me as a PR to make the module at least exist.

While your reasoning does make sense, imperative configuration as it currently stands does not (in my opinion) fit the Nix standard, which is why I haven't submitted a PR yet. I personally support @tomberek's suggestion.

@johnae
Contributor

johnae commented Sep 14, 2021

I've very recently created a declarative innernet module for myself, which I may turn into a pull request to nixpkgs. It is completely declarative and requires some sort of secrets management (I'm using https://github.com/ryantm/agenix myself). Being completely declarative means that you don't really create invitations and send them to people - every peer must be defined through the NixOS module.

First I went down the route of reimplementing things from innernet in pure Nix (creating the database, adding peers using the sqlite CLI, etc.), but this felt like throwing away much of the good stuff that the innernet CLI provides. Instead, I opted to use the innernet tools as much as possible and only use the sqlite CLI for final touch-ups.

One current downside (depending on how you view it, I guess) is that you shouldn't manage peers, CIDRs, etc. via the CLI tools - you can, of course, it's just that your changes will be thrown away on the next nixos-rebuild (or rather, the next restart of the innernet-server service).

I haven't tested it extensively, and there are some things not supported yet (like associations, which should be easy to add). I could also imagine that what I'm doing is somehow very dumb and turns out to be a problem (e.g. throwing away the db and all config on every restart of the innernet-server service).

You can find the module here: https://github.com/johnae/world/blob/main/modules/innernet.nix

If you're curious about usage you can see it here:
https://github.com/johnae/world/blob/2db0864950d37af3bd9cb5b4690234112f38fa58/hosts.toml#L28

and here for example
https://github.com/johnae/world/blob/2db0864950d37af3bd9cb5b4690234112f38fa58/hosts.toml#L142

Let me know what you think if you're testing it. Oh, and the TOML is really just my preference for configuring hosts; it's not something this module depends on, in case you're wondering.

@dit7ya
Member

dit7ya commented Mar 17, 2022

Hey @PhilTaken @johnae, any update on a PR?

@PhilTaken
Contributor

Hey @PhilTaken @johnae, any update on a PR?

Sorry, I abandoned my efforts in favor of just setting it up with networking.wireguard. Every host needs to explicitly know every other node in the network to reach it, but with a declarative peer list passed to every node at build time, that turned out to be less of an issue than I thought when I initially pursued innernet as a NixOS module.
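
For anyone going the same route, the plain-WireGuard setup looks roughly like this (keys, addresses, and endpoint are placeholders):

# Sketch: one declarative peer on a plain NixOS WireGuard interface.
networking.wireguard.interfaces.wg0 = {
  ips = [ "10.42.1.2/24" ];
  listenPort = 51820;
  privateKeyFile = "/var/lib/wireguard/wg0.key";
  peers = [{
    publicKey = "PEER_PUBLIC_KEY_BASE64";
    allowedIPs = [ "10.42.1.1/32" ];
    endpoint = "vpn.example.com:51820";
    persistentKeepalive = 25;
  }];
};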

@johnae
Contributor

johnae commented Mar 21, 2022

I'm using my own modules for this, see the links above.

@ilovethensa

Any updates on this?
