
Proposal: Configuration block directives for loading tailscale configuration #24

Closed
clly opened this issue Nov 3, 2023 · 8 comments

@clly
Contributor

clly commented Nov 3, 2023

I've seen a few comments in some PRs and issues about switching to a configuration block for loading tailscale configuration. Since it's come up multiple times, I figured a GitHub issue would help track potential approaches. I had some thoughts and will include them here, but we can always drop this if you already have an approach in mind.

Global Configuration

Anything that needs to be global across all tailscale servers would end up in a global configuration block. This also lets the plugin implement setup and teardown functionality (but only when the global options are specified). Anything that sets up a bare listener without specifying a tailscale block would not have the plugin's teardown called on it.

Some examples could be:

  1. Whether servers should be ephemeral
  2. External storage for tailscale server state
  3. A global auth key

Per server/listener configuration

The bigger question is where to store per-listener configuration (such as auth keys), particularly when HTTP servers declare conflicting configuration. My thought is to include individual server configuration as individual blocks inside the global configuration.

The final configuration would look something like this:

tailscale {
    authkey = "TS_AUTHKEY_global"
    state {
      kubestore "secret_name"
    }
    a { # this matches tailscale/a
      authkey = "TS_AUTHKEY"
      ephemeral = true
    }
}

:80 {
  bind tailscale/a
}

:80 {
  bind tailscale/b  #tailscale/b inherits global configuration
}

Please let me know your thoughts

@willnorris
Member

I don't love having to separate the config (into the global section) from where it's used, but I think this is probably the best we can do. However, I do like that this can make it a lot clearer how a single tsnet instance can be used as a listener, a transport, and/or an auth handler. Configure it once globally, and then reference it where needed. So that's actually really nice.

@clly
Contributor Author

clly commented Nov 26, 2023

Configure it once globally, and then reference it where needed.

That's what I was thinking as well. It also ensures there can't be conflicting configuration on the same tsnet instance.

If there's a strong desire to keep configuration where it's used, the method proposed here could be the way to go. Then, when configuring the tsnet instances, we'd want to throw an error if we run into any conflicts (e.g. on control URL, ephemeral state, storage, etc.).

The tsnet instance configuration would be the same across each module, plus implementation-specific configuration. That might look something like this:

:80 {
  listen_wrappers {
    tailscale {
       host = "a"
       authkey = "TS_AUTHKEY"
       control_url = "https://example.com"
    }
  }
}

:90 {
    reverse_proxy / {
      to "ts-backend:5000"
      transport "tailscale" {
        host = "a"  # You probably need a host to do a transport?
        authkey = "TS_AUTHKEY"
        control_url = "https://c.tailscale.com"  # oh no! a conflict. Log a warning and use the first one? Crash? Create a new tsnet instance with different state (a-1)?
      }
    }
}

Like you said, everything is used where it's defined. My personal opinion is that when we identify a shared piece of data that can be redefined in multiple places, it's better to configure it once than to redefine it multiple times. Conflicts can create user confusion (even if cleared up through documentation) and messier behavior on those conflicts, depending on what behavior we decide on.

If y'all decide on one, let me know; otherwise I can start working on a PR for the original proposal, since it sounded acceptable.

@willnorris
Member

Yeah, I definitely agree... when we're reusing a single tsnet instance, it totally makes sense to separate it out to avoid having to interpret conflicts.

I suppose the global config would basically provision a Caddy app that maintains the pool of tsnet instances, configuring them appropriately as needed. I've never created a Caddy app like that, so I'll need to look at some examples. But does that sound like the right approach here, @mholt?

@mholt
Contributor

mholt commented Dec 1, 2023

Yeah, if you want to centralize configuration that a lot of handlers might use, you could use a Caddy app and stick it in the global options of the Caddyfile.

There are also some config primitives you can take advantage of depending on your needs:

  • Remember that the native config format is JSON, and you can use that to elegantly define variables that are used for all the HTTP routes, even routes for different sites, on the same server (server = set of listeners).
  • Environment variables are supported as placeholders in a lot of Caddy config. Adding support for placeholders in a plugin like this is very easy (one line to get or make the Replacer, then one line to call Replace; see the sketch below).
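
For that second point, a minimal sketch of what the placeholder support might look like; the resolveAuthKey helper and the package name are made up for illustration, but caddy.NewReplacer and ReplaceAll are the real API:

package tscaddy

import "github.com/caddyserver/caddy/v2"

// resolveAuthKey expands Caddy placeholders (e.g. {env.TS_AUTHKEY}) in a
// configured auth key. Unknown placeholders are replaced with an empty string.
func resolveAuthKey(raw string) string {
    repl := caddy.NewReplacer()     // one line to make the Replacer
    return repl.ReplaceAll(raw, "") // one line to call Replace
}

With that in place, a config value like {env.TS_AUTHKEY} gets expanded at provision time instead of being passed to tsnet literally.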

# oh no! a conflict. Log a warning and use the first one? Crash? Create a new tsnet instance with different state (a-1)?

This kind of thing is why we tend to avoid global config in Caddy; even the global config we do have is riddled with bugs here and there. Just something to keep in mind.

@clly
Contributor Author

clly commented Dec 1, 2023

This kind of thing is why we tend to avoid global config in Caddy, even the global config we do have is riddled with bugs here and there. Just something to keep in mind.

In my mind, the conflicts would mostly show up in the listen_wrapper case, and the limited configuration surface on the Network module would prevent conflicts (beyond someone just creating the same server twice globally).

Does your experience say to go with a listen_wrapper instead? If I understand correctly, you would need to define a wrapper for each port you want Caddy to bind to? Wouldn't that open up more configuration surface and more possibilities for conflicts?

you can use that to elegantly define variables that are used for all the HTTP routes, even routes for different sites, on the same server (server = set of listeners).

Maybe that's what I'm missing, because I'm not as familiar with the JSON configuration.

@mholt
Contributor

mholt commented Dec 1, 2023

Right, so in that case, maybe you do want an app module that can centralize a single configuration of tailscale listeners that you then use by name.
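
For illustration only, such an app might look roughly like the following. All of the names here (TSApp, NodeConfig, the "tailscale" module ID, the Server method) are hypothetical rather than this plugin's actual API; the tsnet fields come from tailscale.com/tsnet:

package tscaddy

import (
    "github.com/caddyserver/caddy/v2"
    "tailscale.com/tsnet"
)

func init() {
    caddy.RegisterModule(TSApp{})
}

// NodeConfig holds per-node settings (illustrative field set only).
type NodeConfig struct {
    AuthKey   string `json:"auth_key,omitempty"`
    Ephemeral bool   `json:"ephemeral,omitempty"`
}

// TSApp is a hypothetical app module that owns the pool of named tsnet
// servers so listeners, transports, and auth handlers can all share them.
type TSApp struct {
    // Servers maps a node name (e.g. "a") to its configuration.
    Servers map[string]NodeConfig `json:"servers,omitempty"`

    nodes map[string]*tsnet.Server
}

// CaddyModule returns the Caddy module information.
func (TSApp) CaddyModule() caddy.ModuleInfo {
    return caddy.ModuleInfo{
        ID:  "tailscale",
        New: func() caddy.Module { return new(TSApp) },
    }
}

// Server returns the tsnet server for the given name, creating it on first use.
func (t *TSApp) Server(name string) *tsnet.Server {
    if t.nodes == nil {
        t.nodes = make(map[string]*tsnet.Server)
    }
    if srv, ok := t.nodes[name]; ok {
        return srv
    }
    cfg := t.Servers[name]
    srv := &tsnet.Server{
        Hostname:  name,
        AuthKey:   cfg.AuthKey,
        Ephemeral: cfg.Ephemeral,
    }
    t.nodes[name] = srv
    return srv
}

// Start satisfies caddy.App; tsnet nodes come up lazily when first used.
func (t *TSApp) Start() error { return nil }

// Stop tears down every tsnet server this app created.
func (t *TSApp) Stop() error {
    for _, srv := range t.nodes {
        srv.Close()
    }
    return nil
}

Listeners, transports, and auth handlers would then ask the app for a node by name instead of configuring tsnet themselves.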

The PKI app is like this: you can define CAs and give them a name. They have exactly one configuration, and they can be referenced throughout the config by their name, since modules can load app modules and access their methods.
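
Following that pattern, a listener wrapper or transport could look up the shared app during provisioning, roughly like this (again using the hypothetical TSApp and Server names from the sketch above; the wrapper would still need to implement WrapListener, which is omitted here):

package tscaddy

import (
    "fmt"

    "github.com/caddyserver/caddy/v2"
    "tailscale.com/tsnet"
)

// TailscaleListenerWrapper is a hypothetical listener wrapper that borrows a
// named tsnet server from the shared "tailscale" app.
type TailscaleListenerWrapper struct {
    Host string `json:"host,omitempty"`

    node *tsnet.Server
}

// Provision looks up the app module and asks it for the named node, so every
// use of "a" shares one tsnet instance and one set of state.
func (w *TailscaleListenerWrapper) Provision(ctx caddy.Context) error {
    appIface, err := ctx.App("tailscale")
    if err != nil {
        return fmt.Errorf("loading tailscale app: %v", err)
    }
    app, ok := appIface.(*TSApp)
    if !ok {
        return fmt.Errorf("unexpected app type %T", appIface)
    }
    w.node = app.Server(w.Host)
    return nil
}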

@clly
Contributor Author

clly commented Feb 25, 2024

I've created an initial PR to support using the Caddy configuration file to set up the tailscale plugin in #30. Based on the initial response here, I can extend it to support more configuration or change the approach.

@clly
Contributor Author

clly commented May 5, 2024

Closed via #30

clly closed this as completed May 5, 2024