Dynamic Peers #185

Closed
maaft opened this issue Jan 10, 2022 · 10 comments

@maaft

maaft commented Jan 10, 2022

We want to use innernet as a VPN for our platform's cloud infrastructure. Some parts of this infrastructure need to be spawned on demand when the workload is high.

Is "on-demand joining" with innernet even possible, given that you cannot reuse a peer's IP? I was thinking about writing a small authenticated service that creates new invitations when requested.

Or do I need to look at some other tooling to do the job?

@mcginty
Collaborator

mcginty commented Jan 10, 2022

On-demand joining is something others have been using innernet for (when used with Ansible for orchestration, for example), and it should be a case where innernet can provide the functionality you need, even with the restriction that peer IPs aren't reusable. And yep, the API and clients have been designed in such a way that it shouldn't be too difficult to write your own authenticated invite generator; that was definitely one of our goals when originally designing innernet. A good amount of work went into allowing the client binary to be called non-interactively/scriptably.
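
To make that concrete, here is a minimal sketch of what such an authenticated invite generator could look like. It is not part of innernet; the token, network name, CIDR name, and the exact `add-peer` flag spellings are assumptions to verify against `innernet-server add-peer --help` for your version:

```python
# Minimal sketch of an authenticated invite service (not innernet's own
# API): it shells out to `innernet-server add-peer` non-interactively and
# returns the generated invite file. All names below are illustrative.
import os
import secrets
import subprocess
import tempfile
from http.server import BaseHTTPRequestHandler, HTTPServer

API_TOKEN = "change-me"   # hypothetical shared secret for callers
INTERFACE = "mynet"       # your innernet network/interface name
CIDR_NAME = "workers"     # assumed innernet CIDR to place new peers in

class InviteHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.headers.get("Authorization") != f"Bearer {API_TOKEN}":
            self.send_error(401)
            return
        name = f"worker-{secrets.token_hex(4)}"
        with tempfile.TemporaryDirectory() as tmp:
            invite_path = os.path.join(tmp, f"{name}.toml")
            # Flag spellings are assumptions based on innernet's
            # scriptable mode; verify with `innernet-server add-peer --help`.
            subprocess.run(
                ["innernet-server", "add-peer", INTERFACE,
                 "--name", name,
                 "--cidr", CIDR_NAME,
                 "--admin", "false",
                 "--auto-ip",
                 "--invite-expires", "1d",
                 "--save-config", invite_path,
                 "--yes"],
                check=True,
            )
            with open(invite_path, "rb") as f:
                body = f.read()
        self.send_response(200)
        self.send_header("Content-Type", "application/toml")
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("127.0.0.1", 8080), InviteHandler).serve_forever()
```

A worker's bootstrap script could then POST to this service, save the returned TOML, and redeem it with `innernet install`.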

What seems to be the best setup in these cases is to use a large IPv6 network for your innernet root CIDR (RFC4193 is recommended: https://cd34.com/rfc4193/), such that you have 2^80 available IPs to play with, and they aren't precious when you need to assign, say, one septillion of them.
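
For illustration, picking such an RFC 4193 unique-local prefix takes only a few lines (a sketch, not a substitute for the generator linked above):

```python
# RFC 4193 ULA: the fd byte (8 bits) plus a random 40-bit global ID
# yields a /48 prefix with 2^80 addresses behind it.
import secrets
import ipaddress

def random_ula_48() -> ipaddress.IPv6Network:
    global_id = secrets.randbits(40)            # random 40-bit global ID
    prefix = (0xFD << 120) | (global_id << 80)  # fdxx:xxxx:xxxx::/48
    return ipaddress.IPv6Network((prefix, 48))

print(random_ula_48())  # e.g. fd3c:91a2:6b5e::/48
```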

On top of that, you can ease the load on the coordinating server (and bandwidth load of peers) by remembering to disable old peers once they're "out of service", so that the server won't report their status to everybody.
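
A sketch of that cleanup step, assuming your innernet version exposes a `disable-peer` subcommand that can be driven non-interactively (the flags here are guesses; check `innernet disable-peer --help`):

```python
# Hypothetical teardown hook: when a worker is destroyed, disable its
# peer so the server stops reporting it to everybody.
import subprocess

def retire_worker(interface: str, peer_name: str) -> None:
    # Flag spellings are assumptions; verify against your innernet version.
    subprocess.run(
        ["innernet", "disable-peer", interface, "--name", peer_name, "--yes"],
        check=True,
    )

retire_worker("mynet", "worker-1a2b3c4d")
```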

I'd like to add the disclaimer that innernet hasn't been battle-tested at massive scale as it wasn't initially designed for it, and there are likely still bugs as you scale into the thousands or hundreds-of-thousands of peers, but I'm very interested in supporting those cases and working out any kinks you might find in the process.

Hope that helps, let me know if there was anything else I didn't answer.

@maaft
Author

maaft commented Jan 11, 2022

Thank you for the hint towards IPv6.

Does this mean that every participant in that network has to use IPv6, or is it possible to route traffic between IPv6 and IPv4 peers? I'm asking because we've never dealt with IPv6 before, and I fear complications when migrating our whole infrastructure.

@mcginty
Collaborator

mcginty commented Jan 11, 2022

Yeah, just to be extra verbose since the terminology here is confusing: there are two types of IP traffic happening:

  1. Traffic over the internet, which is the encrypted WireGuard traffic between peers; this happens over what we'll call their external IPs, and
  2. Traffic over the "virtual" wireguard interface, which exposes the decrypted traffic that you can treat like any other network interface; these use IPs we'll call their internal IPs.

If you choose an IPv6 CIDR as your internal network, the internal IPs of peers will all be IPv6 addresses (there's no possibility of also assigning an IPv4 address), and the traffic via that wireguard interface will have to be IPv6. However, the wireguard traffic itself can still be routed externally to an IPv4 address on the internet, no problem.

For example, say you choose an innernet root CIDR of fd00::/48, and you have a peer that has been assigned the address fd00::5. That peer's wireguard "endpoint" can still be 1.2.3.4:51820. If you ran ping fd00::5, that would send an IPv6 packet to your wireguard interface; the wireguard implementation would encrypt that packet and then send it over the normal internet to 1.2.3.4:51820, where it would be decrypted and spit out of the other peer's wireguard interface.
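
Spelled out with Python's stdlib `ipaddress` module, using the hypothetical addresses from the example above:

```python
# The two layers: the internal IPv6 address lives inside the overlay
# CIDR, while the external endpoint is ordinary IPv4; they never mix.
import ipaddress

root_cidr = ipaddress.ip_network("fd00::/48")   # innernet root CIDR
internal_ip = ipaddress.ip_address("fd00::5")   # peer's overlay address
endpoint = ("1.2.3.4", 51820)                   # peer's WireGuard endpoint

assert internal_ip in root_cidr                        # overlay: IPv6 only
assert ipaddress.ip_address(endpoint[0]).version == 4  # transport: IPv4
# WireGuard encrypts the IPv6 packet and ships it as UDP payload to the
# IPv4 endpoint, where it is decrypted back into an IPv6 packet.
```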

Does that help answer your question?

@DanielJoyce

DanielJoyce commented Jan 14, 2022

I guess I still don't understand this whole fixation on immutable IPs when it's the key+IP pair that matters. Tempered Networks / Airwall doesn't have this restriction. I also find IPv4 addresses easier to remember, so for an overlay it would be awesome not to run out of addresses when using IPv4.

If anything, it's the reuse of a key, or of a key+IP pair, that should be forbidden.

@mcginty
Collaborator

mcginty commented Jan 14, 2022

@DanielJoyce you could call the current design of innernet an experiment in seeing if we could take advantage of WireGuard's specific features to try to more closely link an identity to an IP address. In our case, it was designed for our engineers and colleagues to connect to dedicated always-on hardware where we wanted to have simple firewall rules that were easy to read and verify the correctness of.

But let's say it turns out that this concept of using an IP as a form of authenticated identity is fatally flawed (depending on what you want "identity" to mean, it arguably is). I'm definitely open to the next major version of innernet (say, version 2.0) having a different security model, if that makes more sense and it feels right to call the experiment over :). This discussion of IP/key immutability also ties into discussions about key rotation and other "cryptographic hygiene" changes.

@maaft
Author

maaft commented Jan 15, 2022

Couldn't we just allow both scenarios through a server config flag?

I'm not familiar with the codebase, so I'm not sure how complicated that would be.

@mcginty
Collaborator

mcginty commented Jan 15, 2022

> Couldn't we just allow both scenarios through a server config flag?

That sounds like a recipe for disaster for both the security model and maintainability. I think one model should just be chosen, and we stick to it...

@maaft
Author

maaft commented Jan 15, 2022

Fair enough. Case closed (for now).

maaft closed this as completed Jan 15, 2022
@maaft
Author

maaft commented Jan 16, 2022

> For example, say you choose an innernet root CIDR of fd00::/48, and you have a peer that has been assigned the address fd00::5. That peer's wireguard "endpoint" can still be 1.2.3.4:51820. If you ran ping fd00::5, that would send an IPv6 packet to your wireguard interface; the wireguard implementation would encrypt that packet and then send it over the normal internet to 1.2.3.4:51820, where it would be decrypted and spit out of the other peer's wireguard interface.

So, just to be clear about how I could do this without requiring every backend service to be IPv6 compliant:

I'd have two innernets:

  • One IPv4 for all of our backend services that talk to each other
  • One IPv6 for the workers that are dynamically spawned

My workers talk only to one IPv4 backend service to get their data. So to make things work, I would need an IPv6 <-> IPv4 proxy on that particular service.

I'm not sure if the clients I use in my workers support IPv6, though. But if not, I could just spawn another local IPv4 <-> IPv6 proxy alongside every worker.
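
As an illustration, a minimal per-worker relay along those lines could look like this (TCP only; the addresses and port are hypothetical, and something like socat or a reverse proxy would do the same job):

```python
# Listen on an IPv4 loopback port and pipe bytes to a backend that is
# only reachable via its innernet IPv6 address.
import asyncio

LISTEN_V4 = ("127.0.0.1", 9000)   # what the IPv4-only client connects to
UPSTREAM_V6 = ("fd00::5", 9000)   # the backend's innernet IPv6 address

async def pipe(reader: asyncio.StreamReader, writer: asyncio.StreamWriter):
    try:
        while data := await reader.read(65536):
            writer.write(data)
            await writer.drain()
    finally:
        writer.close()

async def handle(client_reader, client_writer):
    upstream_reader, upstream_writer = await asyncio.open_connection(*UPSTREAM_V6)
    # Shuttle bytes in both directions until either side closes.
    await asyncio.gather(
        pipe(client_reader, upstream_writer),
        pipe(upstream_reader, client_writer),
    )

async def main():
    server = await asyncio.start_server(handle, *LISTEN_V4)
    async with server:
        await server.serve_forever()

asyncio.run(main())
```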

Would that work?

@mcginty
Collaborator

mcginty commented Jan 18, 2022

@maaft yes, if you don't want any IPv6 stack for some of your backend services, then that setup makes sense :). I can't say I fully understand your setup, though, so can't guarantee that this is optimal or anything.
