IPv6 support inside tunnels #28
On Thu, Oct 11, 2018 at 10:37:56PM -0700, cathugger wrote:
> Currently we only support IPv4 inside tunnels.
> IPv4 should stay.
> IPv6 to IPv6 communication could be added without too much trouble.
> But IPv6 to IPv4 (and back) translation could be very tricky.
> How could we approach this? How would DNS resolving relate to this?
> Or should we not add IPv6 support at all?
> IPv6 would work better for mapping a large number of peers.
I was thinking of separating exit traffic to be ipv6 only and hidden services as ipv4 only.
I have 3 justifications for this decision:
1) there aren't any good deprecated or unused private ipv6 ranges left that I know of that I can use.
2) ipv4-only exits would be NAT'd HELL on earth.
3) it would provide a clear separation between hidden service and exit traffic
…
--
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
#28
While I definitely want IPv6 support for exits, I'm unsure how to feel about IPv6-only. Some things are still IPv4-only :\
could use some /48 or /64 subnet of fd00::/8, but yeah, that's sorta ugly
end2end connectivity and the possibility to host services would be sorta cool, but unsure if it's a critical use case
imo could be separated by using the packet destination address; we'd still use a source address of zeros and leave the OS to handle NAT. less clear, but not something impossibly confusing.
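As an aside on how such an fd00::/8 subnet would normally be picked: RFC 4193 derives a pseudo-random 40-bit Global ID so that independently chosen ULA prefixes are unlikely to collide. A rough Python sketch of that algorithm (an illustration only, not llarp code; the seed construction loosely follows the RFC's suggestion):

```python
import hashlib
import ipaddress
import os
import time

def random_ula_prefix() -> ipaddress.IPv6Network:
    """Pick a random ULA /48 roughly as RFC 4193 section 3.2.2 suggests."""
    # Global ID = low-order 40 bits of SHA-1 over (timestamp || identifier);
    # the RFC suggests an EUI-64 identifier, here we just use os.urandom().
    seed = time.time_ns().to_bytes(8, "big") + os.urandom(8)
    global_id = hashlib.sha1(seed).digest()[-5:]     # 40 bits
    packed = bytes([0xFD]) + global_id + bytes(10)   # fd00::/8 | Global ID | zeros
    return ipaddress.IPv6Network((packed, 48))
```

Every prefix produced this way lands inside fc00::/7 with the L bit set (i.e. under fd00::/8), which is exactly the "sorta ugly" private space discussed above.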
On Fri, Oct 12, 2018 at 05:45:55AM -0700, cathugger wrote:
>While I definitely want IPv6 support for exits, I'm unsure how to feel about IPv6-only. Some things are still IPv4-only :\
You probably want to give a globally routable IPv6 address (based on the client's public key) to the interface.
Should we support the NPTv6 use case too? Unsure whether a constant ULA NPTv6 address or a globally routable IPv6 address that changes when the exit changes would be better.
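A hypothetical sketch of what "address based on the client's public key" could mean: hash the key and use the low 64 bits as the interface identifier inside the exit's /64. The function name and hash choice here are assumptions for illustration, not llarp's actual scheme:

```python
import hashlib
import ipaddress

def addr_from_pubkey(prefix: ipaddress.IPv6Network,
                     pubkey: bytes) -> ipaddress.IPv6Address:
    # Hypothetical: low 64 bits of SHA-256(pubkey) become the interface
    # identifier inside the exit's globally routable /64.
    assert prefix.prefixlen == 64
    iid = int.from_bytes(hashlib.sha256(pubkey).digest()[:8], "big")
    return ipaddress.IPv6Address(int(prefix.network_address) | iid)
```

Because the address is a pure function of the prefix and the key, it stays stable for a given exit but changes when the client moves to an exit with a different prefix, which is the trade-off being debated here.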
>>1) there aren't any good deprecated or unused private ipv6 ranges left that I can use that I know of.
>could use some /48 or /64 subnet of fd00::/8 but yeah that's sorta ugly
yup
>>2) ipv4 only exits would be NAT'd HELL on earth.
>end2end connectivity and possibility to host services would be sorta cool but unsure if it's critical use case
hence why ipv6 was chosen for exits; you can give each user their own /128
>>3) would provide a clear separation between hidden service and exit traffic
>imo could be separated by using packet destination address; we'd still use source address of zeros, and leave OS to handle NAT. less clear but not something impossibly confusing.
internally everything is represented as ipv6 anyways, including ipv4 addresses as hybrid dual-stack addresses (::ffff:8.8.8.8)
that means we would probably use SIIT with NAT if we do any ipv4 exit traffic.
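That hybrid dual-stack form can be demonstrated with Python's stdlib `ipaddress` (a generic illustration of RFC 4291 IPv4-mapped addresses, not llarp's C++ internals):

```python
import ipaddress

# An IPv4 address embedded as IPv4-mapped IPv6 (::ffff:a.b.c.d, RFC 4291 2.5.5.2)
v4 = ipaddress.IPv4Address("8.8.8.8")
mapped = ipaddress.IPv6Address("::ffff:" + str(v4))

assert mapped.ipv4_mapped == v4          # round-trips back to the IPv4 address
assert mapped.packed[:10] == bytes(10)   # 80 zero bits
assert mapped.packed[10:12] == b"\xff\xff"
assert mapped.packed[12:] == v4.packed   # low 32 bits carry the IPv4 address
```

Since the IPv4 address lives untouched in the low 32 bits, converting between the two families internally is a pure prefix operation, which is what makes the "everything is ipv6 inside" design cheap.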
nvm, that's a really bad idea for fingerprintability (and also exits would need to keep state of peers either way, so not much would be gained with that).
SIIT sounds like more work to do, and also has some edges like the MTU being different for IPv4 and IPv6 (how the hell would this be handled for tun interfaces? multiple of them?)
On Fri, Oct 12, 2018 at 01:45:40PM +0000, cathugger wrote:
>>that means we would probably use SIIT with NAT if we do any ipv4 exit traffic.
>SIIT sounds like more work to do, and also has some edges like the MTU being different for IPv4 and IPv6 (how the hell would this be handled for tun interfaces? multiple of them?)
for MTU we'll probably pick the smallest initially. it could be negotiated with the exit.
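"Pick the smallest" amounts to clamping the tun MTU to the worst link along the path, minus the tunnel's own per-packet overhead. A minimal sketch (the overhead value in the usage below is a made-up placeholder, not llarp's real framing cost):

```python
IPV6_MIN_MTU = 1280  # RFC 8200: every IPv6 link must carry 1280-byte packets

def tun_mtu(link_mtus, tunnel_overhead):
    """Smallest path MTU minus tunnel framing; refuse paths too small for IPv6."""
    mtu = min(link_mtus) - tunnel_overhead
    if mtu < IPV6_MIN_MTU:
        raise ValueError("path cannot carry the minimum IPv6 MTU")
    return mtu
```

For example, `tun_mtu([1500, 1492], tunnel_overhead=80)` yields 1412. The 1280-byte floor is also why SIIT's differing IPv4/IPv6 MTUs are awkward: an IPv6 tun interface can never simply adopt a smaller IPv4 path MTU.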
I don't like that.
more work to do. Is there anything really wrong with using the same kind of IPv4 packets for exit traffic, other than "stuffing everything into IPv6 because that looks cleaner to me"?
On Fri, Oct 12, 2018 at 01:54:30PM +0000, cathugger wrote:
>>for MTU we'll probably pick the smallest initially.
>I don't like that.
same
>>it could be negotiated with the exit.
>more work to do.
yup
>Is there anything really wrong with using the same kind of IPv4 packets for exit traffic, other than "stuffing everything into IPv6 because that looks cleaner to me"?
we use ipv4 for the communication with the relays right now, so i don't want to touch the ipv4 routes at the moment; we'll have to add routes for the first-hop relays for clients anyways if we provide ipv4 exit traffic at all.
stuffing it over ipv6 makes more sense right now (at least to me) as all the addresses are represented as ipv6 internally.
https://tools.ietf.org/html/rfc7915 (SIIT) seems like quite a hell of a lot of work (and I don't want llarp to do this kind of work, as it'll be a possible bug source, and even implemented properly it still has some very undesirable properties, like requiring a lower MTU and handling fragmentation), so I'd rather not have IPv4 support initially than have it implemented this way.
On Fri, Oct 12, 2018 at 07:19:51AM -0700, cathugger wrote:
>https://tools.ietf.org/html/rfc7915 (SIIT) seems like quite a hell of a lot of work (and I don't want llarp to do this kind of work, as it'll be a possible bug source, and even implemented properly it still has some very undesirable properties, like requiring a lower MTU and handling fragmentation), so I'd rather not have IPv4 support initially than have it implemented this way.
yup, that's basically how i feel too.
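To make the "hell of a lot of work" concrete: even the trivial no-options, no-fragments case of RFC 7915 header translation looks roughly like the hypothetical sketch below, and real SIIT additionally rewrites ICMP, recomputes transport checksums, and handles fragmentation. Function name and structure are illustrative assumptions:

```python
import struct

def siit_v4_to_v6(ipv4_packet: bytes, src6: bytes, dst6: bytes) -> bytes:
    """Translate one IPv4 packet to IPv6 in the trivial case (no options,
    no fragments); the caller supplies the already-translated addresses."""
    (ver_ihl, tos, _total_len, _ident,
     flags_frag, ttl, proto, _csum) = struct.unpack("!BBHHHBBH", ipv4_packet[:12])
    ihl = (ver_ihl & 0x0F) * 4
    if flags_frag & 0x3FFF:  # MF flag set or nonzero fragment offset
        raise NotImplementedError("fragments need RFC 7915 fragment handling")
    payload = ipv4_packet[ihl:]
    v6_header = struct.pack(
        "!IHBB",
        (6 << 28) | (tos << 20),  # version 6, traffic class from TOS, flow label 0
        len(payload),             # payload length
        proto,                    # next header (ICMP would need translating too)
        ttl,                      # hop limit copied from TTL
    ) + src6 + dst6
    return v6_header + payload
```

Every IPv6 packet produced this way is 20 bytes larger than its IPv4 original, which is exactly where the MTU headache above comes from.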
Well, my current proposal for IPv4 exit traffic would be packets with the client's src address set to
clarification: for hidden service to hidden service traffic for clearnet->client, src would be a globally routable IPv4 address, dst would be 0.0.0.0
no blockers on my end; i already require some inet6 driver installed on the host system just to read out RCs
i have implemented part of this in my ipv6-tun branch
it's possible that the ranges in https://github.com/onioncat/onioncat/blob/master/glob_id.txt are not used at all.
closing since we have ipv6 code now