Not enough people are using snow #1
One thing I find myself wondering is whether snow's goals might be better attained by implementing the IETF-backed HIPv2 protocol - for one, being an implementation of a Standards-track protocol might help with uptake. (Note that while HIPv2 was only recently published as an RFC, HIPv1 was published back in 2008.)
@eternaleye I'd never heard of HIP before. I love this spec so much! Perfect for helping link-local communications establish & validate identity - thanks x 12 for the mention! The DHT to map from keys -> IP addresses is novel to Snow afaik (edit: I'm finding HIP-ists have already used DHTs). I'd be interested in some words comparing Snow's tunneling over DTLS versus using other tunneling protocols - HIP seems to work in tandem with Encapsulating Security Payload (ESP). Adoption has trailed, but Snow also brings to mind SocialVPN/GroupVPN, which uses XMPP to discover and exchange keys with peers, then initiates "TinCan" links - an applicationized libjingle. Snow substitutes a DHT for XMPP and uses a different encapsulation tech, but both are peer-exchange + encapsulation systems.
While ESP is the only transport format currently specified, it's not the only one possible - there exists an expired draft for using SRTP keyed by HIP, and a similar document could be made for TLS/DTLS. Also, HIP is meant to work across the full public internet, not just link-local - there are even allocated DNS RRs for it.
Figured I'd split this out. In terms of comparisons, ESP in BEET (Bound End-to-End Tunnel) mode (as is specified for HIP) adds 18 bytes of overhead to its contents, while DTLS adds 32. BEET, being end-to-end, encapsulates the endpoint IP addresses and such in the setup, and thus does not need to send the IP header on the inner packets. Also, while it can be wrapped in UDP (for NAT hole-punching, among other things), it does not require it by any means. So far as I can tell, Snow does send the inner IP header, and it uses DTLS over UDP exclusively (while DTLS can be sent directly over IP, basically nobody does so). Thus, the best-case protocol stacks for each look like this - square brackets denote overhead, and "Application" is whatever the application is making use of normally (UDP, TCP, SCTP, etc.):

Snow: [UDP] → [DTLS] → [inner IP header] → Application
HIP: [ESP, BEET mode] → Application

The minimum overheads of the various protocols are:

- UDP: 8 bytes
- DTLS: 32 bytes
- IPv4 header: 20 bytes (IPv6: 40 bytes)
- ESP in BEET mode: 18 bytes
Thus, while HIP's ESP transport faces a best-case overhead of 18 bytes (or 26 bytes, if wrapped in UDP) regardless of IPv4 or IPv6, Snow carries 8 + 32 + 20 = 60 bytes for IPv4, or 80 bytes for IPv6.
The nature of snow is that you give it a public key hash and it gives you a virtual IP address where you can send packets that go to the machine with that public key. The lookup method can be anything and there can be more than one. The transport method can be anything and there can be more than one. Right now it uses a DHT and DTLS but those are just implementation details that don't affect the interface visible to applications. In theory we could use HIP for these things. Maybe someday we will.

But IPSec is trouble. Transport mode requires dealing with platform-specific OS kernel interfaces and invites interference from stupid middleboxes. Can you explain where your 18 bytes came from? Any of the usual HMAC algorithms are bigger than that by themselves. Doesn't tunnel mode end up looking pretty much like DTLS by the time it provides the same security properties?

The existing transport method clearly has room for improvement regarding packet size. It's sending 8 bytes of zeros as the source and destination address of every packet and some of the internal IP header is probably unnecessary. But I'm not sure I like the idea of putting anything from the internal IP header on the outside of the packet where it isn't encrypted. Stupid middleboxes can be very, very stupid.
This is exactly the purpose of the HIT - the neat trick there is it uses ORCHIDv2 in order to allow the hash of the public key to be the virtual IP(v6) address itself. It also supports dynamically-allocated machine-local "LSI" addresses, which are IPv4-compatible.
HIP does the same - HIPv1 had specified rendezvous-based (meant to make mobility easier), DNS-based, DHT-based, and others; they'll likely return for HIPv2.
While only ESP has been published as an RFC, HIP explicitly allows for other transports; a draft was written for SRTP (though it has lapsed). If you want a TLS or DTLS transport, a good option might be to base it on TLS-PSK, keyed with the material generated by the HIP exchange. Submit it to the HIP working group - I suspect it'd get some interest!
Perhaps, but OpenHIP has already demonstrated that neither is insoluble - it runs on the Big Three plus at least one BSD, and IIRC supports both bare ESP and middlebox-hole-punching UDP-encapsulated ESP; there have been multiple tests of such functionality as well. However, neither OpenHIP nor InfraHIP yet support HIPv2 - though OpenHIP has stated an intent to release a version that supports it this summer.
I tried to find the fixed overheads of each - i.e. excluding the MAC length, etc. Using the same MAC in each would expand both by roughly the same amount. (And note: HMAC is a MAC, but not all MACs are HMAC. Poly1305 is a notable example.)
BEET mode does not have to put anything from the internal IP header on the outside. What I meant was that things which are fixed for the session (src IP, dst IP, etc) are part of the tunnel establishment, and protected by HIP - but as they are fixed for the session, they are elided. Per-packet options that cannot be elided are instead sent with a pseudo-header on the inside of the tunnel. This was found to be rare in practice, and overhead remains low.
ORCHID makes me nervous. The address space is just big enough for people to assume there won't be collisions and just small enough for that assumption to be violated. So you end up having to deal with the possibility of collisions even though they're uncommon, and failure to do so gets worse over time as attackers get faster computers while the address space is permanently fixed. If more than one public key can map to the same ORCHID IPv6 address in practice then maintaining the contrary illusion only encourages security bugs in applications that don't appropriately handle a rare edge case.
DTLS is more of a lesser of evils than something I actually want. OpenSSL doesn't make my life any easier. Tunnel mode IPSec may even be the better alternative in theory, but is there a good cross-platform userspace library? I'm more tempted to replace DTLS with NaCl. Think the working group would be interested in that? Adding HIP as an alternative back end is likely to happen eventually, especially if both become popular. It's just a lot of integration work when there are higher priorities like porting snow to more platforms.
I'd say the math actually bears out ORCHID being safe. It allocates 96 bits to the hash. Let us (generously) assume that there are 2^33 (~8.5 billion) humans on the planet, and that they average out to having 2^7 (128) "services" each (after all, when you can allocate a separate HIP identifier for each thing you're running, why not use it and avoid the complexity of application-layer vhosting). One can approximate the likelihood of a single collision when choosing n members from an alphabet of size m as n^2 / 2m. Plugging in m = 2^96 and n = 2^40, we get 0.0000076293654; approximately a one in 130k chance of there being a single collision, ever.

Now, this only holds if the hash function used is collision-resistant, but ORCHID allocates four bits to the "OGA" - ORCHID Generation Algorithm, i.e. the hash function used - so if a hash is broken, the ability to migrate is right there. Wouldn't even require replacing keys - just re-hash. And if that gets used up? Allocate another range - ORCHIDv1 used a different block of IPv6 than ORCHIDv2 does; that could be repeated.
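For reference, the arithmetic behind that figure, using the standard birthday approximation (a textbook formula; this worked form is not quoted from the thread):

```latex
P_{\text{collision}} \approx \frac{n^2}{2m}
  = \frac{\left(2^{40}\right)^2}{2 \cdot 2^{96}}
  = 2^{80-97} = 2^{-17} \approx 7.63 \times 10^{-6} \approx \frac{1}{131{,}072}
```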
StrongSwan has a pure-userspace libipsec library that's pretty light.
NaCl, despite being a really nice simple library for message crypto, is still nontrivial to build secure protocols on top of. Especially regarding key exchanges and such. Trevor Perrin's 'Noise' may help some, but it's early yet.
Accidental collisions between honest peers are less the issue than that an attacker can generate keys until one of them collides with one of the honest peers. Adding only a couple of bits to the keyspace wouldn't help much either. Most current recommendations are to use 256-bit hash algorithms for new applications and the entire IPv6 address space is only 128-bit.
Seems to be GPL, which sadly would interfere with an iOS port if I'm not mistaken: http://www.fsf.org/news/2010-05-app-store-compliance
Reading the documentation I got the impression that all I would have to do is crypto_sign_keypair() to generate a long-term key pair, then for each session generate a session key pair with crypto_box_keypair(), sign the session public key with the long term private key using crypto_sign() and send the session public key and signature to the peer to verify with crypto_sign_open(). Then encrypt packets with crypto_box() and decrypt with crypto_box_open(). Have I missed something important? That seems a lot less complicated than the string of kludges it took to make OpenSSL do public key authentication.
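A minimal sketch of that flow, assuming libsodium's rendition of the NaCl API (the `_easy` functions are libsodium conveniences that hide classic NaCl's zero-padding; running both peers in one process is purely illustrative):

```c
/* Build with: cc handshake.c -lsodium */
#include <sodium.h>
#include <string.h>

int main(void) {
    if (sodium_init() < 0) return 1;

    /* Alice's long-term identity: a signing key pair. */
    unsigned char a_sig_pk[crypto_sign_PUBLICKEYBYTES], a_sig_sk[crypto_sign_SECRETKEYBYTES];
    crypto_sign_keypair(a_sig_pk, a_sig_sk);

    /* Per-session encryption key pairs for both peers. */
    unsigned char a_pk[crypto_box_PUBLICKEYBYTES], a_sk[crypto_box_SECRETKEYBYTES];
    unsigned char b_pk[crypto_box_PUBLICKEYBYTES], b_sk[crypto_box_SECRETKEYBYTES];
    crypto_box_keypair(a_pk, a_sk);
    crypto_box_keypair(b_pk, b_sk);

    /* Alice signs her session public key with her long-term key... */
    unsigned char signed_pk[crypto_sign_BYTES + crypto_box_PUBLICKEYBYTES];
    unsigned long long signed_len;
    crypto_sign(signed_pk, &signed_len, a_pk, sizeof a_pk, a_sig_sk);

    /* ...and Bob, who already knows a_sig_pk, verifies and extracts it. */
    unsigned char a_pk_verified[sizeof signed_pk];
    unsigned long long pk_len;
    if (crypto_sign_open(a_pk_verified, &pk_len, signed_pk, signed_len, a_sig_pk) != 0)
        return 1; /* forged session key */

    /* Alice encrypts a packet to Bob; a fresh nonce is needed per packet. */
    const unsigned char msg[] = "tunnelled packet";
    unsigned char nonce[crypto_box_NONCEBYTES];
    unsigned char ct[crypto_box_MACBYTES + sizeof msg];
    randombytes_buf(nonce, sizeof nonce);
    crypto_box_easy(ct, msg, sizeof msg, nonce, b_pk, a_sk);

    /* Bob decrypts with his session secret and Alice's verified session key. */
    unsigned char pt[sizeof msg];
    if (crypto_box_open_easy(pt, ct, sizeof ct, nonce, a_pk_verified, b_sk) != 0)
        return 1; /* tampered packet */

    return memcmp(pt, msg, sizeof msg) != 0;
}
```

One thing a sketch like this glosses over is per-packet nonce management and replay protection, which a datagram transport would still have to design carefully.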
In order for an attacker to achieve a 50% probability that two ORCHIDs, somewhere, collide, they'd need to generate on the order of 2^48 of them.
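That figure is the standard 50% birthday bound (again a textbook formula, shown here for completeness):

```latex
n_{0.5} \approx \sqrt{2m\ln 2} \approx 1.177\sqrt{m},
\qquad m = 2^{96} \;\Rightarrow\; n_{0.5} \approx 1.177 \cdot 2^{48} \approx 2^{48.2}
```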
So they would have to do it ~256 times, requiring ~2^56 total work. Brute forcing 56 bits was proved practical by distributed.net... in 1997. Today there are Bitcoin mining ASICs that do over a trillion hashes per second, which is more than 2^56 per day. Brute forcing key hashes is not exactly the same thing and AFAIK there aren't publicly available ASICs to do it, but can you see why it makes me nervous? Even if we like the opportunity cost argument that it's more profitable to mine Bitcoin than break a random ORCHID public key hash, there is zero security margin.
Brute-forcing 56 bits is a very, very different problem from finding collisions among the hashes of 2^56 public keys, and the math works out very differently. The biggest difference is in generating the iteration values - for brute-forcing the DES keyspace, that can in fact be made incredibly minimal by ordering the key space according to a Gray code, bringing the cost of getting another value to test down to "flip one bit." Generating keypairs is more costly by a large margin... and however fast you can hash them, not having anything to hash is a problem.

EDIT: Even without the Gray code, it's a single-cycle increment instruction to get another DES key to try. Generating an ECC keypair requires generating a private key (which, since some encodings are optimally compact, is just as cheap - you can simply increment the last word)... but it also requires generating the public key from that, which is an ECC scalar multiplication, and is at best around 64k cycles - another factor of roughly 2^16 of work per candidate. RSA is even worse. In addition, a single-block DES computation is several times cheaper than hashing a public key.

Finally, ORCHIDs permit specifying new generation algorithms - if this really becomes a crucial issue, key stretching techniques come into play, allowing the work factor for generating the hash to be driven up arbitrarily. Note that in this case, the size of the "enhanced key" is not a concern - as the generation algorithm would specify the iteration count, the attacker cannot iterate fewer times and still validate, because the client will iterate fully.
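To make the "flip one bit" point concrete, here's the standard binary-reflected Gray code construction (a generic illustration, not code from any HIP implementation or DES cracker; `__builtin_popcountll` assumes GCC/Clang):

```c
#include <stdint.h>
#include <stdio.h>

/* Binary-reflected Gray code: consecutive outputs differ in exactly one
   bit, so a DES brute-forcer can derive candidate i+1 from candidate i
   by flipping a single key bit instead of rebuilding the key from scratch. */
static uint64_t gray(uint64_t i) { return i ^ (i >> 1); }

int main(void) {
    for (uint64_t i = 1; i < 8; i++) {
        uint64_t diff = gray(i) ^ gray(i - 1);
        printf("step %llu: candidate %llx, bits flipped: %d\n",
               (unsigned long long)i,
               (unsigned long long)gray(i),
               __builtin_popcountll(diff));   /* always prints 1 */
    }
    return 0;
}
```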
Don't think like someone who wants to generate one unpredictable private key. Think like someone who wants to generate many public keys really fast. Multiplication is expensive? Pick a point/prime, hold it fixed and choose the next scalar/other prime in order so that your slow multiplication becomes a not-so-slow addition from the previous attempt. Or fix the modulus entirely and produce a new hash by changing the exponent. It's not as fast as Gray code but it's not so much slower that I would be ready to declare victory.

Key stretching could work in theory but it's a grisly trade-off. There is a 1:1 proportionality between the amount you delay the attacker and the amount you delay the legitimate user. If you stretch the hash by very much at all you have to concede embedded devices, consume significant battery on mobile devices and significant resources and power on servers, create a resource-exhaustion denial-of-service vector, and add a human-perceptible amount of latency to each new connection.
The tradeoff's not nearly so grisly as that - for one, a valid user only needs to compute the hash to verify the public key as part of a handshake, so as long as it's at most equal to the cost of an ECDH key agreement + ECDSA sig then it's only a 2x penalty for honest users. Meanwhile, an attacker bruteforcing ORCHIDs would never do either normally, so it's a far greater proportional penalty to them.
True enough, equivalent proportionality doesn't actually start until the hash becomes the dominant cost of the handshake. That doesn't take much though. Hash functions are only a couple orders of magnitude faster than elliptic curve operations to begin with. |
One thing I just realized I never brought up - all of my calculation has been about collisions, but in order to impersonate a specific service (as opposed to a 50% chance of being able to impersonate one random valid service, which as above came out at between ~2^48 and ~2^56 keys generated), an attacker needs a preimage of that service's specific HIT - on the order of 2^96 work.
I agree that impersonating a specific service appears impractical today, but being able to impersonate a random service is still very bad. If an attacker can steal or profit from impersonating a service then they may not much care which one it is. And if it's practical to create one random collision then it's generally practical to create ten or twenty or a hundred, so even if only a single digit percentage of targets are "interesting" the attacker can have multiple opportunities to hit one of them.
Keep in mind:

- hosts are actively recommended to keep anonymous IDs for initiating connections, which drives the service/ORCHID ratio down;
- most services aren't profitable to impersonate even when the costs are much lower (plaintext HTTP);
- they'd still need to get their HIT mapped to the target name in DNS if they want to intercept the normal domain-name workflow;
- HIP has support for pinning the full key after first contact;
- and a bunch of other obstacles.
This, however, does not follow. This is true of collisions from algorithmic breaks, but far less so for stochastic/birthday-bound collisions. The chances of a second collision existing hit 50% a good while after the first collision. And the OGA field in the ORCHID protects against algorithmic breaks.

EDIT: Relevant: http://math.stackexchange.com/a/35867 - factored out, we can say that we expect about n^2 / 2m collisions among n values hashed into a space of size m. Subbing in n = 2^40 and m = 2^96 as before gives ~2^-17 expected collisions among the honest HITs.
Now you're defending HIP rather than ORCHID. That you need a way to map the IP address to the full key or a stronger hash of the key was my point to begin with.
This isn't birthday, it's essentially meet in the middle. Generating additional unique collisions is only made less likely by the possibility that multiple collisions could be uselessly generated against the same honest address, but if the attacker has generated so many honest collisions that there is a non-negligible probability of generating a collision against an existing honest collision then the game is over and the attacker has won.

ORCHID purports to be a public key equivalent but it doesn't have enough bits for that. If you need a naming authority like DNS to map it to the full public key then you might as well use a friendly name instead of random numbers. If you want to be able to, for example, use the public key hash to authenticate the lookup response from an untrusted DHT then it needs to be more strongly collision resistant.

Being almost but not quite good enough is precarious. It tempts people to use it as the public key equivalent that it almost is, until new research or new hardware makes the expensive algorithm-independent attack less expensive. Then it becomes a design flaw that it takes a compatibility-breaking change to fix.
I do feel that judgments of a system's security need to take into account its usage, but fair enough.
Meet in the middle is a specific term referring to an attack on block ciphers. If this were going to be characterized as something other than a pure collision attack (and thus birthday), it'd be moving it up the complexity slope towards preimage (though not second preimage) due to needing a collision with one of the honest HITs. And I gave both the formula and the actual probabilities of getting collisions against honest HITs.
It does not purport to be such. ORCHID is a block for flat namespaces in IPv6, with such namespaces distinguished by the "context" value, and does not itself say they are public-key equivalent (or even cryptographically generated). That's left up to the standards that consume the block via the OGA. HIP uses its portion of that space as IPv6-compatible "Host Identity Tags" which are more like key fingerprints. However, this is as much a compatibility measure as the IPv4-like LSIs (Local Scope Identifiers) - the ORCHIDv2 RFC outright states that in the long run, new APIs should be made by which peers use the true HI, rather than HITs or LSIs (see RFC 7343, § 1.1).
Doing so in the short term, however, Just Isn't Going To Happen. The transition to IPv6 showed how long that kind of thing takes.
You need some manner of mapping it to the IP address anyway, and DNS is honestly the one that is weakest against some new peer stealing ownership of the HIT → (HI, IP) mapping. In a DHT, for example, one can mandate that HIT → (HI, IP) updates must be signed by the HI to be accepted, preventing an adversarial peer from replacing one that already exists. At that point it's "first come first served", and an honest user would know if the HIT they tried to take was already in use.
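A sketch of that acceptance rule as a DHT node might enforce it (the record layout and helpers are hypothetical stand-ins, not from any HIP implementation; the toy hash and signature stubs exist only so the fragment compiles):

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical record mapping a HIT to its Host Identity and locator. */
typedef struct {
    uint8_t hit[16], hi[64], ip[16], sig[64];
} hit_record;

/* Toy stand-ins; a real node would use the ORCHID construction and
   real signature verification here. */
static void hit_of(const uint8_t hi[64], uint8_t out[16]) {
    for (int i = 0; i < 16; i++) out[i] = hi[i] ^ hi[i + 16]; /* NOT a real hash */
}
static bool sig_valid(const hit_record *r) { (void)r; return true; /* stub */ }

/* First come, first served: only the key that originally claimed a HIT
   may later change what it maps to. */
static bool accept_update(const hit_record *stored, const hit_record *update) {
    uint8_t expect[16];
    hit_of(update->hi, expect);
    if (memcmp(expect, update->hit, 16) != 0)
        return false;            /* the HIT must actually hash from the HI */
    if (!sig_valid(update))
        return false;            /* (HI, IP) update must be signed by the HI */
    if (stored && memcmp(stored->hi, update->hi, 64) != 0)
        return false;            /* HIT already claimed by a different key */
    return true;                 /* fresh claim, or self-signed refresh */
}
```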
And using the key as the high-level name, rather than the identifier the program feeds into the network stack, has just as many problems - not the least of which is that the application is responsible for mapping high-level names to addresses. You can't reliably interpose it. Look at any package that depends on c-ares, or libunbound, or any number of other DNS client implementations. It's a tradeoff - neither is perfect. I do think HIP made the correct one: Keep the formats of locators and identifiers the same to ease migration, while explicitly noting an intent to separate both the formats and APIs in the long term.
It is literally preimage. The honest addresses provide a space-time trade off similar to meet in the middle. Create a hashtable with all of the known honest addresses in it and do the O(1) lookup for each key you generate to look for a match. The more honest nodes there are the less work the attacker has to do to hit one. You can suss the same math out of birthday as you've done by doing some extra calculation but it only shows that generating additional collisions requires linearly more work. Five times as much work yields five times as many honest collisions. It seems what we disagree about is the security impact of an attacker being able to generate collisions with a hundred or more random honest addresses.
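The loop being described, sketched out (`random_keypair` and `hit_of` are hypothetical helpers; a sorted array plus `bsearch` stands in for the hash table):

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-ins for key generation and ORCHID hashing. */
void random_keypair(uint8_t pk[64], uint8_t sk[32]);
void hit_of(const uint8_t pk[64], uint8_t out[12]);   /* 96-bit HIT */

static int cmp_hit(const void *a, const void *b) { return memcmp(a, b, 12); }

/* Precompute the honest HITs once, then test candidates against them.
   Each probe is O(log n) here; a hash table would make it O(1). The more
   honest HITs exist, the fewer candidates needed per expected match. */
void collide(uint8_t (*honest)[12], size_t n_honest) {
    qsort(honest, n_honest, 12, cmp_hit);
    for (;;) {
        uint8_t pk[64], sk[32], hit[12];
        random_keypair(pk, sk);   /* dominant cost per iteration */
        hit_of(pk, hit);          /* cheap compared to keygen */
        if (bsearch(hit, honest, n_honest, 12, cmp_hit))
            break;                /* collided with some honest peer;
                                     a real attacker keeps (pk, sk) */
    }
}
```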
Fair enough, though in that case I'm curious why you're so keen to defend its collision resistance against attacker-generated keys. It's still the same trouble though. If it isn't securely globally unique that compromises much of its apparent utility and it becomes an attractive nuisance to people who will still want to capitalize on that utility when it isn't secure. It might even be better to have just allocated a /64 and made the need to address possible collisions more obvious.
I completely agree that the long-term solution is to use the full public key in the API, and likewise that doing so everywhere immediately is just not happening. So what remains is the merits of our compatibility hacks. :)
If you have a domain name and a secure naming authority then you aren't mapping a public key equivalent to a public key at all, you're mapping a domain name to a public key equivalent. You can map to as strong a hash as you like or the full public key, and assign any locally unused IP address. There is no need to encode the public key into the IP address when the IP address is what you're mapping to.

But we would also like to be able to obtain a public key equivalent [somehow], e.g. using any unspecified lookup service or copy and paste from somewhere, and plug it into the existing infrastructure of applications and libraries that know how to deal with names and sockets. That means encoding the public key equivalent into a name or IP address. Encoding it into an IPv6 address leads to what we've been discussing and excludes IPv4-only applications/libraries. (Securely encoding it into an IPv4 address is clearly hopeless.)

And having the DHT resolve conflicts requires trusting the DHT. A DHT capable of securely implementing first come first served name reservations would square Zooko's Triangle. I would be very interested to hear about it if you find an implementation capable of that with better than blockchain-level inefficiency.
Interposition is the point. Nothing knows what to do with a public key. Everything knows what to do with names and IP addresses. A name can be a strong public key equivalent when an IP address can't. Look at DNS even. HIP provoked new DNS record types. Snow can just use CNAME (or SRV as soon as I write the code).

And if you set /etc/resolv.conf (or equivalent) to 127.0.1.1 and run a DNS server there then it doesn't matter what DNS library the application uses. Some Linux distributions are already starting to do this with dnsmasq because it provides several benefits like local DNS caching and the ability to forward company DNS domains over a VPN tunnel while using closer internet DNS servers for internet domains. Creating a uniform portable service to do the same thing on multiple platforms could be useful in general, potentially with a pluggable architecture that could be used to resolve other pseudo-TLDs like .bit.
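For what it's worth, that kind of pseudo-TLD routing is a one-liner per namespace in dnsmasq (`server=/domain/ip#port` is real dnsmasq syntax; the ports and the idea of separate .key/.bit daemons are purely illustrative):

```
# illustrative dnsmasq fragment - forward pseudo-TLDs to local daemons,
# send everything else to an ordinary upstream resolver
server=/key/127.0.0.1#5354   # .key queries to a hypothetical snow resolver
server=/bit/127.0.0.1#5355   # .bit queries to a hypothetical Namecoin resolver
server=192.0.2.1             # default upstream for normal Internet domains
```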
My point was more that if you're already mapping a public-key equivalent (HIP terminology "Identifier") to an IP address (HIP terminology: "Locator"), then you necessarily have some mapping DB already. Making it map an additional value is thus a minor storage cost, not an infrastructural change.
No, because while it can't be encoded into an IPv4 address, the HIP implementation can hook lookups, get the appropriate HIP-specific RR shunted to it, and say "if someone contacts this IP address, snarf it and wrap it in HIP with this HIT."

And for IPv4-only servers (not clients), HIP actually provides the ability to support both v4 and v6 via LSIs (Local Scope Identifiers, IPv4-like values that are machine-local and map 1:1 to currently-connected HIs, including your own).
Well, it's arguable that such inefficiency is intolerable (there's certainly high-visibility projects that don't mind it :P), and without that constraint Zooko's triangle was squared some time ago by Aaron Swartz.
110% agreed. My argument is that because of the division of responsibilities in current systems, correct interposition of names is infeasible... and is only getting worse.
A name can be a strong**er** public key equivalent. However, the IP address is a transitional measure, and I feel it can live long enough to serve its purpose sufficiently. And eventually, one wants a separate API anyways.
Yes, HIP provoked new RRs - specifically so that they could be deployed on the global DNS, without breaking existing clients.
Except then this cannot be deployed on the global DNS, meaning your only option is an (effective) MITM attack on the DNS. Which is exactly what DNSSEC is supposed to catch; the DPRIVE working group is trying to make DNS opaque to wire inspection; it can break legacy applications because CNAME is resolved client-side and not server-side; it may be ignored because SRV deployment is still pretty crap; it would actively break the use case of tunneling all DNS through a VPN; etc.

Sure, distros (actually NetworkManager, not distros) use dnsmasq to support more advanced routing of DNS queries. But rewriting is a whole different ball game. In comparison, interposing some range of addresses is trivial - an iptables rule with TPROXY or NFQUEUE, a tun device with a route, etc. And in addition, HIP's opportunistic mode provides a use case for actually interposing all of them - and that, too, is doable, because it treats any application data as opaque - something that cannot be done with a name-based system.
What I'm saying is, if you have some trustworthy mapping DB already then encoding the public key fingerprint into the IP address is irrelevant. It doesn't hurt anything but in that context it also provides no apparent benefit. You could just as well use any arbitrary number (like the hash of the domain name) because encoding a truncated public key fingerprint into the IP address provides no additional information over the full public key you already know from the same place you got the IP address.

The interesting (in theory) use case for encoding a public key equivalent into an IP address is to go the other way. To be able to start with only the IP address and securely get back to the full public key and any other information about the host. You could still use DNS or Namecoin or another naming authority to map to such an IP address if you like, but you wouldn't have to use any specific one, or modify it in any way, or even use any naming authority at all. Interposition made easy.

But that requires strong cryptographic security properties. If an attacker can plausibly produce a collision then you're back to needing a naming authority that can explicitly provide a strong public key equivalent to resolve collisions, and if you have that there is no need to encode the public key into the IP address in the first place. If people start using IP addresses like that regardless, then when some evildoers eventually start producing collisions at scale, you're stuck having to break compatibility to increase the collision space.
Snow does both together. So you can have a DNS name with a CNAME to a key name even if the target has no static IP address in the DNS (or at all). Meanwhile a server can do a PTR lookup on the IP address of a client (and, if they're smart, verify that the forward lookup matches), and the PTR record will resolve back to the key name, which gives the server a way to identify or log who the client is, using a mechanism that existing servers already use.
The original formulation of Zooko's Triangle is flawed because one of the corners is nominally centralization when what really stands under it is a naming authority, as distinct from a direct mathematical relationship between the name and the target. If I had a public key then I could know the target is the target by verifying a signature using that public key. If I have a domain name then I could know the target is the target by verifying a signature using Verisign's public key (and trusting Verisign). What Aaron Swartz showed is that you can have a decentralized naming authority. The name to key mapping gets "signed" by the blockchain instead of Verisign. And there are certainly uses for that.

But naming authorities are an exercise in trading several things against several other things. You want your naming system to be authenticated, globally unique, decentralized, trustless, easy to use, easy to implement, reliable, compute-efficient, memory-efficient, bandwidth-efficient, low-latency, query privacy-protecting, zone privacy-protecting, etc. No way no how that everybody everywhere will always want to prioritize each of those things the same way and there is nothing that perfectly provides all of them at once. So we have DNS, mDNS, Namecoin, phone books, address books, Facebook, etc., etc. Sometimes you can use DNS or Namecoin, sometimes you need something else. And if you can use Namecoin then you have a naming authority and you're back to making data encoded into the IP address irrelevant because you're mapping to it rather than from it.
"Making it worse" is the definition of wrong behavior. Somebody should probably stop whoever is doing that. But I'm not convinced that it's happening, or at least that there is any good reason for it to. We're in a similar transitional situation with names as we are with IP addresses. Regardless of whether you encode public keys into names, there are already non-DNS namespaces like .bit for Namecoin and .local for mDNS. DNS itself has the need for locally resolved names built in because it has to be able to resolve the PTR records for RFC1918 IP addresses. Some kind of uniform platform-independent nsswitch-like service is in order, ideally with an appropriate API for new applications, but also with the same kind of transition kludge to appear as DNS to applications that only understand DNS. And DNS has to be able to accomodate that; but it can.
The thing about transition measures is that they always last longer than you would like them to. Even once you have most of the installed base of software using the new API, which will surely take decades if the IPv6 transition is anything to go by, a large minority will still be using the transition measure, and will keep using it no matter how much you berate them as long as it still works and changing it requires effort. So give me the one that can last a long time if it has to, because if it does anyway then it had better be able to.
I think I need to better explain how snow name resolution works. You can put a CNAME to a key name in the global DNS. When you install snow it comes with a small local DNS server that you can set as your system DNS server so that applications will send queries there. So an application will send a query for the A record of example.com to the local resolver when example.com has a CNAME to abc...xyz.key. The local resolver will send the query to an upstream resolver (or authoritative nameserver if operating recursively). The upstream resolver will respond with an answer containing a CNAME record for example.com to abc...xyz.key, no A record, and rcode NXDOMAIN because the upstream resolver couldn't resolve the target key name. The local resolver can resolve key names, so it will see that the CNAME points to a key name and provide a response to the client with an answer containing the CNAME pointing to the key name, the local A record for the key name, and rcode NOERROR. If an application on a machine without a local snow resolver resolves the same name then it will get the NXDOMAIN response from the upstream resolver, just as it would if it tried to resolve a key name without a CNAME, which is all it can do anyway when the target may not even have a unique public IP address.

Every recursive DNS resolver is a MITM between the client and the authoritative servers. That is how DNS is designed. It doesn't hurt DNSSEC any because the recursive resolver is the thing that validates DNSSEC signatures. The local resolver is the nearest one, so it would either be on the already-validated side of a DNSSEC query or it would be the thing validating DNSSEC. Either way it knows to ignore an NXDOMAIN response for a name in the .key TLD because it can be resolved locally. Much the same is likely true of anything DPRIVE comes up with. Unless they completely redesign DNS, a local resolver works like any other.

Nor is any of this incompatible with sending DNS traffic over a VPN. You configure the local resolver as the system DNS server as usual, create an iptables rule forwarding all stray DNS packets not sent over the VPN interface to the local resolver and then set the local resolver's upstream resolver as the one on the other end of the VPN tunnel. If the local resolver is integrated with NetworkManager or similar then this is what you would be doing anyway. Potentially putting this entire category of thing (DNSSEC validation, DPRIVE, advanced DNS routing, forwarding/resolution for other pseudo-TLDs like .bit or .local) into a portable local DNS resolver that would automatically provide it all to all local DNS clients is useful in general.

And a major use case for SRV records here is that if a device has a DNS CNAME then it can have a SRV record instead of relying on a DHT or similar to map the target key name to an IP address, port and transport method. But if the SRV record fails to resolve then we can still use the DHT. They each backstop each other if one fails. Another possible use for SRV is to provide snow as an alternative transport for a server that does have its own public IP address, in which case the domain name would have an A/AAAA record instead of CNAME and the SRV record would contain the key name and snow port. Then a client machine without snow would ignore the SRV and still be able to use the A/AAAA record directly. So if the SRV record failed to resolve then the A/AAAA record would still work as it would on a client without snow.
(That use case is probably not as common anyway though, because being able to do without snow for clients that don't have it implies that you don't need it very much. But it could sometimes be useful for e.g. client address mobility.) But if you're worried about registrars or resolvers not supporting SRV, aren't the even newer HIP records going to be even more trouble?
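To make the records concrete, here's roughly what the two deployment patterns described above might look like in a zone file (the key name is the thread's own placeholder; the port number and the `_snow._udp` label are guesses for illustration, not anything specified):

```
; Pattern 1: name reachable only via snow - CNAME straight to the key name.
; (shown on a subdomain, since a zone apex cannot hold a CNAME)
www.example.com.         IN CNAME  abc...xyz.key.

; Pattern 2: server also has a public address - A/AAAA plus an SRV carrying
; the key name and port, so clients without snow still work.
example.com.             IN A      192.0.2.10
_snow._udp.example.com.  IN SRV    0 0 8861 abc...xyz.key.
```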
Except that mapping HITs to (HI, IP) pairs is not the same thing as a naming system. In fact, Swartz' exact system would be sufficient for making HITs public-key equivalent, if it was a HIT -> HI mapping - enforcing first-to-claim on HITs would nuke collisions entirely, and the IP mapping can then be signed by the HI in anything, including a DHT.
Er, no. DNS was not designed for interposition that alters values. Definitionally, that's an attack on the system. And it's an attack on the system that is being actively defended against now. It was designed for performance-enhancing midlayers, but they're not supposed to alter the database in any way.
And putting a CNAME to one of those in the global DNS namespace is... not advisable, because that's a cross-namespace reference, which breaks stuff, especially due to how CNAMEs are awful.
Yes. And if a name has a CNAME record, it is a standards violation for it to have any other record at all. Even a second CNAME. This is because CNAME is Canonical Name - all such records must be looked up from the referent. Congratulations; your domain is now inaccessible to any standards-compliant client that's not clued in to your special snowflake namespace. HIP, on the other hand, uses a proper new record type, so that stuff which understands the new namespace works, and stuff that doesn't remains unbroken. Your entire workflow from there is predicated on CNAME, and thus falls apart because every domain would have to choose between supporting new clients and supporting the old ones - making gradual deployment impossible is a good way to never get deployment.
Er, no. It's not. Or rather, it's a passive MITM only - altering, injecting, or deleting records is defined as an attack on the system, and is exactly what DNSSEC is meant to prevent.
Incorrect. There's a mode of DNSSEC where the recursive resolver says "just trust me," true. It's considered completely useless and every piece of software I've seen that actually cares about DNSSEC (including GnuTLS, FYI) uses a different resolver, such as libunbound, to validate DNSSEC itself.
Nope! Instant standards violation. A domain with a CNAME may not have any other records.
Unless you're relying on the cryptographic properties of the HIT, anything that could map a HIT to a (HI, IP) pair could map a name to a (HI, IP) pair.
Swartz' exact system is a naming authority. Its purpose is to map from actual names. If you're willing to claim it as a dependency then you can use friendly names and don't need HITs.
What is the DNS PTR record for 1.1.1.10.in-addr.arpa? Which nameservers are globally authoritative for it? A domain that is locally-resolved and not DNSSEC-signed is not without precedent. There is no reason DNS can't treat .key (and .bit and .local and .onion and ...) the same way it treats .10.in-addr.arpa.
Cross-namespace references are the primary purpose of naming authorities. Every A record is a cross-namespace reference; it assumes you have IPv4 connectivity and can't be used if you don't. CNAMEs are awful, not least because they exclude all other records, but they're also on the short list in RFC1035. The combination of using snow and not needing anything else creates the cross section in which creating a CNAME to a key name is useful. In particular, it contains the entire class of users who would use snow because other alternatives don't satisfy their needs. Not being able to use other records doesn't cost you anything when that's where you were to begin with.
CNAME works where CNAME works. You don't have to use it but you can. You can also use SRV or resolve friendly names to key names using something other than DNS.
The SRV record for example.com doesn't go at example.com, it goes at e.g. _ldap._tcp.example.com. That isn't the same domain name.
If snow is the only transport you support for that name then CNAME is fine. If it isn't then you can use SRV without CNAME and use the other transport in case SRV support is broken. It should even be possible to mix them together and use CNAME like SRV when SRV is broken. We could put the CNAME to the key name at _snow._key.example.com and then applications that optionally support snow could check for it when they look up example.com. Or snow itself could check for it and then add a route for the IP address of example.com into the tun interface...

Wait. If the attacker owns evil.example.com and points its A/AAAA record(s) at the victim but supplies the attacker's HI and rendezvous server records, how is HIP preventing any HIP client that resolves evil.example.com from forwarding traffic for the victim's IP address directly to the attacker, when it would otherwise be going over a trusted path like a secure LAN or VPN tunnel? At best if the victim supports HIP then when HIP sees the inconsistent HIs for the same IP address it can halt and catch fire, but what if the actual victim IP address doesn't support HIP or wasn't resolved using DNS?
Not from the perspective of a network administrator. Using a VPN tunnel or whatever you like to secure the connection from your clients to your DNSSEC-validating server gets you secure DNS without having to do anything to any of the clients or client applications. That clearly becomes completely useless if you don't operate/trust the DNS server (and the network link to it), but you don't have to be a large organization to put a DNSSEC-validating resolver on your local machine and set it as your DNS server, which gets you the same thing less efficiently. But no less efficiently than having individual client applications fetch all the DNSSEC records from the root down themselves.
The vast majority of software doesn't actually care about DNSSEC, it just uses whatever comes out of gethostbyname() or getaddrinfo(). TLS libraries are a special case because of DANE but there are no DANE records in the .key TLD. All of the records that need to be locally resolved are in the .key TLD. The signed CNAME or SRV record in the global DNS never gets modified. All it would take is for the root servers to mark .key as not DNSSEC-signed as they do for the RFC1918 reverse lookup zones and the entire DNSSEC validation issue evaporates.