proposal: use DNS TXT records for vanity import paths #26160
Using HTTP and TLS for vanity import paths presents a number of problems (bugs in bespoke servers for serving the required meta tag, timeouts, overloaded servers when a package becomes popular, etc.). Using DNS for this has been discussed in the past, but no formal proposal was submitted.
This is a tracking issue for the proposal at https://golang.org/design/26160-dns-based-vanity-imports
A bunch of thoughts that might be worth clarifying, or at least stating that they've been considered, so others don't end up wondering about the same things.
Is there a limit on the size of a DNS TXT record that could cause problems?
I'd love for this to happen.
In the proposal text, you write:
What do you mean here? You can (obviously) deny service if you can intercept all queries but how would you foist a forged response onto the victim? I don't think you can without hoping for the victim to lose patience and pass the -insecure flag (just like with HTTPS).
Thanks, I will try to clarify some of these.
There's a time component missing from that statement. IIRC there was a proposed attack a while back that relied on DNSSEC clients not shipping the root public keys from the get-go (the way you would ship a trusted certificate store for TLS libraries to use, for example). Since those keys had to be fetched first, an attacker who could intercept all DNS traffic from the very beginning could make up a key and fake responses. However, I don't remember the attack well enough to make any intelligent statements (so take the description I just gave with a grain of salt), and the risk was low enough that it's probably not worth mentioning. I included it because I vaguely remember someone complaining that it was "less secure" because of this the last time this discussion happened.
EDIT: I have removed it since I don't really remember what the discussion was about last time.
Thanks for the thoughtful proposal. I don't quite see the connection between the problem you are trying to solve and the solution, though.
The problem statement as far as I can see is:
This proposal would add a second mechanism, DNSSEC, in addition to the HTTPS get. The implication seems to be that some of the following are true:
None of these ring true to me. I'm personally far more comfortable setting up a static HTTP page than configuring DNS records, let alone DNSSEC. HTTP pages are far more easily automated, stored in git repos, etc, whereas DNS configuration always seems to be typing into tiny boxes on registrar web pages with poor support for automation.
Also, since we can't abandon the HTTPS fetch, we'd have to do both in parallel and take the first one that succeeds? What if they both succeed and say different things? Then non-deterministically we flip back and forth depending on which is faster?
It sounds like there is potentially a problem worth solving in how difficult it might be to set up HTTPS servers, and I'd like to know more specific details about that problem. I am skeptical that the answer is to abandon HTTPS and add a second entirely different mechanism. It seems like a more targeted fix to the specific problems would be better than a whole new thing with its own set of (possibly much less familiar) problems.
Ten years ago I think it would have been unreasonable to require HTTPS - and in fact goinstall did not - but setting up HTTPS servers gets easier with each passing year. Hosting sites are starting to have automatic HTTPS certificate creation for free via Let's Encrypt. In a couple more years I think it will be completely standard for essentially any hosting service and every popular web server engine (if not already).
If you are looking for a way to serve the (tiny) redirects, all it takes is to mix rsc.io/go-import-redirector/godoc with golang.org/x/crypto/acme/autocert, or to host it on a server that already provides auto-cert, like Google Cloud and presumably others (example). For a "no assembly required" experience, try @rakyll's https://github.com/GoogleCloudPlatform/govanityurls.
Working to make it as easy as possible to serve this information seems substantially better for the ecosystem and the user experience than adding an entirely new second mechanism. What can we do to make serving these HTTPS pages even more trivial?
Also note that for the specific case of load on a public server for a very popular package, the long term plan is to establish a shared proxy that is used by default. That will take care of the load with no effort required by people setting up tiny servers.
Apologies that this ended up being a little long, but there were a lot of points to address.
DNSSEC does somewhat mitigate the simplicity aspect, granted.
Since the user doesn't have to manage any of the keys beyond initial setup this seems likely. There is a company (whoever owns the TLD) managing the certs for you and signing records. They (hopefully) have the infrastructure to deal with this and aren't an individual trying to set up some cron jobs with no monitoring.
Again, since some company is presumably running your DNS server (with a team of people to fix it when it goes down) this seems true to me. We've taken the end user out of the equation for the most part. If I want to run my own DNS I can still do that, and then we're back to the same problems (I am not a system admin, I don't have a whole team of people to fix the server when it goes down, etc.). There are HTTP servers that handle this too, but they all still require a lot of work on my part (more on this later, but if you know a truly easy way to set up Go HTTP redirects I'd love to know about it).
If a DNS request times out, that would break the HTTP case too. We are adding another request (I think?) that has to be made, so it's possible that one could succeed and one could time out, whereas if we'd made just one we would have succeeded and gone on to contact the HTTP server; but this sort of race doesn't seem likely. If the server was up the first time, it will probably be up the second time, since it's most likely the same recursive resolver we're contacting each time (as opposed to a recursive resolver and an unrelated HTTP server).
This seems true to me. Many DNS providers are free (and you had to have one already anyways), and many of them provide DNSSEC. Google provides a DNS service which makes this very easy, Cloudflare is another that makes this simple, various registrars support it, etc. If you could already set up DNS A/AAAA or CNAME records to point at an HTTP server, you can probably turn on DNSSEC.
This part is definitely trickier. If anyone is more experienced managing DNSSEC than me (I've had most of my domains signed for a while without any problems, but I'm sure problems exist), I'd love to hear about common failure modes. It may be that they're easy to debug, or it may not.
If you are setting up an HTTP server to serve vanity imports, presumably you have already set up DNS records for it, so this doesn't seem to be a problem since you have to know how to do it anyways. DNSSEC is a tiny bit more complicated, but in many systems it's just a button click or two and copy/pasting a key around. That's definitely more overhead than just setting up an A record, but much less than setting up an A record and an HTTP server.
I like the idea of trying to fetch them in parallel and using the first one to come back.
Can you give some examples? The only free one I know of is GitHub Pages, but that requires that I have a GitHub account (on top of whatever DNS provider I'm already using) and that I set up and figure out a lot of extra stuff (git, some sort of static site generation or copy/pasting and editing tons of HTML pages around, figuring out their impossible docs to set up Let's Encrypt, which gave me a ton of trouble to start because I had some minor problem in my setup that it didn't want to warn me about specifically, etc.).
Both of these require that I, a user who is not a system administrator, purchase (at great cost, depending on your financial situation) a VPS and manage it myself. While this may be great for a tiny portion of people, it's likely to hurt my users when it goes down because I didn't know how to apply security updates properly. If I am a programmer and not an ops or systems person, this doesn't seem practical.
It seems like we need services that specifically host your packages and/or redirects for you. This hasn't really materialized in any substantial way thus far, and I'm not really sure why. I suspect it will be a bit better once we have zip-based packages, but we'll still need some way to mark a domain's canonical package server, and I think that has to be DNS based.
The short version of all of this is that no matter how much easier we make it to set up an HTTP server, not everyone with a domain is running an HTTP server already, but everyone with a domain is obviously using DNS already, which in my mind makes it the solution with the least amount of overhead.
Maybe it would be worth waiting for ZIP based proposals and only using DNS to determine the canonical package repo for a domain? That way the current legacy mechanism remains HTTP and the new mechanism is only DNS for finding the package server (which it would have to be anyways) and then HTTP (or Git, we'll see how that plays out) for fetching packages. This feels cleaner to me.
I am looking forward to having this tooling, but I don't see how we could turn it on by default for all Go users without making a lot of people angry that all of their package requests are now being routed through Google (or maybe it will just be me; but that can be debated in another issue).
I'm sorry but we're not going to add DNSSEC here. If it were day 1 and we were redesigning, maybe there would be an argument for DNSSEC instead of HTTPS. I doubt it but maybe. But certainly we cannot drop HTTPS, and there is not a good argument for doing DNSSEC+HTTPS together. Two choices is always more complex than one, it's just not difficult enough to set up HTTPS, and it's getting easier.
GitHub Pages does static HTTPS hosting with zero-config SSL. So does Firebase Hosting, for free. Google App Engine's free tier is also fine for this, and it does the zero-config thing too. I believe AWS has a similar service for zero-config HTTPS serving. And there's also Netlify, which has built-in support for Hugo and other static site generators. In short there are many options, more than I've listed, and they're only going to keep sprouting up. Let's encourage using one of those instead of working through all the kinks of maintaining a second access mechanism next to the first.