
Subdomain, Local DNS support: Don't require hosts file manipulation #416

Closed
1 of 3 tasks
rickmanelius opened this issue Jul 19, 2017 · 48 comments
Labels
Prioritized We expect to do this in an upcoming release

Comments

@rickmanelius
Contributor

rickmanelius commented Jul 19, 2017

Summarizing the acceptance criteria in this thread:

  • Replace the naming convention with *.ddev.site.
  • Point *.ddev.site records to 127.0.0.1 (verifiable as shown below).
  • Test, and fall back to /etc/hosts when a public DNS lookup isn't possible.
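Once the records exist, the second bullet can be verified with a quick lookup; any label under the wildcard should answer (a sketch, assuming dig is available):

    $ dig +short anything.ddev.site
    127.0.0.1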
@Schnitzel

Here are some links showing how we do it at amazee.io.
Basically, we create a dnsmasq container that listens on port 53 on the host and just returns 127.0.0.1 for any request for a domain under *.docker.amazee.io:
https://github.com/amazeeio/cachalot/blob/master/cli/amazeeio_cachalot/dnsmasq.rb

OS X has per-domain resolver settings, so we just set one for docker.amazee.io that defines the nameserver as 127.0.0.1:53:
https://github.com/amazeeio/cachalot/blob/master/cli/amazeeio_cachalot/resolver.rb

Linux has no per-domain nameservers, so we just add ourselves to /etc/resolv.conf. Our nameserver will not answer requests for anything other than docker.amazee.io, so the system just uses the next nameserver in line, which is the one defined via DHCP:
https://github.com/amazeeio/pygmy/blob/master/lib/pygmy/resolv.rb#L46

On the HAProxy side, we don't require an exact match on the incoming domain; we also match subdomains:
https://github.com/amazeeio/docker-haproxy/blob/master/haproxy.tmpl#L40

So far this works very well. It's sometimes a bit tricky on Linux, since other software also changes /etc/resolv.conf, but restarting pygmy (which re-checks the resolv.conf contents) usually fixes that.
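For reference, the moving parts described above boil down to two small config files. A minimal sketch (not the exact cachalot/pygmy output):

    # dnsmasq.conf: answer every *.docker.amazee.io query with 127.0.0.1
    address=/docker.amazee.io/127.0.0.1
    listen-address=127.0.0.1
    port=53

    # /etc/resolver/docker.amazee.io (macOS per-domain resolver file)
    nameserver 127.0.0.1
    port 53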

@rfay
Member

rfay commented Sep 22, 2017

The problem with dnsmasq and all similar solutions is that they require intervention on the user's workstation, and in fact reconfigure their system. Most users don't appreciate tools that do that; there are almost always unanticipated consequences, and support issues we don't want or need.

I strongly recommend that we move to a legitimate resolvable domain name (like local.drud.com?) and use wildcard DNS there. That would mean any user connected to the internet would immediately have a working DNS lookup, and no hosts file manipulation would be required. This approach is outlined in #175. When users don't have DNS (traveling or otherwise disconnected), the traditional hosts file manipulation could still be done.
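For concreteness, the wildcard would be a single record in the drud.com zone, roughly like this (a BIND-style sketch using the example name above):

    *.local.drud.com.   300   IN   A   127.0.0.1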

@rfay rfay changed the title Subdomain, Local DNS support Subdomain, Local DNS support: Don't require hosts file manipulation Sep 22, 2017
@Schnitzel

Schnitzel commented Sep 22, 2017

I strongly recommend that we move to a legitimate resolvable domain name (like local.drud.com ?) and use wildcard DNS there.

And how do you know the IP it should resolve to? Just always assume it's localhost?

@rfay
Member

rfay commented Sep 22, 2017

Yes, it would resolve to 127.0.0.1; so in the example, *.local.drud.com would resolve to 127.0.0.1.

@Schnitzel

Then I would suggest using nip.io rather than inventing something new :)

@Schnitzel

The only problem: what if the Docker host is not running on localhost?

@rfay
Member

rfay commented Sep 22, 2017

I love the idea of nip.io (and never ever knew about it, thanks!), but

  • It's complex for people: it has great wildcard mapping, but you have to understand what an IP address is to use it.
  • It's under somebody else's control and maintenance, so could go away in a moment (or malfunction) and we'd have absolutely no recourse.

As far as the Docker host not running on localhost: we currently only support Docker for Mac and Docker for Windows, which are innately on the same machine. Linux could conceivably bite us, but we really don't have any plans to do anything more exotic than a local Docker server. (I did originally test older Docker versions for Windows and was able to make them work, but we never saw a need to support docker-machine and friends.)

@Schnitzel

but you have to understand what an IP address is to use it.

Why? We assume it's always 127.0.0.1 anyway?

It's under somebody else's control and maintenance, so could go away in a moment (or malfunction) and we'd have absolutely no recourse.

Correct, but do you feel ready to run DNS infrastructure that can handle *.local.drud.com? What do you do if other people decide to use it? Running DNS is not thaaat easy.

The best would probably be https://tools.ietf.org/html/draft-west-let-localhost-be-localhost-06, but that will take a while.

@rfay
Member

rfay commented Sep 22, 2017

  • nip.io only returns 127.0.0.1 if you use *.127.0.0.1.nip.io, as I read it (see the lookup below). That means people need to use a long hostname, and it has funny numbers in it.
  • I was just talking about creating a wildcard record on our DNS for drud.com (or another domain). Not setting any infrastructure up.
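For reference, the nip.io form for the 127.0.0.1 case looks like this (hypothetical project name):

    $ dig +short myproject.127.0.0.1.nip.io
    127.0.0.1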

@rfay
Member

rfay commented Sep 22, 2017

Responding to "let localhost be localhost", thanks for that link. AFAICT, that's about the localhost names (localhost, .localhost, etc). We're not using those anywhere or in any way, just the reserved address 127.0.0.1. Maybe I misunderstood your intent with that.

@Schnitzel

I was just talking about creating a wildcard record on our DNS for drud.com (or another domain).

I think a wildcard by itself will not be enough; you would need an ORIGIN directive so that foo.bar.local.drud.com will also be resolved.

Not setting any infrastructure up.

Just saying: having thousands of users all over the world using *.local.drud.com will put a tiny bit of load on your DNS servers :). That's, for example, the problem with xip.io, which over time struggled quite hard with too many requests, and that made people very sad because suddenly their local development environments did not work anymore.

@rickmanelius
Contributor Author

Along with #415, I think we may need to get on a call with all three of us, because it doesn't feel like we're in sync, and this might just be a quicker item to resolve with additional clarity.

@Schnitzel

Schnitzel commented Oct 4, 2017

@rickmanelius see https://doodle.com/schnitzel and send me a meeting request, I'm in Austin (central time) currently.

@rickmanelius
Contributor Author

rickmanelius commented Oct 19, 2017

We met and now have a clear and agreed-upon path forward. Summarizing:

  • The default behavior will be a valid wildcard domain (i.e. *.ddev.site, which we just purchased), powered by a scalable nameserver (i.e. Route 53), that effectively points back to 127.0.0.1.
  • The fallback behavior (when internet access is unavailable or prohibited) will be to prompt the end user for permission to modify /etc/hosts; a sketch of that check follows.
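A minimal sketch of that probe-then-fallback check, in Go since that's what ddev is written in (the function name and hostname are hypothetical, not ddev's actual code):

    package main

    import (
        "fmt"
        "net"
    )

    // useDDEVSiteDNS reports whether the public wildcard record for the
    // given hostname resolves to 127.0.0.1, i.e. whether public DNS can
    // be used instead of editing /etc/hosts.
    func useDDEVSiteDNS(hostname string) bool {
        addrs, err := net.LookupHost(hostname)
        if err != nil {
            return false // offline or DNS blocked: fall back to /etc/hosts
        }
        for _, a := range addrs {
            if a == "127.0.0.1" {
                return true
            }
        }
        return false
    }

    func main() {
        if useDDEVSiteDNS("myproject.ddev.site") {
            fmt.Println("using the public *.ddev.site wildcard DNS")
        } else {
            fmt.Println("prompting for permission to modify /etc/hosts")
        }
    }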

@rickmanelius
Contributor Author

We're now using the official Drupal 8 release here #503. We probably want to start doing the same for WordPress and Drupal 7.

@rickmanelius
Contributor Author

An additional need for this:

  • Properly configure *.ddev.site with the appropriate records on Route 53.

@beathorst

Is there any workaround for now? For us, having this feature is necessary.

@rfay
Member

rfay commented Mar 16, 2018

@beathorst to help us prioritize, can you please say why it is necessary for you to have the DNS vs /etc/hosts feature? For most people it's just an annoyance that it requires /etc/hosts to be manipulated.

We were wondering if maybe you were talking about the ability to have multiple hostnames point to one project? That's in #620 and it is high priority. If that's what you're after we'd love to have you chime in there with the reasons it's important to you.

@beathorst

@rfay thanks for your fast answer.
Well, it is necessary because for us it is a showstopper for adopting ddev across the whole company.
I would like to use it and get rid of our Vagrant boxes, but we have a lot of TYPO3 projects with lots of sites in one installation. Every site needs its own (sub)domain; otherwise only one of the sites inside the TYPO3 installation is reachable.

@beathorst

Sorry, you are right: it is #620 that helps me.

@rfay
Member

rfay commented Apr 16, 2018

I suspect we're never going to do this. We're all used to the /etc/hosts manipulation, and people like that it works offline. The only thing we really have to do is provide a tool that makes this easier for Windows users.

@Schnitzel

We're all used to the /etc/hosts manipulation

And to handle subdomains, we tell people to just create the subdomains in config.yml?

@rfay
Member

rfay commented Apr 16, 2018

You can already do subdomains easily (they're simulated subdomains, but they work fine). See the docs about additional_hostnames, which gives exactly this example (subdomains for multiple languages). You can go to any level in the dotted hostnames there; they're certainly not DNS subdomains, but they behave the same from a name-resolution perspective.
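For example, in the project's .ddev/config.yaml (a sketch; "mysite" and the language subdomains are illustrative, following the docs example):

    name: mysite
    additional_hostnames:
      - fr.mysite   # reachable as fr.mysite.<project TLD>, e.g. fr.mysite.ddev.local
      - de.mysite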

@Schnitzel

So I think there is now a bit of confusion. There is

@rfay saying:

I suspect we're going to never do this.

and @rickmanelius saying:

To that end, let's default to a public DNS entry for *.ddev.site as we had in our previous PR.

Can we get some clarity on this?

From my side, I'm fine with not having a public DNS and relying on /etc/hosts if:

@rfay
Member

rfay commented May 21, 2018

@SqyD

SqyD commented Jun 20, 2018

May I suggest an alternative approach, which I used for the drupaltrainingday event (in Dutch, sorry):

  • Serve a proxy auto-configuration (PAC) file from the ddev router.
  • Populate it like this so it configures the browser to use the ddev router IP and port for all ddev-managed domains.
  • Provide per-browser instructions on how to configure this (3 steps max). (permanent)
  • OR: start a Chrome/Chromium browser with the --proxy-pac-url command-line option (one time)

This avoids hosts file modifications, doesn't require any elevated permissions, and would allow the use of any domain without adding any services or overhead.
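Such a PAC file is only a few lines. A sketch (PAC files are JavaScript by definition; the router address and port here are assumptions, not ddev's published values):

    // proxy.pac: send every *.ddev.local host through the ddev router.
    function FindProxyForURL(url, host) {
      if (dnsDomainIs(host, ".ddev.local")) {
        return "PROXY 127.0.0.1:80"; // assumed router address:port
      }
      return "DIRECT"; // everything else bypasses the proxy
    }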

@rickmanelius
Contributor Author

While we didn't quite get to trying a public DNS lookup before modifying /etc/hosts, we do have FQDN support, which allows both changing the naming convention and using public DNS records that point back. To that end, I'm suggesting we close this thread. The conversation about switching the default TLD pattern was had here and can be re-opened. The desire to default to public DNS with /etc/hosts as a fallback is still on the table, but I'm wondering if the FQDN support already makes enough inroads toward it.

@rickmanelius
Contributor Author

Actually, in re-reading my notes, I think I vote to keep this open, because we still want to address the change from *.ddev.local to *.ddev.site, which would be a prerequisite to swapping out the default behavior if we proceed.

@rfay
Member

rfay commented Oct 23, 2018

From https://letsencrypt.org/docs/certificates-for-localhost/

You might be tempted to work around these limitations by setting up a domain name in the global DNS that happens to resolve to 127.0.0.1 (for instance, localhost.example.com), getting a certificate for that domain name, shipping that certificate and corresponding private key with your native app, and telling your web app to communicate with https://localhost.example.com:8000/ instead of http://127.0.0.1:8000/. Don’t do this. It will put your users at risk, and your certificate may get revoked.

@rfay rfay removed this from the v1.6.0 milestone Jan 11, 2019
@rfay rfay added Prioritized We expect to do this in an upcoming release and removed hibernate labels May 8, 2019
@rfay
Member

rfay commented May 15, 2019

Now that we have mkcert for trusted local https, it's not important to have a "real" certificate. That means we have plenty of flexibility here.

A simple option is to:

  • Make the ddev domain configurable (use ddev.site instead of ddev.local; allow a transition for this)
  • CNAME *.ddev.site to a name like "ddev.hostsfile".
  • ddev.hostsfile is in the local /etc/hosts, placed there by ddev, pointing to 127.0.0.1 (or the correct IP on Docker Toolbox); see the sketch below.
  • Look up the name before acting on /etc/hosts. If it resolves, go with it.
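The /etc/hosts piece of that scheme would be one ddev-managed line (a sketch using the hypothetical name from the bullets above):

    127.0.0.1   ddev.hostsfile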
