dns servers hardcoded in generic/ubuntu1804 #54

Open
furlongm opened this issue May 30, 2019 · 27 comments · May be fixed by #59

@furlongm

furlongm commented May 30, 2019

The following servers are hardcoded in the generic/ubuntu1804 image:

# systemd-resolve --status
Global
         DNS Servers: 4.2.2.1
                      4.2.2.2
                      208.67.220.220

Wouldn't it be expected that the local DNS servers supplied by DHCP take priority over these? When running in an environment where access to external DNS servers is blocked, this image requires further work to use, rather than working out of the box like, say, bento/ubuntu-18.04 or ubuntu/bionic64.
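
A quick way to see where these entries come from is to check the usual configuration spots; this is only a sketch, assuming the image injects them via systemd-resolved or netplan (the exact file names in the image may differ):

# Places an image might bake in resolvers (assumed locations):
grep -r "DNS" /etc/systemd/resolved.conf /etc/systemd/resolved.conf.d/ 2>/dev/null
grep -r -A3 "nameservers" /etc/netplan/ 2>/dev/null
# The per-link servers further down in `systemd-resolve --status` are the DHCP-learned ones.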

@ladar
Member

ladar commented May 31, 2019

See #11 for a long discussion on this topic and possible fix.

@furlongm
Author

If we get an IP address from DHCP, can't we expect to get functional DNS servers from DHCP too?

A quick check shows I have 37 other boxes, and every single one of those gets DNS servers from DHCP. The only thing I need to modify in my typical Vagrantfile is the box name, and things work as expected. The DHCP-assigned DNS servers are not "random", they are known good for the network the box is launched on.

These boxes are hardly "generic" if they override the (sane) defaults that most other boxes use and require users to modify their Vagrantfiles in order to get DNS working.

Would it make sense for the default to be to use DHCP supplied DNS servers, and to provide an override for those who require/prefer that? The override could be triggered by setting an environment variable, e.g.:

Vagrant.configure(2) do |config|
  ...
  if ENV['OVERRIDE_DHCP_DNS_SERVERS'] == 'true'
    config.vm.provision 'shell', path: 'override-dhcp-dns-servers.sh', args: '4.2.2.1 4.2.2.2 208.67.220.220'
  end
  ...
end

where override-dhcp-dns-servers.sh applies the necessary network changes after the DHCP DNS servers have been set up.
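
For illustration, here is a minimal sketch of what such an override-dhcp-dns-servers.sh could look like on an Ubuntu 18.04 guest. It is hypothetical and assumes systemd-resolved; it simply turns the script arguments into the global DNS setting:

#!/bin/sh
# Hypothetical override-dhcp-dns-servers.sh (sketch): force the given
# servers as global DNS via a systemd-resolved drop-in, so they take
# priority over the DHCP-supplied ones.
set -e
mkdir -p /etc/systemd/resolved.conf.d
printf '[Resolve]\nDNS=%s\n' "$*" > /etc/systemd/resolved.conf.d/90-override.conf
systemctl restart systemd-resolved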

This would provide a means of giving users the expected defaults, with an optional override for situations where opinionated provisioning is required.

@ladar
Member

ladar commented Jun 3, 2019

The DNS servers provided by default are "known good" public DNS servers (Level 3 and OpenDNS). That being said, for the OSes that support it, we could set up the hard-coded DNS servers as fallbacks, but that will take some effort, as every distro handles network configuration differently, and not all of them support a "fallback" entry.
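
On distros that use systemd-resolved, that behaviour already exists as the FallbackDNS option, which is only consulted when no other (e.g. DHCP-supplied) servers are known. A sketch of what the image could ship instead of hard DNS entries:

# Sketch: register the current hard-coded servers as fallbacks only.
mkdir -p /etc/systemd/resolved.conf.d
cat > /etc/systemd/resolved.conf.d/99-fallback.conf <<'EOF'
[Resolve]
FallbackDNS=4.2.2.1 4.2.2.2 208.67.220.220
EOF
systemctl restart systemd-resolved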

I'd then simply need to add an override to the configuration for the magma builds, which I don't want using the DHCP servers (various unit tests fail if the DNS server provided via DHCP responds to queries for a non-existent domain with an IP, which is why I hard-coded the servers in the first place).

@netsec

netsec commented Jun 3, 2019

I vote we use Unbound
It uses a built-in list of authoritative nameservers for the root zone (.), the so-called root hints. On receiving a DNS query it will ask the root nameservers for an answer and will in almost all cases receive a delegation to a top level domain (TLD) authoritative nameserver. It will then ask that nameserver for an answer.
It will recursively continue until an answer is found or no answer is available (NXDOMAIN). For performance and efficiency reasons that answer is cached for a certain time (the answer's time-to-live or TTL). A second query for the same name will then be answered from the cache. Unbound can also do DNSSEC validation.

   To use a locally running Unbound for resolving put

         nameserver 127.0.0.1

   into resolv.conf(5).

   If authoritative DNS is needed as well using nsd(8), careful setup is required because authoritative nameservers and resolvers are using the same port number (53).
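
A rough sketch of that setup on an Ubuntu guest (assuming the stock unbound package; note that on 18.04 /etc/resolv.conf is normally a symlink managed by systemd-resolved, so replacing it like this is a simplification):

# Sketch: run unbound locally as a recursive resolver and point the system at it.
apt-get install -y unbound
systemctl enable --now unbound
rm -f /etc/resolv.conf
echo "nameserver 127.0.0.1" > /etc/resolv.conf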

@netsec

netsec commented Jun 3, 2019

or this works too...

In the Vagrantfile config format, the one which starts with:

Vagrant.configure("2") do |config|

add:

config.vm.provider :virtualbox do |vb|
  vb.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
end

If you're using Vagrant 1.1+, you can append this at the end of the file:

Vagrant.configure("2") do |config|
  config.vm.provider :virtualbox do |vb|
    vb.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
  end
end

@furlongm
Author

furlongm commented Jun 3, 2019

On networks that block access to external DNS servers, unbound takes a long time to start (~30+ seconds until it times out trying to update the trust anchors), and unbound also cannot contact the root servers. So unless the DHCP-supplied DNS server is added as the primary forwarder, unbound suffers from the same issue as described above.
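
(If unbound were used anyway, making it usable on such networks would mean forwarding to whatever resolver DHCP handed out, roughly like the sketch below, where 192.168.1.1 is just a placeholder for the DHCP-supplied server.)

# Sketch: forward all queries to the DHCP-supplied resolver instead of
# recursing to the root servers (192.168.1.1 is a placeholder).
cat > /etc/unbound/unbound.conf.d/forward.conf <<'EOF'
forward-zone:
    name: "."
    forward-addr: 192.168.1.1
EOF
systemctl restart unbound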

Having the hardcoded public DNS servers as fallbacks seems to be a good solution, and maintains "compatibility" with most other vagrant boxes.

@netsec

netsec commented Jun 3, 2019

Walking the line of protecting against metadata leakage (I'm going to be extreme, but that leakage can lead to someone's death -- be it warranted or not) is a tremendous sacrifice for what, the 30 seconds you mentioned? Unbound uses bridges and AWS relays to protect confidentiality, similar to Magma's libdime / DMTP or Tor.

@netsec

netsec commented Jun 3, 2019

A DNS query should execute similarly to how a TLS 1.2 cipher is negotiated and decided upon.

@aeikum

aeikum commented Jun 3, 2019

I noticed today that the current DNS setup in these boxes results in bad domains resolving to a bogus search page hosted by Level3 instead of a proper NXDOMAIN. If nothing else happens, it would at least be nice to remove this crummy DNS from the list.
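
An easy way to check whether a given resolver does that kind of NXDOMAIN redirection is to query it directly for a name that should not exist (a sketch; the name below is just a placeholder):

# A clean resolver answers NXDOMAIN; a redirecting one returns an A record.
dig +noall +comments +answer this-name-should-not-exist.example.com @4.2.2.1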

@furlongm
Author

furlongm commented Jun 3, 2019

The main point about unbound was that it does not solve the issue of using public DNS servers on private networks that have no access to the hardcoded DNS servers. In fact unbound suffers from the same issue, as the user cannot reach the root servers.

@nvtkaszpir

Maybe provide additional boxes without the hardcoded DNS entries?

@dw

dw commented Jun 27, 2019

The crux of the original ticket appears to be:

As for the DNS issue you mentioned, I setup the default DNS servers, because if I let random DNS get assigned via DHCP, I sometimes end up with servers that return responses for domains which don't exist. And that in turn breaks several of my unit tests, and the unit tests for some of my dependencies. So I went with the 'known good' by default strategy.

It seems these images have a surprising (and for all intents and purposes broken) configuration to work around a problem with one site out of the ~493k (according to Vagrantcloud) that may be running it. It is not possible to consider the hard-coded addresses known-good without knowing the state of all firewalls they will ever find themselves running behind.

That generic public resolvers can be mixed into a pool of targets alongside private resolvers means these images are at risk of functioning incorrectly in any environment that has a private or split view of DNS -- such as in nested virtualization on Amazon EC2, GCE, or within the network of almost every large company.

In my case, I work on a tool that supports .box files, and expects DNS servers offered by DHCP to be used as could be reasonably expected from any guest image. It is explicitly not the tool's job to damage images before the user has their own opportunity, so working around this through mutation is not an option. This issue means the tool must either print a loud warning when running these images, or DNAT the guest's traffic away from random third party nameservers just to ensure it will boot correctly in all environments, and resolve names correctly.

Using DNAT is not a real option -- it breaks guests who legitimately must contact those IP addresses, which leaves printing misconfiguration warnings.

As things stand now, every DNS lookup from the guest is being transmitted in quadruplicate: once to the internal DNS, twice to Level 3, and once to OpenDNS. Each of these administrative domains has the opportunity to return a conflicting answer for the same query, or have its traffic dropped, presumably inducing random timeouts for some subset of users.

Please reconsider the existing configuration, as it doesn't make any sense.

Thanks

@furlongm linked a pull request on Jul 9, 2019 that will close this issue
@gildas

gildas commented Sep 24, 2019

Just my $.02....

At my company, DNS 4.2.2.1 and 4.2.2.2 are blocked by our outbound firewall for security reasons.
I have heard we are not the only ones...

@qmfrederik

Another 2 cents:

I was seeing random DNS-related issues (domains which could not be resolved, in particular domains hosted by Ubuntu). It turns out that 4.2.2.1 and 4.2.2.2 are not good default options for us.

Additionally, Level 3 (the company that hosts the 4.2.2.1/2 DNS service) doesn't seem to advertise this as a free-for-all DNS server.

Either not hard-coding any DNS servers at all, or using the Google (8.8.8.8/8.8.4.4) DNS servers, would be a better option for me.

@skyscooby

Just out of curiosity, what is driving people to use these boxes instead of the official Ubuntu ones? Is ubuntu/bionic64, for example, broken? It's pretty clear some less-than-generic decisions have been made by the core developer of these boxes, and it's not certain we are going to get fixes in any timely manner.

@nvtkaszpir

Just out of curiosity, what is driving people to use these boxes instead of the official Ubuntu ones? Is ubuntu/bionic64, for example, broken? It's pretty clear some less-than-generic decisions have been made by the core developer of these boxes, and it's not certain we are going to get fixes in any timely manner.

The official images support only the VirtualBox provider, while robox supports many more providers, such as vmware_desktop, virtualbox, parallels, libvirt, and hyperv.

@gildas

gildas commented Sep 27, 2019

Switched back to the bento boxes... even if they are a tad behind, at least they work.

@albers

albers commented Oct 2, 2019

Our internal DNS servers add resolution for several internal domains, which obviously are not available on external DNS servers. Therefore I cannot use this box.

@nvtkaszpir

Just add an override for this, something like here:
https://github.com/nvtkaszpir/workstation/blob/master/Vagrantfile#L49
and script
https://github.com/nvtkaszpir/workstation/blob/master/scripts/fix.generic-ubuntu-dns.sh
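
The linked script isn't reproduced here, but the general idea is a provisioner that strips the baked-in resolvers and lets the DHCP-supplied ones take over again. A rough sketch, assuming the servers were injected via systemd-resolved configuration (not the linked script itself):

# Sketch: drop hard-coded resolvers and let DHCP-supplied DNS win again.
sed -i 's/^DNS=/#DNS=/' /etc/systemd/resolved.conf
rm -f /etc/systemd/resolved.conf.d/*dns*.conf
systemctl restart systemd-resolved
systemd-resolve --status   # verify only DHCP-supplied servers remain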

@skyscooby

@ladar Why not change the images back to the generally expected generic behaviour for Vagrant images? Then package a script like the one mentioned by @nvtkaszpir into these images, so you can easily add a line to your Vagrantfile to change the DNS behaviour as desired for your specialized case.

It's not really fair to build a fancy splash page advertising your images for public consumption and not be somewhat accountable to the needs of the community.

@albers

albers commented Oct 2, 2019

@nvtkaszpir This worked, thanks a lot.
But I still do not see why a box called "generic" should require such a hack.

@nvtkaszpir

The box name "generic" is pretty unfortunate.

@mloskot

mloskot commented Oct 31, 2019

@gildas

At my company, DNS 4.2.2.1 and 4.2.2.2 are blocked by our outbound firewall for security reasons.

I confirm

@qmfrederik

or using the Google (8.8.8.8/8.8.4.4) DNS servers would be better options for me

I can confirm this too.

@skyscooby

Just out of curiosity what is driving people to using these boxes instead of the Official Ubuntu ones?

Up to date with Hyper-V support.


In #11 (comment) I posted my DNS workaround which has been working well with generic/ubuntu1904 for me.

@kawaway

kawaway commented Mar 20, 2020

#11 (comment)

@brokep

brokep commented Dec 15, 2020

What is a good privacy-aware DNS provider we can all agree on here?
Obviously Google is out.

@furlongm
Author

As noted above, whatever public DNS server is chosen, it may be inaccessible for some users. Relying on the DHCP-supplied DNS servers is probably what will work for most people. So the best privacy-aware DNS servers may just be your own?

@mcarbonneaux

mcarbonneaux commented Jun 22, 2023

Your boxes are very fine, and for many OSes! But the hardcoding of the DNS is less fine...

Most of the time, the DNS on the host running Vagrant works out of the box, and a box that works like that, without hardcoded DNS, is what "generic" means... By hardcoding the DNS you force a specific usage of the box, for internet-connected usage only...

I use Vagrant for many scenarios, and not all of them are connected to the internet; in reality, the majority of the time I use only private DNS (I'm connected to my corporate network)...

The fact is that if the DNS servers are hardcoded, we are obliged to modify the network configuration to remove them almost all the time...
If we rely on DHCP-supplied DNS, we only need to modify the network when the host DNS is not usable for a specific test (a Vagrantfile that manages its own DNS, for example)...

I think the ideal way is to rely by default on the DNS sent by DHCP... Plus, plugins exist to manage more complex DNS usage, like https://github.com/BerlinVagrant/vagrant-dns.

And regarding "I'd then simply need to add an override to the configuration for the magma builds, which I don't want using the DHCP servers (various unit tests fail if the DNS server provided via DHCP responds to queries for a non-existent domain with an IP, which is why I hard-coded the servers in the first place)": if you have already configured those DNS servers on your Vagrant host, things would normally work over DHCP without hardcoding them in the box.
