Thank you and dns.nextdns.io #7
my results:
my results:
`cat /var/www/html/DOHservers/DOHipv4.txt | grep 194.110.115.97`
`cat /var/www/html/DOHservers/DOHipv4.txt | grep 45.128.133.120`
Both addresses are available in the result file. The DNS queries to obtain the IP addresses are resolved using a recursive unbound resolver; the resulting IP address is then verified using a reverse lookup query on the OpenDNS server(s). I live in Belgium. Is it possible you get different results (IP addresses) in your region? |
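The grep checks above can be made a little stricter with a whole-line, literal match, so a dotted IP cannot accidentally match a longer address (in a plain grep, each `.` matches any character). A small self-contained sketch; the sample file content here is made up:

```shell
# Build a tiny stand-in for DOHipv4.txt (real file lives in the repo output)
cat > /tmp/DOHipv4.txt <<'EOF'
45.90.28.0
194.110.115.97
45.128.133.120
EOF

found=0
for ip in 194.110.115.97 45.128.133.120; do
  # -F = literal pattern, -x = match the whole line, -q = quiet
  if grep -Fxq "$ip" /tmp/DOHipv4.txt; then
    found=$((found + 1))
  fi
done
echo "matched $found of 2"
```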
I live in Switzerland. 195.186.1.110 is a DNS server of one of the main ISPs in Switzerland.
With 8.8.8.8
So it seems this dns.nextdns.io has a lot of different IPs; however, there might be some basic principle I'm missing? At the moment I just added this hostname in custom unbound options.
|
When looking at my unbound logs, I can clearly see the resolvers that finally provide my answer (ns1.nextdns.io and ns2.nextdns.io).
Step 1: find the IP for ns1.nextdns.io.
Step 2: look up the addresses for dns.nextdns.io, using the correct nameserver.
You can repeat the test using the second nameserver (ns2.nextdns.io = 45.90.30.1); you'll get the same results. My conclusion (assuming you get the same results for the NS servers): your provider (195.186.1.110) is changing the results into something they want you to use, as opposed to the real thing. This is a known practice some providers use to change or block access to some resources on the internet. A typical example would be blocking access to torrent sites (piratebay), mandatory in some countries. It appears your provider is using this method to let you believe you're actually using dns.nextdns.io, but you're really using something else, or there is an instance in between you and NextDNS manipulating some results. |
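The two-step lookup described above might be sketched like this (an assumed flow, not the exact commands from the thread). The live dig calls are shown in comments, and a simulated answer stands in for network output so the filtering step can be demonstrated:

```shell
# Live version (needs network):
#   ns_ip=$(dig +short ns1.nextdns.io A | head -n 1)
#   dig +short dns.nextdns.io A @"$ns_ip"
#
# dig +short can mix CNAMEs and A records in its output, so filtering for
# IPv4-looking lines is useful. Simulated dig output:
answer='dns.nextdns.io.
45.90.28.0
45.90.30.0'
ips=$(printf '%s\n' "$answer" | grep -E '^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$')
echo "$ips"
```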
This is quite interesting. I'm not very familiar with the concept, but whether using a recursive DNS server or even asking the nameservers directly, might this be due to anycast DNS (and some round robin?)
I then went to https://dnslookup.online/ and various IPs were reported, often very different. In any case, using my own recursive DNS server is probably a good solution I should consider. |
Even more interesting: https://dnschecker.org/#A/dns.nextdns.io Adding |
Strange, different answers from the same nameservers in different regions... There is really nothing I can do about this, since I always get the same result in my region, so it's probably a very good idea to add your unbound config.
Since I don't know the reason, I've opened a topic regarding this problem, you can read the question and possible answers here. |
Indeed, I also believe in this case the best solution is the NXDOMAIN one. Thank you very much for your time and, again, for the list you are maintaining. |
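The NXDOMAIN solution mentioned above can be expressed in unbound as a local-zone override; a minimal sketch (the zone name comes from this thread, the directive is standard unbound.conf syntax):

```
server:
    # answer NXDOMAIN for this name and everything below it
    local-zone: "dns.nextdns.io" always_nxdomain
```

With `always_nxdomain`, unbound answers NXDOMAIN itself instead of forwarding the query, so clients can't learn the real addresses through this resolver.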
I see different IPs than both of you. Are you willing to share the scripts you have written to parse the source lists into your four lists, so that individual users can create "localized" lists? |
I'm currently in the process of consolidating and cleaning up the scripts, as the code isn't very readable (read: professional) in its current state. For now you will have to download the lists as-is and add the IP addresses for dns.nextdns.io for your location. The function I currently use to retrieve the IP addresses:
The resolvers I use are local unbound listen addresses on port 5552; you need to change these into something that suits your needs. Keeping your request in mind... |
In order to allow you to build your localized IP lists, I've added a sqlite3 database to the repository. The database contains the information I use to execute the dig requests. A description of the content can be found in the updated manual. Hope this helps a bit... |
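As a sketch of reading such a database with the sqlite3 CLI: the table and column names below are invented for illustration, since the real schema is described in the repository's manual, so adjust accordingly:

```shell
# Demo database with an assumed (not actual) schema
db=/tmp/doh-demo.db
rm -f "$db"
sqlite3 "$db" <<'EOF'
CREATE TABLE hostinfo (domain TEXT, timestamp INTEGER);
INSERT INTO hostinfo VALUES ('dns.nextdns.io', strftime('%s','now'));
EOF
domains=$(sqlite3 "$db" "SELECT domain FROM hostinfo;")
echo "$domains"
# Each domain could then be fed to dig to build a localized IP list:
#   for d in $domains; do dig +short "$d" A; done
```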
Thank you very much. I was not expecting anything anytime soon. It will be this coming weekend before I can do anything with it on my end.
|
This is what I have come up with so far: https://github.com/jrschat/PublicStuff/blob/master/DOH_database_script.sh |
Some of the DOH lists have been updated recently; as a result, the content of the database changes. I noticed (checked the database) that some entries are no longer in the lists but still have an entry in the database (intentional, hence the timestamp field). I would recommend using the timestamp field to build the lists, as adding domains (and the resulting IPs) with an expired timestamp can (might or might not) result in blocking IPs you don't want to block. Examples on how to use the timestamp field in sqlite3 queries are provided in the manual (section 5.2). |
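The timestamp filtering could look roughly like this (again, the table/column names and the 30-day window are assumptions for illustration; the manual has the real queries):

```shell
# Demo: keep only rows whose timestamp is recent enough
db=/tmp/doh-ts-demo.db
rm -f "$db"
sqlite3 "$db" <<'EOF'
CREATE TABLE hostinfo (domain TEXT, timestamp INTEGER);
INSERT INTO hostinfo VALUES ('current.example', strftime('%s','now'));
INSERT INTO hostinfo VALUES ('expired.example', strftime('%s','now') - 90*24*3600);
EOF
# Select entries last seen within the past 30 days
live=$(sqlite3 "$db" \
  "SELECT domain FROM hostinfo WHERE timestamp >= strftime('%s','now') - 30*24*3600;")
echo "$live"
```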
I think I did it right.
|
I have a few more changes I will probably make before I call it done but I think I am getting close. Thanks again for looking at it and for making the database. |
Looked at your script. `dig +short adblock.lux1.dns.nixnet.xyz` -> `dig +short lux1.nixnet.xyz.` -> The IPs are the same because adblock.lux1.dns.nixnet.xyz. is a CNAME for lux1.nixnet.xyz. The entries in the cnameinfo table are only useful for users that wish to create a DNS-based list for use with software such as Pi-hole. The end result of your script (including the domains in the cnameinfo table) is the same, but you're doing unnecessary dig requests. |
Interestingly, I get a different number of IPs (one more if I include the CNAMEs) in my list if I include the "duplicate" lookups. |
The only reason I can think of for this to happen is a query timeout on one of the digs. What is your resolver (@192.168.0.8)? |
It is my development Pi-hole instance that points upstream to my production pair, which are both recursive unbound resolvers. Changing it to query one of them directly and increasing the tries to 3 brought the number up to the same. I haven't turned my IDS rules off to see if my full list of numbers will equal yours, since I am blocking a couple of TLDs. |
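On the retry tuning: dig exposes this directly via `+tries=N` and `+time=SECONDS`, e.g. `dig +short +time=2 +tries=3 dns.nextdns.io A @192.168.0.8`. A generic retry wrapper for flaky lookups might be sketched as follows (function and variable names are mine):

```shell
# Retry a command up to $1 times; succeed as soon as it succeeds once.
retry() {
  local n=$1; shift
  local i=1
  while true; do
    if "$@"; then return 0; fi       # command succeeded
    if [ "$i" -ge "$n" ]; then return 1; fi  # attempts exhausted
    i=$((i + 1))
  done
}
# Usage idea: retry 3 dig +short dns.nextdns.io A @192.168.0.8
```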
You shouldn't use Pi-hole as the resolver. If there is an entry in a blocklist (or a regex), you get 0.0.0.0 as a result, which you then remove with your script (`grep -E -v '^0..{0,11}$' DOHdup.txt > DOHip4.txt`). You should use a dig command that directly queries unbound. The IDS may also impact the results, if there are active rules that block specific TLDs.
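The effect of filtering out Pi-hole's 0.0.0.0 answers can be illustrated with a fully anchored, escaped pattern (a slightly stricter variant of the grep quoted above, so that other lines starting with 0 survive; the sample data stands in for DOHdup.txt):

```shell
# Sample raw results containing Pi-hole "blocked" answers
cat > /tmp/DOHdup.txt <<'EOF'
0.0.0.0
45.90.28.0
0.0.0.0
194.110.115.97
EOF
# Drop only exact 0.0.0.0 lines
grep -E -v '^0\.0\.0\.0$' /tmp/DOHdup.txt > /tmp/DOHip4.txt
cat /tmp/DOHip4.txt
```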
Cool, that brought it closer, up to 271 from 268. My IDS was catching a few. Four more to go to match quantity. Would you qualify any of these as query errors? |
Doesn't look like there are errors in your result, using today's db. Earlier, I listed the function I use to get the IPs; the result is stored in an array, which you can then test to see if it contains any elements. `dig +short` doesn't always return a result -> empty result. This issue is all about regional differences, so it's very possible you will never get the same result (count). Also, the db is always today's result of the script (the script runs at 06:00 CET), while the IP lists are always one day old (yesterday's result); this gives me the option to make a correction if something is faulty in the resulting lists. |
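The empty-answer check mentioned above might be sketched like this in bash (`lookup_ips` here is only a stand-in for the real dig call, so the length check can be demonstrated without network):

```shell
# Stand-in for: dig +short "$1" A @resolver — "$2" is the pretend answer
lookup_ips() {
  printf '%s' "$2"
}
# An empty answer yields a zero-length array
mapfile -t result < <(lookup_ips dns.nextdns.io "")
if [ "${#result[@]}" -eq 0 ]; then
  echo "empty answer for dns.nextdns.io, skipping"
fi
```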
Cool, I think I am happy with what I have for my purposes then. Now just to figure out how to script the push of the list back into GitHub from my Linux box.
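One way to script that push (a sketch; paths are placeholders, and a local bare repository stands in for GitHub here so the commands can run without network — against real GitHub you would use an SSH key or a token remote instead):

```shell
remote=/tmp/demo-remote.git   # stand-in for the GitHub remote
work=/tmp/demo-work
rm -rf "$remote" "$work"
git init --bare -q "$remote"
git clone -q "$remote" "$work"
echo "45.90.28.0" > "$work/DOHip4.txt"
git -C "$work" add DOHip4.txt
git -C "$work" -c user.email=demo@example.com -c user.name=demo \
  commit -qm "update DOH list"
git -C "$work" push -q origin HEAD
```

In a cron job the clone would be done once, with subsequent runs only committing and pushing when the generated list actually changed.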
Latest entry is over 2 months old. |
Thank you for the consolidated list you provide, this is very useful!
Server dns.nextdns.io is not blocked (although it is listed in https://github.com/curl/curl/wiki/DNS-over-HTTPS).
Is it deliberate for some reason?