admin panel ajax error #130
Just from playing about with this.. reading the log file line by line with |
It is definitely slower than the current read-everything-at-once approach. The bigger question might be: do you even need the entire log file in memory in the first place? If you are parsing per line you can possibly ignore lines that do not match what you are looking for. You could also stop parsing the log when you come across timestamps of a certain age. I haven't played around with that yet. Another possibility might be to use
You can specify an offset and a length for |
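For anyone who wants to experiment with this, here is a minimal sketch of the line-by-line idea, assuming the standard dnsmasq log format; the function name and cutoff logic are illustrative only, not the actual data.php code. (For the offset idea, PHP's file_get_contents() does accept optional offset and length arguments.)
<?php
// Illustrative sketch only (not the actual data.php implementation): read the
// log line by line instead of loading the whole file into memory at once.
// Assumes the usual dnsmasq log format, e.g.
// "Aug 29 20:15:01 dnsmasq[1234]: query[A] example.com from 192.168.1.10".
function countRecentQueries($logFile, $maxAgeSeconds = 86400) {
    $handle = fopen($logFile, 'r');
    if ($handle === false) {
        return 0;
    }
    $cutoff  = time() - $maxAgeSeconds;
    $queries = 0;
    while (($line = fgets($handle)) !== false) {
        // Ignore lines that do not match what we are looking for.
        if (strpos($line, ': query[') === false) {
            continue;
        }
        // dnsmasq timestamps carry no year; strtotime() assumes the current one.
        $timestamp = strtotime(substr($line, 0, 15));
        if ($timestamp !== false && $timestamp < $cutoff) {
            continue; // older than the window the dashboard cares about
        }
        $queries++;
    }
    fclose($handle);
    return $queries;
}
echo countRecentQueries('/var/log/pihole.log') . " matching queries\n";
Memory use stays flat regardless of log size, at the cost of the slower pass over the file mentioned above. |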
Can you move the whole file to a different language, like Java or Python? I don't need pretty graphs; what interests me is only the output. It could even be in txt format. |
The next iteration will have a full API, but for now there is https://github.com/pi-hole/pi-hole#api that will provide you with some basic information. Does that help? |
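If plain-text output is really all that's needed, that API can already be driven from a short script. A rough sketch, assuming api.php returns the summary JSON when queried without parameters and that the interface is reachable at pi.hole; both the URL and the field names are assumptions that may differ between versions:
<?php
// Rough sketch: query the admin API and print whatever summary it returns as
// plain text. URL and field names are assumptions; adjust for your install.
$url  = 'http://pi.hole/admin/api.php';
$json = file_get_contents($url);
if ($json === false) {
    fwrite(STDERR, "Could not reach the API at $url\n");
    exit(1);
}
$data = json_decode($json, true);
if (!is_array($data)) {
    fwrite(STDERR, "Unexpected response: $json\n");
    exit(1);
}
foreach ($data as $key => $value) {
    // e.g. "dns_queries_today: 12345"
    echo $key . ': ' . (is_scalar($value) ? $value : json_encode($value)) . "\n";
}
Run it from the command line (e.g. php api-summary.php > summary.txt) and you get the kind of plain txt output asked about above. |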
The current API (api.php) gets all its data from the functions in data.php, and data.php will try to read the entire log file into memory at once. So probably not. Is there some recommended way of running a development instance of Pi-hole outside of an actual RPi? E.g. a VirtualBox VM? I would love to give memory-efficient log reading a go. |
It will run on any Debian or CentOS/RHEL environment. So if you'd like, just spin up a virtual Debian or Ubuntu and run the script. There isn't anything left that is truly Raspberry Pi dependent; you can run it on x86/64. Just need |
I do all my dev work on my live Pi, despite having two or three spares lying around! But then I'm a cowboy like that. Also handy is having an IDE like IntelliJ IDEA (with the PHP plugin) with SFTP folder mapping set up for easy deployment/testing! For this though, you'll either need to |
How do I catch @Mcat12 here so he can help me with the PHP log? |
Hey @xxcriticxx, looking at the size of your log, I think it's most likely an issue on our end, which we're going to work on getting fixed with @Zegnat's kind help! |
@PromoFaux I should mention that I have a few Nest thermostats, and they generate about 15,000 DNS queries per unit per day. |
Ah! Yeah, that's going to build up a lot of log entries! |
Yep, currently a large log file will do that sort of thing, since we haven't quite optimized how we read it yet. |
I don't have much to show because I just did a flush, but in about 10 minutes I got this: |
Suggestion: if you don't get value out of those Nest lookups being in the logs, I'd suggest removing them from the log. A filter for incoming log lines isn't easy as far as I know, but removing the entries afterwards is simple:
Note: the $ prefix marks a command; no prefix is output from the example file. |
And I have to do a full log flush again... |
Let's stop using the web interface to diagnose why your log is so big; we can use the command line easily. To get the top type of query you can do this:
Forwards and replies are probably the top ones. If you're like me, replies outnumber forwards by almost double. Next, check which domains are the top 10 replies:
(I'm kind of curious how quickly that runs when your log is really large, too...) Next you might want to see how many forwards are happening for each reply too: Hope this helps. It is still my opinion that if preventing a device (the Nest) on your network from spamming DNS requests is not possible, then finding out which domains it targets and just removing them from your log is an easy solution. |
I looked into properly filtering incoming dnsmasq log-query messages and found this thread, which pointed out what I figured it would: the job should be done by your syslog facility instead of by the app. http://lists.thekelleys.org.uk/pipermail/dnsmasq-discuss/2015q2/thread.html#9664 -> |
cat /var/log/pihole.log | awk '{ print $5 }' | sort | uniq -c | sort -n
grep reply /var/log/pihole.log | awk '{ print $6 }' | sort | uniq -c | sort -n | tail
grep -c 'nest.com' /var/log/pihole.log
Ah, DNSSEC is enabled. That will account for some of the increased overhead, due to the way DNSSEC follows the chain of trust up to the root nameservers. See 1.5. How Does DNSSEC Change DNS Lookup? You may get some increased performance from raising the timeout variable to a value greater than 300, as the more DNSSEC records there are in cache, the less traversing of the chain is needed to reach the root. Try doubling the value and see if that cuts down the number of forwarded queries? (I do run DNSSEC, but only on my Unbound edge server; I have Pi-hole just doing regular lookups using the Unbound install as the upstream, so the validation role doesn't pollute the Pi-hole records and all the heavy lifting is done by the Unbound process, which I access via a secured channel.) |
@dschaper Now you've lost me. What command do I use to change the DNSSEC value? |
Are you running stock Pi-hole, without any changes to the |
@diginc Beauty log parsing there! |
I'll give that a try. Maybe tomorrow, else during the weekend. |
@dschaper Yes, I am running stock Pi-hole on Ubuntu 16.04 LTS. My router is a Netgear R7000 running firmware DD-WRT v3.0-r29155M kongac (02/25/16). This is the setting in my router: |
Can upstream DNS force DNSSEC on clients? |
If the client is a DNSSEC-aware resolver (not just a normal client in the regular sense that we use it, but an actual resolver like
The test I always use, as goofy as it is, is http://dnssec.vs.uni-due.de/. Upstream can't "force" DNSSEC on clients; it can only return a record with either the AD bit set or unset, indicating trust or no trust. It's up to the clients (and to some extent the resolver queried by the client) to determine what to do next. As for DD-WRT, I personally use OpenWRT (building LEDE), and the build environment I use enforces DNSSEC at the router level and will return NXDOMAIN if any of the certificates fail validation. What causes a lot of the extra lookups is when domains don't have any DNSSEC records and you sometimes end up going up the chain to the root and TLD servers and back down to the authoritative NS server for no really good reason. That's why I offload all the validation to another process and just let Pi-hole handle resolution tasks; then none of the validation traffic shows in the Pi-hole logs. And Unbound is set to such a low log level that only errors show. (If you go for even medium-level logging with DNSSEC enforced you get very large logs, so it's not really recommended unless you are trying to troubleshoot a bad trust anchor or certificate errors along the validation path.) |
My log is about 72 MB. |
For how many hours of logging? |
About 2 or 3 hours, and I had 75,000 DNS queries in that same time, and then the AJAX error. |
Until Network Manager is disabled, we will still have this issue. |
I had this issue. It was resolved by making sure that the owner/group of ALL logs were "www-data" to align with the user in lighttpd.conf. There was one log that was being owned by dnsmasq that caused a denied access error. This is likely to crop up more often as SELinux becomes the default. |
I don't know if it's related, but at ~50k queries the query log shuts down; I presume that as the logs continue to grow it will start to affect the other PHP scripts.
|
Check out #149 and pi-hole/pi-hole#731 for a very alpha preview of a way around this issue... |
@PromoFaux Did something change with the query log in the latest version (Pi-hole version v2.9.5, web interface version v1.4.4.2)? I can't open it, and I only got about 10k queries today. I'm getting the same AJAX error; everything else works as usual. |
@xxcriticxx do you have anything of interest in |
error on line 56 in api.php
|
Yep, your |
My query log worked one version back with 700k queries, no problem. |
Which branch? I remember there was a testing development branch and a db branch, so please let us know if it worked on the previous master or on another branch. (Yes, we've taken out the ability to run development code without some work on the user's part, because we've changed our approach; development code will break now and should not be run as everyday code.) |
last master |
Thanks, and just for edification 26d54cf was the last time line 56 was modified, and that was back in release v1.3. If you have a copy of that 700k log file, we can go back to the code as it was in the previous release and run it through to see if we might be able to find some kind of change that would affect things. We may have asked, but can you run a fresh debug and get us the token so we can see where things are right now? |
DNS Queries Today: 497,362
Log size: 194.5 MB (194,465,830 bytes)
Your debug token is: hf4xby7w5b
Copy of pihole.log:
|
Just a quick note: this also happens on my Pi B+ version 1, with only roughly 21,000 total DNS queries in the file. My problem is that the Raspberry Pi 1 is just so terribly slow...
I get 500 errors instead of the expected results because the backend is not responding in time and FastCGI exceeds its maximum execution time. |
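As a stopgap while the parsing is optimised, one thing worth trying is raising PHP's own execution limit at the top of the script; this is only a sketch of the idea, and any timeout enforced by lighttpd's FastCGI layer is separate and may still cut the request off:
<?php
// Stopgap sketch only: give PHP more headroom while it chews through a large
// log. lighttpd / FastCGI can enforce its own, separate timeout.
set_time_limit(300);              // allow up to 5 minutes of execution time
ini_set('memory_limit', '256M');  // the current code reads the log into memory

// ... the existing log parsing would continue below ...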
@PromoFaux and @dschaper Something happened to my overview: I went from 750k queries to only about 7,500 overnight. I also got a secondary graph showing up that I think was temperature. |
@xxcriticxx There is no way it could have shown temperature since we do not log it. |
@xxcriticxx |
Compare the colors to the summary shown above the diagram 😉 green = DNS Queries Today |
@dschaper Your debug token is: zzpi4btixc |
@DL6ER My DNS queries part is not green; it looks more like gray. |
@xxcriticxx I meant the line color, not the fill style. |
@DL6ER just busting your chops :) |
@xxcriticxx There will be tool tips helping the user to understand the meaning of the graphs. |
@DL6ER Not to mix issues here, but do you have any plans for an advanced search option in the future? |
This thread is 215 comments long. Closing and locking. Please start a new thread with your new question, and please keep the issues to one per thread. Adding more issues to old threads makes it very difficult to track ongoing problems. And please use discourse.pi-hole.net for any feature requests. |
@xxcriticxx commented on Mon Aug 29 2016
I am getting this error while looking at my query logs. Some of the graphs don't appear; they just keep spinning.
DataTables warning: table id=all-queries - Ajax error. For more information about this error, please see http://datatables.net/tn/7