
admin panel ajax error #130

Closed
PromoFaux opened this issue Aug 30, 2016 · 215 comments

@PromoFaux
Member

PromoFaux commented Aug 30, 2016

@xxcriticxx commented on Mon Aug 29 2016

I am getting this error while looking at my query logs. Some of the graphs don't appear; they just keep spinning.

DataTables warning: table id=all-queries - Ajax error. For more information about this error, please see http://datatables.net/tn/7

[screenshots: capture1, capture2]



@PromoFaux
Member Author

Just from playing about with this... reading the log file line by line with fopen/fgets is slow as hell...

@Zegnat

Zegnat commented Aug 30, 2016

It is definitely slower than file or file_get_contents, yes, because you will be working with the file on disk rather than in memory. Picking up a file and dumping it into memory is pretty fast, but with big log files that is probably not what you want.

The bigger question might be: do you even need the entire log file in memory in the first place? If you are parsing per line, you can ignore lines that do not match what you are looking for. You could also stop parsing the log when you come across timestamps of a certain age. I haven't played around with that yet.
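
A minimal sketch of that per-line approach (assuming dnsmasq's standard syslog-style line format; this is not the actual Pi-hole code):

<?php
// Stream the log line by line instead of loading it all at once;
// memory use stays flat regardless of file size.
$cutoff = time() - 24 * 3600;            // ignore entries older than 24 h
$handle = fopen('/var/log/pihole.log', 'r');
if ($handle === false) {
    exit('could not open log');
}
$queries = [];
while (($line = fgets($handle)) !== false) {
    if (strpos($line, ': query[') === false) {
        continue;                        // skip replies, cache answers, etc.
    }
    // dnsmasq lines start with e.g. "Aug 30 12:34:56 dnsmasq[...]"
    $timestamp = strtotime(substr($line, 0, 15));
    if ($timestamp !== false && $timestamp < $cutoff) {
        continue;                        // too old to be of interest
    }
    $queries[] = $line;
}
fclose($handle);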

Another possibility might be to use file_get_contents in a batch-like way. This might be faster than fopen/fgets, from the PHP documentation:

file_get_contents() is the preferred way to read the contents of a file into a string. It will use memory mapping techniques if supported by your OS to enhance performance.

You can specify an offset and a length for file_get_contents to read, so you might be able to read the log file in chunks of x kB.
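
Roughly like this (a hedged sketch; the chunk size and the parsing step are placeholders):

<?php
// Read the file in fixed-size chunks via file_get_contents()'s
// offset/length parameters, holding only one chunk in memory at a time.
$log   = '/var/log/pihole.log';
$step  = 64 * 1024;                      // 64 kB per read
$pos   = 0;
$carry = '';                             // partial line spanning two chunks
while (($chunk = file_get_contents($log, false, null, $pos, $step)) !== false
        && $chunk !== '') {
    $pos  += strlen($chunk);
    $lines = explode("\n", $carry . $chunk);
    $carry = array_pop($lines);          // last piece may be incomplete
    foreach ($lines as $line) {
        // ... parse one complete log line ...
    }
}
// $carry now holds any final unterminated line.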

@xxcriticxx

Can you move the whole file to a different language, like Java or Python?

I don't need pretty graphs; what interests me is only the output. It could even be in txt format.

@dschaper
Member

The next iteration will have a full API, but for now there is https://github.com/pi-hole/pi-hole#api that will provide you with some basic information. Does that help?

@Zegnat

Zegnat commented Aug 31, 2016

The current API (api.php) gets all its data from the functions in data.php, and data.php will try to read the entire log file into memory at once. So probably not.

Is there some recommended way of running a development instance of Pi-hole outside of an actual RPi, e.g. in a VirtualBox VM? I would love to give memory-efficient log reading a go.

@dschaper
Member

It will run on any Debian or CentOS/RHEL environment, so if you'd like, just spin up a virtual Debian or Ubuntu and run the script. There isn't anything left that is truly Raspberry Pi dependent; you can run it on x86/64. You just need apt or yum/dnf to grab the dnsmasq and lighttpd packages along with PHP.

@PromoFaux
Member Author

PromoFaux commented Aug 31, 2016

I do all my dev work on my live Pi, despite having two or three spares lying around...! But then I'm a cowboy like that.

Also handy is having an IDE like IntelliJ IDEA (with the PHP plugin) with SFTP folder mapping set up for easy deployment/testing! For this, though, you'll either need to chown the /var/www/html/admin directory or log in as root.

@xxcriticxx

How do I catch @Mcat12 here so he can help me with the PHP log?

@PromoFaux
Member Author

Hey @xxcriticxx, looking at the size of your log, I think it's most likely an issue on our end, which we're going to work on fixing with @Zegnat's kind help!

@xxcriticxx

@PromoFaux I should report that I have a few Nest thermostats, and they generate about 15,000 DNS queries per unit per day.

@PromoFaux
Member Author

PromoFaux commented Aug 31, 2016

Ah! Yeah, that's going to build up a lot of log entries!

@AzureMarker
Contributor

Yep, currently a large log file will do that sort of thing, since we haven't quite optimized how we read it yet.

@xxcriticxx

I don't have much to show because I just did a flush, but in about 10 minutes I got this:
log-rts14-iad01.devices.nest.com 412
transport03.rts14.iad01.production.nest.com 403

@diginc
Contributor

diginc commented Aug 31, 2016

Suggestion: if you don't get value out of those Nest lookups being in the logs, I'd suggest removing them from the log. A filter for incoming log lines isn't easy as far as I know, but removing the lines afterwards is simple: sed -i '/\.production\.nest\.com/d' /var/log/pihole.log in a cron job running every 5 minutes should purge all those super-noisy domains. So something like:

$ cat /etc/cron.d/nest_dns_cleanup
MAILTO=""
*/5 * * * * root /bin/sed -i '/\.production\.nest\.com/d' /var/log/pihole.log

Note: the $ prefix marks a command; the lines without a prefix are the contents of the example file.

@xxcriticxx

2 hours running and I have 65,722 connections; only about 5,000 are from Nest. I think there is still something more behind this.

[screenshot from 2016-08-31 17-35-51]

@xxcriticxx

And I have to flush the full log again...

@diginc
Contributor

diginc commented Aug 31, 2016

Let's stop using the web interface to diagnose why your log is so big; we can use the command line easily.

To get the top query types, you can do this:

cat /var/log/pihole.log | awk '{ print $5 }' | sort | uniq -c | sort -n

# output:
      4 /etc/hosts
    172 query[PTR]
    242 /etc/pihole/gravity.list
    275 cached
    686 query[AAAA]
   2422 query[A]
   4197 forwarded
  10956 reply

Forwards and replies are probably the top ones if you're like me; my replies outnumber forwards by almost two to one.

Next, check which domains are the top 10 replies:

grep reply /var/log/pihole.log | awk '{ print $6 }' | sort | uniq -c | sort -n | tail

# output (some redacted)
     80 d.v.dropbox.com
     87 redact.ed
    102 redact.ed
    137 plex.tv
    156 daisy.ubuntu.com
    238 redact.ed
    375 oauth.reddit.com
    405 ssl.reddit.com
    755 googleapis.l.google.com
   5262 lots.of.redact.ed

(I'm kind of curious how quickly that runs when your log is really large, too...)

Next, you might want to see how many forwards are happening for each reply: grep -c 'nest.com' /var/log/pihole.log will tell you how often a request to Nest is forwarded instead of answered directly from dnsmasq's cache. If grep -c for a certain domain in your top 10 returns a much higher number than the second command did, then you're not caching very much and are instead constantly forwarding requests for that domain.

Hope this helps. It is still my opinion that if preventing a device (the Nest) on your network from spamming DNS requests is not possible, then finding out which domains it targets and simply removing them from your log is an easy solution.

@diginc
Contributor

diginc commented Aug 31, 2016

I looked into properly filtering incoming dnsmasq log-query messages and found this thread, which pointed out what I figured it would: the job should be done by your syslog facility instead of by the app.

http://lists.thekelleys.org.uk/pipermail/dnsmasq-discuss/2015q2/thread.html#9664 ->
http://unix.stackexchange.com/questions/52860/how-to-exclude-given-lines-in-syslog-ng

@xxcriticxx

xxcriticxx commented Aug 31, 2016

cat /var/log/pihole.log | awk '{ print $5 }' | sort | uniq -c | sort -n
2 query[SRV]
6 config
25 query[DNSKEY]
61 query[DS]
219 Maximum
313 query[PTR]
815 query[SOA]
1091 /etc/pihole/gravity.list
15210 cached
36894 query[AAAA]
113435 query[A]
155530 forwarded
363881 reply

grep reply /var/log/pihole.log | awk '{ print $6 }' | sort | uniq -c | sort -n | tail
5513 time-ios.g.aaplimg.com
5730 ec2-54-162-184-226.compute-1.amazonaws.com
6344 kodi.filmkodi.com
7130 e6858.dscc.akamaiedge.net
7559 log-rts14-iad01.devices.nest.com
7699 gsp10-ssl.ls-apple.com.akadns.net
8328 freedisc.pl
9027 bob-fed.gameloft.com
14526 pop-namer-se-courier.push-apple.com.akadns.net
15118 devices-rts14-production-937225294.us-east-1.elb.amazonaws.com

grep -c 'nest.com' /var/log/pihole.log
43489

@dschaper
Member

dschaper commented Sep 1, 2016

Ah, DNSSEC is enabled. That will account for some of the increased overhead, due to the way DNSSEC follows the chain up to the root nameservers. See "1.5. How Does DNSSEC Change DNS Lookup?". You may see some increased performance from raising the timeout variable to a value greater than 300, since the more DNSSEC records there are in cache, the less traversing of the chain is needed to reach the root.

Try doubling the value and see if that cuts down the number of forwarded queries? (I do run DNSSEC, but only on my Unbound edge server; I have Pi-hole just doing regular lookups with the Unbound install as the upstream, so the validation role doesn't pollute the Pi-hole records, and all the heavy lifting is done by the Unbound process, which I access via a secured channel.)

@xxcriticxx

@dschaper Now you lost me.

What command do I use to change the DNSSEC value?

@dschaper
Member

dschaper commented Sep 1, 2016

Are you running stock Pi-hole, without any changes to the dnsmasq configs? I don't think we enable DNSSEC validation by default; it's a rather overhead-intensive process and is something mostly paranoid security folks like myself enable. It's also possible that one of your clients (192.168.1.1) has DNSSEC validation enabled and is the station requesting the DS and DNSKEY records, which would also account for the high number of forwards in the query-by-type breakdown.

@diginc Beauty log parsing there!

@Zegnat

Zegnat commented Sep 1, 2016

It will run on any Debian or CentOS/RHEL environment. So if you'd like, just spin up a virtual Debian or Ubuntu and run the script.

I'll give that a try, maybe tomorrow, otherwise during the weekend.

@xxcriticxx

@dschaper Yes, I am running stock Pi-hole on Ubuntu 16.04 LTS. My router is a Netgear R7000 running firmware DD-WRT v3.0-r29155M kongac (02/25/16).

These are the settings in my router:

[screenshots: capture1, capture]

@diginc
Contributor

diginc commented Sep 1, 2016

Can upstream DNS force DNSSEC on clients?

@dschaper
Member

dschaper commented Sep 1, 2016

If the client is a DNSSEC-aware resolver (not just a normal client in the regular sense that we use it, but an actual resolver like dnsmasq) then it could internally traverse the security chain, following the certificate up until it gets a CA that it trusts and then walking back down to the actual authoritative DNS server for the domain requested. So really, most clients aren't DNSSEC aware; it's the resolver that will either return an NXDOMAIN/SERVFAIL for domains that fail the chain of trust, or otherwise flag the response as untrusted.

The test I always use, as goofy as it is, is http://dnssec.vs.uni-due.de/. Upstream can't "force" DNSSEC on clients; it can only return a record with either the AD bit set or unset, indicating trust or no trust. It's up to the clients (and to some extent the resolver queried by the client) to determine what to do next.

As for the DD-WRT: I personally use OpenWRT (building LEDE), and the build environment I use enforces DNSSEC at the router level and will give an NXDOMAIN if any of the certificates fail validation. What causes a lot of the extra lookups is when domains don't have any DNSSEC records and you sometimes end up going up the chain to the root TLD server and back down to the authoritative NS server for no really good reason. That's why I offload all the validation to another process and just let Pi-hole handle resolution tasks; then none of the validation traffic shows in the Pi-hole logs. And Unbound is set to such a low log level that only errors show. (If you go for even medium-level logging with DNSSEC enforced you get very large logs, so it's not really recommended unless you are trying to troubleshoot a bad trust anchor or certificate errors along the validation path.)

@xxcriticxx

My log is about 72 MB.

@dschaper
Member

dschaper commented Sep 1, 2016

For how many hours of logging?

@xxcriticxx

xxcriticxx commented Sep 1, 2016

About 2 or 3 hours, and I had 75,000 DNS queries in that same time, and then the Ajax error.

@dschaper
Member

Until Network Manager is disabled, we will still have this issue.

@Nuramori

Nuramori commented Oct 25, 2016

I had this issue. It was resolved by making sure that the owner/group of ALL logs was "www-data", to align with the user in lighttpd.conf. There was one log owned by dnsmasq that caused a denied-access error. This is likely to crop up more often as SELinux becomes the default.

@brontide

I don't know if it's related, but at ~50k queries the query log shuts down; I presume that as the logs continue to grow it will start to affect the other PHP scripts.

2016-10-26 15:53:21: (log.c.194) server started 
2016-10-27 18:37:21: (mod_fastcgi.c.2520) FastCGI-stderr: PHP Fatal error:  Allowed memory size of 134217728 bytes exhausted (tried to allocate 32 bytes) in /var/www/html/admin/api.php on line 56
2016-10-27 18:49:39: (mod_fastcgi.c.2520) FastCGI-stderr: PHP Fatal error:  Allowed memory size of 134217728 bytes exhausted (tried to allocate 129 bytes) in /var/www/html/admin/api.php on line 56
2016-10-27 18:49:59: (mod_fastcgi.c.2520) FastCGI-stderr: PHP Fatal error:  Allowed memory size of 134217728 bytes exhausted (tried to allocate 129 bytes) in /var/www/html/admin/api.php on line 56
root@pihole-debian-sid:~# wc -l /var/log/pihole.log 
186548 /var/log/pihole.log
root@pihole-debian-sid:~# ls -alh /var/log/pihole.log 
-rw-r--r-- 1 dnsmasq root 13M Oct 27 18:50 /var/log/pihole.log
ewarnke@eric-cb:~/github/pihole-test$ curl -vv http://10.195.42.227/admin/api.php?getAllQueries
*   Trying 10.195.42.227...
* Connected to 10.195.42.227 (10.195.42.227) port 80 (#0)
> GET /admin/api.php?getAllQueries HTTP/1.1
> Host: 10.195.42.227
> User-Agent: curl/7.47.0
> Accept: */*
> 
< HTTP/1.1 500 Internal Server Error
< X-Pi-hole: The Pi-hole Web interface is working!
< X-Frame-Options: DENY
< Content-type: application/json
< Content-Length: 0
< Date: Thu, 27 Oct 2016 18:49:59 GMT
< Server: lighttpd/1.4.39
< 
* Connection #0 to host 10.195.42.227 left intact
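
For illustration, a bounded-memory variant of the tail-reading idea discussed above would cap what the API ever loads, e.g. (a sketch only, not the actual api.php; the 512 kB cap is an arbitrary example):

<?php
// Read only the last 512 kB of the log, so memory use is capped no
// matter how large /var/log/pihole.log grows.
$log   = '/var/log/pihole.log';
$limit = 512 * 1024;
$size  = filesize($log);
$start = max(0, $size - $limit);
$tail  = file_get_contents($log, false, null, $start, $limit);
$lines = explode("\n", $tail);
if ($start > 0) {
    array_shift($lines);                 // first line is probably partial
}
// ... parse $lines as usual ...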

@PromoFaux
Member Author

Check out #149 and pi-hole/pi-hole#731 for a very alpha preview of a way around this issue...

@xxcriticxx

@PromoFaux Did something change with the query log in the latest version (Pi-hole v2.9.5, Web Interface v1.4.4.2)?

I can't open it, and I only got about 10k queries today. I'm getting the same Ajax error; everything else works as normal.

@brontide

brontide commented Nov 1, 2016

@xxcriticxx do you have anything of interest in /var/log/lighttpd/error.log?

@xxcriticxx

Error on line 56 in api.php:

http://pastebin.com/TgMxvqev

@brontide

brontide commented Nov 1, 2016

Yep, your /var/log/pihole.log file is too large for the API. You can wait for fixes or, if the host is not a Pi, you can try increasing the memory_limit in the /etc/php.ini file, but that is only a temporary workaround, as it just delays how quickly the problem occurs and lengthens how long the API takes to load. Honestly, the current code was never meant to deal with high-volume logs, and the alpha code will work its way up to the main branch eventually.
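
For reference, the limit being exhausted in the errors above is the 128M default (134217728 bytes). Besides editing php.ini, a script can hypothetically raise its own limit at runtime, since memory_limit is changeable from PHP (a stopgap sketch, not the actual Pi-hole code):

<?php
// Hypothetical stopgap: raise this script's memory limit above the
// 128M default. The real fix is reading less of the log at once.
ini_set('memory_limit', '256M');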

@xxcriticxx

xxcriticxx commented Nov 1, 2016

My query log worked one version back, with 700k queries, with no problem.

@dschaper
Member

dschaper commented Nov 1, 2016

Which branch? I remember there was a testing development branch and a db branch, so could you let us know if it worked on the previous master or another branch? (Yes, we've taken out the ability to run development code without some work on the user's part, because we've changed our approach: development code will break now and should not be run as everyday code.)

@xxcriticxx

last master

@dschaper
Member

dschaper commented Nov 1, 2016

Thanks. Just for edification, 26d54cf was the last time line 56 was modified, and that was back in release v1.3. If you have a copy of that 700k log file, we can go back to the code as it was in the previous release and run it through to see if we can find some kind of change that would affect things. We may have asked already, but can you run a fresh debug and give us the token so we can see where things are right now?

@xxcriticxx

DNS Queries Today: 497,362. Log size: 194.5 MB (194,465,830 bytes).

Your debug token is : hf4xby7w5b

Copy of pihole.log:

https://www.wetransfer.com/downloads/95d78916e3a7cbac079c06b886bafec520161101234122/b36189543bdf532c9a42bd7ac9c6cfe620161101234122/b76209

@DL6ER
Member

DL6ER commented Nov 22, 2016

Just a quick note: this also happens on my Pi B+ version 1, with only roughly 21,000 total DNS queries in the file. My problem is that the Raspberry Pi 1 is just so terribly slow...

pi@raspberry:~# tail /var/log/lighttpd/error.log
2016-11-22 13:06:44: (mod_fastcgi.c.2702) FastCGI-stderr: PHP Fatal error:  Maximum execution time of 30 seconds exceeded in /var/www/html/admin/api.php on line 60
2016-11-22 13:08:52: (mod_fastcgi.c.2702) FastCGI-stderr: PHP Fatal error:  Maximum execution time of 30 seconds exceeded in /var/www/html/admin/data.php on line 336
2016-11-22 13:09:18: (mod_fastcgi.c.2702) FastCGI-stderr: PHP Fatal error:  Maximum execution time of 30 seconds exceeded in /var/www/html/admin/data.php on line 214

I get 500 errors instead of the expected results because the backend is not responding in that time and FastCGI exceeds its maximum execution time.
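
As with the memory limit above, this can be papered over by raising the limit via php.ini or, hypothetically, in the script itself, though that only hides the slow parsing (a sketch; 60 seconds is an arbitrary example value):

<?php
// Hypothetical workaround only: give the slow log parsing more headroom
// than the 30-second default before PHP aborts the request.
set_time_limit(60);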

@xxcriticxx

@PromoFaux and @dschaper Something happened to my overhead: I went from 750k queries to only about 7,500 overnight. I also got a secondary graph showing up that I think was temperature.

@DL6ER
Member

DL6ER commented Dec 9, 2016

@xxcriticxx There is no way it could have shown temperature since we do not log it.

@dschaper
Member

dschaper commented Dec 9, 2016

@xxcriticxx pihole -d please.

@xxcriticxx

I don't know what that second graph in blue is.

[screenshot: capture]

@DL6ER
Member

DL6ER commented Dec 9, 2016

Compare the colors to the summary shown above the diagram 😉

green = DNS Queries Today
blue = Ads Blocked Today

@xxcriticxx

@dschaper Your debug token is: zzpi4btixc

@xxcriticxx

@DL6ER My DNS queries part is not green; it looks more like gray.

@DL6ER
Member

DL6ER commented Dec 9, 2016

@xxcriticxx I meant the line color, not the fill style.

@xxcriticxx

@DL6ER just busting your chops :)

@DL6ER
Member

DL6ER commented Dec 9, 2016

@xxcriticxx There will be tooltips helping the user understand the meaning of the graphs.
[screenshot at 2016-12-09 17-25-51]

@xxcriticxx

@DL6ER not to mix issues here do you have any plans for advanced search option for future?

@dschaper
Member

dschaper commented Dec 9, 2016

This thread is 215 comments long. Closing and locking. Please start a new thread with your new question, and please keep issues to one per thread; adding more issues to old threads makes it very difficult to track ongoing problems. And please use discourse.pi-hole.net for any feature requests.
