
MySQL backend restart issue #3824

Closed
maniaque opened this issue May 5, 2016 · 77 comments

@maniaque

maniaque commented May 5, 2016

Hi there!

Having problems with restarting the MySQL backend while PowerDNS is running (Ubuntu 16.04, PowerDNS 4.0.0~alpha2-3build1).

After a restart, both MySQL and PowerDNS are started, and I see the following statements in journalctl:

May 06 00:09:37 vova pdns[31743]: gmysql Connection successful. Connected to database 'pdns' on
May 06 00:09:37 vova pdns[31743]: gmysql Connection successful. Connected to database 'pdns' on
May 06 00:09:37 vova pdns[31743]: gmysql Connection successful. Connected to database 'pdns' on

Afterwards I restart MySQL, and it breaks everything to pieces:

May 06 00:09:11 vova pdns[30268]: Backend reported condition which prevented lookup

There is only one cure: restarting PowerDNS. That doesn't look like a good solution.

@pieterlexis
Contributor

Hi @maniaque, this is fixed in the current master (in #3666). We will release alpha3 soon, which should land in Ubuntu Xenial. If you want to use PowerDNS 4 now, I would recommend using the master packages at https://repo.powerdns.com/

@willtorres

I was experiencing the issue in #3535, so I installed the alpha3 version. It seems like PDNS loses its connection to MySQL for some reason: I do a query and it returns REFUSED; I do another query and it returns the correct data. Is it supposed to be losing the DB connection so frequently?

@Habbie
Member

Habbie commented May 12, 2016

No, it's not supposed to lose it frequently. It is also not supposed to use REFUSED for that. Reopening this ticket - if you have any logs or other information, please provide them.

@Habbie Habbie reopened this May 12, 2016
@Habbie
Member

Habbie commented May 12, 2016

If you are also experiencing #3535 in alpha3, please let us know.

@willtorres

willtorres commented May 13, 2016

Here's my logfile.
log.txt

The resulting queries:

$ nslookup admin-02.internal 192.168.99.100
Server:     192.168.99.100
Address:    192.168.99.100#53

Name:   admin-02.internal
Address: 10.10.5.30

$ nslookup admin-02.internal 192.168.99.100
Server:     192.168.99.100
Address:    192.168.99.100#53

** server can't find admin-02.internal.iad.buffalo-ggn.net: REFUSED

$ nslookup admin-02.internal 192.168.99.100
Server:     192.168.99.100
Address:    192.168.99.100#53

Name:   admin-02.internal
Address: 10.10.5.30

@Habbie
Member

Habbie commented May 13, 2016

@willtorres we need output from reliable tools like dig, drill or delv. nslookup output is not useful for debugging (same goes for host). Thank you.

@willtorres

willtorres commented May 13, 2016

These were done in succession, about 2 seconds apart:

$ dig @192.168.99.100 admin-02.internal

; <<>> DiG 9.8.3-P1 <<>> @192.168.99.100 admin-02.internal
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 62152
;; flags: qr aa rd; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
;; WARNING: recursion requested but not available

;; QUESTION SECTION:
;admin-02.internal.     IN  A

;; Query time: 9 msec
;; SERVER: 192.168.99.100#53(192.168.99.100)
;; WHEN: Fri May 13 11:01:05 2016
;; MSG SIZE  rcvd: 35

$ dig @192.168.99.100 admin-02.internal

; <<>> DiG 9.8.3-P1 <<>> @192.168.99.100 admin-02.internal
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 22069
;; flags: qr aa rd; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
;; WARNING: recursion requested but not available

;; QUESTION SECTION:
;admin-02.internal.     IN  A

;; Query time: 5 msec
;; SERVER: 192.168.99.100#53(192.168.99.100)
;; WHEN: Fri May 13 11:01:08 2016
;; MSG SIZE  rcvd: 35


$ dig @192.168.99.100 admin-02.internal

; <<>> DiG 9.8.3-P1 <<>> @192.168.99.100 admin-02.internal
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 26634
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
;; WARNING: recursion requested but not available

;; QUESTION SECTION:
;admin-02.internal.     IN  A

;; ANSWER SECTION:
admin-02.internal.  86400   IN  A   10.10.5.30

;; Query time: 4 msec
;; SERVER: 192.168.99.100#53(192.168.99.100)
;; WHEN: Fri May 13 11:01:12 2016
;; MSG SIZE  rcvd: 51

@Habbie
Member

Habbie commented May 13, 2016

Ok - now we see SERVFAIL, which is of course sad, but not in itself a bug. If you manage to spot REFUSED with dig in such a situation, please post again. Also, please put ``` around your pastes. Thanks!

@maniaque
Author

Well, in my case this was also SERVFAIL. I think we need more system logs here.

@maniaque
Author

BTW, can anyone please say which exact Debian package version has this fix? I would still like to move 4.x to production ;)

@Habbie
Member

Habbie commented May 13, 2016

As said, alpha3 fixed #3535. #3666, which is also in alpha3, should have fixed the gsql reconnection behaviour. However, if a database goes down as often as yours appears to, I'm not sure we can do more.

@maniaque
Author

In my case, it doesn't go down very often. Will look for alpha3 in Debian. Thanks.

@Habbie
Member

Habbie commented May 13, 2016

alpha3 is now in Debian sid, and of course also at https://repo.powerdns.com/

@wk
Contributor

wk commented May 17, 2016

Recently performed an in-place upgrade of PowerDNS 3.3 (Ubuntu 14.04/Trusty build) to PowerDNS 4.0.0-alpha3 (PowerDNS Repository build) on an Ubuntu 14.04/Trusty host; also seeing intermittent MySQL backend drops:

May 17 10:15:50 DNS001-A pdns[31618]: Backend reported permanent error which prevented lookup (GSQLBackend lookup query:Could not execute mysql statement: SELECT content,ttl,prio,type,domain_id,disabled,name,auth FROM records WHERE disabled=0 and type=? and name=?: Lost connection to MySQL server during query), aborting
May 17 10:15:50 DNS001-A pdns[31618]: TCP nameserver had error, cycling backend: GSQLBackend lookup query:Could not execute mysql statement: SELECT content,ttl,prio,type,domain_id,disabled,name,auth FROM records WHERE disabled=0 and type=? and name=?: Lost connection to MySQL server during query

Have not previously experienced this with PowerDNS 3.3, and no other moving parts of the setup have changed.

MySQL Libraries on client:

ii  libmysqlclient18:amd64               5.5.49-0ubuntu0.14.04.1          amd64        MySQL database client library
ii  mysql-client                         5.5.49-0ubuntu0.14.04.1          all          MySQL database client (metapackage depending on the latest version)
ii  mysql-client-5.5                     5.5.49-0ubuntu0.14.04.1          amd64        MySQL database client binaries
ii  mysql-client-core-5.5                5.5.49-0ubuntu0.14.04.1          amd64        MySQL database core client binaries
ii  mysql-common                         5.5.49-0ubuntu0.14.04.1          all          MySQL database common files, e.g. /etc/mysql/my.cnf

MySQL Server:

ii  libmysqlclient18:amd64              5.5.49-0ubuntu0.14.04.1          amd64        MySQL database client library
ii  mysql-client-5.5                    5.5.49-0ubuntu0.14.04.1          amd64        MySQL database client binaries
ii  mysql-client-core-5.5               5.5.49-0ubuntu0.14.04.1          amd64        MySQL database core client binaries
ii  mysql-common                        5.5.49-0ubuntu0.14.04.1          all          MySQL database common files, e.g. /etc/mysql/my.cnf
ii  mysql-server                        5.5.49-0ubuntu0.14.04.1          all          MySQL database server (metapackage depending on the latest version)
ii  mysql-server-5.5                    5.5.49-0ubuntu0.14.04.1          amd64        MySQL database server binaries and system database setup
ii  mysql-server-core-5.5               5.5.49-0ubuntu0.14.04.1          amd64        MySQL database server binaries

The servers this has been occurring on are very lightly loaded (average of under 5 queries/sec), and the backend drops have been occurring at a frequency of no more than once a day so far. The connectivity appears to be recovered automatically on a subsequent query.

@Habbie Habbie reopened this May 17, 2016
@mind04
Contributor

mind04 commented May 17, 2016

5 qps with lost connections on TCP looks like the MySQL server is closing idle TCP backend connections after wait_timeout.
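For reference, the server-side variable can be inspected and raised from any MySQL client; the value below is only an example (one day), not a recommendation:

```sql
-- Current idle-connection timeout, in seconds
SHOW VARIABLES LIKE 'wait_timeout';

-- Raise it for connections opened from now on (example: 24 hours)
SET GLOBAL wait_timeout = 86400;
```

Note that SET GLOBAL only affects new sessions; existing backend connections keep the timeout they started with until they reconnect.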

@willtorres

@mind04 Thank you so much! I've increased the wait_timeout to the maximum, and my connections are staying alive.

@wk
Contributor

wk commented May 18, 2016

Upon further investigation, it appears that the second log line is significant:

May 17 10:15:50 DNS001-A pdns[31618]: TCP nameserver had error, cycling backend: GSQLBackend lookup query:Could not execute mysql statement: SELECT content,ttl,prio,type,domain_id,disabled,name,auth FROM records WHERE disabled=0 and type=? and name=?: Lost connection to MySQL server during query

While the server in question handles an average of 5 queries/sec, the bulk of those queries are UDP.

The 5 queries/second load keeps the MySQL connections of the 3 default distributor threads sufficiently utilized to prevent time-outs even with a fairly aggressive wait_timeout setting (although this may not be the case on even quieter servers).

The TCP receiver, however, maintains its own backend thread. In the above scenario it sees just a fraction of the load, the time-out is hit fairly regularly, and the connection is lost. Receiving the first TCP query in a long while is what triggers the issue.

This leaves open the question of why a lost connection would result in an error in the first place. After all, the gmysqlbackend makes use of the libmysqlclient MYSQL_OPT_RECONNECT option, which should result in a transparent reconnect after a time-out, rather than in an error. The answer to that may lie in the documentation of this feature itself. Amongst the caveats, the following is listed:

[...]
The connection-related state is affected as follows: 
[...]
* Prepared statements are released. 
[...]

A cursory look through the code suggests that gsql's setDB function prepares the statements ahead of time when a database backend is initialized.

Perhaps when a previously prepared statement is executed after a silent re-connect by libmysqlclient, the prepared statement, as per the documentation, is no longer available, and an error occurs - resulting in the behaviour outlined in this issue?
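The failure mode hypothesized above can be sketched with a toy model (no real MySQL involved; the class and names are hypothetical): a statement prepared against one connection incarnation becomes invalid after a silent reconnect, so the first execution after the timeout fails even though the connection itself is healthy again.

```python
class ToyConnection:
    """Toy stand-in for a MySQL connection with auto-reconnect enabled."""

    def __init__(self):
        self.incarnation = 0   # bumped on every (re)connect
        self.timed_out = False

    def prepare(self, sql):
        # A server-side prepared statement is bound to the connection
        # incarnation it was created on.
        return {"sql": sql, "incarnation": self.incarnation}

    def execute(self, stmt):
        if self.timed_out:
            # MYSQL_OPT_RECONNECT behaviour: reconnect silently, which
            # releases all server-side prepared statements.
            self.incarnation += 1
            self.timed_out = False
        if stmt["incarnation"] != self.incarnation:
            raise RuntimeError("Lost connection to MySQL server during query")
        return "rows"


conn = ToyConnection()
lookup = conn.prepare("SELECT content FROM records WHERE name=?")
print(conn.execute(lookup))   # prints 'rows' while the session is alive

conn.timed_out = True         # wait_timeout expires on the server
try:
    conn.execute(lookup)      # first query after the silent reconnect fails
except RuntimeError as e:
    print(e)
```

The sketch matches the observed behaviour: the failing query cycles the backend, which prepares fresh statements, so the very next query succeeds.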

@Habbie
Member

Habbie commented May 18, 2016

Yes, it certainly is looking like we may need to work on re-preparing the statements, or perhaps even just handling the reconnection ourselves.

@wk
Contributor

wk commented May 18, 2016

There is no obvious way to trigger on the reconnection attempt and perform a re-preparation of the queries in-query since the raison d'être of MYSQL_OPT_RECONNECT is to hide what has transpired from the application.

There are, however, a number of ways to work around this.

1. Switch from query preparation at backend thread init to a just-in-time model.
The minimal re-factoring here would probably be something along the lines of turning gmysqlbackend's implementation of the prepare function into a dummy that returns a pointer, and doing the real work as part of the execute function just prior to actual execution.

2. Ping the database with mysql_ping() before executing each query.
According to the mysql_ping() documentation, this will, with a bit of extra work, allow identifying when a reconnect has occurred.

If mysql_ping() does cause a reconnect, there is no explicit indication of it. To determine whether a reconnect occurs, call mysql_thread_id() to get the original connection identifier before calling mysql_ping(), then call mysql_thread_id() again to see whether the identifier has changed. 

Thus, by keeping track and comparing the mysql_thread_id before and after mysql_ping() a reconnect can be detected, and the queries can be re-prepared prior to execution.

3. Drop MYSQL_OPT_RECONNECT and mask the problem with regular mysql_ping().
The current default server-side wait_timeout value is 28800 seconds, but there may be valid reasons (or at least a widespread practice in shared web hosting environments with legacy PHP applications) to set this variable globally to values as short as 30 seconds.

By simply firing off a mysql_ping() on a timer with intervals shorter than wait_timeout, it should be possible to prevent session time-outs in the first place. In such a scenario, MYSQL_OPT_RECONNECT can be disabled, with the ultimate fallback becoming full backend restart.

Since the client can get and set the wait_timeout variable for its own session, gmysqlbackend could also potentially first determine the current value in order to configure its mysql_ping() timer, or override it during init to a sensible value in order to reduce noise.
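The ping-and-detect approach described above (comparing mysql_thread_id() before and after mysql_ping()) can be sketched with a toy model; the class below is hypothetical, standing in for the real libmysqlclient calls, which need a live server:

```python
class ToyClient:
    """Toy client exposing the two calls the detection relies on."""

    def __init__(self):
        self._thread_id = 1
        self._alive = True

    def thread_id(self):
        # Mirrors mysql_thread_id(): the server-side connection id.
        return self._thread_id

    def ping(self):
        # Mirrors mysql_ping() with auto-reconnect enabled: a dead session
        # is silently re-established under a new connection id.
        if not self._alive:
            self._thread_id += 1
            self._alive = True


def reconnect_happened(client):
    """Compare connection ids around ping() to detect a silent reconnect."""
    before = client.thread_id()
    client.ping()
    return client.thread_id() != before


c = ToyClient()
print(reconnect_happened(c))   # False: a healthy session keeps its id

c._alive = False               # simulate wait_timeout killing the session
print(reconnect_happened(c))   # True: time to re-prepare all statements
```

When reconnect_happened() returns True, the backend would re-run its statement preparation before executing the pending query.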

@pieterlexis pieterlexis added this to the auth-4.0.0 milestone May 19, 2016
@cmouse
Contributor

cmouse commented Jun 6, 2016

Partially fixed by #3937

@pieterlexis
Contributor

Should be fixed in #3937

@CaptainQwark

I'm still having this problem on 4.0.1 with TCP connections on my master server (only serving slaves). The first slave that connects for an AXFR after more than 'wait_timeout' seconds (mysql setting, default 8 hours) gets a disconnected TCP session and misses the update.

I will work around this by increasing the wait_timeout setting, but I think pdns-gmysql should prevent the connection/transfer from failing, e.g. by a keepalive ping to MySQL or a reconnect before failing the TCP session, just as @wk suggested.

logs from master:

Aug 22 11:59:47 ns-master pdns[5065]: AXFR of domain 'zeelandnet-lab1.nl' initiated by 192.168.222.193
Aug 22 11:59:47 ns-master pdns[5065]: AXFR of domain 'zeelandnet-lab1.nl' allowed: client IP 192.168.222.193 is in allow-axfr-ips
Aug 22 11:59:47 ns-master pdns[5065]: TCP nameserver had error, cycling backend: GSQLBackend lookup query:Could not prepare statement: SELECT content,ttl,prio,type,domain_id,disabled,name,auth FROM records WHERE disabled=0 and type=? and name=?: MySQL server has gone away
Aug 22 11:59:47 ns-master pdns[5065]: AXFR of domain 'zeelandnet-lab1.nl' initiated by 192.168.222.195
Aug 22 11:59:47 ns-master pdns[5065]: TCP server is without backend connections in doAXFR, launching
Aug 22 11:59:47 ns-master pdns[5065]: AXFR of domain 'zeelandnet-lab1.nl' allowed: client IP 192.168.222.195 is in allow-axfr-ips
Aug 22 11:59:47 ns-master pdns[5065]: AXFR of domain 'zeelandnet-lab1.nl' to 192.168.222.195 finished

logs from failing slave:

Aug 22 11:59:47 ns1-1 pdns[5054]: Received serial number updates for 1 zone, had 0 timeouts
Aug 22 11:59:47 ns1-1 pdns[5054]: Domain 'zeelandnet-lab1.nl' is stale, master serial 2016082201, our serial 2016081801
Aug 22 11:59:47 ns1-1 pdns[5054]: Initiating transfer of 'zeelandnet-lab1.nl' from remote '192.168.222.194'
Aug 22 11:59:47 ns1-1 pdns[5054]: Backend launched with banner: OK#011pdns-dynamic-backend.py ready
Aug 22 11:59:47 ns1-1 pdns[5054]: Starting AXFR of 'zeelandnet-lab1.nl' from remote 192.168.222.194:53
Aug 22 11:59:47 ns1-1 pdns[5054]: Unable to AXFR zone 'zeelandnet-lab1.nl' from remote '192.168.222.194' (resolver): Remote nameserver closed TCP connection

@Habbie Habbie reopened this Aug 22, 2016
@wk
Contributor

wk commented Aug 22, 2016

This indeed appears not to function ideally in 4.0.1, and may be unintentional. I can replicate this behaviour.

What appears to have happened is this:

As a result of both these commits being merged, a situation emerged in which the precondition for MYSQL_OPT_RECONNECT being functional has in fact been resolved, but the feature itself has been forcefully disabled, leading to the outcome @CaptainQwark is seeing.

@martinsmatthews

martinsmatthews commented Apr 5, 2017

Am seeing this on pdns 4.0.3, MySQL (or Percona, have tried both) 5.7.17.

This causes the AXFR from the slave to fail - Slave log

Mar 30 18:37:30 Starting AXFR of 'example.com' from remote 192.168.0.1:53
Mar 30 18:37:30 Unable to AXFR zone 'example.com' from remote '192.168.0.1' (resolver): Remote nameserver closed TCP connection

Master log

Mar 30 18:37:30 AXFR of domain 'example.com' initiated by 192.168.0.2
Mar 30 18:37:30 AXFR of domain 'example.com' allowed: client IP 192.168.0.2 is in allow-axfr-ips
Mar 30 18:37:30 TCP nameserver had error, cycling backend: GSQLBackend lookup query:Could not execute mysql statement: SELECT content,ttl,prio,type,domain_id,disabled,name,auth FROM records WHERE disabled=0 and type=? and name=?: Lost connection to MySQL server during query
Mar 30 18:37:30 Removed from notification list: 'example.com' to 192.168.0.2:53 (was acknowledged)
Mar 30 18:37:30 Received unsuccessful notification report for 'example.com' from 192.168.0.1:53, error: Not Implemented
Mar 30 18:37:30 Removed from notification list: 'example.com' to 192.168.0.1:53 Not Implemented
Mar 30 18:37:32 No master domains need notifications

The next AXFR query is successful as it opens a new connection.

We already have a wait_timeout of 8 hours (the default), increasing this to 1 week seems like a bad option, but possibly our only choice if we want to stay with the mysql backend.

Any chance this will be fixed in 4.0.4? Are there any other work arounds other than increasing the wait_timeout?

Can the slave be configured to run the AXFR on a schedule more frequent than the connect timeout? I guess that doesn't guarantee the connections in the retrieval pool will be fresh, though. Would setting retrieval-threads to 1 help? We have just a single slave, and no other clients are allowed to run AXFR queries.

@Habbie
Member

Habbie commented Apr 7, 2017

As we have been unable to figure out a reliable way to fix this, and it appears this only affects machines with very low traffic, I am sad to say I am removing the 4.1 milestone from this.

@Habbie Habbie modified the milestones: auth-4.2.0, auth-4.0.x Apr 7, 2017
@martinsmatthews

martinsmatthews commented Apr 7, 2017

That's a shame. Any chance we could get the slave to retry the AXFR query? It would only need to retry once, from what I can see.

@madpsy

madpsy commented Apr 7, 2017

In my case, although traffic is not particularly high, dnsdist is performing aggressive caching, which exacerbates the issue.

@stecklars

Just curious, are there any downsides of running for example
dig @localhost +tcp somedomainonmynameserver.tld ANY >/dev/null 2>&1
every few minutes in a cronjob on each nameserver?

That should fix the issue, shouldn't it?
Looking at my logs there are no more error messages.
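For the record, as a cron entry such a keepalive might look like the line below (the zone name is the placeholder from above, and the five-minute interval is an arbitrary example well under wait_timeout; +tcp matters, since the TCP backend thread is the one that goes idle):

```
*/5 * * * * dig @localhost +tcp somedomainonmynameserver.tld ANY >/dev/null 2>&1
```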

@jph

jph commented Apr 11, 2017

@stecklars that's what I've ended up doing, and it's working fine in my case. No more errors for over a week.

@martinsmatthews

We do this as part of our monitoring: we check a couple of records we are authoritative for, and also some we are not, to test that recursion is also working - but not specifically as a keepalive on the MySQL connection.

The AXFR is trickier, as there is a separate backend/thread pool for it. We were already checking that the SOA serials match when querying the master and slave using dig. Now, when they don't, we run a pdns_control notify * on the master - this causes the master to re-notify the slave, and the slave to rerun the AXFR query, which works this time because the failed backend was cycled after the first failure. Nasty, but we can't see any other options.

@Habbie
Member

Habbie commented Apr 15, 2017

#5245 should be of extreme interest to you and testing would be appreciated.

@Habbie
Member

Habbie commented Jun 13, 2017

Ping! We cannot fix bugs if you don't test our fixes!

@sparc

sparc commented Jun 13, 2017

I'm not sure my problem has anything to do with wait_timeout (I have it set at 10 sec), as I get this error at a frequency as high as once every one to three seconds:

Backend reported permanent error which prevented lookup (GSQLBackend lookup query:Could not execute mysql statement: SELECT content,ttl,prio,type,domain_id,disabled,name,auth FROM records WHERE disabled=0 and type=? and name=?: Lost connection to MySQL server during query), aborting

This started as soon as we upgraded to version 4 and has been happening ever since; the dig trick makes no difference. The database is also used by PHP + webserver without any problem.

@Habbie
Member

Habbie commented Jun 13, 2017

A 10 second wait_timeout would definitely be a great way to cause this problem. Can you test the patch in #5245?

@sparc

sparc commented Jun 14, 2017

How can I get the patch once I have the repo set up?

@jph

jph commented Jun 15, 2017

I'm getting my lab set up now, and will test this fix as soon as I'm done. I will update this comment accordingly. Thanks, and sorry for the delay.

@Habbie
Member

Habbie commented Jun 15, 2017

Testing packages (based on the master branch + the fixes in #5245) are now available at https://downloads.powerdns.com/autobuilt/. Browse to your flavour, then find the files with 'authsqlconnectionreset' in their name.

@iammattmartin

I've been testing this the last few days.

Connections before would drop within minutes... we're now up to over a day without the same error. Looks good.

@Poil

Poil commented Aug 24, 2017

Hi,

Sorry, but I'm unable to find out whether 4.0.4 has this fix.

Best regards,

@Habbie
Member

Habbie commented Aug 24, 2017

@Poil this fix is not in 4.0.4. It is on the master branch, and will be in 4.1.0. A release candidate for 4.1.0 should come within one or two weeks.

@Habbie Habbie closed this as completed Aug 24, 2017
@TCB13

TCB13 commented Apr 20, 2018

I'm running PowerDNS Authoritative Server 4.0.3 from Debian repos so I still have this issue. Is there any workaround?

@Poil

Poil commented Apr 20, 2018

When I was using PowerDNS, I had configured the session timeout to 1 year. (I no longer work for that company, and I didn't keep notes on how I did it.)

@TCB13

TCB13 commented Apr 20, 2018

@Poil thanks for the tip. What is strange about this is that it only happens on zone transfers. The server is running without any issues and answering dig queries just fine, yet on AXFR this error happens.

@Poil

Poil commented Apr 20, 2018

Yes, it's because the slave doesn't retry (it gets an error when the session has timed out); it only comes back after the SOA retry interval if there is no new notify (a change to the zone).

@rgacogne
Member

I'm running PowerDNS Authoritative Server 4.0.3 from Debian repos so I still have this issue. Is there any workaround?

Did you consider using the 4.1 packages we provide at https://repo.powerdns.com?

@hb9xar

hb9xar commented Apr 21, 2018

Running PDNS 4.0.4 as authoritative DNS server for quite a number of domains. The issue only happens on TCP queries (UDP queries are so frequent that they don't let the session timeout hit).

I use the following workaround to increase the session timeout for mysql connections used by PDNS:

in /etc/my.cnf, add a section

[powerdns]
init-command='SET wait_timeout=86400'

in pdns.conf, set "gmysql-group":

#################################
# launch        Which backends to launch and order to query them in
#
# launch=
launch=gmysql
gmysql-host=<mysql host>
gmysql-user=<database user>
gmysql-password=<database password>
gmysql-dbname=<database name>
gmysql-group=powerdns

This causes pdns to read the [powerdns] section in my.cnf and set a long session timeout. In my case, 1 day was enough to also cover the less frequent TCP queries.

UDP and TCP queries use separate database connections (running in separate threads?), so keeping UDP alive with sufficient queries still won't fix the TCP time-outs. Since the above change, I have not observed any further database connection timeouts/reconnects.

/Thomas

@TCB13

TCB13 commented Apr 22, 2018

@rgacogne so can I just pin the repos and run apt update? Anything else I should be aware of? Thank you.

@Habbie
Member

Habbie commented Apr 22, 2018

@rgacogne so can I just pin the repos and run apt update? Anything else I should be aware of? Thank you.

Please visit https://www.powerdns.com/opensource.html to see how you can reach us for support.

@TCB13

TCB13 commented Apr 22, 2018

@rgacogne @Habbie I already upgraded. Since I was running an authoritative server only, I decided to remove PowerDNS completely and then install version 4.1 from the repo. I placed my settings back into /etc/powerdns and everything is running fine.
