
Add a PoolAvailableRule to easily add backup pools #7140

Merged: 3 commits, Nov 6, 2018

Conversation

@RobinGeuze (Contributor) commented Nov 3, 2018

Short description

This adds the PoolAvailableRule, which can be used to route queries to different pools based on their availability.

Checklist

I have:

  • read the CONTRIBUTING.md document
  • compiled this code
  • tested this code
  • included documentation (including possible behaviour changes)
  • documented the code
  • added or modified unit test(s)

@RobinGeuze (Contributor, Author)

I still need to add tests and documentation; I'll work on that now.

@RobinGeuze (Contributor, Author)

I've added documentation and did some basic testing. I used this configuration to check correct behaviour when the pool is up:

addLocal("127.0.0.2", { doTCP=true, reusePort=true })

newServer({address="8.8.8.8", name="googledns", mustResolve=true, checkName="transip.nl", pool="upserver"})
newServer({address="8.8.8.9", name="randomip", mustResolve=true, checkName="google.nl", pool="downserver"})

addAction(PoolAvailableRule("upserver"), PoolAction("downserver"))
addAction(AllRule(), PoolAction("upserver"))

I used the same configuration to check that it behaves correctly when the pool is down.

Review comments on pdns/dnsdist-lua-rules.cc and pdns/dnsdistdist/dnsdist-rules.hh were resolved.
@pavel-odintsov

Nice! I like it!

@rgacogne (Member) left a comment:

LGTM!

@pavel-odintsov

Yes! It looks great! :) I think I can test it.

@pavel-odintsov

Hello!

I tested it with the following configuration:

newServer({address="8.8.8.8:53", name="first", retries=1, pool="" })
newServer({address="1.1.1.1:53", name="second", retries=1, pool="" })
newServer({address="1.0.0.1:53", name="third", retries=1, pool=""})

newServer({address="208.67.222.222:53", weight=1, name="backup", retries=1,  pool="backup" })

-- Use the default pool only if we have alive servers in it
addAction(PoolAvailableRule(""), PoolAction(""))

-- Otherwise use backup pool
addAction(AllRule(), PoolAction("backup"))

Then I manually disabled all three servers in the default pool:

getServer(0):setDown()
getServer(1):setDown()
getServer(2):setDown()

And all requests were redirected to the backup one:

Got query for yandex.ru.|A from 127.0.0.1:52001, relayed to third
Got answer from 1.0.0.1:53, relayed to 127.0.0.1:52001, took 2644.98 usec
Got query for yandex.ru.|A from 127.0.0.1:40043, relayed to third
Got answer from 1.0.0.1:53, relayed to 127.0.0.1:40043, took 2458.87 usec
Got control connection from 127.0.0.1:54372
Got query for yandex.ru.|A from 127.0.0.1:50675, relayed to backup
Got answer from 208.67.222.222:53, relayed to 127.0.0.1:50675, took 3688.15 usec

Then I returned one of the servers to active status and it worked well too:

Got query for yandex.ru.|A from 127.0.0.1:55171, relayed to second
Got answer from 1.1.1.1:53, relayed to 127.0.0.1:55171, took 2601.94 usec

@pavel-odintsov

I closed my PR about a "backup pool for pool". This PR implements it in a much better way.

@Habbie merged commit 01ca1f8 into PowerDNS:master on Nov 6, 2018