forward plugin "sequential" policy is not always respected #2000

Closed

mmiller1 opened this issue Jul 25, 2018 · 3 comments

@mmiller1
We are using the forward plugin on CoreDNS 1.1.3 to resolve external DNS names. We set the policy to sequential so that only the primary DNS server is hit; in our environment this is a load-balanced endpoint that always responds much faster than our alternative servers. See our configuration below:

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |-
    .:53 {
        cache 30
        debug
        errors
        forward . /etc/resolv.conf {
          policy sequential
          max_fails 2
          health_check 1s
        }
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          upstream
          fallthrough in-addr.arpa ip6.arpa
        }
        loadbalance round_robin
        log
        prometheus 0.0.0.0:9153
        reload
    }
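For readers trying to reconcile the policy with the traffic pattern: one plausible mechanism for the spillover is forward's health checking, which skips an upstream once it has accumulated max_fails failures until a later health check succeeds, so brief timeouts against the primary push queries to the next server in the list. The Go sketch below only illustrates that selection logic under this assumption; it is not the plugin's actual source, and the proxy type, down method, and pickSequential function are hypothetical names.

package main

import "fmt"

// proxy is a stand-in for a single configured upstream.
type proxy struct {
	addr  string
	fails int // consecutive failures seen by health checking
}

const maxFails = 2 // mirrors the max_fails 2 setting above

// down reports whether this upstream is currently considered unhealthy.
func (p *proxy) down() bool { return p.fails >= maxFails }

// pickSequential walks the list in order and returns the first healthy
// upstream, falling back to the head of the list if all look down.
func pickSequential(list []*proxy) *proxy {
	for _, p := range list {
		if !p.down() {
			return p
		}
	}
	return list[0]
}

func main() {
	upstreams := []*proxy{
		{addr: "10.130.150.10:53", fails: 2}, // primary briefly marked down
		{addr: "10.130.170.240:53"},
		{addr: "10.130.170.241:53"},
	}
	// While the primary is marked down, queries spill to the second
	// server, which would produce exactly the skew shown in the
	// metrics further down.
	fmt.Println(pickSequential(upstreams).addr) // prints 10.130.170.240:53
}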

Our resolv.conf file looks as follows:

$ kubectl exec -ti coredns-65fc98d5c6-9r6rn cat /etc/resolv.conf -n kube-system
nameserver 10.130.150.10
nameserver 10.130.170.240
nameserver 10.130.170.241
search domain1 domain2 domain3

Meanwhile, on the nodes where the CoreDNS pods are running, the following command shows a significant amount of DNS traffic being routed to the alternative DNS servers:

tcpdump -i bond0 port 53 and not host 10.130.150.10
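A narrower filter along the same lines, assuming the same interface and addresses, captures only the spillover traffic to the two alternates:

tcpdump -i bond0 port 53 and \( host 10.130.170.240 or host 10.130.170.241 \)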

This is also evident in the Prometheus metrics:

sum(coredns_forward_request_count_total) by (to)

{to="10.130.150.10:53"}   839073
{to="10.130.170.240:53"}   26870
{to="10.130.170.241:53"}    1529

There is nothing in the CoreDNS logs that seems to indicate why the alternative servers are receiving traffic; any help understanding or resolving this problem is appreciated.

@miekg (Member)

miekg commented Jul 25, 2018 via email

@mmiller1 (Author)

It would be awesome to see some additional logging in this area. As far as I can tell, we never had this issue with kube-dns.

@miekg (Member)

miekg commented Mar 23, 2019

intent to close this soon

@miekg closed this as completed Apr 1, 2019