
Pointing to an ExternalName service without a DNS record can overload the DNS service #6523

Closed
lucianjon opened this issue Nov 25, 2020 · 24 comments · May be fixed by #10989
Labels
kind/bug · lifecycle/rotten · needs-priority · needs-triage

Comments

@lucianjon

NGINX Ingress controller version: v0.41.2

Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.10", GitCommit:"f3add640dbcd4f3c33a7749f38baaac0b3fe810d", GitTreeState:"clean", BuildDate:"2020-05-20T14:00:52Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.9", GitCommit:"94f372e501c973a7fa9eb40ec9ebd2fe7ca69848", GitTreeState:"clean", BuildDate:"2020-09-16T13:47:43Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration: kops managed cluster on AWS
  • OS (e.g. from /etc/os-release):
Distributor ID:	Ubuntu
Description:	Ubuntu 20.04.1 LTS
Release:	20.04
Codename:	focal
  • Kernel (e.g. uname -a): Linux ip-10-60-10-234 5.4.0-1024-aws #24-Ubuntu SMP Sat Sep 5 06:19:55 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

What happened:

If an Ingress is created that points to an ExternalName service whose hostname fails DNS resolution, an endless loop of DNS requests is created that can bring the system down.

We noticed this when migrating from v0.19.0 to v0.41.2; we have both controllers running in parallel. One of our teams was preparing for the migration and created routes that pointed to yet-to-be-created DNS records. The old controllers appeared unaffected, but the routes generated a huge volume of DNS lookups on the new controller. It doesn't require actual requests to the routes; creating the ingress and service definitions is enough.

Eventually this overwhelmed dnsmasq and brought down our cluster's DNS. dnsmasq limited the concurrent requests, but we were still looking at thousands of requests per second. Was there some behaviour change between the two versions that could introduce this, and is it expected? My naive guess is that there would typically be some kind of exponential backoff on a DNS lookup error.

This is the error produced by the controller:

2020/11/25 20:18:52 [error] 1707#1707: *51723 [lua] dns.lua:152: dns_lookup(): failed to query the DNS server for foo.unknown.com:
server returned error code: 3: name error
server returned error code: 3: name error, context: ngx.timer

What you expected to happen:

DNS lookup failures to be handled with some form of backoff.
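
For illustration only, here is a minimal sketch in Go of the kind of exponential backoff being asked for. This is not the controller's actual code (its resolver lives in the Lua layer, per the dns.lua error above); the function name and constants are hypothetical.

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

// resolveWithBackoff retries a failing DNS lookup with exponential backoff
// instead of re-querying at a fixed rate. Hypothetical sketch; the real
// ingress-nginx resolver is implemented in Lua and retries on an ngx.timer.
func resolveWithBackoff(ctx context.Context, host string) ([]net.IP, error) {
	backoff := 1 * time.Second
	const maxBackoff = 5 * time.Minute

	for {
		ips, err := net.DefaultResolver.LookupIP(ctx, "ip4", host)
		if err == nil {
			return ips, nil
		}
		fmt.Printf("lookup %s failed (%v); retrying in %s\n", host, err, backoff)

		select {
		case <-ctx.Done():
			return nil, ctx.Err() // caller gave up
		case <-time.After(backoff):
		}
		if backoff *= 2; backoff > maxBackoff {
			backoff = maxBackoff // cap the retry interval
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	if ips, err := resolveWithBackoff(ctx, "foo.unknown.com"); err != nil {
		fmt.Println("giving up:", err)
	} else {
		fmt.Println("resolved:", ips)
	}
}

With a pattern like this, a name that keeps returning NXDOMAIN generates only a handful of queries per cap interval rather than a constant stream.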

How to reproduce it:

These two definitions should be enough to reproduce the issue, assuming a proper class and namespace:

---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: dns-issue-repro
  namespace: default
  annotations:
    kubernetes.io/ingress.provider: "nginx"
    kubernetes.io/ingress.class: "external"
spec:
  rules:
    - host: foo.unknown.com
      http:
        paths:
          - path: /
            backend:
              serviceName: bad-svc
              servicePort: 80

---
apiVersion: v1
kind: Service
metadata:
  name: bad-svc
  namespace: default
spec:
  type: ExternalName
  externalName: foo.unknown.com

/kind bug

@lucianjon lucianjon added the kind/bug label Nov 25, 2020
@aledbf
Member

aledbf commented Nov 25, 2020

The behavior changed here #4671

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale label Feb 23, 2021
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten and removed lifecycle/stale labels Mar 25, 2021
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@adamcharnock

adamcharnock commented Feb 15, 2022

Yep, this just got me too while working on a new cluster. Nginx Ingress essentially DoSed CoreDNS, which caused all kinds of weirdness in the cluster.

Edit: Running k8s.gcr.io/ingress-nginx/controller:v1.1.1

@unnikm8

unnikm8 commented Feb 18, 2022

I am getting this issue too.

Running k8s.gcr.io/ingress-nginx/controller:v1.1.0

@VsevolodSauta

I'm also affected by this issue. Hoping for some activity on it.
/reopen

@k8s-ci-robot
Contributor

@VsevolodSauta: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

I'm also affected by this issue. Hoping for some activity on it.
/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@javimosch

I'm also getting this issue:
k8s.gcr.io/ingress-nginx/controller:v1.2.0

@karlhaworth

Same issue.

@dexterlakin-bdm

Why is this closed?

I am also seeing the same issue. Has anyone here resolved it or found a workaround?

@longwuyuan
Contributor

longwuyuan commented Oct 3, 2022 via email

@sravanakinapally

Same issue
dns.lua:152: dns_lookup(): failed to query the DNS server for

@ahmad-sharif

I am also having the same issue with v1.3.1 in some clusters

@qixiaobo

qixiaobo commented Feb 7, 2023

Same problem, keep watching

@alv91

alv91 commented Mar 10, 2023

+1

@fuog

fuog commented Oct 26, 2023

We are experiencing identical issues on both GKE and AKS clusters while using ingress-nginx versions 1.9.1 and 1.9.3.

Occasionally, we encounter situations where the backend resides outside the cluster. The "ExternalName" record is dynamically resolved using endpoints controlled by Consul. However, if it is a single backend service, or the last one, and it deregisters (for example, due to a reboot), the "ExternalName" points at a nonexistent CNAME record. This, in turn, causes ingress-nginx to go completely crazy with errors like these:

2023/10/26 18:16:18 [error] 432#432: *18134 [lua] dns.lua:152: dns_lookup(): failed to query the DNS server for my-not-existing-record.example.com:
server returned error code: 3: name error
server returned error code: 3: name error, context: ngx.timer

When there are only a few occurrences, this behavior can be lost in the sheer volume of logs. However, when a substantial number of endpoints become unreachable at once, compounded by the scale of our Ingress-NGINX deployment (which, in our scenario, includes both internal and external-facing ingress classes), the problem escalates significantly and places a severe burden on our CoreDNS servers, potentially overwhelming them.

What I would like to see is a restriction on the number of resolve attempts or a limit on the resolve-retry rate, or, even better, a back-off mechanism.
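
To make the request concrete, here is a rough Go sketch of one such mitigation: negative caching, where a name that just failed to resolve is suppressed locally for a short TTL instead of being retried against the upstream server. The names and types are invented for illustration; this is not ingress-nginx code.

package main

import (
	"context"
	"fmt"
	"net"
	"sync"
	"time"
)

// negCache remembers names that recently failed to resolve, so repeated
// lookups of a known-bad record are answered locally instead of hammering
// the upstream DNS server. Illustrative only; not part of ingress-nginx.
type negCache struct {
	mu     sync.Mutex
	failed map[string]time.Time // host -> suppress retries until this time
	ttl    time.Duration
}

func newNegCache(ttl time.Duration) *negCache {
	return &negCache{failed: make(map[string]time.Time), ttl: ttl}
}

func (c *negCache) Lookup(ctx context.Context, host string) ([]net.IP, error) {
	c.mu.Lock()
	until, suppressed := c.failed[host]
	c.mu.Unlock()
	if suppressed && time.Now().Before(until) {
		// Answer from the negative cache without touching the DNS server.
		return nil, &net.DNSError{Err: "negative-cached", Name: host, IsNotFound: true}
	}

	ips, err := net.DefaultResolver.LookupIP(ctx, "ip4", host)
	if err != nil {
		c.mu.Lock()
		c.failed[host] = time.Now().Add(c.ttl) // suppress retries for ttl
		c.mu.Unlock()
		return nil, err
	}
	return ips, nil
}

func main() {
	c := newNegCache(30 * time.Second)
	for i := 0; i < 3; i++ {
		_, err := c.Lookup(context.Background(), "my-not-existing-record.example.com")
		fmt.Println("attempt", i, "->", err) // only the first attempt hits DNS
	}
}

A real implementation would likely combine this with the retry-rate limit mentioned above, but even a short negative TTL bounds the query rate per bad name.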

@mjozefcz

mjozefcz commented Oct 31, 2023

We're experiencing the same behavior. With a few invalid or temporarily invalid ExternalName service backend configurations, we noticed tons of messages like this and a huge number of DNS calls.

We tested the same scenario with Traefik as the ingress controller: no issue at all, just a 502 response on the client call.

@tao12345666333
Member

/reopen

@k8s-ci-robot
Contributor

@tao12345666333: Reopened this issue.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot reopened this Feb 17, 2024
@k8s-ci-robot k8s-ci-robot added the needs-triage label Feb 17, 2024
@k8s-ci-robot
Contributor

This issue is currently awaiting triage.

If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot k8s-ci-robot closed this as not planned (won't fix, can't repro, duplicate, stale) Mar 18, 2024
@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
