
TXT records with multiple targets are not handled properly #2762

Closed
golx opened this issue May 17, 2022 · 8 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@golx

golx commented May 17, 2022

What happened:
I have multiple TXT records for one of my domains, and some of them are not managed by the external-dns TXT registry. This results in TXT endpoints with several targets coming back from the DNS provider, but the TXT registry only takes the first target:
https://github.com/kubernetes-sigs/external-dns/blob/master/registry/txt.go#L107
This results in either:

  • the A record not being updated by external-dns, because it fails to "see" the TXT record it looks for when the managed target is not the first one in the list,
  • or a TXT record unrelated to external-dns being removed by external-dns, if the external-dns-managed TXT target is the first one in the list.

What you expected to happen:
The TXT registry should iterate over all TXT targets instead of just picking the first one.
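
Roughly what I have in mind, as a standalone illustrative sketch (the txtRecord type, the ownedHeritageTarget helper and the sample values are made up for this example and are not the actual registry code; only the heritage=external-dns marker reflects the format external-dns writes):

package main

import (
	"fmt"
	"strings"
)

// txtRecord mirrors what the registry gets back from the provider: one DNS
// name whose TXT record set carries several values (targets).
type txtRecord struct {
	DNSName string
	Targets []string
}

// ownedHeritageTarget scans every target instead of only Targets[0] and
// returns the first value carrying the external-dns heritage marker;
// unrelated TXT values (SPF, site verification, ...) are skipped.
func ownedHeritageTarget(r txtRecord) (string, bool) {
	for _, t := range r.Targets {
		v := strings.Trim(t, `"`)
		if strings.HasPrefix(v, "heritage=external-dns") {
			return v, true
		}
	}
	return "", false
}

func main() {
	r := txtRecord{
		DNSName: "myapp.example.com",
		Targets: []string{
			`"v=spf1 include:mailgun.org ~all"`,
			`"heritage=external-dns,external-dns/owner=default"`,
		},
	}
	if v, ok := ownedHeritageTarget(r); ok {
		fmt.Printf("%s is owned by external-dns: %s\n", r.DNSName, v)
	} else {
		fmt.Printf("%s has no external-dns heritage target\n", r.DNSName)
	}
}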

How to reproduce it (as minimally and precisely as possible):
Create a TXT record in addition to the one created by external-dns.

Environment:

  • External-DNS version (use external-dns --version): 0.11.1
  • DNS provider: DigitalOcean
  • Others:
@golx golx added the kind/bug Categorizes issue or PR as related to a bug. label May 17, 2022
@cuppett

cuppett commented Jul 30, 2022

I think I see this as well in AWS.

I have a zone with multiple existing TXT entries on the root domain:

        {
            "Name": "example.com.",
            "Type": "TXT",
            "TTL": 86400,
            "ResourceRecords": [
                {
                    "Value": "\"v=spf1 a mx a:completeupdates.com ~all\""
                },
                {
                    "Value": "\"google-site-verification=pKt0HRP...aomM\""
                },
                {
                    "Value": "\"google-site-verification=wsObL_...3gWME-Bj5E\""
                }
            ]
        },

and when it tries to make its updates, I get this in the log:

time="2022-07-30T00:59:57Z" level=info msg="Desired change: CREATE example.com A [Id: /hostedzone/ZZZZZZZZZZZ]"
time="2022-07-30T00:59:57Z" level=info msg="Desired change: CREATE example.com TXT [Id: /hostedzone/ZZZZZZZZZZZ]"
time="2022-07-30T00:59:57Z" level=error msg="Failure in zone example.com. [Id: /hostedzone/ZZZZZZZZZZZ]"
time="2022-07-30T00:59:57Z" level=error msg="InvalidChangeBatch: [Tried to create resource record set [name='example.com.', type='TXT'] but it already exists]\n\tstatus code: 400, request id: f31254aa-a72f-4e80-8008-c6fdda55774e"

@nickmonad

I believe I am also seeing this issue, or one quite similar. If a TXT record is created ahead of time (for example, the SPF record Mailgun requires when our application needs to send mail: TXT myapp "v=spf1 include:mailgun.org ~all"), external-dns reports,

level=warning msg="Preexisting records exist which should not exist for creation actions." dnsName=myapp.domain.com domain=domain.com recordType=TXT

and it keeps creating new TXT records containing "heritage" information on every polling loop, indefinitely.

@pjelar

pjelar commented Nov 26, 2022

I'm seeing this issue with the following setup:

Environment:

  • External-DNS version (use external-dns --version): 0.13.1
  • DNS provider: DigitalOcean

I only have one nginx ingress.

spec:
  ingressClassName: nginx
  rules:
  - host: api.mydns.com
    http:
      paths:
      - backend:
          service:
            name: myservice
            port:
              number: 8000
        path: /
        pathType: Prefix

The DNS record is created correctly, but on every loop external-dns creates another TXT record and complains that one already exists.

@cuppett
Copy link

cuppett commented Nov 27, 2022

I found this workaround:

#449 (comment)

Using a --txt-prefix to disambiguate and keep these records separate makes the behavior predictable here (at the cost of a few more records). For example, --txt-prefix=extdns- makes external-dns look for and create its ownership TXT records under extdns-<name>, so they no longer share a name with the pre-existing TXT records.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 25, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Mar 27, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot k8s-ci-robot closed this as not planned (won't fix, can't repro, duplicate, stale) Apr 26, 2023
@k8s-ci-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
