Route53 records not updating after upgrading to 1.6 #2496

Closed
sethpollack opened this issue May 5, 2017 · 8 comments

@sethpollack
Contributor

sethpollack commented May 5, 2017

I had an issue upgrading from 1.5 to 1.6 and had to rebuild my cluster. When the new cluster came up, Route53 records stopped updating and it seems like the old records were still there. I tried deleting the old records, but it still won't update them anymore.

Here is an example of how I am setting the records:

apiVersion: v1
kind: Service
metadata:
  name: app
  annotations:
    dns.alpha.kubernetes.io/internal: app.example.com
spec:
  type: LoadBalancer
  # placeholder selector/ports so the example is a complete Service
  selector:
    app: app
  ports:
  - port: 80
@justinsb
Member

justinsb commented May 5, 2017

@geojaz @chrislovecnm the issue here is that after #2468 we're mapping ingress records, and they're conflicting with service records that had been set up to work around the fact we weren't mapping ingress records previously.

I think we need to do one of the following:

  • revert
  • document
  • make it optional but easy (a configuration option?)

Long term (3-6 months) I want to move to external-dns, which probably means making the DNS controller selectable; I don't think we should force either dns-controller or external-dns on people.
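
To make the conflict concrete, here is a rough sketch (the hostnames and resource names are made up, not taken from anyone's cluster): the workaround from before #2468 was to hang the hostname off a Service annotation, and after #2468 the same hostname is also derived from the Ingress rule, so both objects now produce records for the same name.

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  annotations:
    # pre-#2468 workaround: publish the hostname via the service annotation
    dns.alpha.kubernetes.io/external: www.example.com
spec:
  type: LoadBalancer
  selector:
    app: ingress-nginx
  ports:
  - port: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: www
spec:
  rules:
  # post-#2468 the ingress watch also creates a record for this host
  - host: www.example.com
    http:
      paths:
      - backend:
          serviceName: web
          servicePort: 80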

@geojaz
Member

geojaz commented May 5, 2017

Dang, that's a bummer. Sorry about that.

If I understand correctly, things are OK as long as you don't have old service records hanging around that were created before the ingress watch was added? In that case, I think it's fine to put a flag on it, make it optional, and document usage. I can probably look at it tomorrow if this is acceptable.

And then external-dns is a longer-term thing to think about...

@sethpollack
Contributor Author

No, I think the issue is that when you have hosts in an ingress that are not managed by Route53, it breaks the batch updates and nothing gets updated.
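
For example (a minimal sketch; the hostname is made up), an ingress rule whose host lives in a zone dns-controller does not manage gets picked up by the ingress watch, and the failing change can take the whole Route53 batch down with it:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app
spec:
  rules:
  # host outside the Route53 zones the cluster manages
  - host: app.some-external-domain.net
    http:
      paths:
      - backend:
          serviceName: app
          servicePort: 80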

@sethpollack
Contributor Author

@geojaz I fixed it locally by adding the --watch-ingress=false flag to my deployment.

@kerin

kerin commented May 9, 2017

@sethpollack where did you specify --watch-ingress=false? I've just run into this: our cluster uses wildcard DNS entries all pointing to a single ELB, which routes to nginx-ingress. After upgrading to 1.6 I now have a DNS entry for each ingress rule, resolving to a private IP, as there's no per-service load balancer.

Could I even just delete dns-controller entirely? From what I've read I don't think we need it at all, but I don't know if other parts of kops/kubernetes depend on it in 1.6...

@sethpollack
Contributor Author

sethpollack commented May 9, 2017

I edited the deployment and added that flag to the command. Not sure about removing it entirely; that's probably a better question for @justinsb

kubectl -n kube-system edit deployment dns-controller

      containers:
      - command:
        - /usr/bin/dns-controller
        - --dns=aws-route53
        - --zone=kubvrnetes.com
        - --zone=*/*
        - --watch-ingress=false
        - --v=4

k8s-github-robot pushed a commit that referenced this issue Aug 30, 2017
Automatic merge from submit-queue

Adds DNSControllerSpec and WatchIngress flag

This PR is in reference to #2496, #2468 and the issues referenced there relating to use of the watch-ingress flag.

This PR attempts to rectify this situation and gives users who want it the option to turn on watch-ingress without forcing it on them. It also emits a warning to the logs about potential side effects.

Includes notes in `docs/cluster_spec.md` to explain.
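
For reference, per docs/cluster_spec.md the option ends up looking roughly like this in the cluster spec (a sketch only; field names may vary by kops version, so check the docs for your release):

spec:
  externalDns:
    # opt in to ingress watching instead of having it forced on
    watchIngress: true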
@chrislovecnm
Contributor

What is the status on this issue? Did we figure it out?

@sethpollack
Contributor Author

yup
