Route53 issue #1386
I found the IP address in ./upup/pkg/fi/cloudup/dns.go: PlaceholderIP = "203.0.113.123"
I don't understand what the error is here. Is it that the placeholder DNS records are not being replaced?
Here are the records being created:
Still not happy.
This is happening to me on 1.5.0-alpha3. If I use an ELB for my master nodes, all the Route53 DNS entries (except for api.$NAME) get set to 203.0.113.123. The issue does not happen every time.
I had this occur on version 1.5.0-alpha4, and all entries were set to 203.0.113.123. I just tried again using 1.5.0-beta1: the api entry is set to an ELB, with the other three entries pointing at the 203.0.113.123 IP. The most recent issue with the beta kops was related to an inability of the masters / nodes to reach their DNS servers as specified in the DHCP scope options. Once I fixed this, I terminated the instances and let the autoscaler recreate them. Everything then correctly registered in DNS and the ELB.
@obsequiouswoe so your AWS kung fu is awesome. Can you ELI5 for me? DNS is my weak point in life.
@chrislovecnm the issue I saw was that the newly created instances were not able to connect to their assigned DNS servers. I run a custom DHCP scope which assigns DNS servers in a different VPC, so I needed to update the route table to get the relevant range routed via the peering connection. Once this was done, I terminated the instances so that they would be re-created by the autoscaler. This allowed them to see DNS, and once they could, they updated their entries in Route53.
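The route-table fix described above can be sketched with the AWS CLI. Everything here is hypothetical: the route table ID, peering connection ID, and CIDR are placeholders I made up for illustration, not values from this thread.

```shell
# Hypothetical sketch of the route-table fix: send the DNS server range
# through the VPC peering connection. All IDs and CIDRs are placeholders.
ROUTE_TABLE_ID="rtb-0123456789abcdef0"   # route table used by the cluster subnets
PEERING_ID="pcx-0123456789abcdef0"       # peering connection to the DNS VPC
DNS_CIDR="10.1.0.0/16"                   # range the DHCP-assigned DNS servers live in

CMD="aws ec2 create-route --route-table-id $ROUTE_TABLE_ID --destination-cidr-block $DNS_CIDR --vpc-peering-connection-id $PEERING_ID"

# Print the command for review; run it (e.g. via eval) only once the IDs are real.
echo "$CMD"
```

Building and echoing the command first is just a safety habit for a sketch like this; with real IDs you would run `aws ec2 create-route ...` directly.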
I am experiencing the same issue with kops Version 1.5.0-beta1 (git-b419f20).
@obsequiouswoe what is the fix, and how do we document it? DNS is always hard :(
I think I understand what's happening here:
I think this is true because the issue always seems to resolve itself after about 15 minutes. I have not done a code dive to verify. |
That is awesome! You are correct: DNS records take a long time to create and propagate. We pre-create records with the 203.0.113.123 address, and the dns-controller container then sets up the DNS for us correctly. The reason that we have to use DNS is two-fold.
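As a rough illustration of the mechanism described above, a hypothetical helper can tell the pre-created placeholder apart from a real record. The script, its names, and the use of `dig` are my assumptions for illustration; none of it is part of kops.

```shell
# Hypothetical helper: report whether dns-controller has replaced the
# pre-created placeholder record yet. Assumes `dig` is installed.
PLACEHOLDER="203.0.113.123"

is_placeholder() {
  [ "$1" = "$PLACEHOLDER" ]
}

check_record() {
  ip=$(dig +short "$1" | head -n1)
  if is_placeholder "$ip"; then
    echo "$1: still the placeholder; dns-controller has not updated it yet"
  else
    echo "$1: updated to $ip"
  fi
}

# Only does a live lookup if a record name is passed and dig is available.
if [ -n "${1:-}" ] && command -v dig >/dev/null 2>&1; then
  check_record "$1"
fi
```

Polling a record like this every minute or so makes it obvious when the ~15-minute window people report in this thread has actually elapsed.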
@weaseal can I ask you to drop that into the documentation? We need a FAQ page. Or should it go into a DNS doc ... You are the customer. Where would you expect it?
@chrislovecnm @weaseal I faced similar issues. I have currently been waiting 10 minutes for this process. Should I wait longer? I am facing this issue with v1.5.2. Any suggestions or opinions?
@voyalab I echo your experience; I've waited 15 minutes for DNS to update / propagate correctly. I'm not aware of any workarounds for this.
Seeing the same issue where the DNS records are stuck at the placeholder.
Hi Eric,
Got the same issue now.
When I set up a separate subdomain zone and add the NS records in the parent domain, I hit this problem. However, if I leave everything in the parent domain, then all the instances are registered fine in the ELB. I'm on the release branch now; I was doing everything from the master branch before, and it was giving us all sorts of problems.
I understand the purpose of the placeholder IP and such; just chiming in to point out that the output mentions there was an error talking to the placeholder IP. It seems like it would be relatively straightforward to notice the placeholder IP and print a message saying that the masters haven't updated DNS yet.
Hitting this error, but I have confirmed that my domain's NS records match my Route 53 NS records. Any other diagnostics I can try? (kops 1.5.3)
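That delegation check can be sketched as follows: normalize both NS lists and compare them, ignoring order, case, and trailing dots. The `dig` / `aws route53` commands in the comment are only illustrative, and the domain and hosted zone ID are placeholders.

```shell
# Hypothetical delegation check: the parent zone's NS records should match
# the NS set of the Route 53 hosted zone.
normalize_ns() {
  tr '[:upper:]' '[:lower:]' | sed 's/\.$//' | sort
}

ns_match() {
  a=$(printf '%s\n' "$1" | normalize_ns)
  b=$(printf '%s\n' "$2" | normalize_ns)
  [ "$a" = "$b" ]
}

# In practice the two inputs would come from something like (IDs are placeholders):
#   parent=$(dig +short NS example.com)
#   zone=$(aws route53 list-resource-record-sets --hosted-zone-id Z0000000000000 \
#     --query "ResourceRecordSets[?Type=='NS'].ResourceRecords[].Value" --output text | tr '\t' '\n')
ns_match "ns-1.example.net.
NS-2.example.org" "ns-2.example.org.
ns-1.example.net" && echo "delegation looks consistent"
```

Normalizing before comparing matters because registrars and Route 53 often disagree on case and the trailing dot even when the delegation is actually correct.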
@richburdon I would open another issue or reach out on Slack.
Thanks: #2384 (I tried Slack...)
@danopia great point; would you mind filing an issue for that product enhancement? If nobody minds, I am going to close this issue in a couple of days.
I am closing.
Hi @chrislovecnm, could you either re-open this ticket or direct me to a possible solution to this problem? I can view my master node just fine with
I believe this is a DNS issue that folks, myself included, are still observing. |
I'm having the same issue as @natemurthy: only the master instance is recognized by kops and kubectl, but I can see all nodes running in my Google Cloud console.
I am opening an issue to document diagnosis of Route53 / Google DNS problems. I am not going to reopen this issue.
Hey all. I opened an issue to document how to diagnose problems such as these; please comment on #3888.
203.0.113.123: this is an error I am remembering in my dreams. I am just creating a cluster with kops create, and the subdomain is good, but I am stuck at this IP forever. How did you solve this? Am I dealing with an improper version of kops?
All that in #3888 is good, but it's not helping me.
This still seems to be a problem with kops Version 1.9.1. Even after waiting for hours for the DNS update (using AWS Route53), the k8s API DNS record still points to the placeholder IP 203.0.113.123, and cluster validation fails.
In one case, even after leaving the cluster overnight, the placeholder DNS records had still not been replaced by the true IPs, so eventually I deleted the cluster... kops and AWS just don't work that well together!
In my experience, if the versions of kops, kubectl, and the Kubernetes control plane differ, kops will never update the Route53 entries; you need to have the same version for all. At least that was the case for me.
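To sanity-check the version-alignment point above, one could compare the major.minor portions of the versions involved. This is a hypothetical helper; the version strings below are made-up examples, and in practice they would be parsed out of `kops version` and `kubectl version`.

```shell
# Hypothetical helper: check that two semantic versions share the same
# major.minor, as suggested for kops / kubectl / control-plane alignment.
same_minor() {
  a=$(printf '%s' "$1" | cut -d. -f1,2)
  b=$(printf '%s' "$2" | cut -d. -f1,2)
  [ "$a" = "$b" ]
}

# Example values only; fill these in from `kops version` and `kubectl version`.
if same_minor "1.9.1" "1.9.6"; then
  echo "minor versions aligned"
else
  echo "minor versions differ; align kops, kubectl, and the control plane"
fi
```

Comparing only major.minor (rather than the full string) matches the usual Kubernetes convention that patch releases within a minor version are compatible.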
I faced the same issue; after trying the following steps, it got fixed:
I am running on master:
A few times back to back with a delete, I noticed that several Route53 records are set to an invalid IP address. The API, etcd, and internal API records are all set to the incorrect public IP address.