update docker's resolv.conf file with options ndots:5 #10266
Conversation
GCE e2e build/test passed for commit 9d00fb0aad750800e15c7b89335286b364787fc6.
In this PR, we simply add the "options ndots:3" to the docker generated resolv.conf file.
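For concreteness, a pod container's /etc/resolv.conf after a change like this would carry the extra options line; the nameserver address and search domains below are illustrative values, not taken from this PR:

```
nameserver 10.0.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
```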
GCE e2e build/test passed for commit e7bfa71be37ad070ff19629c5e0da3dc7d76db33.
The challenge is now that user code can race against the resolver - if someone has a very fast script that makes a curl call, we probably won't beat them. This makes it possible to use ndots, so it's an improvement, but I worry that it will be unreliable for users.
In reality, all containers of the pod share the same DNS resolv.conf file, because docker sees that the other containers have marked the pause container as the container to use for the networking namespace. The pause container gets created first, and we update the resolv.conf file for the pause container, so we should not have this problem. For the pause container too (whose only purpose is to hold the networking namespace), the resolv.conf file gets updated in the same second that it is created, per kubelet log timestamps. So pause serves two important functions:
My guess is that the code we had to create a resolv.conf per container of the pod was probably never working, since docker always uses the pause container's resolv.conf file for the other containers of the same pod. This is similar to how docker uses the host's resolv.conf file when DNS options are not specified.
Is there no case today where we recreate the infra pod without tearing down the old containers?
If the infra pod is torn down, the networking namespace goes with it, and the old containers of the same pod have to be recreated. You can verify this by manually killing a pause container and observing that the containers of that pod restart.
(force-pushed from 4003cd3 to 8d0d712)
GCE e2e build/test passed for commit 4003cd350411d7ec02254cbf53eebe3c43231189.
GCE e2e build/test passed for commit 8d0d71279e66e1e9b2378d520e0c1473187da311.
@@ -28,6 +28,7 @@ import (
 	"strconv"
 	"strings"
 	"sync"
+	//"syscall"
Remove this
note to self: tests, address comments, add a lock to not update the same file twice.
}

func appendToFile(filePath, stringToAppend string) error {
	f, err := os.OpenFile(filePath, os.O_APPEND|os.O_WRONLY, 0644)
Will this fail if the file doesn't exist (due to another issue, like docker getting stopped and tearing down the container) and if so will it be a clean error?
What would prevent us from updating the file? Kubelet killed during restart? I would expect only one execution path here.
I haven't done review yet, but pathologically, we need to set ndots to 5 if …
(force-pushed from d5951d9 to e001aa2)
GCE e2e build/test passed for commit 7e53e5ccbb7dbf6616a5b26749a07ff82eeb8c2b.
(force-pushed from 26ecc0e to e68d5dd)
@thockin @smarterclayton Thanks for the PR comments. I think I have addressed them all.
LGTM. We can reduce the test case back to one when we do a new dnsutils container.
@smarterclayton if you're not around, I'll get @dchen1107 to provide 2nd LGTM
@thockin I already created a new dnsutils container. It is part of the PR. It's called jessie-dnsutils. I didn't quite follow your new-container comment.
@k8s-bot test this please |
I want to build a new dnsutils container that is not based on debian …
ok to test |
@thockin ok, thanks
LGTM
(force-pushed from e68d5dd to 23caf44)
@k8s-bot test this please |
update docker's resolv.conf file with options ndots:5
http://kubekins.dls.corp.google.com/view/Critical%20Builds/job/kubernetes-e2e-gce/7202/ Jenkins GCE e2e passed.
This change appears to have broken kubernetes in my environment w.r.t. accessing external DNS for reasons I do not fully understand yet. Is there a way to make this optional?
@ghaskins only if you don't want kubernetes DNS to work. You can probably reduce it from 5 to 3, but that may change again in the future...
This change seems to make external DNS lookups very slow.
Is there any way I can use Kubernetes DNS without slowing down external DNS resolution?
Can you characterize "slow"? I see low milliseconds for DNS lookups, even …
I don't have exact numbers to quantify the slowness. This is primarily due to two reasons IMO
kube-dns has a dnsmasq in front of it doing caching, supposedly.
There is dnsmasq caching in 1.3 - could you run a describe on your kube-dns RC? There are two questions in my mind that I need to verify …
I understand that there is dnsmasq caching. Probably that's why I didn't see much difference in dnsperf results even after enabling caching on kube-dns. This is my kube-dns RC:
Edit: Removed irrelevant information.
Proof-of-concept PR for issue #10161.
Supersedes PR #10241.