Support custom `dnsPolicy` and `dnsConfig` #62
Comments
See also http://blog.kubernetes.io/2017/04/configuring-private-dns-zones-upstream-nameservers-kubernetes.html for some interesting details on this issue. We should consider allowing the controller Destination service to be configured with a ConfigMap like that.
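For reference, the pattern described in the linked blog post configures kube-dns stub domains and upstream nameservers through a ConfigMap. A minimal sketch of that shape — the `acme.local` domain and the nameserver IPs are placeholder values:

```yaml
# Sketch of the kube-dns ConfigMap pattern from the linked post.
# The stub domain and nameserver IPs below are placeholders.
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  stubDomains: |
    {"acme.local": ["1.2.3.4"]}
  upstreamNameservers: |
    ["8.8.8.8", "8.8.4.4"]
```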
`conduit inject` should inject into pods that use incompatible `dnsPolicy` and `dnsConfig`
Based on the solution to #366, we should change course. Instead of documenting that Conduit could break pods that use an incompatible DNS configuration, we should by default avoid injecting the proxy into such pods, and we should provide a documented way to get the proxy working with them. For example, we could document that one has to remove the incompatible DNS settings. And/or we could implement and document an explicit pod annotation that overrides our default assumption that such DNS configurations are incompatible.
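The annotation idea might look something like the sketch below. The annotation name `conduit.io/dns-override` is purely hypothetical — no such annotation exists in Conduit; it only illustrates the proposed opt-in:

```yaml
# Hypothetical opt-in annotation; the name is illustrative only and
# is not an existing Conduit API.
apiVersion: v1
kind: Pod
metadata:
  name: example
  annotations:
    conduit.io/dns-override: "inject-anyway"
spec:
  dnsPolicy: "Default"
  containers:
  - name: app
    image: example/app:latest
```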
`conduit inject` should inject into pods that use incompatible `dnsPolicy` and `dnsConfig`

`conduit inject` should not inject into pods that use incompatible `dnsPolicy` and `dnsConfig`
Based on the investigation in #392 and the current direction described in the conversation at the end of #360, I think it's best to avoid doing anything about this for 0.3. Basically, if we put in the effort, we should be able to transparently support any DNS policy, automatically. It's unfortunate that 0.3 won't do that, but this isn't the most significant DNS transparency issue in 0.3, and we might fix the overall issue in 0.4.
We should hold off on this until #421 is done.
For completeness, the errors that @jakerobb reported on Slack were:
We just updated our trust-dns dependency at the end of last week, so we'll retest this and work with upstream to address the issue if it still exists.
Maybe the rest of you already realize this, but I've found a workaround for this issue that works as long as the hosts you need to resolve have static IP addresses. In the pod template section of your deployment YAML, under the `spec` element, you can create a `hostAliases` element and define entries, which will land in the container's hosts file.

Hope this helps someone!
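The workaround above, as a rough sketch of the pod template — the IP, hostname, and image are placeholder values:

```yaml
spec:
  hostAliases:
  - ip: "10.1.2.3"            # placeholder static IP of the host to pin
    hostnames:
    - "foo.example.com"       # placeholder hostname to resolve to it
  containers:
  - name: app
    image: example/app:latest
```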
Great to know. Thanks @jakerobb
The most likely reason it failed: maybe Trust-DNS won't fall back to resolving the single-label unqualified name X as "X." after it tries everything else in the search path? I suggest we add this case to Trust-DNS's unit tests and see what happens.
Maybe @bluejekyll knows? |
The issue @briansmith is mentioning around search order and ndots was fixed in both 0.8.2 and 0.9 of the Resolver. There is another issue of NameServer starvation that was only fixed in 0.9: https://github.com/bluejekyll/trust-dns/pull/457. Which version are you on currently?
@bluejekyll this error was observed using 0.8.2 (I checked). I'm planning on doing some testing to see whether this also occurs with a newer version.
Interesting, after updating I see:

```
ERR! admin={bg=resolver} conduit_proxy::control controller error: Error attempting to establish underlying session layer: operation timed out after 3s
WARN trust_dns_proto::xfer error notifying wait, possible future leak: Err(ResolveError { inner: ProtoError { inner: request timed out })
ERR! admin={bg=resolver} conduit_proxy::control controller error: Error attempting to establish underlying session layer: operation timed out after 3s
WARN trust_dns_proto::xfer error notifying wait, possible future leak: Err(ResolveError { inner: ProtoError { inner: request timed out })
```

in the logs from a pod with spec:

```yaml
dnsPolicy: "Default"
containers:
- ...
```
I'm not familiar with this. Edit: I just reviewed the docs.
Are these settings being passed directly into the trust-dns-resolver config? Meaning, is there some ordering required among the NameServers in the Resolver instance?
@briansmith the results above are from a build of Conduit with #1032. I'm planning on doing some additional digging into this issue.
Never mind, I just re-ran the tests with the master build of conduit; it looks like cluster-local names were always broken with custom DNS configurations.
To clarify, they weren't working yet for custom configurations that aren't using the default settings?
@briansmith Yes, that's correct. Updated my original comment to make that clearer.
@bluejekyll This is, in fact, what I was going to start looking into next.
If constant ordering is required, we'll have to look into adding that as an option to the resolver. If this is undesirable, we should disable this.
@bluejekyll I did the test again using a custom build.
In the configuration above in #62 (comment), there's only one nameserver.
Is this issue still relevant? |
@wmorgan I'm not aware of anything that's happened recently that would fix it, but would have to test to confirm...
trust-dns-resolver is a more complete implementation. In particular, it supports CNAMEs correctly, which is needed for PR linkerd#764. It also supports /etc/hosts, which will help with issue linkerd#62. Use the 0.8.2 pre-release, since 0.8.2 hasn't been released yet; it was created at our request. Signed-off-by: Brian Smith <brian@briansmith.org>
We're not bypassing DNS any longer, and statefulsets work properly now.
For background on Kubernetes `dnsPolicy` and `dnsConfig`, see https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pods-dns-policy

For any pod managed by Conduit, the DNS configuration is completely bypassed. Assuming we want to continue to do this, we should:

- In `conduit inject`, warn when a pod contains `dnsConfig` or a non-default `dnsPolicy`. (Note that the default `dnsPolicy` is not "Default"; the default is "ClusterFirst.")

We might also consider whether we want to actually honor the pod's DNS policy and/or the DNS config. This would probably require us to implement DNS in the proxy.
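For reference, a pod using the kind of custom DNS settings under discussion might look like the sketch below. The nameserver, search domain, and image are placeholder values, and the `dnsConfig` field requires a Kubernetes version that supports it (it was introduced well after this issue was opened):

```yaml
spec:
  dnsPolicy: "None"             # opt out of the ClusterFirst default entirely
  dnsConfig:
    nameservers:
    - 1.2.3.4                   # placeholder upstream nameserver
    searches:
    - my-ns.svc.cluster.local   # placeholder search domain
    options:
    - name: ndots
      value: "2"
  containers:
  - name: app
    image: example/app:latest
```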