[WIP] DNS in kube-proxy #11599
Conversation
Thanks for your pull request. It looks like this may be your first contribution to a Google open source project, in which case you'll need to sign a Contributor License Agreement (CLA). 📝 Please visit https://cla.developers.google.com/ to sign. Once you've signed, please reply here (e.g. `I signed it!`) and we'll verify.
Can one of the admins verify that this patch is reasonable to test? (Reply "ok to test", or if you trust the user, reply "add to whitelist".) If this message is too spammy, please complain to ixdy.
// TODO: return a real TTL
// TODO: support reverse lookup (currently, kube2sky doesn't set this up)
// TODO: investigate just using add/remove/update events instead of receiving
// the whole sevice set each time
service
@googlebot I signed it!
CLAs look good, thanks!
FYI this was not quite decided on, but I'll take a look in a few days time. @uluyol too
@@ -0,0 +1,455 @@
package proxy
Pull this out into its own package: `pkg/dns` or `pkg/proxy/dns`.
I think I'm pretty convinced on my end that this is the right way to handle …

A problem arises if we partition the service space, but even then we can …

The overhead per node is fairly low as well.
Haven't looked at details but in favor of idea.
newEntry := normalServiceRecord{portRecords: make(map[string]skymsg.Service)}
srvPath := serviceSubdomain(name, "", "", "")
for _, port := range service.Spec.Ports {
	newEntry.portRecords[makeServicePortSpec(port)] = skymsg.Service{
You have several of these that set a bunch of default values. Use a constructor.
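A constructor along the lines the reviewer suggests would centralize those repeated defaults. This is a hedged sketch only: the struct here is an illustrative stand-in for `skymsg.Service`, and the default values are assumptions, not the PR's actual code.

```go
package main

import "fmt"

// Illustrative stand-in for skymsg.Service (the real type lives in
// github.com/skynetservices/skydns/msg); fields trimmed for brevity.
type Service struct {
	Host     string
	Port     int
	Priority int
	Weight   int
	TTL      uint32
}

// newSkyService is a hypothetical constructor that centralizes the
// repeated default values instead of restating them at each call site.
func newSkyService(host string, port int) Service {
	return Service{
		Host:     host,
		Port:     port,
		Priority: 10, // defaults assumed for illustration
		Weight:   10,
		TTL:      30,
	}
}

func main() {
	svc := newSkyService("10.0.0.1", 53)
	fmt.Println(svc.Host, svc.Port, svc.TTL)
}
```

With this, each `portRecords[...] = skymsg.Service{...}` site collapses to a one-line call, and a later change to the defaults happens in one place.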
Force-pushed from 209c2fa to d6a8e00.
I switched over to using the cache. Essentially, the new code listens for service updates, converts them into SkyDNS entries, and stores those in the cache (indexed), pulling in endpoint information as needed. It also listens for endpoint updates, converting those into SkyDNS services as well (when appropriate). It seems to perform fairly well compared to both the first version and the current pod-based implementation. I added a couple of new methods to the cache code to facilitate bulk operations on the cache (add, replace entries matching an index, delete entries matching an index), since a single service yields multiple SkyDNS entries. I still need to add support for a couple of wildcard options, but other than that it mirrors the responses of the current setup.
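A minimal sketch of the flow described above, with hypothetical names and simplified types (the real code converts `api.Service` objects into `skymsg.Service` entries): one service update fans out into several cache entries that share an index key, which is why the bulk operations are needed.

```go
package main

import "fmt"

// Hypothetical, simplified shapes standing in for api.ServicePort and
// the cached SkyDNS entries.
type ServicePort struct {
	Name string
	Port int
}

type Entry struct {
	Key  string // index key, e.g. "<name>.<namespace>"
	Host string
	Port int
}

// entriesForService converts one service update into the set of DNS
// entries for the cache: a base record plus one record per port, all
// sharing one index key so they can be replaced or deleted in bulk.
func entriesForService(name, namespace, clusterIP string, ports []ServicePort) []Entry {
	key := name + "." + namespace
	entries := []Entry{{Key: key, Host: clusterIP}}
	for _, p := range ports {
		entries = append(entries, Entry{Key: key, Host: clusterIP, Port: p.Port})
	}
	return entries
}

func main() {
	es := entriesForService("db", "default", "10.0.0.7",
		[]ServicePort{{Name: "sql", Port: 3306}})
	fmt.Println(len(es)) // base record plus one port record
}
```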
Force-pushed from 1f19d34 to d0dcb35.
This commit adds in dependencies on more parts of skydns to facilitate a Kubernetes SkyDNS backend.
This commit adds a new interface, `BulkIndexer`, to the client cache package. `BulkIndexer` builds on `Indexer`, adding a bulk-add operation, as well as bulk-update and bulk-delete operations (based on a given index). These are useful for modifying sections of a store in an atomic fashion.
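The bulk semantics described in that commit message might look roughly like this toy, map-backed sketch. Method names and the store shape are illustrative assumptions, not the PR's exact `BulkIndexer` signatures (which build on the client cache's `Indexer`).

```go
package main

import "fmt"

// BulkStore groups entries under an index value; each bulk call
// touches the whole group at once, approximating "modify a section
// of the store atomically".
type BulkStore struct {
	byIndex map[string][]string
}

func NewBulkStore() *BulkStore {
	return &BulkStore{byIndex: map[string][]string{}}
}

// BulkAdd inserts several entries under an index value in one call.
func (s *BulkStore) BulkAdd(indexedValue string, objs []string) {
	s.byIndex[indexedValue] = append(s.byIndex[indexedValue], objs...)
}

// BulkReplace swaps every entry matching the index value in one step.
func (s *BulkStore) BulkReplace(indexedValue string, objs []string) {
	s.byIndex[indexedValue] = objs
}

// BulkDelete removes every entry matching the index value.
func (s *BulkStore) BulkDelete(indexedValue string) {
	delete(s.byIndex, indexedValue)
}

func main() {
	s := NewBulkStore()
	s.BulkAdd("db.default", []string{"a", "b"})
	s.BulkReplace("db.default", []string{"c"})
	fmt.Println(s.byIndex["db.default"]) // [c]
}
```

This matches the use case above: when a service changes, all of its SkyDNS entries (one per port) are replaced or deleted as a unit via their shared index value.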
Force-pushed from d0dcb35 to 59d4954.
This commit makes kube-proxy serve DNS requests. This is done with a custom SkyDNS backend that serves requests directly based on updates from Kubernetes about services and endpoints.
Force-pushed from 59d4954 to 48f9cde.
The currently "missing" parts are the following wildcard patterns, as well as TTLs that actually "age". For comparison, OpenShift's DNS implementation leaves the TTL at 30 (as this PR currently does), and doesn't implement wildcards at all. The wildcards supported in this code are prefix wildcards (e.g. `*.*.svc.cluster.local`), as well as using a wildcard instead of "svc". Not supported are wildcards in other positions. Wildcard support for the others could be added with new indices or simple loop-checking, but I wanted to confirm that they are desired (they don't seem to be mentioned in the Kubernetes DNS docs AFAICT; they're just a side-effect of using the SkyDNS etcd backend).
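For concreteness, the "simple loop-checking" option could look like the sketch below, where `*` matches exactly one label. This is a hypothetical helper for illustration, not the PR's index-based implementation.

```go
package main

import (
	"fmt"
	"strings"
)

// labelsMatch reports whether a dotted name matches a dotted pattern
// in which "*" stands for any single label. Both are compared
// label-by-label; lengths must agree.
func labelsMatch(pattern, name string) bool {
	p := strings.Split(pattern, ".")
	n := strings.Split(name, ".")
	if len(p) != len(n) {
		return false
	}
	for i := range p {
		if p[i] != "*" && p[i] != n[i] {
			return false
		}
	}
	return true
}

func main() {
	// A prefix wildcard, and a wildcard in place of "svc":
	fmt.Println(labelsMatch("*.*.svc.cluster.local", "db.default.svc.cluster.local"))
	fmt.Println(labelsMatch("db.default.*.cluster.local", "db.default.svc.cluster.local"))
}
```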
> The currently "missing" parts are the following wildcard patterns, as well …

We implicitly support wildcards beyond the length of the name (extra …

> The wildcards supported in this code are prefix wildcards (e.g. …
@@ -144,6 +145,14 @@ func (s *ProxyServer) Run(_ []string) error {
	}, 5*time.Second)
}

go func() {
	// Note: Cannot restart SkyDNS without leaking goroutines
	err := dns.ServeDNS(client)
Does ServeDNS block? If so we may want to make the next line fatalf instead
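The concern here is a common Go pattern: if a blocking serve loop runs in a goroutine and only returns on failure, merely logging the error leaves the process silently running without DNS. A runnable sketch of the failure path, with a stub standing in for `dns.ServeDNS` (the stubbed error message is made up for illustration):

```go
package main

import (
	"errors"
	"fmt"
)

// serveDNS is a stub for the PR's dns.ServeDNS: a blocking serve
// loop that returns only when it fails.
func serveDNS() error {
	return errors.New("listen udp :53: permission denied")
}

func main() {
	errCh := make(chan error, 1)
	go func() {
		// Because the serve loop returns only on failure, treat any
		// return as fatal rather than logging and carrying on.
		errCh <- serveDNS()
	}()
	if err := <-errCh; err != nil {
		// kube-proxy would call glog.Fatalf here, ending the process.
		fmt.Println("fatal:", err)
	}
}
```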
@karlkfi the idea was that kube-proxy already has a cache for the endpoints and service information, so it made sense to just reuse that.
I would rather not have the node watching / caching twice. The kube proxy …
Thinking abstractly here, perhaps prematurely: …
In the DNS case, we need to manage an index (fairly atomically), so the …
Is this still the best place to track the progress of getting a DNS service into Kubernetes?
We're discussing timing on it. For now, the standard path is still the …
I'm worried most about resources. Currently DNS watches all Services, all …

Other than that, having a local mirror of apiserver state (and an API to …

That said, why not jam it all into Kubelet? That also caches Service state …
I think we should stop mirroring pods :)
For instance we implemented the pod namespace without supporting the …
Sure, we don't have to mirror the whole pod structure, but it's still …
Why do you need the pod? I thought we specifically designed the DNS name …
Ahh, that's a good point - we could just parse the question in the pod …
https://github.com/openshift/origin/blob/master/pkg/dns/serviceresolver.go#L85
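The "parse the question" idea works because pod DNS names of the form `<ip-with-dashes>.<namespace>.pod.<zone>` encode the pod IP directly, so no pod watch or mirror is required. A hedged sketch of that parsing step (the helper name and exact label layout checked here are assumptions for illustration):

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// podIPFromQuestion decodes a pod query like
// "1-2-3-4.default.pod.cluster.local." into the IP 1.2.3.4,
// answering from the question alone.
func podIPFromQuestion(name string) (net.IP, bool) {
	labels := strings.Split(strings.TrimSuffix(name, "."), ".")
	if len(labels) < 3 || labels[2] != "pod" {
		return nil, false
	}
	ip := net.ParseIP(strings.ReplaceAll(labels[0], "-", "."))
	return ip, ip != nil
}

func main() {
	ip, ok := podIPFromQuestion("10-0-0-7.default.pod.cluster.local.")
	fmt.Println(ip, ok) // 10.0.0.7 true
}
```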
This PR has had no activity for multiple months. Please reopen it once you rebase and push a new commit.
@sross I have an alternate implementation in …
Make that @DirectXMan12 😄
Oops
Is there any plan to revive this issue?
Right now, maybe not. In OpenShift we're planning on delivering something for this, but it may be 1.4 or 1.5 before we reach agreement.
_Work In Progress_
This makes kube-proxy serve DNS as per #7469, using a custom SkyDNS backend. It functions similarly to the proxier part of kube-proxy -- it listens for updates to the service and endpoint list (using the same config listeners as proxier.go and loadbalancer.go), and converts those into SkyDNS service entries.
The responses should mirror the current SkyDNS setup.
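The config listeners mentioned above can be sketched as follows. The interface and type names here are approximations of the old kube-proxy config handlers, not the PR's exact signatures; the key property is that each handler receives the entire current object set on every change, not a delta (which is also what the TODO in the diff above questions).

```go
package main

import "fmt"

// Simplified stand-in for api.Service.
type Service struct {
	Name      string
	ClusterIP string
}

// ServiceConfigHandler approximates the proxier-style listener: the
// full service set is delivered on every update.
type ServiceConfigHandler interface {
	OnUpdate(services []Service)
}

// dnsTable is a toy handler that rebuilds its answers from scratch on
// each whole-set update, the same style a DNS backend would use.
type dnsTable struct {
	byName map[string]string
}

func (d *dnsTable) OnUpdate(services []Service) {
	d.byName = map[string]string{}
	for _, s := range services {
		d.byName[s.Name] = s.ClusterIP
	}
}

func main() {
	var h ServiceConfigHandler = &dnsTable{}
	h.OnUpdate([]Service{{Name: "db", ClusterIP: "10.0.0.7"}})
	fmt.Println(h.(*dnsTable).byName["db"]) // 10.0.0.7
}
```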