[WIP] DNS in kube-proxy #11599

Closed
DirectXMan12 wants to merge 3 commits into master from DirectXMan12:feature/dns-in-kube-proxy

Conversation

@DirectXMan12
Contributor

DirectXMan12 commented Jul 20, 2015

_Work In Progress_

This makes kube-proxy serve DNS as per #7469, using a custom SkyDNS backend. It functions similarly to the proxier part of kube-proxy -- it listens for updates to the service and endpoint list (using the same config listeners as proxier.go and loadbalancer.go), and converts those into SkyDNS service entries.

The responses should mirror the current SkyDNS setup.
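
A rough sketch of that conversion, for orientation only: the handler type, record shape, and import paths below are illustrative rather than taken from this PR, and per-port SRV records are left out. It just shows a service-config listener turning the full service list into SkyDNS-style records.

```go
package dns

import (
	"fmt"

	skymsg "github.com/skynetservices/skydns/msg"

	"github.com/GoogleCloudPlatform/kubernetes/pkg/api"
)

// dnsServiceHandler is a hypothetical listener registered on the same
// service config source that proxier.go uses.
type dnsServiceHandler struct {
	records map[string]skymsg.Service
}

// OnUpdate receives the full service list (the proxy config contract of the
// time) and rebuilds the SkyDNS-style records from scratch.
func (h *dnsServiceHandler) OnUpdate(services []api.Service) {
	next := make(map[string]skymsg.Service, len(services))
	for _, svc := range services {
		name := fmt.Sprintf("%s.%s.svc.cluster.local.", svc.Name, svc.Namespace)
		next[name] = skymsg.Service{
			Host: svc.Spec.ClusterIP, // cluster VIP answers the A query
			Ttl:  30,                 // fixed TTL, as discussed later in the thread
		}
	}
	h.records = next
}
```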

@googlebot


googlebot commented Jul 20, 2015

Thanks for your pull request. It looks like this may be your first contribution to a Google open source project, in which case you'll need to sign a Contributor License Agreement (CLA).

📝 Please visit https://cla.developers.google.com/ to sign.

Once you've signed, please reply here (e.g. I signed it!) and we'll verify. Thanks.


  • If you've already signed a CLA, it's possible we don't have your GitHub username or you're using a different email address. Check your existing CLA data and verify that your email is set on your git commits.
  • If you signed the CLA as a corporation, please let us know the company's name.

@googlebot googlebot added the cla: no label Jul 20, 2015

@k8s-bot


k8s-bot commented Jul 20, 2015

Can one of the admins verify that this patch is reasonable to test? (reply "ok to test", or if you trust the user, reply "add to whitelist")

If this message is too spammy, please complain to ixdy.

@DirectXMan12


Contributor

DirectXMan12 commented Jul 20, 2015

// TODO: return a real TTL
// TODO: support reverse lookup (currently, kube2sky doesn't set this up)
// TODO: investigate just using add/remove/update events instead of receiving
// the whole service set each time


@ncdc

ncdc Jul 20, 2015

Member

service

@DirectXMan12


Contributor

DirectXMan12 commented Jul 20, 2015

@googlebot I signed it!

@googlebot


googlebot commented Jul 20, 2015

CLAs look good, thanks!

@googlebot googlebot added cla: yes and removed cla: no labels Jul 20, 2015

@thockin


Member

thockin commented Jul 20, 2015

FYI this was not quite decided on, but I'll take a look in a few days' time.

@uluyol too


@@ -0,0 +1,455 @@
package proxy


@smarterclayton

smarterclayton Jul 21, 2015

Contributor

Pull this out into its own package pkg/dns or pkg/proxy/dns

@@ -146,6 +152,13 @@ func (s *ProxyServer) Run(_ []string) error {
}, 5*time.Second)
}
go func() {
err := proxy.ServeDNS(dnsHandler)


@smarterclayton

smarterclayton Jul 21, 2015

Contributor

This is typically in a go util.Forever loop (if ServeDNS doesn't spawn its own goroutines). If it does, this is fine.
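
A minimal sketch of that suggestion, assuming ServeDNS blocks and does not spawn its own goroutines (the reply below notes that SkyDNS in fact does); the logging call and restart interval are placeholders, not code from this PR:

```go
// Inside (*ProxyServer).Run, replacing the bare `go func()` shown in the diff.
go util.Forever(func() {
	if err := proxy.ServeDNS(dnsHandler); err != nil {
		glog.Errorf("DNS server exited: %v", err)
	}
}, 5*time.Second)
```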

@DirectXMan12

DirectXMan12 Jul 21, 2015

Contributor

SkyDNS spawns goroutines in Run() -- https://github.com/skynetservices/skydns/blob/master/server/server.go#L149

@smarterclayton

smarterclayton Jul 21, 2015

Contributor

Ok - add a comment to that effect here "cannot restart skydns without leaking goroutines"


func ServeDNS(handler *DNSHandler) error {
config := &skyserver.Config{
Domain: "cluster.local.",


@smarterclayton

smarterclayton Jul 21, 2015

Contributor

Should be domainSuffix

return fmt.Sprintf("%x", h.Sum32())
}
func (handler *DNSServiceHandler) OnUpdate(services []api.Service) {


@smarterclayton

smarterclayton Jul 21, 2015

Contributor

Why implement service lookup this way instead of simply using the cache directly to answer queries? Is it so you can handle the reverse record lookup?

I think that's an argument that we should be using client.Cache for the proxy instead of proxy.Config - that refactor is slightly larger, but that would dramatically reduce the code in the proxy to something more reasonable: cache.Reflector (indexer on cluster ip) for services, and a cache.Reflector for endpoints.

That would then allow us to answer queries in a single function vs having to materialize two separate structures.
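
A hedged sketch of the shape being described here, written against today's client-go package layout for brevity (in 2015 this lived under pkg/client/cache); the index name and key function are assumptions, not code from this PR:

```go
package dns

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/tools/cache"
)

// byClusterIP lets reverse lookups be answered straight from the store.
const byClusterIP = "byClusterIP"

func newServiceIndexer() cache.Indexer {
	return cache.NewIndexer(cache.MetaNamespaceKeyFunc, cache.Indexers{
		byClusterIP: func(obj interface{}) ([]string, error) {
			return []string{obj.(*v1.Service).Spec.ClusterIP}, nil
		},
	})
}

// A cache.Reflector fed by a ListWatch keeps the indexer current, and a DNS
// query then becomes a single lookup, e.g.:
//   svcs, _ := indexer.ByIndex(byClusterIP, "10.0.0.10")
```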


@smarterclayton

smarterclayton Jul 21, 2015

Contributor

The config code here was written long before cache.Store existed, but the store solves both problems far more elegantly than the current setup.


@DirectXMan12

DirectXMan12 Jul 21, 2015

Contributor

Ah, ok. The original suggestion was that "proxy already has the information", so I tried to mirror the existing code in proxier.go and endpoints.go. I will take a look at cache.Store and friends.


@smarterclayton

smarterclayton Jul 21, 2015

Contributor

The DNS service resolver in OpenShift could in theory be used whole cloth. In the long run, the proxy should be using cache.Store. It may be good to split that refactor if it looks feasible (proxy to cache.Store first, then DNS). I'm pretty sure the refactor is possible, but there may be a wrinkle I'm missing. You will need the reverse cluster IP indexer anyway.


@smarterclayton


Contributor

smarterclayton commented Jul 21, 2015

I think I'm pretty convinced on my end that this is the right way to handle resiliency of DNS (to outage).

A problem arises if we partition the service space, but even then we can always forward that query up the chain if necessary.

The overhead per node is fairly low as well.

@bgrant0607


Member

bgrant0607 commented Jul 21, 2015

Haven't looked at the details, but I'm in favor of the idea.

newEntry := normalServiceRecord{portRecords: make(map[string]skymsg.Service)}
srvPath := serviceSubdomain(name, "", "", "")
for _, port := range service.Spec.Ports {
newEntry.portRecords[makeServicePortSpec(port)] = skymsg.Service{


@uluyol

uluyol Jul 23, 2015

Contributor

You have several of these that set a bunch of default values. Use a constructor.
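
Something like the following is presumably what is meant; the helper name and default values are illustrative (the PR keeps the TTL at 30, per later comments), and the field names follow skydns's msg.Service:

```go
// newSkyService is a hypothetical constructor that centralizes the SkyDNS
// defaults so call sites like the one above stop repeating them.
func newSkyService(host string, port int) skymsg.Service {
	return skymsg.Service{
		Host:     host,
		Port:     port,
		Priority: 10,
		Weight:   10,
		Ttl:      30,
	}
}

// The call site above would then read something like:
//   newEntry.portRecords[makeServicePortSpec(port)] =
//       newSkyService(service.Spec.ClusterIP, port.Port)
```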

@DirectXMan12 DirectXMan12 force-pushed the DirectXMan12:feature/dns-in-kube-proxy branch from 209c2fa to d6a8e00 Jul 24, 2015

@DirectXMan12


Contributor

DirectXMan12 commented Jul 24, 2015

I switched over to using the cache. Essentially, the new code listens for service updates, converts them into skydns entries, and stores those in the cache (indexing them), pulling in endpoint information as needed. It also listens for endpoint updates, converting those into skydns services as well (when appropriate). It seems to perform fairly well compared to both the first version and the current pod-based implementation.

I added a couple of new methods to the cache code to facilitate bulk operations on the cache (add, replace entries matching index, delete entries matching index), since a single service yields multiple skydns entries.

I still need to add support for a couple of wildcard options, but other than that it mirrors the responses of the current setup.

@DirectXMan12 DirectXMan12 force-pushed the DirectXMan12:feature/dns-in-kube-proxy branch 2 times, most recently from 1f19d34 to d0dcb35 Jul 27, 2015

DirectXMan12 added some commits Jul 20, 2015

Godeps: Update to support DNS in kube-proxy
This commit adds in dependencies on more parts of skydns to
facilitate a Kubernetes SkyDNS backend.
Add Bulk Operations to the Client Cache
This commit adds a new interface, `BulkIndexer` to the client
cache package.  `BulkIndexer` builds on `Indexer`, adding
a bulk-add operation, as well as bulk-update and bulk-delete
operations (based on a given index). These are useful for
modifying sections of a store in an atomic fashion.
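
A rough sketch of the interface that commit message describes; the method names and signatures below are guesses from the description, not the actual code:

```go
// BulkIndexer extends Indexer with operations that mutate a whole slice of
// the store under one lock, so a single service's group of DNS entries can
// be swapped atomically.
type BulkIndexer interface {
	Indexer

	// BulkAdd inserts several objects in one locked operation.
	BulkAdd(objs []interface{}) error
	// BulkUpdateByIndex replaces every entry whose indexName index matches
	// indexValue with the given objects.
	BulkUpdateByIndex(indexName, indexValue string, objs []interface{}) error
	// BulkDeleteByIndex removes every entry whose indexName index matches
	// indexValue.
	BulkDeleteByIndex(indexName, indexValue string) error
}
```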

@DirectXMan12 DirectXMan12 force-pushed the DirectXMan12:feature/dns-in-kube-proxy branch from d0dcb35 to 59d4954 Jul 28, 2015

[WIP] DNS in kube-proxy
This commit makes kube-proxy serve DNS requests.  This is done
with a custom SkyDNS backend that serves requests directly based
on updates from Kubernetes about services and endpoints.

@DirectXMan12 DirectXMan12 force-pushed the DirectXMan12:feature/dns-in-kube-proxy branch from 59d4954 to 48f9cde Jul 29, 2015

@DirectXMan12


Contributor

DirectXMan12 commented Jul 29, 2015

The currently "missing" parts are the following wildcard patterns, as well as TTLs that actual "age". For comparison, OpenShift's DNS implementation leaves the TTL as 30 (like this PR currently does), and doesn't implement wildcards at all.

The wildcards supported in this code are prefix wildcards (e.g. ..svc.cluster.local), as well as using a wildcard instead of "svc". Not supported are wildcards in other positions. Wildcard support for the others could be added with new indices or simple loop-checking, but I wanted to confirm that these are desired (they don't seem to be mentioned in the Kubernetes DNS docs AFAICT -- it's just a side-effect of using the SkyDNS etcd backend).

@smarterclayton


Contributor

smarterclayton commented Jul 29, 2015


We implicitly support wildcards beyond the length of the name (extra segments) in OpenShift. Are you referring to inside segments sending *? That has security implications - we explicitly chose not to expose that to prevent easy service enumeration.


@@ -144,6 +145,14 @@ func (s *ProxyServer) Run(_ []string) error {
}, 5*time.Second)
}
go func() {
// Note: Cannot restart SkyDNS without leaking goroutines
err := dns.ServeDNS(client)


@smarterclayton

smarterclayton Jul 29, 2015

Contributor

Does ServeDNS block? If so we may want to make the next line fatalf instead
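
i.e., something along these lines if it does block; glog.Fatalf here is the reviewer's suggestion paraphrased, not code from the PR:

```go
go func() {
	// Note: Cannot restart SkyDNS without leaking goroutines
	if err := dns.ServeDNS(client); err != nil {
		// If ServeDNS only returns when the server dies, DNS is gone for
		// good, so fail hard instead of logging and carrying on.
		glog.Fatalf("DNS server exited: %v", err)
	}
}()
```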

@DirectXMan12


Contributor

DirectXMan12 commented Nov 17, 2015

@karlkfi the idea was that kube-proxy already has a cache for the endpoints and service information, so it made sense to just reuse that.

@smarterclayton


Contributor

smarterclayton commented Nov 17, 2015

I would rather not have the node watching / caching twice. The kube proxy is really a service proxy - proxying / replying with the DNS capability is also in its remit. It also means the service-proxy can provide a special VIP DNS entry that doesn't require the node to serve DNS on a public port.


@karlkfi


Contributor

karlkfi commented Nov 17, 2015

Thinking abstractly here, perhaps prematurely:
Would it make sense to have a "local caching k8s client" that can do watch coalescing more generally? Then other components could make localhost:<port> queries on-demand to avoid external network traffic. It would also simplify component design.

@smarterclayton


Contributor

smarterclayton commented Nov 17, 2015

In the DNS case, we need to manage an index (fairly atomically), so the local caching client would also need to manage and maintain DNS specific indices (and so not be easily made generic). The queries made by the DNS server are answered directly by those indices.


@dustymabe


dustymabe commented Jan 11, 2016

Is this still the best place to track the progress of getting a DNS service into Kubernetes?

@smarterclayton


Contributor

smarterclayton commented Jan 11, 2016

We're discussing timing on it. For now, the standard path is still the
story.


@thockin


Member

thockin commented Jan 11, 2016

I'm worried most about resources. Currently DNS watches all Services, all Endpoints, and all Pods. In a large cluster that gets pretty significant, doesn't it? I don't have numbers on hand.

Other than that, having a local mirror of apiserver state (and an API to access it) seems like a good optimization for read-only clients on the node, but barring that I'm OK with modularizing kube-proxy to do more than one agent-ish thing. Maybe a rename would be in order.

That said, why not jam it all into Kubelet? That also caches Service state (to generate env vars) and pod state (duh). I know the answer about availability and reliability, I am just illustrating that, as we add more to kube-proxy, it too will become critical and then we have the same debates again.


@smarterclayton


Contributor

smarterclayton commented Jan 11, 2016

I think we should stop mirroring pods :)


@smarterclayton


Contributor

smarterclayton commented Jan 11, 2016

For instance we implemented the pod namespace without supporting the
existence check - we simply respond with an answer for the IP.


@thockin


Member

thockin commented Jan 11, 2016

Sure, we don't have to mirror the whole pod structure, but it's still
O(num-pods) storage costs, even if you reduce it by a large constant factor.


@smarterclayton


Contributor

smarterclayton commented Jan 11, 2016

Why do you need the pod? I thought we specifically designed the DNS name so that you don't need to know whether the pod exists to answer the query (because the name includes the IP). We only need O(num-pods) storage today because of etcd and the current skydns mechanism?


@thockin


Member

thockin commented Jan 11, 2016

Ahh, that's a good point - we could just parse the question in the pod
namespace.
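
A minimal sketch of that idea, assuming the dashed-IP pod name scheme (e.g. 10-2-3-4.default.pod.cluster.local); the helper itself is hypothetical:

```go
package dns

import (
	"net"
	"strings"
)

// podIPFromQuestion recovers the pod IP directly from the DNS question, so
// no pod cache (or existence check) is needed to answer the query.
func podIPFromQuestion(qname string) (net.IP, bool) {
	first := strings.SplitN(strings.TrimSuffix(qname, "."), ".", 2)[0]
	ip := net.ParseIP(strings.Replace(first, "-", ".", -1))
	return ip, ip != nil
}
```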


@smarterclayton


Contributor

smarterclayton commented Jan 11, 2016

https://github.com/openshift/origin/blob/master/pkg/dns/serviceresolver.go#L85


@fejta


Contributor

fejta commented Apr 26, 2016

This PR has had no activity for multiple months. Please reopen it once you rebase and push a new commit.

@fejta fejta closed this Apr 26, 2016

@smarterclayton


Contributor

smarterclayton commented Apr 26, 2016

@sross I have an alternate implementation in
openshift/origin#7807


@ncdc


Member

ncdc commented Apr 26, 2016

Make that @DirectXMan12 😄

@smarterclayton


Contributor

smarterclayton commented Apr 26, 2016

Oops


@cyphar


cyphar commented May 17, 2016

Is there any plan to revive this issue?

@smarterclayton


Contributor

smarterclayton commented May 17, 2016

@DirectXMan12 DirectXMan12 deleted the DirectXMan12:feature/dns-in-kube-proxy branch Oct 3, 2017
