Expose the service-cluster-ip-range CIDR through the API Server #25533
Comments
@thockin wdyt?
@mikedanese this is one of the things I want in the per-cluster configmap. Mike, is there any progress towards that sub-goal of the larger goal? @2opremio other than that idea, which I believe WILL come, but maybe not by 1.3, what ideas do you have for exposing it?
@thockin We are working around it by ... inferring a
Any progress on this? I couldn't find anything in the 1.5 docs. Right now, to get PTR support in CoreDNS, we need to manually configure it with the reverse zones. It would be nice to have the service and pod CIDRs available via an API so we could make it automatic.
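(To make the request concrete: if the service CIDR were readable from an API, a DNS component could derive the reverse in-addr.arpa zone itself instead of having an operator configure it. Below is a minimal Go sketch of that derivation, assuming an IPv4 range; the reverseZone helper and the 10.96.0.0/12 example value are illustrative only, not an existing Kubernetes or CoreDNS API.)

```go
package main

import (
	"fmt"
	"net"
	"strconv"
	"strings"
)

// reverseZone derives an in-addr.arpa zone that covers an IPv4 CIDR by
// widening the prefix to the nearest octet boundary, so the whole range
// falls inside a single reverse zone.
func reverseZone(cidr string) (string, error) {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return "", err
	}
	ip := ipnet.IP.To4()
	if ip == nil {
		return "", fmt.Errorf("only IPv4 ranges handled in this sketch: %s", cidr)
	}
	ones, _ := ipnet.Mask.Size()
	octets := ones / 8 // e.g. /12 -> 1 leading octet, /16 -> 2
	if octets == 0 {
		return "in-addr.arpa.", nil
	}
	labels := make([]string, 0, octets)
	for i := octets - 1; i >= 0; i-- { // reverse octet order for in-addr.arpa
		labels = append(labels, strconv.Itoa(int(ip[i])))
	}
	return strings.Join(labels, ".") + ".in-addr.arpa.", nil
}

func main() {
	// 10.96.0.0/12 is only a common default used as an example here; the real
	// value is whatever the API server was started with (--service-cluster-ip-range).
	zone, err := reverseZone("10.96.0.0/12")
	if err != nil {
		panic(err)
	}
	fmt.Println(zone) // 10.in-addr.arpa.
}
```

This kind of derivation is exactly what each DNS or network add-on currently has to re-implement against its own flag or guesswork, because the range itself is not exposed.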
I think the configmap work is focusing on kubelet first, so nothing to say here yet.
Thanks Tim.
Any updates?
Just stumbled across this. Somehow I lost my
Issues go stale after 90d of inactivity. Prevent issues from auto-closing with an /lifecycle frozen comment. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close Closing in favour of #46508, simply because one has to be closed and this one is already marked as rotten. |
hmmm so, i think this is incorrectly closed, the other issue is about ClusterCIDR, not the service cluster IP range....
Which other issue are you referring to?
@jayunit100 I agree, it says this was closed in favor of #46508 but that is for a different CIDR: that one refers to the pod CIDRs and this one refers to the service range CIDR. Since k8s already has this information, it would be super helpful to have it exposed through the API rather than relying on side effects (like this: https://stackoverflow.com/questions/44190607/how-do-you-find-the-cluster-service-cidr-of-a-kubernetes-cluster/61685899#61685899) or on having every network component expose a flag (like this: https://github.com/cloudnativelabs/kube-router/pull/914/files#diff-d76a88afa98afa848ebc5641cd1adf2fR85) to ingest the same value over and over again. That approach is less DRY and can cause drift within the cluster between core components that share responsibilities. Is there any chance we can re-open this so k8s can support this functionality, even if it's in the future? @errordeveloper @thockin
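(For reference, the "side effect" that StackOverflow answer relies on is: create a Service with a deliberately out-of-range clusterIP and scrape the range from the API server's validation error. A rough client-go sketch follows, assuming the error text still contains "The range of valid IPs is <CIDR>"; that wording is not a stable API, which is part of the argument for a first-class field. The cidr-probe name and the 1.1.1.1 probe address are made up for illustration.)

```go
package main

import (
	"context"
	"fmt"
	"regexp"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// A Service with a clusterIP that should be outside any service range;
	// the apiserver rejects it and (today) leaks the configured range in the error.
	probe := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "cidr-probe"},
		Spec: corev1.ServiceSpec{
			ClusterIP: "1.1.1.1",
			Ports:     []corev1.ServicePort{{Port: 443}},
		},
	}
	_, err = client.CoreV1().Services("default").Create(context.TODO(), probe, metav1.CreateOptions{})
	if err == nil {
		panic("probe unexpectedly succeeded; delete the cidr-probe Service")
	}

	// Error-message scraping: brittle by design, shown only to illustrate the workaround.
	re := regexp.MustCompile(`valid IPs is (\S+)`)
	if m := re.FindStringSubmatch(err.Error()); m != nil {
		fmt.Println("service cluster IP range:", m[1])
	} else {
		fmt.Println("could not infer range from error:", err)
	}
}
```

The brittleness of that scrape is essentially the same DRY/drift concern raised above.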
Having access to the service-cluster-ip-range CIDR would allow us to track connections more efficiently in Weave Scope. See weaveworks/scope#1490
AFAICT it's only stored in etcd and not exposed. See kubernetes/pkg/master/master.go, lines 430 to 437 at ef885d0.