
Expose the service-cluster-ip-range CIDR through the API Server #25533

Closed
2opremio opened this issue May 12, 2016 · 14 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. sig/network Categorizes an issue or PR as relevant to SIG Network.

Comments

@2opremio

Having access to the service-cluster-ip-range CIDR would allow us to track connections more efficiently in Weave Scope. See weaveworks/scope#1490

AFAICT it's only stored in etcd and not exposed. See

```go
serviceClusterIPAllocator := ipallocator.NewAllocatorCIDRRange(serviceClusterIPRange, func(max int, rangeSpec string) allocator.Interface {
	mem := allocator.NewAllocationMap(max, rangeSpec)
	// TODO etcdallocator package to return a storage interface via the storageFactory
	etcd := etcdallocator.NewEtcd(mem, "/ranges/serviceips", api.Resource("serviceipallocations"), serviceStorage)
	serviceClusterIPRegistry = etcd
	return etcd
})
m.serviceClusterIPAllocator = serviceClusterIPRegistry
```
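
For illustration, if the range were exposed, a consumer like Scope could classify an endpoint as a service VIP with a single `Contains` check. A minimal sketch (10.96.0.0/12 is a made-up value, not read from anywhere, and this is not Scope's actual code):

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Hypothetical range; this is exactly the value that cannot
	// currently be read from the API server.
	_, serviceCIDR, err := net.ParseCIDR("10.96.0.0/12")
	if err != nil {
		panic(err)
	}
	// With the range in hand, one membership test per endpoint suffices.
	for _, ep := range []string{"10.96.0.1", "192.168.1.7"} {
		fmt.Printf("%s in service range: %v\n", ep, serviceCIDR.Contains(net.ParseIP(ep)))
	}
}
```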

@adohe-zz

@thockin wdyt?

@thockin
Member

thockin commented May 12, 2016

@mikedanese this is one of the things I want in the per-cluster configmap. Mike is there any progress towards that sub-goal of the larger goal?

@2opremio other than that idea, which I believe WILL come (though maybe not by 1.3), what ideas do you have for exposing it?

@2opremio
Author

2opremio commented May 12, 2016

@thockin We are working around it by inferring a /32 network per service IP (yeah, I know), which seems to work for now. We can live with that, but it may change if it leads to reports of bad performance.
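
For concreteness, a minimal sketch of that workaround (made-up names, not Scope's actual code): each known service ClusterIP is wrapped in its own /32 network, standing in for the real, unknown service CIDR.

```go
package main

import (
	"fmt"
	"net"
)

// serviceIPAsSlash32 treats a single service ClusterIP as its own /32
// network, a stand-in for the real (unknown) service CIDR.
func serviceIPAsSlash32(clusterIP string) (*net.IPNet, error) {
	ip := net.ParseIP(clusterIP).To4()
	if ip == nil {
		return nil, fmt.Errorf("invalid IPv4 ClusterIP %q", clusterIP)
	}
	return &net.IPNet{IP: ip, Mask: net.CIDRMask(32, 32)}, nil
}

func main() {
	ipnet, err := serviceIPAsSlash32("10.96.0.1") // hypothetical ClusterIP
	if err != nil {
		panic(err)
	}
	fmt.Println(ipnet) // prints 10.96.0.1/32
}
```

The obvious cost is one pseudo-network per service instead of a single range, which is why it may not hold up at larger scale.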

@saad-ali saad-ali added sig/network Categorizes an issue or PR as relevant to SIG Network. team/cluster labels May 13, 2016
@johnbelamaric
Member

Any progress on this? I couldn't find anything in the 1.5 docs. Right now, to get PTR support in CoreDNS, we have to configure the reverse zones manually. It would be nice to have the service and pod CIDRs available via an API so we could make this automatic.
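
To make the manual step concrete, here's a rough Go sketch of the derivation we currently do by hand (names are mine; 10.96.0.0/12 is a hypothetical service CIDR): expanding a CIDR into the in-addr.arpa zones that cover it. If the CIDRs were exposed via the API, a DNS server could run this kind of expansion at startup instead of requiring operators to maintain the zone list.

```go
package main

import (
	"fmt"
	"net"
	"strconv"
	"strings"
)

// reverseZones expands an IPv4 CIDR into the in-addr.arpa zones that
// cover it, rounded up to the nearest octet boundary.
func reverseZones(cidr string) ([]string, error) {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return nil, err
	}
	ones, _ := ipnet.Mask.Size()
	if ones == 0 || ipnet.IP.To4() == nil {
		return nil, fmt.Errorf("%s: only non-default IPv4 CIDRs are supported", cidr)
	}
	octets := (ones + 7) / 8          // octets needed to cover the prefix
	count := 1 << uint(octets*8-ones) // zones needed at that boundary
	base := ipnet.IP.To4()
	zones := make([]string, 0, count)
	for i := 0; i < count; i++ {
		ip := make(net.IP, 4)
		copy(ip, base)
		ip[octets-1] += byte(i)
		parts := make([]string, 0, octets)
		for j := octets - 1; j >= 0; j-- { // reverse the significant octets
			parts = append(parts, strconv.Itoa(int(ip[j])))
		}
		zones = append(zones, strings.Join(parts, ".")+".in-addr.arpa")
	}
	return zones, nil
}

func main() {
	zones, err := reverseZones("10.96.0.0/12") // hypothetical service CIDR
	if err != nil {
		panic(err)
	}
	fmt.Println(zones) // [96.10.in-addr.arpa ... 111.10.in-addr.arpa]
}
```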

@thockin
Member

thockin commented Jan 26, 2017 via email

@johnbelamaric
Member

Thanks Tim.

@2opremio
Author

Any updates?

@deitch
Contributor

deitch commented Aug 10, 2017

Just stumbled across this. Somehow I lost my IPALLOC_RANGE var for Weave, which then fell back to a default range that conflicts. Oh well. I then checked whether Weave could get the range from the API, but realized it isn't exposed. Are there any plans for that?

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 2, 2018
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Feb 7, 2018
@errordeveloper
Member

/close

Closing in favour of #46508, simply because one has to be closed and this one is already marked as rotten.

@jayunit100
Member

Hmmm, so I think this was incorrectly closed; the other issue is about the ClusterCIDR (pod CIDR), not the service ClusterIP range....

@errordeveloper
Member

errordeveloper commented Sep 19, 2020 via email

@aauren
Contributor

aauren commented Sep 24, 2020

@jayunit100 I agree. It says this was closed in favor of #46508, but that is for a different CIDR: that one refers to the pod CIDRs, while this one is about the service CIDR range.

I think that, since k8s already has this information, it would be super helpful for it to be supplied through the API rather than relying on side effects (like this: https://stackoverflow.com/questions/44190607/how-do-you-find-the-cluster-service-cidr-of-a-kubernetes-cluster/61685899#61685899) or on every network component exposing its own flag (like this: https://github.com/cloudnativelabs/kube-router/pull/914/files#diff-d76a88afa98afa848ebc5641cd1adf2fR85) to ingest the same value over and over. That is less DRY and can cause drift within the cluster between core components that share responsibilities.
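
To make the side-effect trick concrete, here's a hedged Go sketch of what that StackOverflow answer amounts to (names are mine, it assumes a recent client-go, and the error-message parsing is an assumption, since the exact wording varies across API server versions): submit a Service with a ClusterIP that is almost certainly invalid, and scrape the valid range out of the rejection.

```go
package main

import (
	"context"
	"fmt"
	"regexp"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	probe := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "service-cidr-probe"},
		Spec: corev1.ServiceSpec{
			ClusterIP: "1.1.1.1", // deliberately outside any sane service range
			Ports:     []corev1.ServicePort{{Port: 443}},
		},
	}
	_, err = client.CoreV1().Services("default").Create(context.TODO(), probe, metav1.CreateOptions{})
	if err == nil {
		panic("probe service was unexpectedly created; clean it up manually")
	}
	// Assumption: the validation error names the range, e.g.
	// "... The range of valid IPs is 10.96.0.0/12".
	re := regexp.MustCompile(`valid IPs is ([0-9a-f:./]+)`)
	if m := re.FindStringSubmatch(err.Error()); m != nil {
		fmt.Println("service CIDR:", m[1])
	} else {
		fmt.Println("could not parse range from error:", err)
	}
}
```

It works, but it is exactly the kind of fragile side channel this issue asks to replace with a first-class API field.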

Is there any chance we can re-open this so that k8s can support this functionality, even if only in the future? @errordeveloper @thockin
