Deprecate ComponentStatus #553
Comments
I am not sure if this deprecation proposal fits under features or kubernetes issues. If it needs to be moved, just let me know.
Have you found out who is using this API and confirmed that they will be able to move off it in this timeframe? It's relatively short given that the API deprecation is effectively being announced just before the 1.10 code freeze.
/cc @hulkholden
Rather than just saying "we should delete this because it isn't perfect", what about finishing the replacement (see kubernetes/kubernetes#18610)? Then we could give consumers of this API something to move to instead of just saying that the API is going away.
Agreed - I don't think it's helpful to deprecate something without a clear recommendation of what it should be replaced with. At the very least, we need one minor release with both the old and new thing around so we can safely migrate all our clusters.
Apologies, but the feature freeze deadline for 1.10 has already passed - https://github.com/kubernetes/sig-release/blob/master/releases/release-1.10/release-1.10.md#timeline. I recommend starting work on this within @kubernetes/sig-cluster-lifecycle-feature-requests in the scope of 1.11.
imho this no longer belongs to sig-cluster-lifecycle and falls into api-deprecation policy. /cc @kubernetes/sig-api-machinery-feature-requests
The current feature only works for "special" deployments with the right hardcoded ports, exposed insecurely, on localhost. None of those assumptions work in the general case. If we were honest about the level of this API, it would have been alpha all this time. This isn't so much removal without a plan as a recognition that bad assumptions were made in the original design and it probably shouldn't have merged in the first place. I don't think it is incumbent upon the community at large to build a replacement for a feature that doesn't fit the ecosystem it purports to watch.
apimachinery owns how the API is served, but we try to stay out of what the content of a particular API is. @kubernetes/sig-cluster-lifecycle-feature-requests seems like a pretty good fit given their perspective on how clusters are deployed.
My takeaway from this thread: ComponentStatus, in its current form, has large gaps in functionality (hardcoded IP and ports, under-specced use cases), and the current proposals for ComponentRegistration don't seem to reuse any of the existing ComponentStatus infrastructure. Could we separate the discussion into 1) deprecating ComponentStatus, and 2) ComponentRegistration, to let each proceed without muddying the water with the other feature? Deprecating the broken state doesn't stop someone from implementing a real replacement. Thoughts?
/assign @rphillips |
@rphillips @kubernetes/sig-cluster-lifecycle-feature-requests @k8s-mirror-api-machinery-feature-rqusts do you already have a defined roadmap for this feature? Any plans for 1.11? |
Definitely on my TODO list for 1.11. |
@rphillips please fill out the appropriate line item of the |
@mistyhacks We can remove it from the milestone. It isn't done.
Thank you @logicalhan for the information. If things change, please let me know.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
We saw a number of reports about us breaking this feature further on Slack and in k/k, e.g. kubernetes/kubernetes#93342. I personally think that the Kubernetes distributed component model and topology variance make such a feature hard to achieve. /remove-lifecycle stale
I agree. As a starting point, I've opened kubernetes/kubernetes#93570 to mark the API as deprecated (with no other behavior change) to at least indicate to consumers that it is not the recommended approach. |
Enhancement issues opened in kubernetes/enhancements should never be marked as frozen. /remove-lifecycle frozen
heigh-ho, heigh-ho, it's off to the label mines I go |
/cc @rngy |
With kubernetes/kubernetes#93570 being reviewed (with a goodbye wave to this hack), what is the proposed replacement for this?
The release note in kubernetes/kubernetes#93570 describes the mechanisms that can be used to determine etcd, scheduler, and controller-manager health instead.
Enhancements Lead here. Any plans for this in 1.20? Thanks!
This was marked deprecated in 1.19 in kubernetes/kubernetes#93570, with pointers to the recommended alternatives for the components checked by that API. There is no planned removal timeframe for the existing functionality.
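For readers landing here: the recommended alternatives boil down to checking each component's own health endpoint directly. A minimal Go sketch, assuming the default secure health ports (10259 for the scheduler, 10257 for the controller manager) and anonymous access to /healthz; your deployment may differ:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func check(name, url string) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skipping certificate verification keeps the sketch short;
		// a real check should trust the component's serving CA.
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		fmt.Printf("%s: unreachable (%v)\n", name, err)
		return
	}
	resp.Body.Close()
	fmt.Printf("%s: HTTP %d\n", name, resp.StatusCode)
}

func main() {
	// Default secure health ports for the scheduler (10259) and
	// controller manager (10257); adjust for your deployment. etcd
	// health is surfaced through the API server's own health checks.
	check("kube-scheduler", "https://127.0.0.1:10259/healthz")
	check("kube-controller-manager", "https://127.0.0.1:10257/healthz")
}
```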
Feature Description
ComponentStatus is functionality for getting the health of the Kubernetes components: etcd, the controller manager, and the scheduler. The code attempts to query the controller manager and scheduler at a static address (127.0.0.1) and fixed port. This requires the components to run alongside the API server, which is not necessarily the case in all installations (see: kubernetes/kubernetes#19570 (comment)). In addition, the code queries etcd servers for their health, which could be out of scope for Kubernetes, or problematic to query from a networking standpoint as well.
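For illustration, a minimal Go sketch of the kind of fixed-address probing described above (this is not the actual kube-apiserver code; the localhost addresses and historical insecure ports are the hardcoded assumptions at issue):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// probe issues a plain-HTTP GET against a fixed local address, the
// style of check described above.
func probe(name, url string) {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		fmt.Printf("%s: Unhealthy (%v)\n", name, err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s: HTTP %d %s\n", name, resp.StatusCode, string(body))
}

func main() {
	// Hardcoded localhost addresses and insecure ports (illustrative):
	// this only works when the components run on the same host as the
	// prober and expose unsecured health endpoints.
	probe("kube-scheduler", "http://127.0.0.1:10251/healthz")
	probe("kube-controller-manager", "http://127.0.0.1:10252/healthz")
}
```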
We could add registration of the controller manager and scheduler (IP + port), like we do with the Lease Endpoint Reconciler for API servers directly within the storage API (etcd), but that reconciler was itself a stop-gap solution.
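For illustration only, a hypothetical registration record in the spirit of that idea might look like the sketch below; the type and field names are invented for this sketch, not a proposed API:

```go
package registration

import "time"

// ComponentRegistration is a hypothetical record in the spirit of the
// lease-based idea above: each component advertises its own endpoint
// and refreshes a TTL, instead of being probed at a hardcoded address.
type ComponentRegistration struct {
	Name       string    // e.g. "kube-scheduler"
	IP         string    // advertised address
	Port       int       // advertised health port
	RenewedAt  time.Time // last heartbeat
	TTLSeconds int       // registration expiry
}
```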
This proposal is to deprecate the ComponentStatus API and CLI, and eventually remove them around the 1.12–1.13 releases.