
Deprecate ComponentStatus #553

Closed
rphillips opened this issue Feb 16, 2018 · 42 comments
Assignees
Labels
kind/feature Categorizes issue or PR as related to a new feature. sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. sig/cluster-lifecycle Categorizes an issue or PR as relevant to SIG Cluster Lifecycle. stage/alpha Denotes an issue tracking an enhancement targeted for Alpha status

Comments

@rphillips
Member

rphillips commented Feb 16, 2018

Feature Description

  • One-line feature description (can be used as a release note): Deprecate ComponentStatus APIs and kubectl componentstatus functionality
  • Primary contact (assignee): @rphillips
  • Responsible SIGs: @kubernetes/sig-cluster-lifecycle-feature-requests
  • Design proposal link (community repo):
  • Link to e2e and/or unit tests:
  • Issue componentstatus fails when components are not running on the same host as apiserver kubernetes#19570
  • Reviewer(s) - (for LGTM) recommend having 2+ reviewers (at least one from code-area OWNERS file) agreed to review. Reviewers from multiple companies preferred: /cc @justinsb @hongchaodeng
  • Approver (likely from SIG/area to which feature belongs):
  • Feature target (which target equals to which milestone):
    • Alpha release target (x.y): 1.10: add code to deprecate kubectl get cs and to deprecate the APIs
    • Beta release target (x.y): change documentation and commandline deprecation to beta
    • Stable release target (x.y): Remove kubectl componentstatus functionality and APIs

ComponentStatus is functionality for getting the health of core Kubernetes components: etcd, the controller manager, and the scheduler. The code attempts to query the controller manager and scheduler at a static address (127.0.0.1) and fixed ports. This requires those components to run alongside the API server, which is not necessarily the case in all installations (see: kubernetes/kubernetes#19570 (comment)). In addition, the code queries etcd servers for their health, which can be out of scope for Kubernetes, or problematic to query from a networking standpoint.
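For illustration, the probe backing `kubectl get componentstatuses` boils down to an HTTP GET against a hardcoded localhost address. This is a minimal sketch, not the actual kube-apiserver code; the port numbers (10251 for the scheduler, 10252 for the controller manager) are the historical insecure defaults and are assumptions here:

```python
import urllib.error
import urllib.request

# Historical insecure localhost ports the ComponentStatus check relied on.
# These values are assumptions for illustration; the real ones lived in
# the kube-apiserver source.
COMPONENT_PORTS = {
    "scheduler": 10251,
    "controller-manager": 10252,
}

def probe_component(name: str, host: str = "127.0.0.1", timeout: float = 2.0) -> bool:
    """Mimic the ComponentStatus probe: GET http://<host>:<port>/healthz.

    The probe fails whenever the component is not reachable at the fixed
    address, which is exactly the co-location limitation described above.
    """
    url = f"http://{host}:{COMPONENT_PORTS[name]}/healthz"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200 and resp.read().strip() == b"ok"
    except (urllib.error.URLError, OSError):
        return False
```

Because both the host and the port are fixed, moving a component to another machine (or another port) makes the probe report it as unhealthy even when it is running fine.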

We could add registration of the controller manager and scheduler (IP and port) directly within the storage API (etcd), as we do with the Lease Endpoint Reconciler for API servers, but that would be a stop-gap solution.

This proposal is to deprecate the ComponentStatus API and CLI, and eventually remove them around the 1.12-1.13 release.

@k8s-ci-robot k8s-ci-robot added sig/cluster-lifecycle Categorizes an issue or PR as relevant to SIG Cluster Lifecycle. kind/feature Categorizes issue or PR as related to a new feature. labels Feb 16, 2018
@rphillips
Member Author

I am not sure if this deprecation proposal fits under features or kubernetes issues. If it needs to be moved, just let me know.

@roberthbailey
Contributor

eventually remove them around the 1.12-1.13 release.

Have you found out who is using this API and confirmed that they will be able to move off it in this timeframe? It's relatively short, given that the API deprecation is effectively being announced just before the 1.10 code freeze.

@roberthbailey
Contributor

/cc @hulkholden

@roberthbailey
Contributor

Rather than just saying "we should delete this because it isn't perfect" what about finishing the replacement (see kubernetes/kubernetes#18610)? Then we could give consumers of this API something to move to instead of just saying that the API is going away.

@hulkholden

Agreed - I don't think it's helpful to deprecate something without a clear recommendation of what it should be replaced with. At minimum, we need one minor release with both the old and new mechanisms available so we can safely migrate all our clusters.

@idvoretskyi
Member

Feature target (which target equals to which milestone):
Alpha release target (x.y): 1.10 add code to deprecate kubectl get cs. 1.10 add code to deprecate the APIs

Apologies, but the feature freeze deadline for 1.10 has already passed - https://github.com/kubernetes/sig-release/blob/master/releases/release-1.10/release-1.10.md#timeline.

I can recommend starting working on this within the @kubernetes/sig-cluster-lifecycle-feature-requests in the scope of 1.11.

@timothysc
Member

imho this no longer belongs to sig-cluster-lifecycle and falls into api-deprecation policy.

/cc @kubernetes/sig-api-machinery-feature-requests

@k8s-ci-robot k8s-ci-robot added the sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. label Feb 20, 2018
@deads2k
Contributor

deads2k commented Feb 21, 2018

Rather than just saying "we should delete this because it isn't perfect" what about finishing the replacement (see kubernetes/kubernetes#18610)? Then we could give consumers of this API something to move to instead of just saying that the API is going away.

Agreed - I don't think it's helpful to deprecate something without a clear recommendation of what it should be replaced with. At minimum, we need one minor release with both the old and new mechanisms available so we can safely migrate all our clusters.

The current feature only works for "special" deployments with the right hardcoded ports, exposed insecurely, on localhost. None of those assumptions hold in the general case. If we were honest about the level of this API, it would have been alpha all this time.

This isn't so much removal without a plan as a recognition that bad assumptions were made in the original design and it probably shouldn't have been merged in the first place. I don't think it is incumbent upon the community at large to build a replacement for a feature that doesn't fit the ecosystem it purports to watch.

@deads2k
Contributor

deads2k commented Feb 21, 2018

imho this no longer belongs to sig-cluster-lifecycle and falls into api-deprecation policy.

/cc @kubernetes/sig-api-machinery-feature-requests

apimachinery owns how the API is served, but we try to stay out of what the content of a particular API is. @kubernetes/sig-cluster-lifecycle-feature-requests seems like a pretty good fit given their perspective on how clusters are deployed.

@rphillips
Member Author

My takeaway from this thread:

ComponentStatus, in its current form, has large gaps in functionality (hardcoded IPs and ports, under-specified use cases), and the current proposals for ComponentRegistration don't seem to reuse any of the existing ComponentStatus infrastructure. Could we separate the discussion into 1) deprecating ComponentStatus and 2) ComponentRegistration, so we can proceed without muddying the waters between the two features? Deprecating the broken state doesn't stop someone from implementing a real replacement.

Thoughts?

@idvoretskyi
Member

/assign @rphillips

@idvoretskyi
Member

@rphillips @kubernetes/sig-cluster-lifecycle-feature-requests @k8s-mirror-api-machinery-feature-rqusts do you already have a defined roadmap for this feature? Any plans for 1.11?

cc @justaugustus

@rphillips
Member Author

Definitely on my TODO list for 1.11.

@justaugustus justaugustus added this to the v1.11 milestone Apr 17, 2018
@justaugustus justaugustus added the stage/alpha Denotes an issue tracking an enhancement targeted for Alpha status label Apr 17, 2018
@justaugustus justaugustus added the tracked/yes Denotes an enhancement issue is actively being tracked by the Release Team label Apr 29, 2018
@mdlinville

@rphillips please fill out the appropriate line item of the 1.11 feature tracking spreadsheet and open a placeholder docs PR against the release-1.11 branch by 5/25/2018 (tomorrow as I write this) if new docs or docs changes are needed and a relevant PR has not yet been opened.

@rphillips
Member Author

@mistyhacks We can drop it from the milestone. It isn't done.

@luxas luxas modified the milestones: v1.11, next-milestone May 25, 2018
@mdlinville

@justaugustus

@msedzins

msedzins commented May 1, 2020

Thank you @logicalhan for the information; if things change, please let me know.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 30, 2020
@neolit123
Member

We saw a number of reports, on Slack and in k/k, about us breaking this feature further - e.g. kubernetes/kubernetes#93342.

I personally think that the Kubernetes distributed component model and topology variance make such a feature hard to achieve.
What we have currently should be deprecated, and KEPs for new alternatives should be a follow-up.

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 30, 2020
@liggitt
Member

liggitt commented Jul 30, 2020

I personally think that the Kubernetes distributed component model and topology variance make such a feature hard to achieve.

I agree. As a starting point, I've opened kubernetes/kubernetes#93570 to mark the API as deprecated (with no other behavior change) to at least indicate to consumers that it is not the recommended approach.

@liggitt liggitt added the lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. label Jul 30, 2020
@fejta-bot

Enhancement issues opened in kubernetes/enhancements should never be marked as frozen.
Enhancement Owners can ensure that enhancements stay fresh by consistently updating their states across release cycles.

/remove-lifecycle frozen

@k8s-ci-robot k8s-ci-robot removed the lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. label Jul 30, 2020
@liggitt
Member

liggitt commented Jul 30, 2020

Enhancement Owners can ensure that enhancements stay fresh by consistently updating their states across release cycles.

heigh-ho, heigh-ho, it's off to the label mines I go

@logicalhan
Member

/cc @rngy

@fabioy

fabioy commented Jul 30, 2020

With kubernetes/kubernetes#93570 being reviewed (with a goodbye wave to this hack), what is the proposed replacement for it?

@liggitt
Member

liggitt commented Jul 30, 2020

The release note in kubernetes/kubernetes#93570 describes the mechanisms that can be used instead to determine etcd, scheduler, and controller-manager health.
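For comparison, the post-deprecation pattern is to probe each component at whatever address it actually serves its own health endpoint on, rather than assuming co-location with the API server. A minimal sketch under that assumption (the URLs in the usage comment are hypothetical placeholders; real deployments would target the API server's /livez or /readyz, etcd's /health, and each component's own /healthz):

```python
import urllib.error
import urllib.request

def check_endpoints(endpoints: dict, timeout: float = 2.0) -> dict:
    """Probe each component at its own advertised health URL.

    `endpoints` maps a component name to a full health-check URL, so
    nothing is assumed about co-location with the API server (the core
    flaw of ComponentStatus). Returns name -> "Healthy"/"Unhealthy".
    """
    results = {}
    for name, url in endpoints.items():
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                results[name] = "Healthy" if resp.status == 200 else "Unhealthy"
        except (urllib.error.URLError, OSError):
            results[name] = "Unhealthy"
    return results

# Example wiring with hypothetical addresses:
# check_endpoints({
#     "apiserver": "https://control-plane.example:6443/livez",
#     "etcd": "http://etcd-1.example:2379/health",
# })
```

Because the endpoint map is supplied by the operator, this works for any topology, which is exactly what the fixed-localhost ComponentStatus probe could not do.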

@kikisdeliveryservice
Member

Hi @liggitt @rphillips

Enhancements Lead here. Any plans for this in 1.20?

Thanks,
Kirsten

@liggitt
Member

liggitt commented Sep 14, 2020

This was marked deprecated in 1.19 in kubernetes/kubernetes#93570, with pointers to the recommended alternatives for the components checked by that API. There is not a planned removal timeframe for the existing functionality.
