
Mark componentstatus as deprecated #93570

Merged (1 commit) on Jul 31, 2020

Conversation

liggitt (Member) commented Jul 30, 2020

What type of PR is this?
/kind api-change
/kind deprecation

What this PR does / why we need it:
xref kubernetes/enhancements#553 (comment) and #19570

The current state of this API is problematic, and requires reversing the actual data flow (the API server has to call out to its clients), and is not functional across deployment topologies.

Leaving it in place attracts new attempts to make additions to it (#74643, #82247) and leads to confusion or bug reports for deployments that do not match its topology assumptions (#93342, #93472). It should be clearly marked as deprecated.

Does this PR introduce a user-facing change?:

kube-apiserver: the componentstatus API is deprecated. This API provided status of etcd, kube-scheduler, and kube-controller-manager components, but only worked when those components were local to the API server, and when kube-scheduler and kube-controller-manager exposed unsecured health endpoints. Instead of this API, etcd health is included in the kube-apiserver health check and kube-scheduler/kube-controller-manager health checks can be made directly against those components' health endpoints.
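
As a rough illustration of those replacement checks (assuming a typical setup where the API server is reachable on localhost:6443 and anonymous access to the health endpoints is allowed; adjust the address, port, and credentials for your cluster):

# overall kube-apiserver liveness, which includes an etcd check
curl -k 'https://localhost:6443/livez?verbose'

# just the etcd portion of the API server health check
curl -k https://localhost:6443/readyz/etcd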

/cc @deads2k @lavalamp @neolit123
/sig api-machinery cluster-lifecycle

k8s-ci-robot added labels size/M, release-note, kind/api-change, kind/deprecation, cncf-cla: yes, sig/api-machinery, sig/cluster-lifecycle, needs-priority, sig/apps, and approved on Jul 30, 2020
liggitt (Member, Author) commented Jul 30, 2020

/cc @smarterclayton

liggitt (Member, Author) commented Jul 30, 2020

/priority important-longterm

k8s-ci-robot added the priority/important-longterm label and removed the needs-priority label on Jul 30, 2020
liggitt (Member, Author) commented Jul 30, 2020

/hold for api-review

k8s-ci-robot added the do-not-merge/hold label on Jul 30, 2020
dims (Member) commented Jul 30, 2020

@liggitt when will we be able to remove it? (if at all?)

liggitt (Member, Author) commented Jul 30, 2020

/retest

liggitt (Member, Author) commented Jul 30, 2020

> @liggitt when will we be able to remove it? (if at all?)

I'm not sure (a core/v1 API that predates the deprecation policy and has no replacement is sort of unprecedented), but it is important to communicate that what is currently there gets no continued development and has known inconsistent behavior.

smarterclayton (Contributor) commented Jul 30, 2020

A Kubernetes provider is not required to use the configuration necessary to expose this data, and not all distributions are required to implement it. We have explicitly stated that we would not put this in conformance because conformant distributions may choose not to expose it. Component status should be removed.

/approve

I will leave the hold for another reviewer.

neolit123 (Member) commented

thanks!
/lgtm

k8s-ci-robot added the lgtm label on Jul 30, 2020
lavalamp (Member) commented

/approve
/lgtm

lllamnyp commented Aug 21, 2020

If reversing the data flow is a problem, re:

> The current state of this API is problematic, and requires reversing the actual data flow

then are we to expect a deprecation of kubectl logs and kubectl exec at some point?

lavalamp (Member) commented

> then are we to expect a deprecation of kubectl logs and kubectl exec at some point?

YES

(Actually this is really hard, because those are much-used features. The current "plan" is to add the notion of subresource to the aggregator, and then move "model breaking" subresources, such as those, out of the main apiserver into a separate binary. Ideally also adding a redirect rather than proxy mode, so that actual traffic doesn't have to go through apiserver at all. This isn't staffed or being worked on right now but if it's something someone wants to work on, come talk to SIG API Machinery...)

YanzhaoLi (Member) commented

Hi, any idea for an alternative method to check status?

Napsty (Contributor) commented Feb 11, 2021

> Instead of this API, etcd health is included in the kube-apiserver health check and kube-scheduler/kube-controller-manager health checks can be made directly against those components' health endpoints.

An example of this would be greatly appreciated. ;-)

edit: Found an etcd check example at https://kubernetes.io/docs/reference/using-api/health-checks/, but it only seems to work starting with Kubernetes 1.20:

curl -k https://localhost:6443/livez/etcd
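
For the kube-scheduler and kube-controller-manager checks, something along these lines should work when run on a control plane node, assuming the components listen on their default secure ports (10257 and 10259) and that the /healthz paths are reachable anonymously; both are assumptions about an unmodified deployment:

# kube-controller-manager health endpoint (default secure port 10257)
curl -k https://localhost:10257/healthz

# kube-scheduler health endpoint (default secure port 10259)
curl -k https://localhost:10259/healthz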

schu added a commit to schu/kubedee that referenced this pull request May 25, 2021
`ComponentStatus` was deprecated a while ago and will be removed at some
point: kubernetes/kubernetes#93570

We might add an alternative later.
juliogreff added a commit to DataDog/datadog-agent that referenced this pull request Jul 6, 2021
Previously, the kube_apiserver_controlplane used ComponentStatus to
report control plane components' liveness. This has been deprecated in
[Kubernetes 1.19](kubernetes/kubernetes#93570)
and will be removed at some point in the future.

To remediate that, we're following the recommendation in the deprecation
notice to use the components' own health check endpoints.
juliogreff added a commit to DataDog/datadog-agent that referenced this pull request Jul 8, 2021
Previously, the kube_apiserver_controlplane used ComponentStatus to
report control plane components' liveness. This has been deprecated in
[Kubernetes 1.19](kubernetes/kubernetes#93570)
and will be removed at some point in the future.

To remediate that, we're following the recommendation in the deprecation
notice to use the API Server's health endpoint instead. This change also
removes the `component` tag in this service check, as it no longer
reports separate components, and just the API server itself.
Per-component service checks will eventually be available through the
kube_controller_manager and kube_scheduler checks themselves.
eumel8 commented Dec 21, 2021

Now the componentstatus API is nearly at its end. If you arrived here from Google and you are a Rancher user, you can use the Rancher management API to query the status:

$ kubectl get clusters.management.cattle.io <your-cluster-id> -o json | jq  '.status.componentStatuses[] | .name,.conditions[].message'
"controller-manager"
"ok"
"etcd-0"
"{\"health\":\"true\"}"
"etcd-1"
"{\"health\":\"true\"}"
"etcd-2"
"{\"health\":\"true\"}"
"scheduler"
"ok"

plain curl:

$ curl -s -H "Content-Type: application/json" -H "authorization: Bearer <token>"  https://<rancher-server>/k8s/clusters/local/apis/management.cattle.io/v3/clusters/<your-cluster-id> | jq '.status.componentStatuses[] | .name,.conditions[].message'
"controller-manager"
"ok"
"etcd-0"
"{\"health\":\"true\"}"
"etcd-1"
"{\"health\":\"true\"}"
"etcd-2"
"{\"health\":\"true\"}"
"scheduler"
"ok"'

zer0def pushed a commit to zer0def/kubedee that referenced this pull request Nov 17, 2022
`ComponentStatus` was deprecated a while ago and will be removed at some
point: kubernetes/kubernetes#93570

We might add an alternative later.
nirs added a commit to nirs/kubectl-gather that referenced this pull request May 2, 2024
To avoid the warning:

    W0503 01:07:59.564568 2724335 warnings.go:70] v1 ComponentStatus is deprecated in v1.19+

Maybe it is possible to silence the warning, but this resource is not
needed now, so let's skip it.

[1] kubernetes/kubernetes#93570