kubectl edit or apply cannot update .status when the status subresource is enabled #564
/kind bug
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with `/close`. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Related to kubernetes/kubernetes#60845
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with `/close`. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle rotten
/remove-lifecycle stale
/priority P2
I guess this is the intended behavior according to the design proposal:
/remove-lifecycle rotten
From @DBarthe
Wouldn't it be nice if we could change the status field in some way? Our use-case:
Created kubernetes/kubernetes#99556 as a POC to get initial feedback
/unassign @eddiezane @seans3
/cc
What's the best workaround for this issue? I could use a client library to write it in Go, but that seems a bit heavy-handed.
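One commonly suggested workaround that avoids a client library is to talk to the `/status` subresource endpoint directly with kubectl's raw URI mode. A minimal sketch, assuming a hypothetical `widgets.example.com` CRD (the group, resource, and object names are illustrative, not from this thread):

```shell
# Fetch the current object, including its .status, as JSON.
kubectl get --raw \
  "/apis/example.com/v1/namespaces/default/widgets/my-widget" > widget.json

# Edit the .status field in widget.json, then PUT it back through the
# status subresource endpoint, which (unlike the main endpoint) accepts
# status changes.
kubectl replace --raw \
  "/apis/example.com/v1/namespaces/default/widgets/my-widget/status" \
  -f widget.json
```

This works because the server routes `/status` writes to the subresource handler, which persists status and ignores spec, i.e. the mirror image of the main endpoint's behavior.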
@nathanperkins This feature is planned for v1.22 - https://github.com/kubernetes/enhancements/tree/master/keps/sig-cli/2590-kubectl-subresource
Since it appears this did not make it into v1.22, do you have any idea of an updated timeline, @nikhita? Our use case is similar to that mentioned by @florianrusch.
@nikhita waiting for this feature +1
The https://github.com/ulucinar/kubectl-edit-status plugin can be used in the meantime.
Any timeline for this one?
kubernetes/kubernetes#99556 has merged and subresource support will be present in Kubernetes v1.24.
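With kubectl v1.24+, the new `--subresource` flag added by that PR lets `get`, `edit`, and `patch` target the status endpoint directly. A short sketch, using hypothetical resource and field names:

```shell
# Read the object as the status endpoint sees it.
kubectl get widgets my-widget --subresource=status -o yaml

# Open an editor session whose changes are sent to the /status endpoint,
# so .status edits are actually persisted.
kubectl edit widgets my-widget --subresource=status

# Or patch the status non-interactively.
kubectl patch widgets my-widget --subresource=status \
  --type=merge -p '{"status":{"phase":"Ready"}}'
```

Without `--subresource=status`, the same commands go to the main resource endpoint, which drops status changes as described below.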
The switchover procedure is entirely handled by PGK and not guided by the operator. During its reconciliation cycle, PGK looks for changes in the `targetPrimary` status field of the cluster. When a new server is promoted as a primary:

- the old primary will request a fast shutdown, leading to a restart of the Pod. The Pod will be restarted by the Kubelet, and PGK will wait for the switchover to be completed (`targetPrimary == currentPrimary`). It will then demote itself and start following the new primary.
- the promoted replica will wait for its WAL sender to be stopped and then promote itself.

Since `currentPrimary` and `targetPrimary` are stored in the status subresource of the Cluster, they can't currently be modified via the `kubectl` command, as per this bug: kubernetes/kubectl#564

This commit also includes a new command that can be used to promote PostgreSQL instances.

Co-authored-by: Leonardo Cecchi <leonardo.cecchi@2ndquadrant.it>
Co-authored-by: Marco Nenciarini <marco.nenciarini@2ndquadrant.it>
Co-authored-by: Francesco Canovai <francesco.canovai@2ndquadrant.it>
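The completion condition the commit message describes (switchover done once `targetPrimary` equals `currentPrimary`) can be observed from the status subresource fields it names. A rough sketch, assuming a hypothetical `cluster` resource name and object name:

```shell
# Read the two status fields the reconciliation loop compares.
current=$(kubectl get cluster my-cluster -o jsonpath='{.status.currentPrimary}')
target=$(kubectl get cluster my-cluster -o jsonpath='{.status.targetPrimary}')

# Switchover is complete when the two fields converge.
if [ "$current" = "$target" ]; then
  echo "switchover complete: primary is $current"
else
  echo "switchover in progress: $current -> $target"
fi
```

Reading these fields works fine; it is only writing them through `kubectl edit`/`apply` that this issue blocks.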
@nikhita Does kubernetes/kubernetes#99556 close this issue?
Looks like this thread staled out a bit. I tried the new flag out with kubectl v1.25.4. As a quick recap: the original reproduction from the top of this thread is still unchanged; if you don't use the new flag, changes to the status field are still silently dropped by the main resource endpoint.
@danopia thanks for summarizing it well!
This is correct. The main resource endpoint will ignore all changes in the status subpath, so `kubectl edit` or `apply` against it cannot update status. Closing since kubernetes/kubernetes#99556 fixes the issue.
@nikhita: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@nikhita can I use kubectl v1.24 while the server stays on v1.11 or v1.18? (These two versions are in production now and are hard to upgrade.)
Is this a BUG REPORT or FEATURE REQUEST? (choose one):

BUG REPORT

Kubernetes version (use `kubectl version`):

Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.0", GitCommit:"0ed33881dc4355495f623c6f22e7dd0b7632b7c0", GitTreeState:"clean", BuildDate:"2018-09-27T17:05:32Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.3", GitCommit:"a4529464e4629c21224b3d52edfe0ea91b072862", GitTreeState:"clean", BuildDate:"2018-09-09T17:53:03Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}

Environment:

What happened:

I have a CRD with the status subresource enabled. When I edit the status of the CRD using `kubectl edit`, the changes don't apply.

What you expected to happen:

`kubectl edit` should apply the changes in the status field.

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know:

If the status subresource is disabled for the CRD, then `kubectl edit` works fine.
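The reproduction can be sketched with a minimal CRD (all names here are hypothetical, not from the original report). Enabling `subresources.status` is what makes the main endpoint drop status edits:

```shell
# Create a CRD with the status subresource enabled.
kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  names:
    plural: widgets
    singular: widget
    kind: Widget
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
      subresources:
        status: {}   # enabling this triggers the behavior reported here
EOF

# Create an instance of the custom resource.
kubectl apply -f - <<'EOF'
apiVersion: example.com/v1
kind: Widget
metadata:
  name: my-widget
EOF

# Editing .status in the editor session is silently discarded; removing
# the `subresources` stanza from the CRD makes the same edit stick.
kubectl edit widget my-widget
```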