
kubectl edit or apply cannot update .status when the status subresource is enabled #564

Closed
nightfury1204 opened this issue Nov 21, 2018 · 46 comments
Labels: area/kubectl · kind/feature · lifecycle/frozen · priority/P1 · sig/cli

@nightfury1204

Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT

Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.0", GitCommit:"0ed33881dc4355495f623c6f22e7dd0b7632b7c0", GitTreeState:"clean", BuildDate:"2018-09-27T17:05:32Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.3", GitCommit:"a4529464e4629c21224b3d52edfe0ea91b072862", GitTreeState:"clean", BuildDate:"2018-09-09T17:53:03Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • minikube

What happened:
I have a CRD with the status subresource enabled. When I edit the status of the custom resource using kubectl edit, the changes are not applied.

What you expected to happen:
kubectl edit should apply the changes to the status field.

How to reproduce it (as minimally and precisely as possible):

$ cat customresourcedefination.yaml 
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: foos.try.com
spec:
  group: try.com
  version: v1alpha1
  scope: Namespaced
  subresources:
    status: {}
  names:
    plural: foos
    singular: foo
    kind: Foo

$ kubectl apply -f customresourcedefination.yaml
$ cat foo.yaml 
apiVersion: try.com/v1alpha1
kind: Foo
metadata:
  name: my-foo
status:
  hello: world

$ kubectl apply -f foo.yaml
# edit the status
$ kubectl edit foo/my-foo

Anything else we need to know:
If the status subresource is disabled for the CRD, then kubectl edit works fine.
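For anyone reproducing this, the silent drop can be confirmed from the command line. This is a sketch assuming the example CRD and CR above are already applied to a live cluster; the field name comes from the example:

```shell
# Try to change the status through the main resource endpoint.
# The request succeeds, but the status change is silently ignored
# when the status subresource is enabled.
kubectl patch foo my-foo --type=merge -p '{"status":{"hello":"changed"}}'

# The status is unchanged:
kubectl get foo my-foo -o jsonpath='{.status.hello}'
```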

@tamalsaha
Member

/kind bug
/sig cli

@k8s-ci-robot added the kind/bug and sig/cli labels on Nov 21, 2018
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Feb 19, 2019
@flyer103

flyer103 commented Mar 7, 2019

Related to kubernetes/kubernetes#60845

@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Apr 6, 2019
@coderanger
Member

/remove-lifecycle rotten

@k8s-ci-robot removed the lifecycle/rotten label on Apr 23, 2019
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Jul 22, 2019
@seans3
Contributor

seans3 commented Jul 22, 2019

/remove-lifecycle stale
/area kubectl

@k8s-ci-robot added the area/kubectl label and removed the lifecycle/stale label on Jul 22, 2019
@seans3
Contributor

seans3 commented Jul 22, 2019

/priority P2

@DBarthe

DBarthe commented Jul 26, 2019

I guess this is intended behavior according to the design proposal:

If the /status subresource is enabled, the following behaviors change:

  • The main resource endpoint will ignore all changes in the status subpath. (note: it will not reject requests which try to change the status, following the existing semantics of other resources).

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Oct 24, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Nov 23, 2019
@florianrusch

/remove-lifecycle rotten

@k8s-ci-robot removed the lifecycle/rotten label on Nov 27, 2019
@florianrusch

From @DBarthe

I guess this is intended behavior according to the design proposal:

If the /status subresource is enabled, the following behaviors change:

  • The main resource endpoint will ignore all changes in the status subpath. (note: it will not reject requests which try to change the status, following the existing semantics of other resources).

Wouldn't it be nice if we could change the status field in some way with kubectl? Maybe with an extra command like kubectl edit foo/status my-foo or kubectl edit foo.status my-foo. I'm not sure if there is any convention for how this kind of command should look.

Our use-case:
We've built an operator (shell-operator) for our CRDs and would like to edit the status field with kubectl from within this operator.
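Until kubectl supports subresources natively, one workaround is to go through the /status endpoint directly with kubectl's raw mode. This is a sketch, not an official recipe; the API path below assumes the example CRD from this thread and the default namespace:

```shell
# Fetch the current object, edit .status locally, then replace it
# through the /status subresource endpoint (a replace needs the
# full object body, not just the status):
kubectl get foo my-foo -o json > /tmp/my-foo.json
# ... edit .status in /tmp/my-foo.json ...
kubectl replace --raw \
  /apis/try.com/v1alpha1/namespaces/default/foos/my-foo/status \
  -f /tmp/my-foo.json
```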

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Feb 25, 2020
@nikhita
Member

nikhita commented Feb 28, 2021

Created kubernetes/kubernetes#99556 as a POC to get initial feedback

@eddiezane
Member

/unassign @eddiezane @seans3
/assign @nikhita

@aojea
Member

aojea commented May 5, 2021

/cc

@nathanperkins

What's the best workaround for this issue? I could use a client library to write it in Go, but that seems a bit heavy-handed.

@nikhita
Member

nikhita commented Jun 17, 2021

@nathanperkins This feature is planned for v1.22 - https://github.com/kubernetes/enhancements/tree/master/keps/sig-cli/2590-kubectl-subresource

@gtfortytwo

@nathanperkins This feature is planned for v1.22 - https://github.com/kubernetes/enhancements/tree/master/keps/sig-cli/2590-kubectl-subresource

Since it appears this did not make it into v1.22, do you have any idea of an updated timeline, @nikhita? Our use case is similar to the one mentioned by @florianrusch.

@silentred

@nikhita waiting for this feature +1

@AlekSi

AlekSi commented Sep 8, 2021

The https://github.com/ulucinar/kubectl-edit-status plugin can be used in the meantime.

@Ruoyu-y

Ruoyu-y commented Feb 25, 2022

Any timeline for this one?

@nikhita
Member

nikhita commented Mar 25, 2022

kubernetes/kubernetes#99556 has merged and subresource support will be present in Kubernetes v1.24.

mnencia added a commit to cloudnative-pg/cloudnative-pg that referenced this issue Apr 1, 2022
The switchover procedure is entirely handled by PGK and not guided by
the operator.

PGK during its reconciliation cycle will look for changes in the
`targetPrimary` status field of the cluster.

When a new server is promoted as a primary:

- the old primary will request a fast shutdown, leading to a restart in
  the Pod. The Pod will be restarted by the Kubelet, and PGK will wait for
  the switchover to be completed (targetPrimary == currentPrimary). It will
  then demote itself, and start following the new primary.

- the promoted replica will wait for its WAL sender to be stopped and
  then promote itself.

Since `currentPrimary` and `targetPrimary` are stored in the status
subresource of the Cluster, they can't currently be modified with
`kubectl`, as per this bug:

kubernetes/kubectl#564

This commit also includes a new command that can be used to
promote PostgreSQL instances.

Co-authored-by: Leonardo Cecchi <leonardo.cecchi@2ndquadrant.it>
Co-authored-by: Marco Nenciarini <marco.nenciarini@2ndquadrant.it>
Co-authored-by: Francesco Canovai <francesco.canovai@2ndquadrant.it>
@brianpursley
Member

@nikhita Does kubernetes/kubernetes#99556 close this issue?

@danopia

danopia commented Jan 3, 2023

Looks like this thread staled out a bit.

I tried the new flag with kubectl v1.25.4, and kubectl edit --subresource=status works to edit the status section. Nice!

As a quick recap: the original reproduction from the top of this thread is still unchanged. If you don't use --subresource and try changing status, those changes will still be silently dropped. As per #564 (comment) this is intended behavior.
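To make the recap concrete, here is what the new flag looks like with the example resource from the top of the thread. This is a sketch assuming kubectl v1.24 or newer against a cluster that serves the CRD:

```shell
# Edit the status interactively through the status subresource endpoint:
kubectl edit foo/my-foo --subresource=status

# Or patch it non-interactively; with the flag, the status change
# is no longer silently dropped:
kubectl patch foo my-foo --subresource=status --type=merge \
  -p '{"status":{"hello":"world2"}}'
```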

@nikhita
Member

nikhita commented Jan 3, 2023

@danopia thanks for summarizing it well!

As a quick recap: the original reproduction from the top of this thread is still unchanged. If you don't use --subresource and try changing status, those changes will still be silently dropped. As per #564 (comment) this is intended behavior.

This is correct. The main resource endpoint will ignore all changes in the status subpath, so --subresource needs to be explicitly specified if you need to edit the status section.

Closing since kubernetes/kubernetes#99556 fixes the issue.
/close

@k8s-ci-robot
Contributor

@nikhita: Closing this issue.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@fengyinqiao

fengyinqiao commented Mar 7, 2023

kubernetes/kubernetes#99556 has merged and subresource support will be present in Kubernetes v1.24.

@nikhita can I use kubectl v1.24 while the server stays on v1.11 or v1.18? (These two versions are in my production now and are hard to upgrade.)
