kubectl patch should not exit with a non-zero exit code on no changes #58212

Closed
wknapik opened this Issue Jan 12, 2018 · 12 comments

@wknapik

wknapik commented Jan 12, 2018

Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug

What happened:
kubectl patch exits with exit code 1 when there are no changes.

What you expected to happen:
I expected the exit code to be 0.
A warning could be printed to stderr, but the exit code should be 0, since no error occurred.
This is really annoying in automation, where the non-zero exit code means "something went wrong, bail".

How to reproduce it (as minimally and precisely as possible):

$ kubectl create configmap foo
configmap "foo" created
$ kubectl patch configmap/foo --patch $'data:\n  foo: bar'
configmap "foo" patched
$ kubectl patch configmap/foo --patch $'data:\n  foo: bar'
configmap "foo" not patched
$ echo "$?"
1
$

Anything else we need to know?:
Don't think so.

Environment:

  • Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2017-12-15T21:07:38Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.4", GitCommit:"9befc2b8928a9426501d3bf62f72849d5cbcd5a3", GitTreeState:"clean", BuildDate:"2017-11-20T05:17:43Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
  • Cloud provider or hardware configuration:
    AWS
  • OS (e.g. from /etc/os-release):
    Arch Linux
  • Kernel (e.g. uname -a):
    4.14.11
  • Install tools:
    ?
  • Others:
    N/A
@wknapik

wknapik commented Jan 12, 2018

If it was at least a unique exit code, I could do || [[ "$?" -eq 1 ]] and be done with it, but exit code 1 is used for actual errors, like "Error from server (NotFound)", etc.
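A minimal sketch of what that would make possible in a script; the exit code 3 used below is purely an assumption for illustration, not something kubectl actually returns:

set -euo pipefail

kubectl patch configmap/foo --patch $'data:\n  foo: bar' || {
  rc=$?
  # Hypothetical: tolerate only an (assumed) distinct "not patched" exit code
  # while still failing on real errors.
  if [[ "$rc" -ne 3 ]]; then
    echo "kubectl patch failed with exit code $rc" >&2
    exit "$rc"
  fi
}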

@wknapik

wknapik commented Jan 12, 2018

Also, the 'configmap "foo" not patched' message is printed to stdout, instead of stderr.

For Ansible users who get here from Google:

register: foo
failed_when: foo.rc > 0 and not "not patched" in foo.stdout
@nikhita
Member

nikhita commented Jan 12, 2018

/sig api-machinery

@yangyumo123

yangyumo123 commented Jan 12, 2018

Maybe this is not a bug. If you think it is a bug, you can modify it like this: remove the code below from the RunPatch function in pkg/kubectl/cmd/patch.go, or add some condition, like a GCE or AWS platform check, or distinguishing the first patch from subsequent patches, and so on.

// if object was not successfully patched, exit with error code 1
if !didPatch {
	return cmdutil.ErrExit
}

@liggitt

Member

liggitt commented Jan 13, 2018

Related to #52647 (comment) and #52577

@shiywang
Member

shiywang commented Jan 18, 2018

Firstly, this change was introduced in #52450; apologies that I didn't give it a more thoughtful review.

Here is my understanding (@liggitt please correct me if I'm wrong):

  1. From the k8s developer perspective, apply and patch should be considered idempotent ("make-it-so") commands, and according to the conventions here, patch should not directly throw an error (exit 1) when nothing was patched; it should only do so if the user sets the --error-unchanged flag, which is what my PR #52647 was going to implement.

  2. But since @juanvallejo's change made it directly throw an error (exit 1), if that is what users want, or if it is more acceptable to treat that as the "normal" patch behavior, then I could instead add an --ignore-unchanged flag to address this issue, but that would conflict with the convention docs here.

I lean toward option 1 (which is what my PR #52647 is currently implementing).

@juanvallejo @liggitt @smarterclayton which one do you prefer, or do you have any better ideas?

@shiywang
Member

shiywang commented Jan 18, 2018

Also, do we distinguish idempotent from non-idempotent commands by HTTP method? @liggitt

@filipnowakswissre

filipnowakswissre commented Feb 1, 2018

I don't want to create a new bug for this, but for me the main problem is that in the rc == 1 case the message is written to stdout instead of stderr, unlike before the breaking release of kubectl.

@bendrucker
Contributor

bendrucker commented Mar 1, 2018

Here's how I'm working around this in scripts to get an idempotent patch and output to stderr, in case it's helpful for anyone:

set +e
result=$(kubectl patch serviceaccount default --namespace "$namespace" --patch '{"imagePullSecrets": [{"name": "jfrog"}]}')
code="$?"
# Only treat this as a failure if the patch failed for a reason other than "not patched".
if [[ "$code" != "0" && "$result" != *" not patched" ]]; then
  echo "$result" 1>&2
  exit "$code"
fi
set -e

Happy to help if needed on #52647, seems like that's pretty much good to go though.

deiwin added a commit to salemove/pipeline-lib that referenced this issue May 11, 2018

Remove version label after rollback
`kubectl rollout undo` doesn't seem to completely remove the label, so it has
to be removed manually, to completely reset the deployment to the previous
state in case of a rollback.

Patch command for removing labels is borrowed from [SO][1] and the patch
idempotency helper from a [Kubernetes issue][2]. `kubectl label` is luckily
idempotent with removals and doesn't need any special mucking.

INF-1392

[1]: https://stackoverflow.com/a/46550845/1456578
[2]: kubernetes/kubernetes#58212 (comment)
@fejta-bot

fejta-bot commented May 30, 2018

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@wknapik

wknapik commented May 31, 2018

/remove-lifecycle stale

@mikecico

mikecico commented May 31, 2018

This bit me in our gitlab runner as well, causing a deploy script to fail fast.

HartS added a commit to SUSE/cf-ci that referenced this issue Jul 20, 2018

Patch sc for default annotation only when needed
Due to kubernetes/kubernetes#58212 (which may
only be tied to newer kubernetes versions), patching the storageclass to
set the 'default' annotation to the desired one will result in a failure
exit status when this would be a no-op. To work around this, avoid
patching an sc when it would be a no-op.
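A minimal sketch of that "check first, patch only when needed" workaround; the "$sc_name" variable and the default-class annotation value are assumptions for illustration, not taken from the commit above:

# Only patch when the annotation is not already set, so the patch is never a no-op.
if ! kubectl get storageclass "$sc_name" -o yaml \
    | grep -q 'storageclass.kubernetes.io/is-default-class: "true"'; then
  kubectl patch storageclass "$sc_name" --patch \
    '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
fi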

k8s-merge-robot added a commit that referenced this issue Aug 1, 2018

Merge pull request #66725 from juanvallejo/jvallejo/update-patch-retu…
…rn-code-logic

Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md

update exit code to 0 if patch not needed

**Release note**:
```release-note
The `kubectl patch` command no longer exits with exit code 1 when a redundant patch results in a no-op
```

The specific logic in the `patch` command that exited with code 1 only did so when there was no diff between an existing object and its patched counterpart. (In case of errors, we just return those, which eventually ends up exiting with code 1 anyway.) This patch removes that block, as we should not be treating patch no-ops as errors.

Fixes #58212

cc @soltysh
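With that change in place, re-running the reproduction from the top of this issue should leave the exit code at 0 for the redundant patch; a quick sketch, assuming a kubectl build that includes #66725 (the exact output messages may vary between versions):

$ kubectl patch configmap/foo --patch $'data:\n  foo: bar'
$ kubectl patch configmap/foo --patch $'data:\n  foo: bar'
$ echo "$?"
0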

deiwin added a commit to salemove/pipeline-lib that referenced this issue Aug 24, 2018

Remove version label after rollback
`kubectl rollout undo` doesn't seem to completely remove the label, so it has
to be removed manually, to completely reset the deployment to the previous
state in case of a rollback.

Patch command for removing labels is borrowed from [SO][1] and the patch
idempotency helper from a [Kubernetes issue][2]. `kubectl label` is luckily
idempotent with removals and doesn't need any special mucking.

INF-1392

[1]: https://stackoverflow.com/a/46550845/1456578
[2]: kubernetes/kubernetes#58212 (comment)