kubectl apply should have a way to indicate "no change" #52577

Closed

smarterclayton opened this issue Sep 15, 2017 · 19 comments
Labels
area/kubectl kind/feature Categorizes issue or PR as related to a new feature. priority/backlog Higher priority than priority/awaiting-more-evidence. sig/cli Categorizes an issue or PR as relevant to SIG CLI.

Comments

@smarterclayton
Contributor

smarterclayton commented Sep 15, 2017

Some configuration management systems distinguish between "no change", "changed", and "failed". It should be possible to use kubectl apply from normal shell scripting and know whether any changes were applied.

Since "no-op" is also success, and we don't expect clients to parse our stdout/stderr, it seems reasonable that we should allow a kubectl apply caller to request that a no-op be given a special exit code that we ensure no other result can return. Since today we return 1 for almost all errors, we have the option to begin defining "special" errors.

Possible options:

  • kubectl apply ... --fail-when-unchanged=2 returns exit code 2 (allows user to control exit code)
  • kubectl apply ... --fail-when-unchanged returns exit code 2 always (means we can document the exit code as per UNIX norms)

The latter is probably better. Naming of course is up in the air.
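For illustration only, a rough sketch of how a shell caller might consume the proposed flag, assuming the second option (a fixed, documented exit code of 2 for "no changes"). The flag name and exit code are the proposal above, not an existing kubectl option:

```bash
#!/usr/bin/env bash
# Sketch only: --fail-when-unchanged is the flag proposed in this issue,
# not a flag kubectl actually supports today.
kubectl apply -f manifests/ --fail-when-unchanged
status=$?

case "$status" in
  0) echo "changed: resources were created or updated" ;;
  2) echo "unchanged: apply was a no-op" ;;
  *) echo "failed: kubectl apply exited with $status" >&2
     exit "$status" ;;
esac
```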

@kubernetes/sig-cli-feature-requests

I rate this as high importance for integration with config management (like Ansible) which expects to be able to discern this.

@k8s-ci-robot k8s-ci-robot added sig/cli Categorizes an issue or PR as relevant to SIG CLI. kind/feature Categorizes issue or PR as related to a new feature. labels Sep 15, 2017
@shiywang
Contributor

shiywang commented Sep 16, 2017

Sounds interesting, @fabianofranz @pwittrock @mengqiy, I would like to call dibs on it.

@shiywang
Contributor

shiywang commented Sep 16, 2017

Hi @smarterclayton, I also need to confirm with you: this is only for apply, right? We do not currently want other declarative/imperative object configuration commands, or even every command, to have this feature, right?
Otherwise the title would be something like "implement a custom return error code mechanism for kubectl".

@smarterclayton
Contributor Author

smarterclayton commented Sep 16, 2017 via email

@pwittrock
Member

Sounds like a good idea. I remember some time back there were issues where apply would detect that there were changes when there in fact were none. This may have since been resolved. I think it was due to an interaction between round-tripping and defaulting, but I don't quite remember.

This would fit nicely with the other apply renovations we are doing to address long standing issues.

Re: prior art from other unix utils:

diff exits 0 on no differences, 1 on differences found, and >1 on error
grep exits 0 on lines found, 1 on no lines found, and >1 on error

If we had a green field, it might be worth trying to do something consistent - perhaps exit 1 if we make changes and 0 if we don't make any changes. That might lend itself to a retry loop too: fetch recent, apply, retry on non-zero exit (expecting that the next apply will return 0 if there are no changes, and maybe doing exponential backoff for exit codes >1).
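A minimal sketch of that retry loop, assuming the hypothetical green-field convention above (exit 1 when changes were made, 0 when nothing changed, >1 on error); this is not how kubectl apply behaves today:

```bash
#!/usr/bin/env bash
# Hypothetical convention: 0 = no changes, 1 = changes applied, >1 = error.
attempt=0
backoff=1
until kubectl apply -f manifests/; do
  rc=$?
  if [ "$rc" -eq 1 ]; then
    # Changes were applied; re-apply and expect the next run to exit 0.
    continue
  fi
  # rc > 1: a real error; back off exponentially and give up after a few tries.
  attempt=$((attempt + 1))
  if [ "$attempt" -ge 5 ]; then
    echo "giving up after $attempt failed attempts" >&2
    exit "$rc"
  fi
  sleep "$backoff"
  backoff=$((backoff * 2))
done
echo "converged: apply reported no changes"
```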

This of course may impact existing scripts, so doing as you suggested and making it opt-in is the better route, and then adding this to the list of things we would like to change when we do something that allows us to break backward compatibility (e.g. introducing a new "version" of the command or something).

Re naming: maybe something like --exit-failure-unchanged?

@mengqiy
Member

mengqiy commented Sep 17, 2017

This feature will be helpful. And it doesn't require a big change, since apply can already distinguish whether there is a change (but it only prints it out).

I agree with @pwittrock's opinion: make it opt-in for now and change the behavior in a future major version.

@mengqiy
Member

mengqiy commented Sep 19, 2017

it doesn't require a big change, since apply can already distinguish whether there is a change (but it only prints it out).

@shiywang Sorry, I was wrong. It is actually kubectl edit that can distinguish whether there is a change and print "no changes made".

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 6, 2018
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Feb 10, 2018
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@soltysh
Contributor

soltysh commented May 17, 2018

/remove-lifecycle stale
/lifecycle frozen

@soltysh soltysh added priority/backlog Higher priority than priority/awaiting-more-evidence. and removed lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. labels May 17, 2018
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 15, 2018
@huyqut

huyqut commented Sep 5, 2018

Is there any progress on this?

@soltysh
Contributor

soltysh commented Sep 11, 2018

There's a server-side apply working group, which is working on moving the apply command to the server. It'd be good to sync with them for an update.

@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Oct 11, 2018
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@tpoindessous

/remove-lifecycle stale
/remove-lifecycle rotten
/reopen

@k8s-ci-robot
Contributor

@tpoindessous: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/remove-lifecycle stale
/remove-lifecycle rotten
/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Dec 13, 2018
@tpoindessous

Hi @soltysh, could you please re-open this issue if it's not finished?

Thanks!
