kubectl apply should have a way to indicate "no change" #52577
Comments
Sounds interesting, @fabianofranz @pwittrock @mengqiy. I would like to call dibs on it.
Hi @smarterclayton, I also need to confirm with you: this is only for apply, right? We do not currently want other declarative/imperative object configuration commands, or even every command, to have this feature, right?
In the future this would likely be extended to other declarative commands. We should pick a pattern that would translate to all other declarative commands. Apply is the important one right now.
On Sat, Sep 16, 2017 at 7:04 AM, Shiyang Wang wrote: /assign
Sounds like a good idea. I remember some time back there were issues where apply would detect changes when there were in fact none. This may have since been resolved; I think it was due to an interaction between round-tripping and defaulting, but I don't quite remember. This would fit nicely with the other apply renovations we are doing to address long-standing issues. Re prior art for other unix utils:

If we had a green field, it might be worth trying to do something consistent - perhaps exit 1 if we make changes and 0 if we don't make any changes. That might lend itself to a retry loop too: fetch recent, apply, retry on non-zero exit (expecting that the next apply will return 0 if no changes, and maybe doing exponential backoff for exit codes greater than 1). This of course may impact existing scripts, so doing as you suggested and making it opt-in is the better route, and then adding this to the list of things we would like to change when we do something that allows us to break backward compatibility (e.g. introducing a new "version" of the command or something). Re naming: maybe something like
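As an illustration of the retry loop described in the comment above, here is a minimal bash sketch. It assumes the hypothetical convention of exit 0 for "no changes needed", exit 1 for "changes applied", and higher codes for real errors; none of these exit codes exist in kubectl apply today, and example.yaml is just a placeholder manifest.

```bash
#!/usr/bin/env bash
# Hypothetical sketch only: kubectl apply does NOT return these exit codes today.
# Assumed convention: 0 = no changes needed, 1 = changes applied, >1 = real error.
manifest="example.yaml"   # placeholder manifest for this sketch
backoff=1
for attempt in 1 2 3 4 5; do
  kubectl apply -f "$manifest"
  rc=$?
  if [ "$rc" -eq 0 ]; then
    echo "converged: no changes needed"
    break
  elif [ "$rc" -eq 1 ]; then
    echo "changes applied on attempt $attempt; re-applying to confirm convergence"
  else
    echo "error (exit $rc); retrying in ${backoff}s"
    sleep "$backoff"
    backoff=$((backoff * 2))   # exponential backoff for real errors
  fi
done
```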
This feature will be helpful. And it doesn't require a big change, since apply can already distinguish whether there is a change (but it only prints it out). I agree with @pwittrock's opinion: make it opt-in for now and change the behavior in a future major version.
@shiywang Sorry, I was wrong. It is actually
Issues go stale after 90d of inactivity. Prevent issues from auto-closing with an /lifecycle frozen comment. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Is there any progress on this?
There's a server-side apply working group, which is working on moving the apply command to the server. It'd be good to sync with them for the update.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closing this issue.
/remove-lifecycle stale
@tpoindessous: You can't reopen an issue/PR unless you authored it or you are a collaborator.
Hi @soltysh, could you please re-open this issue if it's not finished? Thanks!
Some configuration management systems distinguish between "no change", "changed", and "failed". It should be possible to use kubectl apply from normal shell scripting and know when any changes were applied.
Since "no-op" is also success, and we don't expect clients to parse our stdout/stderr, it seems reasonable that we should allow a kubectl apply caller to request that a no-op be given a special exit code that we ensure no other result can return. Since today we return 1 for almost all errors, we have the option to begin defining "special" errors.
Possible options:

- kubectl apply ... --fail-when-unchanged=2 returns exit code 2 (allows the user to control the exit code)
- kubectl apply ... --fail-when-unchanged returns exit code 2 always (means we can document the exit code as per UNIX norms)

The latter is probably better. Naming of course is up in the air.
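As a usage sketch only, assuming the opt-in --fail-when-unchanged flag proposed above were implemented (it is not a real kubectl flag today) with exit code 2 reserved for the no-op case, a caller might consume the dedicated exit code like this:

```bash
# Sketch of the proposed opt-in behavior; --fail-when-unchanged is a proposed,
# not an existing, kubectl flag. example.yaml is a placeholder manifest.
kubectl apply -f example.yaml --fail-when-unchanged
case $? in
  0) echo "changes were applied" ;;
  2) echo "no-op: cluster already matched the manifest" ;;
  *) echo "apply failed" ;;
esac
```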
@kubernetes/sig-cli-feature-requests
I rate this as high importance for integration with configuration management tools (like Ansible), which expect to be able to discern this.