
Deprecate and remove --record flag from kubectl #40422

Open
soltysh opened this issue Jan 25, 2017 · 55 comments
Labels: area/kubectl, kind/feature, lifecycle/frozen, sig/cli

Comments

@soltysh
Contributor

soltysh commented Jan 25, 2017

Currently the --record flag seems like a really bad decision made some time ago, and supporting it makes life harder when trying to modify any parts of the code (see this discussion). I'm proposing to deprecate that flag and drop it entirely in a future version.

@liggitt fyi
@kubernetes/sig-cli-feature-requests opinions?

@soltysh soltysh added the area/kubectl, sig/cli, and sig/windows labels Jan 25, 2017
@fabianofranz
Contributor

@soltysh do you mean only deprecating the flag and making it always record, or removing the entire recording feature?

@soltysh
Contributor Author

soltysh commented Jan 25, 2017 via email

@soltysh
Contributor Author

soltysh commented Jan 25, 2017 via email

@0xmichalis
Contributor

SGTM

@soltysh soltysh added the kind/feature label and removed the sig/windows label Jan 26, 2017
@soltysh
Contributor Author

soltysh commented Jan 26, 2017

If I don't hear any objections I'll start with deprecating this flag in 1.6.

@liggitt
Member

liggitt commented Jan 26, 2017

needs broader discussion and agreement... find the originating PR and tag those folks in.

@soltysh
Contributor Author

soltysh commented Jan 27, 2017

The initial PR was to address history information about deployments: #20035, implemented by @janetkuo. @Kargakis you will know better whether it's still helpful, since it only has the change cause. Besides, that annotation isn't filled in when doing a regular update; only certain kubectl commands actually fill it in.
Additionally, there are some open issues @smarterclayton raised on the original PR w.r.t. not storing any sensitive information like passwords, secrets, etc. (#20508).
My main argument remains strong: there are other mechanisms (advanced audit) that are targeted at actually solving this problem.
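For context, all --record does is write a single annotation on the object; the invoked command line becomes the CHANGE-CAUSE shown by kubectl rollout history. A sketch of the stored result (the object and command shown are illustrative, not from this issue):

```yaml
# Fragment of a Deployment after a kubectl command run with --record.
# The annotation value is the command line that was invoked, which
# `kubectl rollout history` surfaces as CHANGE-CAUSE. This is also where
# sensitive flag values could end up being stored (#20508).
metadata:
  annotations:
    kubernetes.io/change-cause: kubectl set image deployment nginx nginx=nginx:1.19 --record=true
```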

@0xmichalis
Contributor

We still need a way to properly correlate information about why a rollout happened with the respective replica set. Admittedly, --record is far from perfect and I am all for deprecating it, but not before we find a better way to store the info we need. How about a flag (either repurposing --record or a new one) that accepts the information users want to store? There have been requests about overriding change-cause; see #25554.

@janetkuo
Member

janetkuo commented Jan 27, 2017

+1 for supporting overriding change-cause. We need to provide an alternative to rollout history change-cause before deprecating --record.

cc @ghodss

@ghodss
Contributor

ghodss commented Jan 27, 2017

I haven't thought about this issue for a while, but I'm pretty sure we can solve our use case of recording which git commit or jenkins run resulted in an apply in our own custom annotation and not need a kube-standard one.

@0xmichalis
Contributor

I haven't thought about this issue for a while, but I'm pretty sure we can solve our use case of recording which git commit or jenkins run resulted in an apply in our own custom annotation and not need a kube-standard one.

This is not ideal, because we already do this sort of thing in kubectl, albeit storing less valuable info, i.e. the kubectl command that was invoked. @kubernetes/sig-cli-feature-requests let's add a new flag in kubectl that users can use similarly to --record, but instead of storing the invoked command, it stores the string that is passed by the user.

@soltysh
Contributor Author

soltysh commented Jan 27, 2017 via email

@0xmichalis
Contributor

What alternative do you suggest? Repurpose the current flag? Something else? We need users/automated processes to be able to specify a reason when images (or less frequently other parts of the pod spec) are updated so things like kubectl set image or kubectl apply need to pass the info down to the deployment->replica set.

@soltysh
Contributor Author

soltysh commented Jan 30, 2017

I'm leaning towards an automated process; I don't have any details figured out, but will keep you posted.

@adohe-zz

adohe-zz commented Feb 3, 2017

/cc @adohe

@zjj2wry
Contributor

zjj2wry commented Sep 25, 2017

/cc

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale label Jan 6, 2018
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

@liggitt liggitt added the lifecycle/frozen label Aug 17, 2019
@joelhoisko

Has there been any progress on this issue since 2019?

@BenTheElder
Member

#102873: this is actually happening now.

@crokobit

crokobit commented Jul 9, 2021

+1

@praparn

praparn commented Sep 11, 2021

Is it possible to append this option in the deployment YAML itself? A lot of us need this feature for checking and recording history.
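As a sketch of that idea: the kubernetes.io/change-cause annotation is just object metadata, so it can be set in the manifest itself and will show up in rollout history (the name and value here are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-rollout
  annotations:
    # Surfaced as CHANGE-CAUSE by `kubectl rollout history`
    kubernetes.io/change-cause: "deploy nginx 1.21.0"
spec:
  # ... rest of the deployment spec unchanged
```

Note this records whatever string the author wrote in the manifest, not the command that was actually run.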

@vovasoft

We need to provide an alternative to rollout history change-cause before deprecating --record.

@cuianbing

2022-01-20: when I use the --record flag now, it prompts me with "Flag --record has been deprecated, --record will be removed in the future". Has it been discarded? Is there an alternative solution now? Thanks

@sysnet4admin

FYI (2022-02-14): v1.23 is the same as before.

[root@m-k8s 9.4]# k get node 
NAME     STATUS   ROLES                  AGE     VERSION
m-k8s    Ready    control-plane,master   6d21h   v1.23.3
w1-k8s   Ready    <none>                 6d21h   v1.23.3
<snipped>
[root@m-k8s 9.4]# k set image deployment deploy-rollout nginx=nginx:1.21.0 --record
Flag --record has been deprecated, --record will be removed in the future
deployment.apps/deploy-rollout image updated

@tanvp112

tanvp112 commented Mar 6, 2022

#102873

@vjunior1981

2022-01-20: when I use the --record flag now, it prompts me with "Flag --record has been deprecated, --record will be removed in the future". Has it been discarded? Is there an alternative solution now? Thanks

@olwenya I just tested it like this and it worked:

~$ kubectl create deploy nginx --image=nginx --replicas=2
deployment.apps/nginx created
~$ kubectl set image deploy/nginx nginx=nginx:1.19
deployment.apps/nginx image updated
~$ kubectl annotate deploy/nginx kubernetes.io/change-cause='update image to 1.19'
deployment.apps/nginx annotated
~$ kubectl rollout history deploy/nginx
deployment.apps/nginx 
REVISION  CHANGE-CAUSE
1         <none>
2         update image to 1.19

Not perfect, but it is an alternative.

@sysnet4admin

@vjunior1981's suggestion looks good, I think.
So how about --annotation instead of --record? Like this, for instance:

~$ kubectl set image deploy/nginx nginx=nginx:1.19 --annotation='update image to 1.19'
deployment.apps/nginx image updated
~$ kubectl rollout history deploy/nginx
deployment.apps/nginx 
REVISION  CHANGE-CAUSE
1         update image to 1.19

@praparn

praparn commented May 5, 2022

Wow, this looks like a good idea. Checking back on Deployment, they also removed --record. It will take some time to test both of these.

@fireflycons

Or even introduce --annotation and have a default if the user does not specify it, along the lines of:

kubectl set image deploy/nginx nginx=nginx:1.19

...defaulting to "set image to nginx:1.19" (perhaps just the tag, if the image name is considered insecure).

kubectl rollout undo deploy/nginx --to-revision=3

...defaulting to "rollback to revision 3".

@mohini4prac

Although the annotate option is there to set CHANGE-CAUSE, it would be better to have the --record option. It may happen that an incorrect message is provided via annotate. Recording the actual command that was run while updating the deployment would be more useful.

@pnts-se

pnts-se commented Jul 7, 2022

#40422 (comment)

This does not work for me when working with a DaemonSet in v1.23.1.
I must do it the other way around: first annotate, then set the image, like so:

kubectl annotate ds myds01 kubernetes.io/change-cause='downgrade to 1.16.1-alpine'
kubectl set image ds myds01 nginx=nginx:1.16.1-alpine

@ric79

ric79 commented Aug 2, 2022

Here is my example:

$ k version --short
Client Version: v1.24.3
Kustomize Version: v4.5.4
Server Version: v1.24.0

$ k create deployment nginx-dep --image=nginx:1.22.0-alpine-perl --replicas 5

$ k rollout history deployment nginx-dep
deployment.apps/nginx-dep
REVISION  CHANGE-CAUSE
1         <none>

$ k set image deployment nginx-dep nginx=nginx:1.23-alpine-perl
$ k annotate deployment nginx-dep kubernetes.io/change-cause="demo version changed from 1.22.0 to 1.23.0" --overwrite=true
$ k rollout history deployment nginx-dep
deployment.apps/nginx-dep
REVISION  CHANGE-CAUSE
1         <none>
2         demo version changed from 1.22.0 to 1.23.0

$ k set image deployment nginx-dep nginx=nginx:1.23.1-alpine-perl
$ k annotate deployment nginx-dep kubernetes.io/change-cause="demo version changed from 1.23.0 to 1.23.1" --overwrite=true
$ k rollout history deployment nginx-dep                                                                             
deployment.apps/nginx-dep
REVISION  CHANGE-CAUSE
1         <none>
2         demo version changed from 1.22.0 to 1.23.0
3         demo version changed from 1.23.0 to 1.23.1

@rittneje

@soltysh What is the replacement for this feature? Having to manually annotate everything is not a workable solution. I see some mention of HTTP headers getting sent by kubectl, but it is very unclear what is expected to consume these headers, and how I am expected to see them from the various yaml specs. And the kubectl debug logs don't show any additional headers being sent, even when explicitly setting KUBECTL_COMMAND_HEADERS=1.

@huapox

huapox commented Feb 12, 2023

It works for me:

[root@(?.|default:default) ~]$ kc version --short 
Client Version: v1.17.5
Server Version: v1.22.17+k3s1

[root@(?.|default:default) ~]$ kc set env ds alpine-ds AA=123
daemonset.apps/alpine-ds env updated
[root@(?.|default:default) ~]$ kc annotate ds alpine-ds kubernetes.io/change-cause='set AA=123'
daemonset.apps/alpine-ds annotated

[root@(?.|default:default) ~]$ kc rollout history ds alpine-ds
daemonset.apps/alpine-ds 
REVISION  CHANGE-CAUSE
5         update image to 1.19-04
6         set AA=123
