
kubectl run: change default restart policy due to generator deprecation #82723

Closed
bdowling opened this issue Sep 14, 2019 · 6 comments
Labels
kind/bug Categorizes issue or PR as related to a bug.
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.
sig/cli Categorizes an issue or PR as relevant to SIG CLI.

Comments

@bdowling

bdowling commented Sep 14, 2019

What happened:

Most of the generators for kubectl run have been deprecated per #68132.

As a result, the default invocation of kubectl run -it some/image my-pod still complains about this deprecation many versions later.

kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.

I believe this happens because --restart=Always is the default, so kubectl tries to generate a Deployment. Adding --restart=Never to the command line creates a Pod instead. That should probably just be made the default, given that run is no longer encouraged for creating ReplicaSets, Deployments, etc.

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

$ kubectl run  --image=busybox test --dry-run -o yaml | grep kind:
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
kind: Deployment

$ kubectl run --restart=Never  --image=busybox test --dry-run -o yaml | grep kind:
kind: Pod
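
For comparison, the non-deprecated generator that the warning itself suggests should produce a Pod directly; this is what I'd expect from the same dry-run pattern (sketch, not re-tested here):

$ kubectl run --generator=run-pod/v1 --image=busybox test --dry-run -o yaml | grep kind:
kind: Pod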

Anything else we need to know?:

If you google for this, you will find Stack Overflow posts and others saying that kubectl run is deprecated, which I don't believe it is; only the generators moved over to kubectl create, etc.

Being able to use kubectl run in the same fashion as docker run for quick pod tests, etc. is very useful.
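
For example, something like this gives a quick throwaway shell (a sketch, assuming -it, --rm, and --restart=Never behave as on my 1.15 client):

$ kubectl run -it --rm --restart=Never --image=busybox test -- sh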

Environment:

  • Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.2", GitCommit:"f6278300bebbb750328ac16ee6dd3aa7d3549568", GitTreeState:"clean", BuildDate:"2019-08-05T09:23:26Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.7-gke.24", GitCommit:"2ce02ef1754a457ba464ab87dba9090d90cf0468", GitTreeState:"clean", BuildDate:"2019-08-12T22:05:28Z", GoVersion:"go1.11.5b4", Compiler:"gc", Platform:"linux/amd64"}

/sig CLI

@bdowling bdowling added the kind/bug Categorizes issue or PR as related to a bug. label Sep 14, 2019
@k8s-ci-robot k8s-ci-robot added the sig/cli Categorizes an issue or PR as relevant to SIG CLI. label Sep 14, 2019
@ZP-AlwaysWin
Contributor

ZP-AlwaysWin commented Sep 15, 2019

When you create a Deployment, the default restart policy is "Always"; a Deployment does not support a "Never" restart policy.

The Deployment "xxx-deployment" is invalid: spec.template.spec.restartPolicy: Unsupported value: "Never": supported values: "Always"

So when you use --restart=Never, it defaults to creating a Pod, because it can't create a Deployment.

You can try this command:

$ kubectl run  --image=busybox test --dry-run --generator=run-pod/v1 --restart=Never -o yaml|grep -E 'kind|restartPolicy'
---
kind: Pod
  restartPolicy: Never
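
If you actually want a Deployment, the deprecation message also points to kubectl create; a rough sketch of that path (output assumed):

$ kubectl create deployment test --image=busybox --dry-run -o yaml | grep kind:
kind: Deployment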

@bdowling
Author

bdowling commented Nov 8, 2019

@ZP-AlwaysWin I'm not sure what is different from what you added; my point is that the default run errors out. If the other generators are deprecated, the default generator or the default restart policy should change.

I can submit a PR, but I was looking for support/agreement before I put in the effort.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 6, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Mar 7, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
