
Use --dry-run=server in kubectl commands #87714

Merged

Conversation

@julianvmodesto (Contributor) commented Jan 31, 2020

What type of PR is this?

/kind feature

What this PR does / why we need it:
Now that we can parse --dry-run=server, let's start using it in commands such as apply, create, patch, etc. (see the usage sketch just after this checklist):

  • annotate
  • apply
  • create
  • expose
  • label
  • patch
  • run
  • set image
  • set resources
  • set selector
  • set serviceaccount
  • apply set-last-applied
  • autoscale
  • create clusterrole
  • create clusterrolebinding
  • create configmap
  • create cronjob
  • create deployment
  • create job
  • create namespace
  • create poddisruptionbudget
  • create priorityclass
  • create quota
  • create role
  • create rolebinding
  • create secret
  • create service
  • create serviceaccount
  • drain
  • rollout undo
  • set env
  • set subject
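
For example, a quick usage sketch (flag semantics per the dry-run KEP; the manifest name is illustrative):

```sh
# Client-side dry-run: the object is processed and printed locally;
# nothing is submitted for persistence.
kubectl apply -f deployment.yaml --dry-run=client

# Server-side dry-run: the request runs through the API server's full
# admission and validation chain, but nothing is persisted.
kubectl apply -f deployment.yaml --dry-run=server
```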

Follow-up to #87580.

Which issue(s) this PR fixes:

#85652

Does this PR introduce a user-facing change?:

Support server-side dry-run in kubectl with --dry-run=server for commands including apply, patch, create, run and expose.

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:

- [KEP]: https://github.com/kubernetes/enhancements/blob/master/keps/sig-api-machinery/0015-dry-run.md

/sig api-machinery
/sig cli
cc @apelisse
/priority important-soon

@k8s-ci-robot (Contributor) commented Jan 31, 2020

Hi @julianvmodesto. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot requested review from deads2k and dixudx Jan 31, 2020
@julianvmodesto julianvmodesto force-pushed the julianvmodesto:use-kubectl-ss-dry-run-flag branch from d66f59a to ad1c663 Jan 31, 2020
@julianvmodesto julianvmodesto force-pushed the julianvmodesto:use-kubectl-ss-dry-run-flag branch from ad1c663 to c6dc998 Jan 31, 2020
@apelisse (Member) commented Jan 31, 2020

/ok-to-test
/lgtm
/approve

@julianvmodesto julianvmodesto changed the title [WIP] use --dry-run=server in kubectl commands Use --dry-run=server in kubectl commands Jan 31, 2020
@julianvmodesto julianvmodesto force-pushed the julianvmodesto:use-kubectl-ss-dry-run-flag branch from c6dc998 to 5e32542 Jan 31, 2020
@k8s-ci-robot k8s-ci-robot removed the lgtm label Jan 31, 2020
@julianvmodesto julianvmodesto force-pushed the julianvmodesto:use-kubectl-ss-dry-run-flag branch from 5e32542 to 78aad8d Jan 31, 2020
@apelisse (Member) commented Feb 10, 2020

Also make sure you update your checklist!

@julianvmodesto julianvmodesto force-pushed the julianvmodesto:use-kubectl-ss-dry-run-flag branch 5 times, most recently from b5cbd9b to 3097fc5 Feb 10, 2020
@k8s-ci-robot k8s-ci-robot added size/XXL and removed size/XL labels Feb 11, 2020
@apelisse apelisse mentioned this pull request Feb 11, 2020
@julianvmodesto (Contributor, Author) commented Feb 11, 2020

Also, thank you @mikedanese for adding Options to the generated clients recently :) We got lucky with timing; otherwise parts of this change would have been trickier, or blocked entirely.
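
For context, that Options plumbing is what lets server-side dry-run ride through the generated clients directly. A minimal hedged sketch against current client-go signatures (the helper and variable names are illustrative, not code from this PR):

```go
import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// serverDryRunCreate submits a create that the API server admits and
// validates in full but does not persist: DryRun is carried on the
// generated client's CreateOptions.
func serverDryRunCreate(cs kubernetes.Interface, ns string, cm *corev1.ConfigMap) (*corev1.ConfigMap, error) {
	return cs.CoreV1().ConfigMaps(ns).Create(context.TODO(), cm, metav1.CreateOptions{
		DryRun: []string{metav1.DryRunAll}, // "All" is the only value the API accepts
	})
}
```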

@julianvmodesto julianvmodesto force-pushed the julianvmodesto:use-kubectl-ss-dry-run-flag branch 2 times, most recently from e641d98 to ec4f225 Feb 11, 2020
@julianvmodesto (Contributor, Author) commented Feb 12, 2020

I'm removing rollout-undo from my checklist because it's a deprecated command: #88051

@julianvmodesto julianvmodesto force-pushed the julianvmodesto:use-kubectl-ss-dry-run-flag branch 2 times, most recently from 9551b1c to 8532665 Feb 12, 2020
@julianvmodesto (Contributor, Author) commented Feb 12, 2020

PTAL @apelisse. The checklist is complete: every command that previously supported --dry-run now supports both --dry-run=client and --dry-run=server. Each command has a corresponding updated or new test under test/cmd/, and these integration tests are passing.

As discussed, we can add --dry-run=client|server support to more commands in a follow up.
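
For reference, a hedged sketch of the shape of those test/cmd assertions (following the repo's existing kube::test helpers; the object name is illustrative, not the exact test code):

```sh
# A server-side dry-run create is admitted and validated by the API
# server, but the object must not exist afterwards.
kubectl create configmap test-the-map --dry-run=server "${kube_flags[@]}"
output_message=$(! kubectl get configmap test-the-map 2>&1 "${kube_flags[@]}")
kube::test::if_has_string "${output_message}" 'configmaps "test-the-map" not found'
```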

@apelisse (Member) left a comment

Looks great, maybe a couple of super small nits.

staging/src/k8s.io/kubectl/pkg/cmd/apply/apply.go (outdated; resolved)
@@ -335,7 +335,7 @@ func TestRunExposeService(t *testing.T) {
 			},
 			flags: map[string]string{"selector": "svc=frompod", "port": "90", "labels": "svc=frompod", "generator": "service/v2"},
 			output: &corev1.Service{
-				ObjectMeta: metav1.ObjectMeta{Name: "a-name-that-is-toooo-big-for-a-service-because-it-can-only-handle-63-characters", Namespace: "", Labels: map[string]string{"svc": "frompod"}},
+				ObjectMeta: metav1.ObjectMeta{Name: "a-name-that-is-toooo-big-for-a-service-because-it-can-only-handle-63-characters"[:63], Namespace: "", Labels: map[string]string{"svc": "frompod"}},

@julianvmodesto (Author, Contributor) commented Feb 13, 2020
This test is actually broken: test.output in the table tests was supposed to be used somewhere, but it was never plumbed through, and it's hard to tell how these table tests ever worked without that.

But anyway, after I fixed the plumbing for test.output, I had to fix this specific test, which checks that .metadata.name is truncated to the 63-character limit.

@julianvmodesto (Author, Contributor) commented Feb 13, 2020

Okay, looking back, I see that what I did was change this to use resource.Helper instead of our DynamicClient. Interestingly, that change somehow revealed that these tests are broken...

I could change back to using the DynamicClient, maybe.

@@ -611,6 +611,8 @@ func TestRunExposeService(t *testing.T) {
 			switch p, m := req.URL.Path, req.Method; {
 			case p == test.calls[m] && m == "GET":
 				return &http.Response{StatusCode: test.status, Header: cmdtesting.DefaultHeader(), Body: cmdtesting.ObjBody(codec, test.input)}, nil
+			case p == test.calls[m] && m == "POST":

@apelisse (Member) commented Feb 13, 2020
Why do you have to change that?

@julianvmodesto (Author, Contributor) commented Feb 13, 2020
As mentioned in the comment above, too:

The test was technically broken but somehow passed previously... The table test's test.output was never used until now; we got away with returning test.input as part of the test response, which works for part of the expose command.

@@ -72,7 +72,7 @@ func (c *CordonHelper) UpdateIfRequired(desired bool) bool {
 // updating the given node object; it may return error if the object cannot be encoded as
 // JSON, or if either patch or update calls fail; it will also return a second error
 // whenever creating a patch has failed
-func (c *CordonHelper) PatchOrReplace(clientset kubernetes.Interface) (error, error) {
+func (c *CordonHelper) PatchOrReplace(clientset kubernetes.Interface, dryRun bool) (error, error) {

@apelisse (Member) commented Feb 13, 2020

It's not super clear what type of dry-run this is. Probably doesn't matter.

@julianvmodesto (Author, Contributor) commented Feb 13, 2020

Done: renamed s/dryRun/serverDryRun for clarity.
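
For reference, the serverDryRun flag presumably just toggles DryRun on the request options. A minimal hedged sketch (not the exact PR code; the function and variable names are illustrative):

```go
import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// patchNode applies patchBytes to the named node. With serverDryRun set,
// the API server runs the patch through admission and validation without
// persisting the result.
func patchNode(cs kubernetes.Interface, name string, patchBytes []byte, serverDryRun bool) error {
	opts := metav1.PatchOptions{}
	if serverDryRun {
		opts.DryRun = []string{metav1.DryRunAll}
	}
	_, err := cs.CoreV1().Nodes().Patch(context.TODO(), name, types.StrategicMergePatchType, patchBytes, opts)
	return err
}
```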

@@ -271,7 +291,9 @@ func (d *Helper) evictPods(pods []corev1.Pod, policyGroupVersion string, getPodF
 				out: d.Out,
 			}
 			_, err := waitForDelete(params)
-			if err == nil {
+			if d.DryRunStrategy == cmdutil.DryRunServer {

@apelisse (Member) commented Feb 13, 2020

Is waitForDelete going to do anything, since we're not actually doing a delete? I suspect we might want to return early on this one, or even reject server dry-run.

@julianvmodesto (Author, Contributor) commented Feb 13, 2020

Ah right. Done: returned early.
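
Roughly like this, presumably; a hedged fragment of the fix's shape (not the exact PR code, surrounding code elided):

```go
// Under server-side dry-run the eviction was never persisted, so there is
// no deletion to observe; waiting would only block until timeout.
if d.DryRunStrategy == cmdutil.DryRunServer {
	return
}
_, err := waitForDelete(params)
```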

@julianvmodesto julianvmodesto force-pushed the julianvmodesto:use-kubectl-ss-dry-run-flag branch 2 times, most recently from 921bf08 to fe71260 Feb 13, 2020
- Support --dry-run=server for subcommands apply, run, create, annotate,
expose, patch, label, autoscale, apply set-last-applied, drain, rollout undo
- Support --dry-run=server for set subcommands
  - image
  - resources
  - serviceaccount
  - selector
  - env
  - subject
- Support --dry-run=server for create subcommands.
  - clusterrole
  - clusterrolebinding
  - configmap
  - cronjob
  - job
  - deployment
  - namespace
  - poddisruptionbudget
  - priorityclass
  - quota
  - role
  - rolebinding
  - service
  - secret
  - serviceaccount
- Remove GetClientSideDryRun
@julianvmodesto julianvmodesto force-pushed the julianvmodesto:use-kubectl-ss-dry-run-flag branch from fe71260 to 13b80b4 Feb 13, 2020
@julianvmodesto (Contributor, Author) commented Feb 13, 2020

Addressed the comments, and also squashed!

@apelisse (Member) commented Feb 13, 2020

I think that's good.

/lgtm
/approve

Let's talk tomorrow to see what's left. We're getting there!

@k8s-ci-robot k8s-ci-robot added the lgtm label Feb 13, 2020
@k8s-ci-robot (Contributor) commented Feb 13, 2020

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: apelisse, julianvmodesto

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot merged commit a11a8b8 into kubernetes:master Feb 13, 2020
17 checks passed
cla/linuxfoundation: julianvmodesto authorized
pull-kubernetes-bazel-build: Job succeeded.
pull-kubernetes-bazel-test: Job succeeded.
pull-kubernetes-conformance-kind-ipv6-parallel: Job succeeded.
pull-kubernetes-dependencies: Job succeeded.
pull-kubernetes-e2e-gce: Job succeeded.
pull-kubernetes-e2e-gce-100-performance: Job succeeded.
pull-kubernetes-e2e-gce-device-plugin-gpu: Job succeeded.
pull-kubernetes-e2e-kind: Job succeeded.
pull-kubernetes-e2e-kind-ipv6: Job succeeded.
pull-kubernetes-integration: Job succeeded.
pull-kubernetes-kubemark-e2e-gce-big: Job succeeded.
pull-kubernetes-node-e2e: Job succeeded.
pull-kubernetes-node-e2e-containerd: Job succeeded.
pull-kubernetes-typecheck: Job succeeded.
pull-kubernetes-verify: Job succeeded.
tide: In merge pool.
@k8s-ci-robot k8s-ci-robot added this to the v1.18 milestone Feb 13, 2020