
kubectl scale fails to scale a CustomResource if resourceVersion is not provided #80515

Closed
xmudrii opened this issue Jul 24, 2019 · 12 comments · Fixed by #80572
Labels
kind/bug Categorizes issue or PR as related to a bug.
priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release.
sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery.
Milestone
v1.17

Comments

@xmudrii
Member

xmudrii commented Jul 24, 2019

What happened:

Using a scale command such as:

kubectl scale -n kube-system machinedeployment marko-1-pool1 --replicas=2

fails with the following error:

The machinedeployments "marko-1-pool1" is invalid: metadata.resourceVersion: Invalid value: 0x0: must be specified for an update

This bug happens on every run and started occurring with kubectl v1.15.0. With kubectl v1.14 or older I don't have this problem, i.e. scaling works flawlessly.

What you expected to happen:

The kubectl scale command succeeds:

machinedeployment.cluster.k8s.io/marko-1-pool1 scaled

How to reproduce it (as minimally and precisely as possible):

Create a CRD that has the scale subresource and try to scale a CR using kubectl. An example of such a CRD can be found here; a minimal sketch is also shown below.
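For reference, here is a minimal sketch of such a CRD and CR, modeled on the crontabs example from the Kubernetes documentation (the resource names, image, and cronSpec are illustrative, and the manifest uses the apiextensions v1beta1 API that was current for v1.15; it is not necessarily the exact CRD linked above):

kubectl apply -f - <<EOF
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com
spec:
  group: stable.example.com
  versions:
    - name: v1
      served: true
      storage: true
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
    shortNames:
      - ct
  # The scale subresource is what makes the CR scalable via "kubectl scale".
  subresources:
    scale:
      specReplicasPath: .spec.replicas
      statusReplicasPath: .status.replicas
EOF

# Wait until the new API is served, then create a CR to scale.
kubectl wait --for=condition=established --timeout=60s crd/crontabs.stable.example.com
kubectl apply -f - <<EOF
apiVersion: stable.example.com/v1
kind: CronTab
metadata:
  name: test
spec:
  cronSpec: "* * * * */5"
  image: my-cron-image
  replicas: 1
EOF

# With kubectl v1.15.x this fails with the same metadata.resourceVersion error
# as above; with v1.14 (or with the fix from #80572) it succeeds.
kubectl scale crontab test --replicas=2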

Anything else we need to know?:

Regarding the kubectl bug, it seems that it started appearing as of #75210. Before that PR, kubectl always fetched the resource first, so the PUT carried a resource version; now the resource is only fetched when preconditions are provided, and preconditions are not provided by default when using kubectl.

Here is the -v8 output of a kubectl scale command:

I0724 13:21:30.988579    9001 loader.go:359] Config loaded from file:  /home/marko/Projects/src/github.com/kubermatic/kubeone/examples/terraform/aws/marko-1-kubeconfig
I0724 13:21:30.998409    9001 round_trippers.go:416] GET https://<redacted>:6443/apis/cluster.k8s.io/v1alpha1/namespaces/kube-system/machinedeployments/marko-1-pool1
I0724 13:21:30.998427    9001 round_trippers.go:423] Request Headers:
I0724 13:21:30.998436    9001 round_trippers.go:426]     Accept: application/json
I0724 13:21:30.998445    9001 round_trippers.go:426]     User-Agent: kubectl/v1.15.1 (linux/amd64) kubernetes/4485c6f
I0724 13:21:31.156732    9001 round_trippers.go:441] Response Status: 200 OK in 158 milliseconds
I0724 13:21:31.156763    9001 round_trippers.go:444] Response Headers:
I0724 13:21:31.156780    9001 round_trippers.go:447]     Date: Wed, 24 Jul 2019 11:21:31 GMT
I0724 13:21:31.156807    9001 round_trippers.go:447]     Content-Type: application/json
I0724 13:21:31.156823    9001 round_trippers.go:447]     Content-Length: 2281
I0724 13:21:31.156893    9001 request.go:947] Response Body: {"apiVersion":"cluster.k8s.io/v1alpha1","kind":"MachineDeployment","metadata":{"annotations":{"machinedeployment.clusters.k8s.io/revision":"1"},"creationTimestamp":"2019-07-24T10:06:54Z","generation":3,"name":"marko-1-pool1","namespace":"kube-system","resourceVersion":"6229","selfLink":"/apis/cluster.k8s.io/v1alpha1/namespaces/kube-system/machinedeployments/marko-1-pool1","uid":"93ec8c7c-c4b0-459c-9a22-f1aab1d25214"},"spec":{"minReadySeconds":0,"progressDeadlineSeconds":600,"replicas":3,"revisionHistoryLimit":1,"selector":{"matchLabels":{"workerset":"marko-1-pool1"}},"strategy":{"rollingUpdate":{"maxSurge":1,"maxUnavailable":0},"type":"RollingUpdate"},"template":{"metadata":{"creationTimestamp":null,"labels":{"workerset":"marko-1-pool1"},"namespace":"kube-system"},"spec":{"metadata":{"creationTimestamp":null,"labels":{"workerset":"marko-1-pool1"}},"providerSpec":{ <redacted> [truncated 1257 chars]
I0724 13:21:31.163767    9001 request.go:947] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"marko-1-pool1","namespace":"kube-system","creationTimestamp":null},"spec":{"replicas":2},"status":{"replicas":0}}
I0724 13:21:31.163841    9001 round_trippers.go:416] PUT https://marko-1-api-lb-06aba60d7183279e.elb.eu-west-3.amazonaws.com:6443/apis/cluster.k8s.io/v1alpha1/namespaces/kube-system/machinedeployments/marko-1-pool1/scale
I0724 13:21:31.163853    9001 round_trippers.go:423] Request Headers:
I0724 13:21:31.163862    9001 round_trippers.go:426]     Accept: application/json, */*
I0724 13:21:31.163871    9001 round_trippers.go:426]     User-Agent: kubectl/v1.15.1 (linux/amd64) kubernetes/4485c6f
I0724 13:21:31.207621    9001 round_trippers.go:441] Response Status: 422 Unprocessable Entity in 43 milliseconds
I0724 13:21:31.207653    9001 round_trippers.go:444] Response Headers:
I0724 13:21:31.207672    9001 round_trippers.go:447]     Content-Type: application/json
I0724 13:21:31.207687    9001 round_trippers.go:447]     Content-Length: 482
I0724 13:21:31.207702    9001 round_trippers.go:447]     Date: Wed, 24 Jul 2019 11:21:31 GMT
I0724 13:21:31.207789    9001 request.go:947] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"machinedeployments.cluster.k8s.io \"marko-1-pool1\" is invalid: metadata.resourceVersion: Invalid value: 0x0: must be specified for an update","reason":"Invalid","details":{"name":"marko-1-pool1","group":"cluster.k8s.io","kind":"machinedeployments","causes":[{"reason":"FieldValueInvalid","message":"Invalid value: 0x0: must be specified for an update","field":"metadata.resourceVersion"}]},"code":422}
F0724 13:21:31.208216    9001 helpers.go:114] The machinedeployments "marko-1-pool1" is invalid: metadata.resourceVersion: Invalid value: 0x0: must be specified for an update

If I manually craft the request so that it includes the resource version (or use the --resource-version flag), it works:

{"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"marko-1-pool1","namespace":"kube-system","creationTimestamp":null,"resourceVersion":"6229"},"spec":{"replicas":3},"status":{"replicas":0}}

Is it expected that the autoscaling/v1 Scale resource requires the resource version for an update to succeed? Shouldn't it simply pick up the latest resource version when one isn't provided?

On that note, I also noticed that if you update a CR by crafting the request manually and using curl, you also need to specify the resource version or you get the same error. I'm not sure whether this is expected or related, but I'm writing it down since it might be. As far as I know, this is not required for some native resources at all, but I'm not sure whether that is the case for CRDs.
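The same experiment can be driven with curl. A rough sketch (it assumes kubectl proxy is running on localhost:8001 and that jq is available; the URL matches the machinedeployment above): the PUT to the scale subresource only succeeds when the body carries the resourceVersion read from a prior GET, mirroring the -v8 log above.

# In another terminal: kubectl proxy --port=8001
BASE=http://localhost:8001/apis/cluster.k8s.io/v1alpha1/namespaces/kube-system/machinedeployments/marko-1-pool1

# Read the current resourceVersion of the custom resource.
RV=$(curl -s "$BASE" | jq -r '.metadata.resourceVersion')

# PUT to the scale subresource. Without "resourceVersion" in the metadata this
# returns the 422 shown above; with it, the update succeeds.
curl -s -X PUT -H 'Content-Type: application/json' "$BASE/scale" -d "{
  \"kind\": \"Scale\",
  \"apiVersion\": \"autoscaling/v1\",
  \"metadata\": {
    \"name\": \"marko-1-pool1\",
    \"namespace\": \"kube-system\",
    \"resourceVersion\": \"$RV\"
  },
  \"spec\": {\"replicas\": 2}
}"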

Environment:

  • Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.1", GitCommit:"4485c6f18cee9a5d3c3b4e523bd27972b1b53892", GitTreeState:"clean", BuildDate:"2019-07-18T09:18:22Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.1", GitCommit:"4485c6f18cee9a5d3c3b4e523bd27972b1b53892", GitTreeState:"clean", BuildDate:"2019-07-18T09:09:21Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
@xmudrii added the kind/bug label Jul 24, 2019
@k8s-ci-robot added the needs-sig label Jul 24, 2019
@xmudrii
Member Author

xmudrii commented Jul 24, 2019

/sig api-machinery
cc @deads2k

@k8s-ci-robot added the sig/api-machinery label and removed the needs-sig label Jul 24, 2019
@roycaihw
Member

/sub

@knight42
Member

knight42 commented Aug 2, 2019

@xmudrii could you try my patch to see if it fixes the problem?

@xmudrii
Member Author

xmudrii commented Aug 2, 2019

I've created a test cluster based on #80572 and I can confirm that I'm able to scale CustomResources using kubectl as usual.

I used an example resource to test this:

k scale ct test --replicas=5
crontab.stable.example.com/test scaled

Thanks a lot @knight42 for your time spent fixing this!

@liggitt added this to the v1.17 milestone Oct 6, 2019
@liggitt added the priority/important-soon label Oct 6, 2019
@markjacksonfishing

markjacksonfishing commented Oct 26, 2019

@liggitt Bug triage for 1.17 here with a gentle reminder that code freeze for this release is on November 14. Is this issue still intended for 1.17? I see you moved it to 1.17, so I just wanted to be sure.

@knight42
Member

@liggitt I guess #80572 could be merged now?

@liggitt
Member

liggitt commented Oct 26, 2019

It is in the review queue and should still make 1.17

@markjacksonfishing

@liggitt a kind reminder that code freeze for this release is on November 14.

@markjacksonfishing

@liggitt I wanted to circle back around on this. We are 4 days out from code freeze. If this is still on track via #80572, just note it here. Judging from the looks of it, we are still good.

@liggitt
Member

liggitt commented Nov 10, 2019

Yes, this should merge early in the week.

@markjacksonfishing

@liggitt Code freeze is tomorrow. Is this still going to make it?

@liggitt
Member

liggitt commented Nov 13, 2019

Yes, #80572 is still planned.
