[FATAL][2017-07-31 21:33:12 +0000] > Error from kubectl:
[FATAL][2017-07-31 21:33:12 +0000] Error from server (InternalError): error when applying patch:
[FATAL][2017-07-31 21:33:12 +0000] {"spec":{"containers":[{"name":"command-runner","resources":{"limits":{"cpu":"1000m"}}}]}}
[FATAL][2017-07-31 21:33:12 +0000] to:
[FATAL][2017-07-31 21:33:12 +0000] &{0xc42100b680 0xc420152070 <app> n upload-assets-7b4d9314-1f4a36a4 /tmp/Pod-upload-assets-7b4d9314-1f4a36a420170731-5277-12341p6.yml 0xc4205cca28 0xc420f4c000 82075679 false}
[FATAL][2017-07-31 21:33:12 +0000] for: "/tmp/Pod-upload-assets-7b4d9314-1f4a36a420170731-5277-12341p6.yml": an error on the server ("Internal Server Error: \"/api/v1/namespaces/<app>/pods/upload-assets-7b4d9314-1f4a36a4\": the server could not find the requested resource") has prevented the request from succeeding (patch pods upload-assets-7b4d9314-1f4a36a4)
[FATAL][2017-07-31 21:33:12 +0000] > Rendered template content:
It appears that the pod died while, or before, it was being updated with the resource limit.
Could this be a concurrency bug? A subsequent deploy passed fine.
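If this happens again, a quick way to check whether the pod was actually gone when the patch was attempted is something like the following (a hedged sketch, not part of the original report; pod name and namespace are taken from the log above, with <app> standing in for the redacted namespace):

```sh
# Does the runner pod still exist on the server?
kubectl get pod upload-assets-7b4d9314-1f4a36a4 --namespace <app>

# Pod events usually show whether it completed, was evicted, or was deleted:
kubectl describe pod upload-assets-7b4d9314-1f4a36a4 --namespace <app>
```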
I'm pretty confident that's not concurrency-related, or at least not on our side. That pod has a generated name (as it should), so it won't be part of the template set for any other deploy, i.e. no other deploy will try to apply it.

The pod gets created at the beginning of the deploy, and its template is included in the big kubectl apply -f [dir of all the resources] so that it won't get pruned until the next deploy. It is that single kubectl apply that failed with the error mentioned. I don't know why the error message mentions an attempt to patch the resource, since that portion of the template could not possibly have changed since it was originally applied (I checked that they didn't add any dynamic sketchiness to it). The "could not be found" seems to have been a server-side blip, as the follow-up deploy (which succeeded) explicitly says it pruned that pod.

I don't think this reflects any bug in kubernetes-deploy, so I'm going to close this, but please reopen if you think I'm wrong and/or see it again.
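For reference, a rough sketch of the flow described above; the paths, selector, and flags are illustrative rather than the exact invocation kubernetes-deploy uses:

```sh
# 1. The command-runner pod is created at the beginning of the deploy with a
#    generated name, so no other deploy's template set ever references it.
kubectl create -f /tmp/Pod-upload-assets-<generated-suffix>.yml --namespace <app>

# 2. All rendered templates (including that same pod template) are then applied
#    in one shot. Because the pod is in the apply set, pruning leaves it alone;
#    kubectl also diffs it against the live object and issues a patch if it
#    changed, and that patch is the request that hit the "could not find the
#    requested resource" error above.
kubectl apply -f <dir-of-all-rendered-templates> --prune -l <deploy-selector> --namespace <app>
```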