
Error: the server could not find the requested resource #154

Closed
stefanmb opened this issue Jul 31, 2017 · 2 comments
Labels
🪲 bug Something isn't working

Comments

stefanmb (Contributor) commented Jul 31, 2017

Saw this:

[FATAL][2017-07-31 21:33:12 +0000]	> Error from kubectl:
[FATAL][2017-07-31 21:33:12 +0000]	    Error from server (InternalError): error when applying patch:
[FATAL][2017-07-31 21:33:12 +0000]	    {"spec":{"containers":[{"name":"command-runner","resources":{"limits":{"cpu":"1000m"}}}]}}
[FATAL][2017-07-31 21:33:12 +0000]	    to:
[FATAL][2017-07-31 21:33:12 +0000]	    &{0xc42100b680 0xc420152070 <app> n upload-assets-7b4d9314-1f4a36a4 /tmp/Pod-upload-assets-7b4d9314-1f4a36a420170731-5277-12341p6.yml 0xc4205cca28 0xc420f4c000 82075679 false}
[FATAL][2017-07-31 21:33:12 +0000]	    for: "/tmp/Pod-upload-assets-7b4d9314-1f4a36a420170731-5277-12341p6.yml": an error on the server ("Internal Server Error: \"/api/v1/namespaces/<app>/pods/upload-assets-7b4d9314-1f4a36a4\": the server could not find the requested resource") has prevented the request from succeeding (patch pods upload-assets-7b4d9314-1f4a36a4)
[FATAL][2017-07-31 21:33:12 +0000]	> Rendered template content:

It appears that the pod died while, or before, the resource limit patch was being applied.

Could this be a concurrency bug? The subsequent deploy passed fine.

@stefanmb stefanmb added 🪲 bug Something isn't working question labels Jul 31, 2017
stefanmb (Contributor, Author) commented Aug 1, 2017

@KnVerey Any thoughts on this? This is the first time I've run into it.

KnVerey (Contributor) commented Aug 1, 2017

I'm pretty confident that's not concurrency-related, or at least not on our side. That pod has a generated name (as it should), so it won't be part of the template set for any other deploy, i.e. no other deploy will try to apply it.

The pod gets created at the beginning of the deploy, and the template is included in the big kubectl apply -f [dir of all the resources] so that it won't get pruned until the next deploy. It is that single kubectl apply that failed with the error mentioned. I don't know why the error message mentions an attempt to patch the resource, since that portion of the template could not possibly have changed since it was originally applied (I checked that they didn't add any dynamic sketchiness to it).

The "could not be found" seems to have been a server-side blip, as the follow-up deploy (which succeeded) explicitly says it pruned that pod. I don't think this reflects any bug in kubernetes-deploy, so I'm going to close this, but please reopen if you think I'm wrong and/or see it again.
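Since the diagnosis above is a transient server-side blip, one way a caller could guard against this class of failure is to retry the apply when the error message matches a known-transient pattern. This is a minimal hypothetical sketch, not kubernetes-deploy's actual code; the patterns are taken from the log output in this issue, and the wrapper name and parameters are assumptions for illustration.

```ruby
# Error strings seen in this issue's log that look like transient
# server-side failures (hypothetical list for illustration).
TRANSIENT_PATTERNS = [
  /the server could not find the requested resource/,
  /Internal Server Error/
].freeze

# Runs the block, retrying up to max_attempts times when the raised
# error message matches a transient pattern; sleeps between attempts.
def with_transient_retries(max_attempts: 3, base_delay: 1)
  attempts = 0
  begin
    attempts += 1
    yield
  rescue StandardError => e
    raise unless attempts < max_attempts &&
                 TRANSIENT_PATTERNS.any? { |p| p.match?(e.message) }
    sleep(base_delay * attempts) # linear backoff; exponential also works
    retry
  end
end
```

Non-transient errors (e.g. validation failures) re-raise immediately, so real bugs still surface on the first attempt.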

@KnVerey KnVerey closed this as completed Aug 1, 2017