apiserver still eats panic #30305
I would like to get this fixed if everyone agrees that this is an issue we should resolve.
Yeah, we're seeing this.
I turned off a bunch of the panic eating because failing fast is a first-order principle in distributed systems, but "other folks" added it back.
Yeah, maybe running with really-crash=true is the workaround. I suspect the generalized reproduction scenario outside of our systems is a slow network, high max inflight, and high apiserver capacity: @jeremyeder.
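The "panic eating" pattern being debated above can be sketched in a few lines. This is a minimal, illustrative sketch (not the actual Kubernetes code): `reallyCrash` and `handleCrash` here stand in for the real switch and recovery helper, where the flag decides between failing fast (re-panic) and swallowing the panic after logging it.

```go
package main

import (
	"fmt"
	"log"
)

// reallyCrash is an illustrative stand-in for the real crash-behavior
// switch: when true, panics propagate and crash the process ("fail
// fast"); when false, they are recovered and merely logged ("eaten").
var reallyCrash = false

// handleCrash is meant to be deferred around work that might panic.
func handleCrash() {
	if r := recover(); r != nil {
		if reallyCrash {
			panic(r) // re-panic: let the process die
		}
		log.Printf("recovered from panic: %v", r)
	}
}

// survivesPanic runs a panicking worker under handleCrash and returns
// true only if control flow continues past it (i.e. the panic was eaten).
func survivesPanic() bool {
	func() {
		defer handleCrash()
		panic("worker failed")
	}()
	return true // only reached because handleCrash recovered
}

func main() {
	fmt.Println("panic eaten, still running:", survivesPanic())
}
```

With `reallyCrash = false` the process keeps running after the worker panics, which is exactly the behavior this thread argues hides first-order failures in a distributed system.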
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Prevent issues from auto-closing with an /lifecycle frozen comment. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale
Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /close
We try to stop everything from eating panics here: #28800.
But the API server still eats panics, because Go's HTTP server recovers panics raised in handlers.
See this example:
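(The original example link is not preserved in this copy. As a minimal, self-contained sketch of the behavior being reported: Go's `net/http` server recovers a panic raised inside a handler, logs it, and closes only that connection, so the process itself survives — the panic is "eaten" before it can crash the server.)

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

// panicIsEaten starts a test server whose handler always panics, makes
// one request against it, and reports whether this process survived.
// net/http recovers the handler panic per-connection, so it does.
func panicIsEaten() bool {
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		panic("boom") // recovered inside net/http, never propagated
	}))
	defer srv.Close()

	// The request errors out (the connection is closed mid-response)...
	_, err := http.Get(srv.URL)
	// ...but we are still running, so the panic was swallowed.
	return err != nil
}

func main() {
	fmt.Println("handler panic eaten by net/http:", panicIsEaten())
}
```

Running this logs a `http: panic serving ...` stack trace to stderr, the client sees a failed request, and the server keeps serving — which is the panic-eating this issue is about.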
/cc @lavalamp @krousey