apiserver: fix data race on request body with timeout #117164
base: master
Conversation
Please note that we're already in Test Freeze for this release branch. Fast forwards are scheduled to happen every 6 hours; the most recent run was: Fri Apr 7 14:01:36 UTC 2023.
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: tkashem. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing `/approve` in a comment.
That looks like the code that I wrote; I also had such a timeout reader. Can you explain more in your PR how it's different from what I proposed and why it should get merged? It just says "fixes #117052" without saying much about how it fixes that problem.
I was expecting we could use the new Go 1.20 functionality to implement per-handler timeouts and get rid of the goroutine hack :/
if tr.timedOut {
    return 0, http.ErrHandlerTimeout
}
return tr.ReadCloser.Read(b)
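The snippet above gates each `Read` on a `timedOut` flag. A minimal self-contained sketch of that pattern follows; names like `timeoutReader` and `setTimedOut` are illustrative, not the PR's actual code:

```go
package main

import (
	"errors"
	"fmt"
	"io"
	"net/http"
	"strings"
	"sync"
)

// timeoutReader is a hypothetical wrapper mirroring the snippet above:
// once setTimedOut is called, every subsequent Read fails with
// http.ErrHandlerTimeout instead of touching the underlying body.
type timeoutReader struct {
	mu       sync.Mutex
	timedOut bool
	rc       io.ReadCloser
}

func (tr *timeoutReader) Read(b []byte) (int, error) {
	tr.mu.Lock()
	defer tr.mu.Unlock()
	if tr.timedOut {
		return 0, http.ErrHandlerTimeout
	}
	return tr.rc.Read(b)
}

func (tr *timeoutReader) setTimedOut() {
	tr.mu.Lock()
	defer tr.mu.Unlock()
	tr.timedOut = true
}

func main() {
	tr := &timeoutReader{rc: io.NopCloser(strings.NewReader("hello"))}
	buf := make([]byte, 2)
	n, err := tr.Read(buf) // succeeds: timeout has not fired yet
	if n != 2 || err != nil {
		panic("expected a successful read before timeout")
	}
	tr.setTimedOut()
	if _, err := tr.Read(buf); !errors.Is(err, http.ErrHandlerTimeout) {
		panic("expected ErrHandlerTimeout after timeout")
	}
	fmt.Println("ok")
}
```

The mutex makes the flag check and the delegated read atomic with respect to marking the timeout, which is what prevents the data race the PR title refers to.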
Suppose `Read` blocks because it is waiting for the HTTP client to send the body. Then the lock is held and `timeout()` in `timeoutHandler.ServeHTTP` blocks, preventing the timeout handling.
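This concern can be reproduced in miniature: a `Read` that blocks while holding the mutex keeps the timeout path from ever acquiring it. A hypothetical standalone demonstration (not the PR's code; `io.Pipe` stands in for a client that never sends the body):

```go
package main

import (
	"fmt"
	"io"
	"sync"
)

func main() {
	var mu sync.Mutex
	pr, _ := io.Pipe() // Read blocks until data is written; we never write

	locked := make(chan struct{})
	go func() {
		mu.Lock() // the reader-side lock, as in the snippet above
		close(locked)
		buf := make([]byte, 1)
		pr.Read(buf) // blocks forever: no client bytes ever arrive
		mu.Unlock()  // never reached
	}()

	<-locked
	// The timeout goroutine cannot take the lock while Read is stuck,
	// so it could not mark the reader as timed out.
	if mu.TryLock() {
		panic("unexpected: lock was free")
	}
	fmt.Println("timeout path blocked: lock held by a stuck Read")
}
```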
Yes, this could happen. I think we already have this today for the ResponseWriter: `tw.timeout(err)`.
`ioutil.ReadAll` reads 512 bytes at a time, so the `Read` calls will probably be intermingled between the two goroutines in this situation:
- goroutine `B` does a `Read` and reads some bytes from the request body (before timeout)
- request times out
- timeout handler marks the body reader as timed out
- goroutine `A` or `B` calls `Read` and gets `ErrHandlerTimeout`

I don't think we need to prevent `Read` calls from getting intermingled; the request has timed out and we are sending a 504 to the caller.
But if a `Read` or `Write` call from goroutine `B` blocks, it will freeze the timeout handler and prevent it from sending a panic or 504 indefinitely.
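One way to keep the timeout handler from ever blocking on a stuck `Read` is to mark the timeout with an atomic flag instead of a mutex. In-flight `Read` calls may still return data, which matches the point above that intermingled reads are acceptable once a 504 is being sent. A hypothetical sketch (names illustrative, not the PR's actual implementation):

```go
package main

import (
	"errors"
	"fmt"
	"io"
	"net/http"
	"strings"
	"sync/atomic"
)

// atomicTimeoutReader marks timeout via an atomic flag, so the timeout
// handler never waits on a mutex held by a blocked Read. A Read already
// in flight may still return body bytes after the flag flips.
type atomicTimeoutReader struct {
	timedOut atomic.Bool
	rc       io.ReadCloser
}

func (tr *atomicTimeoutReader) Read(b []byte) (int, error) {
	if tr.timedOut.Load() {
		return 0, http.ErrHandlerTimeout
	}
	return tr.rc.Read(b)
}

// setTimedOut never blocks, even if a Read is stuck on the network.
func (tr *atomicTimeoutReader) setTimedOut() { tr.timedOut.Store(true) }

func main() {
	tr := &atomicTimeoutReader{rc: io.NopCloser(strings.NewReader("body"))}
	tr.setTimedOut() // marking the timeout cannot deadlock
	if _, err := tr.Read(make([]byte, 4)); !errors.Is(err, http.ErrHandlerTimeout) {
		panic("expected ErrHandlerTimeout")
	}
	fmt.Println("ok")
}
```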
@aojea #114189 will pave the way for go 1.20 per-handler timeout; I need to resolve the test failures. As far as when we should remove the timeout filter, I put some thoughts here: #117111 (comment). This PR (or #117111) is a short-term band-aid if we want to run the integration test suite race-enabled now.
/triage accepted
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs. This bot triages PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs. This bot triages PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
/remove lifecycle/rotten @tkashem: are you still working on this?
/remove-lifecycle rotten
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs. This bot triages PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
/remove-lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs. This bot triages PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
What type of PR is this?
/kind bug
What this PR does / why we need it:
Which issue(s) this PR fixes:
Fixes #117052
Special notes for your reviewer:
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: