Conversation
http.TimeoutHandler buffers the entire response in memory and only sends it when the handler returns, which breaks SSE, chunked downloads, and any response that relies on incremental flushing. The previous fix using context.WithTimeout solved the buffering problem but required protocol-specific branching for upgrades. Switch to transport-level timeouts that match how nginx and Traefik handle this: ResponseHeaderTimeout catches backends that never start responding, and an idle read timeout (via SetReadDeadline on each Read) catches backends that stop sending mid-stream. Both use the configured RequestTimeout. Active streams keep flowing because each data chunk resets the idle timer. This eliminates the isUpgradeRequest check entirely — every request goes through the same code path.
📝 Walkthrough

The pull request refactors HTTP ingress timeout handling by replacing the `TimeoutHandler`-based mechanism with a connection-level idle timeout wrapper. A new `idleTimeoutConn` type wraps `net.Conn` and resets read deadlines on each `Read` call to enforce idle timeouts without buffering active streams. The `Server` struct now uses an `http.RoundTripper` transport instead of a handler, with the transport configured to apply idle-timeout wrapping to all proxied connections. Upgrade-request special-case handling is removed in favor of unified request processing. Proxy error handling is extended to detect timeout errors and respond with HTTP 503. Tests are refactored to verify timeout behavior, streaming without buffering, and WebSocket upgrades through the new transport mechanism.
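The wrapper the walkthrough describes can be sketched as follows. This is an illustration, not the merged code: the type name `idleTimeoutConn` comes from the walkthrough, but the fields and the `readTimesOut` demo helper are assumptions.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// idleTimeoutConn wraps a net.Conn and pushes the read deadline forward
// on every Read, so the timer measures idle time between chunks rather
// than total transfer time.
type idleTimeoutConn struct {
	net.Conn
	timeout time.Duration
}

// Read resets the idle timer before each read; a backend that keeps
// sending data never trips the deadline.
func (c *idleTimeoutConn) Read(p []byte) (int, error) {
	if err := c.Conn.SetReadDeadline(time.Now().Add(c.timeout)); err != nil {
		return 0, err
	}
	return c.Conn.Read(p)
}

// readTimesOut demonstrates the behavior on an in-memory pipe with a
// silent peer: the Read fails with a timeout once the idle window passes.
func readTimesOut() bool {
	client, server := net.Pipe()
	defer server.Close()
	defer client.Close()

	c := &idleTimeoutConn{Conn: client, timeout: 50 * time.Millisecond}
	_, err := c.Read(make([]byte, 1))
	ne, ok := err.(net.Error)
	return ok && ne.Timeout()
}

func main() {
	fmt.Println("idle read timed out:", readTimesOut())
}
```

Because `net.Pipe` connections support deadlines, the demo exercises the same code path a real proxied TCP connection would take when the backend goes quiet.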
🧹 Nitpick comments (1)
servers/httpingress/httpingress.go (1)
1050-1054: Internal requests don't use the configured transport with idle timeout.
`executeInternalRequest` creates a plain `http.Client` with only an overall timeout, bypassing the custom transport that provides `ResponseHeaderTimeout` and the `idleTimeoutConn` idle detection. While internal requests typically complete quickly, this creates inconsistent timeout behavior. Consider using the server's transport for consistency:
♻️ Proposed fix

```diff
 // Execute the request
 client := &http.Client{
-	Timeout: h.config.RequestTimeout,
+	Transport: h.transport,
+	Timeout:   h.config.RequestTimeout,
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@servers/httpingress/httpingress.go` around lines 1050 - 1054, The internal request in executeInternalRequest constructs a new http.Client with only Timeout, bypassing the server's configured transport (which supplies ResponseHeaderTimeout and idleTimeoutConn); update executeInternalRequest to create the client using the server's transport (e.g., set Transport: h.transport or use the existing h.httpClient) so the request benefits from ResponseHeaderTimeout and idle connection handling, while still applying the per-request Timeout value (avoid mutating the shared transport—assign it to the client's Transport field or clone it if needed).
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- servers/httpingress/httpingress.go
- servers/httpingress/httpingress_test.go
evanphx
left a comment
I like it! A good solution for the problem, using `SetReadDeadline`.
`http.TimeoutHandler` buffers the entire response in memory and only sends it when the handler returns. That's fine for normal request/response, but it completely breaks SSE, chunked downloads, long-polling, and anything that relies on incremental flushing. The previous iteration replaced it with `context.WithTimeout`, which fixed the buffering but still needed a special `isUpgradeRequest` check to let WebSocket connections through.

This switches to transport-level timeouts — the same approach used by nginx (`proxy_read_timeout`) and Traefik (`responseHeaderTimeout`). A custom `http.Transport` with two complementary mechanisms handles all the cases: `ResponseHeaderTimeout` catches backends that never start responding (crashed, deadlocked, stuck boot), and an idle read timeout via `SetReadDeadline` on a wrapping `idleTimeoutConn` catches backends that stop sending mid-stream. Both use the configured `RequestTimeout` (default 60s). Active streams stay alive because each chunk of data resets the idle timer — apps just need standard keepalive (SSE: comments, WebSocket ping/pong), which they'd already need behind any production reverse proxy.

The nice thing about this approach is that it eliminates all protocol-specific branching. No more `isUpgradeRequest` check, no more upgrade-vs-normal code paths — every request flows through the same transport.

Closes MIR-740