Close channel in ChanWriter when RPC stream ends #655
Conversation
`ChanWriter` spawns a goroutine that reads from an RPC `RecvStream` and forwards values into a channel, but it never closed that channel on exit. Since `ChanWriter` is the sole sender, it owns channel closure by Go convention. Without it, readers like the window-resize goroutine in the exec server would either leak or — if something else tried to close the channel — panic with "close of closed channel".
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@pkg/rpc/inline.go`:
- Around line 198-201: Guard against races between `Close` and `getStream`:
  - In `Close`, under the same mutex used around `c.pool`, set `c.closed = true` (or otherwise flip a boolean), capture the old pool to a local variable, set `c.pool = nil`, then iterate over and close that local channel while skipping nil entries.
  - In `getStream`, check `c.closed` before receiving from `c.pool` and return an error if closed (avoid using `c.pool` after it might be nil).
  - In `initPool`, early-return if `c.closed` is set so it cannot recreate the pool after `Close`.

  Reference symbols: `Close`, `getStream`, `initPool`, `c.pool`, `c.closed`, `conn.enc`, `conn.stream`.
`inlineClient.Close()` calls `close(c.pool)` but never nils out `c.pool` afterwards. When the RPC framework tears down a streaming call, both the handler's defers and the framework's own cleanup can call `Close` on the same client, triggering a "close of closed channel" panic. Nil out `c.pool` before closing so the second call is a no-op. Also guard `getStream` against receiving from a closed pool channel (which yields nil, causing a nil pointer dereference), and prevent `initPool` from recreating the pool after `Close` by checking `c.closed`.
996f9ca to 945644a
`miren exec` (and `miren app run`, which goes through the same server path) would panic with "close of closed channel" after the command finished. The exec output came back fine, but then the RPC handler blew up — not great for service stability.

There were actually two bugs conspiring here. The first was `ChanWriter` in `pkg/rpc/stream/stream.go` — it spawns a goroutine that reads from an RPC stream and forwards values into a Go channel, but when the stream ended it just returned without closing the output channel. Since `ChanWriter` is the only sender, it owns channel closure by Go convention. Added `defer close(ch)` and a test that wires up a full RPC pipeline to verify the channel gets closed when the source goes away.

The second (and the one actually producing the panic in the stack trace) was `inlineClient.Close()` in `pkg/rpc/inline.go`. It calls `close(c.pool)` but never nils out `c.pool` afterwards. When the RPC framework tears down a streaming call, both the handler's defers and the framework's own cleanup end up calling `Close()` on the same client — the second call sees `c.pool != nil`, tries to close it again, and panics. The fix nils out `c.pool` before closing so the second call is a no-op.

Confirmed the fix by deploying a test app in the dev environment and running `sandbox exec` repeatedly — zero panics across multiple runs, where before it panicked every time.

Closes MIR-622