workflow: Create run pool reduce alloc #163

Conversation

Walkthrough
The changes add a Run object pooling mechanism.

Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Pre-merge checks and finishing touches
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✨ Finishing touches
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 1
🧹 Nitpick comments (1)
run.go (1)
18-19: Unused `reset()` method. This method is defined but never called anywhere in the pooling lifecycle. Either implement it with the field-clearing logic (and call it from `releaseRun`), or remove it to avoid dead code.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (12)
- builder.go (2 hunks)
- callback.go (1 hunks)
- callback_internal_test.go (2 hunks)
- run.go (3 hunks)
- schedule_test.go (1 hunks)
- step.go (3 hunks)
- step_internal_test.go (4 hunks)
- testing.go (1 hunks)
- timeout.go (2 hunks)
- timeout_internal_test.go (2 hunks)
- workflow.go (1 hunks)
- workflow_test.go (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (5)
workflow_test.go (2)
- run.go (1): Run (10-16)
- testing.go (1): Require (90-135)

callback_internal_test.go (1)
- run.go (1): Run (10-16)

step_internal_test.go (1)
- run.go (1): Run (10-16)

timeout_internal_test.go (1)
- run.go (1): Run (10-16)

run.go (4)
- status.go (1): StatusType (3-7)
- record.go (2): Record (7-45), TypedRecord (80-84)
- runstate.go (1): NewRunStateController (90-95)
- workflow.go (1): Workflow (59-105)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: build
- GitHub Check: core (1)
- GitHub Check: core (1.24)
🔇 Additional comments (17)
schedule_test.go (1)
285-285: Inconsistent sleep duration compared to similar tests. This sleep was increased from 10ms to 1 second, but other scheduling tests in this file (lines 60, 71, 83, 188, 200, 217, 232, 340, 393, 400) use 10ms sleeps for the same purpose. Either this change is masking a timing issue introduced by the pooling changes, or it makes the test unnecessarily slow.
Consider investigating whether there's an underlying race condition, or revert to 10ms if the longer sleep was added as a quick fix.
timeout_internal_test.go (1)
27-31: LGTM! The `runPool` initialisation correctly mirrors the production pool setup, enabling the test to exercise the pooled Run lifecycle.

run.go (3)
54-86: LGTM on the collector/releaser pattern and `buildRun` refactoring. The abstraction cleanly separates pool acquisition from Run initialisation, and `buildRun` correctly populates the pooled Run's fields before returning.

88-93: LGTM! The `newRunObj` method correctly wraps the pool's `Get` call with the appropriate type assertion.

95-103: No changes needed. The `Record.Meta` field contains only value types (`string` and `uint`), so no references persist between pool reuses. The existing comment is accurate.

testing.go (1)
153-159: LGTM! The pool integration is correctly implemented: `buildRun` obtains a Run from the pool, the error is checked before the defer is registered, and `releaseRun` ensures the Run is returned after `fn(run)` completes.

workflow.go (1)
95-96: The pool is already properly initialised with a `New` function. In builder.go lines 42-46, the `runPool` is initialised within `NewBuilder` with a `New` function that creates `&Run[Type, Status]{}` instances. This ensures `sync.Pool.Get()` will never return `nil`, preventing any panic on the type assertion in `newRunObj()`.

callback.go (1)
66-72: LGTM! Run pooling integration is correctly implemented. The `newRunObj()` retrieval and deferred `releaseRun(run)` ensure proper lifecycle management. The defer placement after the error check on lines 67-69 is correct; if `buildRun` fails, there's no Run to release.

builder.go (1)
42-46: LGTM! Pool initialisation is correctly configured. The `sync.Pool` is properly initialised with a `New` function that returns a fresh `&Run[Type, Status]{}`. This ensures the pool can create new instances when empty.

callback_internal_test.go (1)
25-29: LGTM! Test setup correctly mirrors production pool configuration. The test properly initialises the `runPool` field to match the production configuration in builder.go, ensuring the pooling behaviour is exercised during tests.

workflow_test.go (1)
238-244: LGTM! Status indexing logic is consistent. The loop now creates steps from `status(1)` through `status(numberOfSteps+1)`, and the final assertion on line 271 correctly expects `status(numberOfSteps+1)`. The 1-based indexing is internally consistent.

timeout.go (2)
121-127: LGTM! Run pooling correctly integrated in timeout processing. The `newRunObj()` retrieval and deferred `releaseRun(run)` follow the same correct pattern established in callback.go. The defer is placed after the error check, ensuring no premature release attempts.

293-294: The original concern is unfounded. `w.newRunObj()` returns a `runCollector[Type, Status]` function (defined in run.go:89-92), not a `*Run` instance. Both call sites correctly pass this function: line 121 passes it to `buildRun`, and lines 293-294 pass it to `stepConsumer`. The `buildRun` function receives the collector and invokes it at line 74 to retrieve a Run from the pool. Types align correctly across both usages.

step_internal_test.go (1)
93-94: Test parameter updates look correct. The use of inline factory and no-op releaser functions for tests is appropriate. This isolates test behaviour from the actual pooling mechanism whilst maintaining the correct function signature contract.
Also applies to: 146-147, 197-198, 250-251
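The inline-factory and no-op-releaser wiring described above can be sketched as follows. All identifiers here (`Run`, `runCollector`, `runReleaser`, `consume`) are simplified stand-ins, not the repository's actual signatures:

```go
package main

import "fmt"

// Run is a simplified stand-in for the pooled workflow Run type.
type Run struct{ Name string }

// In production these are wired to a sync.Pool; in tests they can be an
// inline allocator and a no-op, isolating tests from pooling behaviour.
type runCollector func() *Run
type runReleaser func(*Run)

// consume acquires a Run via the injected collector, works with it, and
// hands it back via the injected releaser.
func consume(name string, collect runCollector, release runReleaser) string {
	run := collect()
	defer release(run)
	run.Name = name
	return "consumed " + run.Name
}

func main() {
	// Test-style wiring: fresh allocation, no pooling side effects.
	got := consume("r1", func() *Run { return &Run{} }, func(*Run) {})
	fmt.Println(got) // prints "consumed r1"
}
```

Because the function types are satisfied by any matching closure, the same `consume` body is exercised under both the pooled and non-pooled wirings.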
step.go (3)

94-95: Wiring of pool collector and releaser looks correct. The integration passes `w.newRunObj()` as the factory function and `w.releaseRun` as the cleanup callback, properly connecting the workflow's pool to the step consumer.

116-117: Function signature update aligns with the pooling pattern. The addition of `runCollector` and `runReleaser` parameters follows a clean collector/releaser pattern that enables dependency injection for both production pooling and test isolation.

167-173: Correct placement of defer for pool release with proper cleanup. The defer is correctly placed after `buildRun` succeeds (line 173), ensuring the run is only returned to the pool when one was actually obtained. This handles all exit paths (success, error, and panic). The `releaseRun` function properly clears critical references (`run.Object` and `run.controller`) before returning the object to the pool, preventing data leakage between workflow runs.
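The defer placement discussed above can be sketched in miniature. All identifiers here (`Run`, `buildRun`, `process`, `runPool`) are illustrative stand-ins for the reviewed code, not its actual implementation:

```go
package main

import (
	"fmt"
	"sync"
)

// Run stands in for the pooled workflow Run type.
type Run struct{ ID int }

var runPool = sync.Pool{New: func() any { return &Run{} }}

// buildRun acquires from the pool only once validation has passed, so a
// failed build never leaks a pooled object.
func buildRun(id int) (*Run, error) {
	if id < 0 {
		return nil, fmt.Errorf("invalid id %d", id)
	}
	run := runPool.Get().(*Run)
	run.ID = id
	return run, nil
}

// process mirrors the reviewed defer placement: the release is deferred
// only after buildRun succeeds, and the defer then covers the success,
// error, and panic exit paths.
func process(id int) (int, error) {
	run, err := buildRun(id)
	if err != nil {
		return 0, err // no Run was acquired, nothing to release
	}
	defer runPool.Put(run)
	return run.ID * 2, nil
}

func main() {
	v, err := process(21)
	fmt.Println(v, err) // prints "42 <nil>"
}
```

Deferring before the error check would instead attempt to release a nil Run on the failure path.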
Actionable comments posted: 0
🧹 Nitpick comments (2)
run.go (2)
19-20: Wire `reset` into `releaseRun` and fully clear pooled state. `reset` is currently empty and unused, while `releaseRun` manually clears only `Object` and `controller`. This both triggers the static analysis warning and means any references held in `Record` stay reachable while the `Run` sits in the pool. I'd suggest centralising clean-up in `reset` and calling it from `releaseRun`, e.g.:

```diff
-func (r *Run[Type, Status]) reset() {
-}
+func (r *Run[Type, Status]) reset() {
+	// Prepare the Run for reuse in the pool.
+	r.TypedRecord = TypedRecord[Type, Status]{}
+	r.controller = nil
+}

 // releaseRun returns a Run object back to the workflow's pool for reuse
 func (w *Workflow[Type, Status]) releaseRun(run *Run[Type, Status]) {
-	// Clear references to prevent memory leaks
-	run.Object = nil
-	run.controller = nil
-	// Note: We don't clear the Record as it's a value type
-
-	w.runPool.Put(run)
+	run.reset()
+	w.runPool.Put(run)
 }
```

This keeps the reset logic in one place, avoids retaining old `Record` contents unnecessarily, and resolves the SonarCloud complaint.

Also applies to: 113-119
60-74: `newRunPool` implementation is sound; pre-allocation is optional. The generic `newRunPool` setup is correct and safe for concurrent use. If benchmarks do not show a clear win from warming the pool with 10 items, you could simplify by dropping the pre-allocation loop and relying solely on the `New` function, but that's an optional micro-optimisation.
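A minimal sketch of the pre-warmed generic pool pattern under review. The `Run` and `newRunPool` shapes here are simplified assumptions, not the actual run.go code:

```go
package main

import (
	"fmt"
	"sync"
)

// Run is a simplified stand-in for the generic workflow Run type.
type Run[T any] struct{ Object T }

// newRunPool sketches the reviewed helper: the New function guarantees
// Get never returns nil, and the loop pre-warms the pool. The pre-warm
// count is the "10 items" tunable the review discusses.
func newRunPool[T any](prewarm int) *sync.Pool {
	pool := &sync.Pool{New: func() any { return &Run[T]{} }}
	for i := 0; i < prewarm; i++ {
		pool.Put(&Run[T]{})
	}
	return pool
}

func main() {
	pool := newRunPool[string](10)
	run := pool.Get().(*Run[string]) // never nil thanks to New
	run.Object = "payload"
	pool.Put(run) // return for reuse once done
	fmt.Println("got run from pool:", pool.Get().(*Run[string]) != nil)
}
```

Note that `sync.Pool` may discard idle objects at any GC cycle, which is one reason pre-warming is only a soft optimisation.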
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- builder.go (1 hunks)
- callback_internal_test.go (1 hunks)
- run.go (4 hunks)
- timeout_internal_test.go (1 hunks)
- workflow.go (1 hunks)
- workflow_test.go (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (5)
- workflow_test.go
- callback_internal_test.go
- builder.go
- timeout_internal_test.go
- workflow.go
🧰 Additional context used
🧬 Code graph analysis (1)
run.go (3)
- status.go (1): StatusType (3-7)
- record.go (2): Record (7-45), TypedRecord (80-84)
- runstate.go (1): NewRunStateController (90-95)
🪛 GitHub Check: SonarCloud Code Analysis
run.go
[failure] 19-20: Add a nested comment explaining why this function is empty or complete the implementation.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: core (1)
🔇 Additional comments (4)
run.go (4)
3-6: `sync` import is appropriate. Importing `sync` here is appropriate for the new `sync.Pool`-based run pooling below. Nothing to change.
55-58: Collector/releaser function types read cleanly. The `runCollector`/`runReleaser` generics give clear names to the pooling callbacks and keep call-sites tidy. No issues here.
76-103: `buildRun` correctly reinitialises pooled `Run` instances. Using the injected `collector` to fetch a `Run` and then fully assigning `TypedRecord` and `controller` ensures pooled instances are correctly reinitialised and no stale state is carried over. The logic around `RunStateInitiated` → `RunStateRunning` is preserved. This looks correct.
105-110: `newRunObj` is a thin, suitable wrapper over the pool. Returning a closure that does `w.runPool.Get().(*Run[Type, Status])` keeps the pooling concern encapsulated in `Workflow`. Assuming `runPool` is always initialised when a `Workflow` is constructed, this is fine as-is.
Actionable comments posted: 1
🧹 Nitpick comments (2)
run.go (2)
52-71: Pooling helpers look good; consider making preallocation more explicit. The `runCollector`/`runReleaser` aliases and `newRunPool` are clear and keep pooling concerns separate from `buildRun`. The preallocation of 10 `Run` instances is a reasonable starting point, but it is effectively a magic number. If you expect very different concurrency levels across deployments, consider either documenting why "10" is a good default, or making it configurable at `Workflow` construction time. Otherwise, this is fine as is.
73-100: `buildRun` pooling integration preserves behaviour; maybe add a defensive check. The change to obtain a `Run` via the injected `collector` and then reinitialise `TypedRecord` and `controller` looks behaviourally equivalent to the previous direct allocation, while enabling reuse. The ordering (unmarshal, run-state transition, then allocation/initialisation) is also sensible and avoids touching the pool when unmarshalling fails. Given `buildRun` now depends on the caller providing a non-nil `collector` that returns a correctly typed `*Run[Type, Status]`, you may want to add a cheap defensive check (e.g. panic or error) if `collector` is nil or returns nil, to fail fast in case of misuse rather than later via a nil-pointer dereference.
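The suggested fail-fast guard could look roughly like this. This is a sketch with simplified, hypothetical types; the real `buildRun` also unmarshals the record and manages run state:

```go
package main

import (
	"errors"
	"fmt"
)

// Run and runCollector are simplified stand-ins for the reviewed types.
type Run struct{ ID int }

type runCollector func() *Run

// buildRun rejects a nil collector, or a collector that yields nil,
// with a clear error instead of panicking later on a nil dereference.
func buildRun(collect runCollector, id int) (*Run, error) {
	if collect == nil {
		return nil, errors.New("buildRun: nil collector")
	}
	run := collect()
	if run == nil {
		return nil, errors.New("buildRun: collector returned nil Run")
	}
	run.ID = id
	return run, nil
}

func main() {
	_, err := buildRun(nil, 1)
	fmt.Println(err) // prints "buildRun: nil collector"
}
```

The cost is two nil checks per call, which is negligible next to unmarshalling, while the failure mode becomes an explicit error at the point of misuse.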
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- run.go (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
run.go (4)
- status.go (1): StatusType (3-7)
- record.go (2): Record (7-45), TypedRecord (80-84)
- runstate.go (1): NewRunStateController (90-95)
- workflow.go (1): Workflow (59-105)
🔇 Additional comments (1)
run.go (1)
3-6: Import of `sync` is appropriate. Using `sync` here is appropriate for the new `sync.Pool`-based run pooling; no issues from an import/use perspective.
Actionable comments posted: 0
🧹 Nitpick comments (2)
run.go (2)
57-71: `newRunPool` implementation looks correct; pre-warm size is a tunable constant. The pool setup is sound: `New` always returns `*Run[Type, Status]`, and the pre-warm loop simply seeds 10 zeroed instances, which `buildRun` then fully reinitialises. The only thing to consider is whether `10` should be a small named constant so it's easy to tune (or even set to `0` if benchmarks ever show pre-warming doesn't help on some workloads). Please confirm your benchmarks still look good if you vary the pre-warm size; if needed, we can parameterise this via a `const` or helper.
102-107: `newRunObj` is straightforward; consider guarding against a nil pool. The closure directly wraps `w.runPool.Get()` and asserts to `*Run[Type, Status]`, which is fine given the way `newRunPool` is constructed. As a small safety improvement, you could either lazily initialise `w.runPool` here if it's `nil`, or add a comment/defensive check to make it clear that all `Workflow` instances must be constructed via the builder that sets `runPool`. If you expect any callers to instantiate `Workflow` directly in this repo, it's worth verifying they all initialise `runPool` before `newRunObj` is used.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- run.go (3 hunks)
🔇 Additional comments (4)
run.go (4)
3-6: Import of `sync` for pooling is appropriate. Using `sync` here is expected for the new `sync.Pool`-based `Run` pooling; nothing to change.
52-55: Generic collector/releaser aliases are clear and type-safe. The `runCollector`/`runReleaser` type aliases read well and make the pooling contract explicit at call sites, without leaking `sync.Pool` details. No changes needed.
73-100: Pooling integration in `buildRun` is correct and avoids leaks. Using the `collector` only after `Unmarshal` succeeds ensures pooled `Run` instances aren't leaked on decode errors. Reassigning a fresh `TypedRecord` (including `Record`, `Status`, and `Object`) plus a new `RunStateController` on every call correctly wipes any previous state from the pooled object. This matches the previous non-pooled semantics and should be safe across reuse.
109-113: `releaseRun` now correctly drops all embedded state before pooling. Zeroing `run.TypedRecord` and clearing `run.controller` before returning the object to `runPool` ensures there are no lingering references, including to `Record.Object` byte slices, addressing the earlier retention concern noted in past review. The pooling lifecycle (`buildRun` fully reinitialises; `releaseRun` fully clears) looks consistent.
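The clear-then-pool lifecycle described above can be sketched as follows. These are simplified stand-in types; the real code zeroes a generic `TypedRecord` rather than this plain `Record`:

```go
package main

import (
	"fmt"
	"sync"
)

// Record and Run are simplified stand-ins for the reviewed types.
type Record struct {
	Object []byte
	Meta   map[string]string
}

type Run struct {
	Record     // embedded, like TypedRecord in the real code
	controller *struct{}
}

var runPool = sync.Pool{New: func() any { return &Run{} }}

// reset zeroes the embedded struct, dropping the Object byte slice and
// any map references so nothing is retained while the Run sits pooled.
func reset(run *Run) {
	run.Record = Record{}
	run.controller = nil
}

// releaseRun mirrors the verified lifecycle: fully clear, then pool.
func releaseRun(run *Run) {
	reset(run)
	runPool.Put(run)
}

func main() {
	run := runPool.Get().(*Run)
	run.Object = []byte("payload")
	run.Meta = map[string]string{"k": "v"}
	releaseRun(run)
	fmt.Println("released")
}
```

Assigning a zero-value struct to the embedded field clears every field in one statement, which is why it subsumes the earlier piecemeal `run.Object = nil` approach.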


