Implement `sync.Pool` for highly churned internal structs (such as metric trackers, internal state objects, or timer wrappers) so that the core execution loop performs zero heap allocations under heavy concurrency.
- Architecture & Implementation Requirements:
- Identification: Profile the current implementation using `go test -bench=. -benchmem` to identify any structs allocated per request (e.g., `RetryState` instances captured by closures, or internal configuration copies).
- Pool Definition: Declare global or policy-level pools: `var statePool = sync.Pool{New: func() any { return new(RetryState) }}`.
- Lifecycle: 1. Fetch from the pool at the start of `Do`/`DoErr`: `state := statePool.Get().(*RetryState)`.
2. Crucial: Explicitly zero out all fields to prevent state leakage from previous executions.
3. Defer returning the object to the pool: `defer statePool.Put(state)`.
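A minimal sketch of the lifecycle above, assuming a hypothetical `RetryState` struct and `Do` entry point (field names here are illustrative, not from the actual codebase):

```go
package main

import (
	"fmt"
	"sync"
)

// RetryState is a hypothetical per-call state struct (fields assumed).
type RetryState struct {
	Attempt int
	LastErr error
}

// Policy-level pool; New is called only when the pool is empty.
var statePool = sync.Pool{
	New: func() any { return new(RetryState) },
}

// Do fetches state from the pool, zeroes it, and returns it when done.
func Do(op func(*RetryState) error) error {
	state := statePool.Get().(*RetryState)
	// Zero every field so no state leaks from a previous execution.
	*state = RetryState{}
	defer statePool.Put(state)
	return op(state)
}

func main() {
	err := Do(func(s *RetryState) error {
		s.Attempt++
		fmt.Println("attempt:", s.Attempt)
		return nil
	})
	fmt.Println("err:", err)
}
```

Zeroing via `*state = RetryState{}` resets all fields in one assignment and stays correct if fields are added later, unlike resetting fields one by one.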
- Acceptance Criteria:
- Any benchmark covering the happy path (no retries triggered) must report exactly `0 allocs/op`.
- Must not introduce data races (verify with `go test -race`).
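The allocation criterion can be checked programmatically with `testing.Benchmark` and `b.ReportAllocs()`; a sketch, reusing the hypothetical `RetryState`/`statePool` names from above:

```go
package main

import (
	"fmt"
	"sync"
	"testing"
)

// RetryState is a hypothetical per-call state struct (name assumed).
type RetryState struct{ Attempt int }

var statePool = sync.Pool{New: func() any { return new(RetryState) }}

// do models the pooled happy path: Get, zero, use, Put.
func do() {
	s := statePool.Get().(*RetryState)
	*s = RetryState{}
	defer statePool.Put(s)
	s.Attempt++
}

func main() {
	// testing.Benchmark lets us read allocs/op outside `go test`.
	r := testing.Benchmark(func(b *testing.B) {
		b.ReportAllocs()
		for i := 0; i < b.N; i++ {
			do()
		}
	})
	fmt.Println("allocs/op:", r.AllocsPerOp())
}
```

In a normal CI setup the same loop lives in a `Benchmark*` function in a `_test.go` file and is run with `go test -bench=. -benchmem -race`.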