Prevent response cache mutex from serializing concurrent HTTP action requests#21880
george-dorin merged 5 commits into develop from
Conversation
👋 george-dorin, thanks for creating this pull request! To help reviewers, please consider creating future PRs as drafts first. This allows you to self-review and make any final changes before notifying the team. Once you're ready, you can mark it as "Ready for review" to request feedback. Thanks!
I see you updated files related to
✅ No conflicts with other open PRs targeting
```go
// requests to the same cache key — only one fetchFn runs, others
// wait for its result. Requests to different keys run in parallel.
result, _, _ := rc.flight.Do(cacheKey, func() (interface{}, error) {
	return fetchFn(), nil
})
```
should we move the cacheKey read and write inside the `flight.Do()`? One requirement for HTTP actions is best-effort single-attempt delivery. As it stands, an HTTP action with the caching feature enabled can fail to deduplicate outgoing HTTP requests.
Fixed in 6e8de5b by moving the cache read and write inside flight.Do(). The singleflight key now stays active until the result is cached, closing the window. Added an inner cache re-check (double-checked locking) so back-to-back flights for the same key find the previous flight's cached result instead of fetching again.
|




- Fixed `responseCache.Fetch()` holding a global mutex during the entire HTTP round-trip, which serialized all concurrent HTTP action requests
- Added `singleflight` to deduplicate concurrent requests to the same cache key without blocking unrelated requests
- Changed `cacheMu` from `sync.Mutex` to `sync.RWMutex`: cache lookups use `RLock`, only writes take the exclusive lock
- Removed the `workflowID` parameter from `isExpiredOrNotCached`, `Fetch`, `Set`, and the `ResponseCache` interface; it was never used in cache key generation despite the comment claiming otherwise
- Removed the `extractWorkflowIDFromRequestPath` function (its only caller was removed)
- Added `synctest` for deterministic, non-flaky concurrency tests

Problem: `responseCache.Fetch()` acquires `cacheMu.Lock()` and holds it until the HTTP request completes. Every concurrent goroutine blocks on the mutex, causing sequential execution even when `Store: false` (caching disabled), as long as `MaxAgeMs > 0`.

Fix: The mutex now only protects cache map reads and writes (microseconds). The HTTP request runs outside the lock. `singleflight.Group` ensures concurrent requests to the same cache key are deduplicated (one fetch, shared result), preserving the original cache semantics.