
perf: optimize TUnit.Mocks hot paths#5304

Merged
thomhurst merged 4 commits into main from perf/optimize-mocks-hot-paths
Mar 30, 2026
Conversation

@thomhurst
Owner

Summary

Optimizes the four TUnit.Mocks runtime hot paths where benchmarks show we trail source-generated competitors (Imposter, Mockolate):

  • Invocation (29% slower than Imposter): Lock-free FindMatchingSetup via copy-on-write snapshot eliminates lock contention on every mock method call. Eagerly initialized _callHistory removes LazyInitializer overhead per invocation.
  • Setup (37% slower than Mockolate): Eagerly initialized _behaviorLock in MethodSetup removes lazy init overhead. Single-behavior fast path in GetNextBehavior skips list + lock for the common single-setup case.
  • Verification (32% slower than Imposter): Replaced FilterByMatchers (allocates List<CallRecord>) with single-pass CountAndMarkVerified that counts and marks in one iteration with zero list allocation.
  • Callback (64% slower than Imposter): Benefits from all invocation-path improvements above since the benchmark includes mock creation + setup + invocations.
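The invocation-path change above can be sketched roughly as follows. This is an illustrative reconstruction, not the actual TUnit.Mocks source — the field and type names mirror the discussion, and the writer side is assumed to build a fresh dictionary under a lock and publish it in a single volatile store:

```csharp
using System.Collections.Generic;

// Sketch of the lock-free read path (names illustrative). Readers take one
// volatile read of an immutable snapshot and never acquire a lock.
sealed class MethodSetup
{
    public bool Matches(object?[] args) => true; // placeholder matcher
}

sealed class MockEngineSketch
{
    // Replaced wholesale by AddSetup (under a lock), never mutated in place.
    private volatile Dictionary<int, MethodSetup[]>? _setupsSnapshot;

    public MethodSetup? FindMatchingSetup(int memberId, object?[] args)
    {
        var snapshot = _setupsSnapshot; // single volatile read, no lock
        if (snapshot is null || !snapshot.TryGetValue(memberId, out var setups))
            return null;

        // The array is immutable once published, so iteration is race-free.
        for (int i = 0; i < setups.Length; i++)
        {
            if (setups[i].Matches(args))
                return setups[i];
        }
        return null;
    }
}
```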

Additional cleanup

  • Replaced LINQ .Select() with manual loops in FormatCall/FormatExpectedCall (avoids closure + iterator allocations on error paths)
  • Thread-safety fixes: _callHistory marked volatile, _currentState read via Volatile.Read in lock-free path, _singleBehavior uses Volatile.Write with correct ordering during promotion to list
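The eager-initialization half of the cleanup can be illustrated as a before/after contrast. This is a sketch under assumed names (`CallRecord`, `_callHistory` as used in the discussion), not the actual source:

```csharp
using System.Collections.Concurrent;
using System.Threading;

sealed class CallRecord { }

sealed class LazyHistoryMock
{
    private ConcurrentQueue<CallRecord>? _callHistory;

    // Before: every invocation pays a null check (and a CAS on first use).
    public void Record(CallRecord call) =>
        LazyInitializer.EnsureInitialized(ref _callHistory).Enqueue(call);
}

sealed class EagerHistoryMock
{
    // After: initialized with the mock; the hot path is a plain volatile read.
    private volatile ConcurrentQueue<CallRecord> _callHistory = new();

    public void Record(CallRecord call) => _callHistory.Enqueue(call);
}
```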

Test plan

  • All 672 TUnit.Mocks.Tests pass (net10.0)
  • Verify benchmark improvements via CI mock-benchmarks workflow

…ack hot paths

- Lock-free FindMatchingSetup via copy-on-write snapshot (eliminates lock contention on every mock invocation)
- Eagerly initialize _callHistory and _behaviorLock (removes LazyInitializer overhead per call)
- Single-behavior fast path in GetNextBehavior (skips list + lock for common case)
- Single-pass CountAndMarkVerified in verification (avoids List allocation and double iteration)
- Replace LINQ with loops in FormatCall/FormatExpectedCall (avoids closure + iterator allocations)
Contributor

@claude claude bot left a comment


Code Review: perf/optimize-mocks-hot-paths

Overall this is a well-motivated PR with clear benchmark-driven goals. The eager initialization and copy-on-write snapshot approach are solid patterns. However, I found two correctness issues that should be addressed before merge.


Bug: CountAndMarkVerified marks calls as verified before checking the assertion

File: TUnit.Mocks/Verification/CallVerificationBuilder.cs

The new single-pass approach marks calls as IsVerified = true unconditionally, before checking whether times.Matches(matchingCount). If the assertion fails (and throws MockVerificationException), the calls have already been marked as verified — meaning a subsequent VerifyNoOtherCalls() will silently skip them.

Original flow:

  1. Collect matching calls into a list
  2. If count matches → mark verified; else → throw

New flow:

  1. Mark verified + count in one pass
  2. If count doesn't match → throw (but calls are already marked verified)

Fix: Either revert to the two-step approach for correctness, or only mark verified if the assertion passes:

private int CountAndMarkVerified(IReadOnlyList<CallRecord> calls)
{
    // Count first pass
    var count = 0;
    for (int i = 0; i < calls.Count; i++)
    {
        if (_matchers.Length == 0 || MatchesArguments(calls[i].Arguments))
            count++;
    }

    // Only mark verified if we intend to succeed (caller must check first)
    return count;
}

And then mark verified in WasCalled only after times.Matches returns true. The single-pass optimization isn't worth losing correctness on the failure path — and failures are inherently not hot paths.


Semantic regression: lock-free reads break state machine serialization

File: TUnit.Mocks/MockEngine.cs, FindMatchingSetup

Previously, the entire FindMatchingSetup loop ran inside lock (_setupLock), which serialized concurrent invocations. Now, state reads (Volatile.Read(ref _currentState)) and the matched-setup execution happen outside the lock, with only the state transition itself locked.

This breaks state machine mocking under concurrency:

  • Thread A and Thread B both invoke concurrently
  • Both read _currentState = "StateA" via Volatile.Read
  • Both find and match a setup with RequiredState = "StateA", TransitionTarget = "StateB"
  • Both execute the setup, both transition to "StateB"
  • Two calls consumed the same state transition — the state machine is now corrupted

The original lock serialized the full match-and-transition atomically. The new code only locks the write, not the read-match-write sequence.

Fix options:

  • Keep the snapshot for reads in FindMatchingSetup, but re-acquire _setupLock for the full match loop when any setup has a RequiredState or TransitionTarget (i.e., fast path only for stateless setups)
  • Or accept that state machine mocking is not thread-safe (document it explicitly)

If the state machine feature is intended to work under concurrency, the original lock around the full loop was necessary and this change loses that guarantee.
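The first fix option can be sketched as a boolean gate in front of the fast path. Names here are hypothetical helpers chosen to match the discussion (`_hasStatefulSetups`, `FindMatchingSetupLocked`), not the actual implementation:

```csharp
// Illustrative sketch: stateful setups go through the original fully-locked
// path so match + transition stay atomic; stateless setups stay lock-free.
sealed class MockEngineGateSketch
{
    private readonly object _setupLock = new();

    // Set in AddSetup when any setup has RequiredState or TransitionTarget.
    private volatile bool _hasStatefulSetups;

    public object? FindMatchingSetup(int memberId, object?[] args)
    {
        if (_hasStatefulSetups)
        {
            // Match + state transition must be one atomic step.
            lock (_setupLock)
            {
                return FindMatchingSetupLocked(memberId, args);
            }
        }
        // No state machine in play: the immutable snapshot is safe to read.
        return FindMatchingSetupSnapshot(memberId, args);
    }

    private object? FindMatchingSetupLocked(int memberId, object?[] args) => null;   // stub
    private object? FindMatchingSetupSnapshot(int memberId, object?[] args) => null; // stub
}
```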


Minor: LINQ→loop replacements are on cold paths

FormatCall, FormatExpectedCall, FormatCall in CallRecord — these are only called when building exception messages for failed verifications. These are definitionally cold paths, and the string[] allocation added here is essentially equivalent in overhead to the original LINQ enumerator. The lambdas in the original args.Select(a => a?.ToString() ?? "null") are also static (no captures), so the compiler caches the delegate — no closure allocation occurs.

This is a no-op optimization that adds a few lines of code. Consider reverting to keep the code idiomatic, or use string.Join<object?>(", ", args) with a ToString() override.


Positives

  • The eager _callHistory = new ConcurrentQueue<CallRecord>() initialization is cleaner and the null-check simplifications throughout are a clear improvement.
  • The copy-on-write _setupsSnapshot pattern is correct and well-implemented for the invocation hot path.
  • The _behaviorLock = new Lock() eager init in MethodSetup is a straightforward improvement.
  • The _singleBehavior fast path for GetNextBehavior has correct Volatile.Write/Volatile.Read ordering — the release fence on _singleBehavior = null ensures _behaviors is visible to readers that observe null.
  • Thread-safety comments are helpful.

- Fix CountAndMarkVerified marking calls as verified before assertion
  check passes — split back into CountMatches + MarkMatchingCallsVerified
  so calls are only marked verified after times.Matches succeeds
- Fix lock-free FindMatchingSetup breaking state machine serialization —
  track _hasStatefulSetups flag and fall back to fully locked path when
  any setup uses RequiredState or TransitionTarget
- Revert cold-path LINQ→loop changes (FormatCall, FormatExpectedCall) —
  these are error-only paths where static lambdas have no closure cost
Contributor

@claude claude bot left a comment


Code Review: perf/optimize-mocks-hot-paths

This is a solid, well-motivated performance PR with careful attention to memory ordering. The benchmarks-driven approach is appreciated and the PR description clearly explains the trade-offs. A few concerns worth discussing:


1. Two-Pass Approach in WasCalled (correctness concern)

File: TUnit.Mocks/Verification/CallVerificationBuilder.cs

CountMatches and MarkMatchingCallsVerified are two separate passes over allCallsForMember. If concurrent invocations add new calls between the count pass and the mark pass, the mark pass may mark calls that weren't counted, or miss calls that were. While verification is typically single-threaded, TUnit tests can run in parallel.

The original FilterByMatchers approach was more robust here: it captured the matching set once and used that same snapshot for both the count and the mark. The allocation it avoided (List<CallRecord>) is genuinely cheap compared to a test assertion failure being mis-attributed.

A middle-ground that avoids the allocation AND maintains correctness: iterate once and mark+count in one pass, only committing the marks if the assertion passes:

private int CountAndMarkVerified(IReadOnlyList<CallRecord> calls, bool markVerified)
{
    var count = 0;
    for (int i = 0; i < calls.Count; i++)
    {
        if (_matchers.Length == 0 || MatchesArguments(calls[i].Arguments))
        {
            count++;
            if (markVerified) calls[i].IsVerified = true;
        }
    }
    return count;
}

Called as CountAndMarkVerified(calls, markVerified: false) first, then CountAndMarkVerified(calls, markVerified: true) only on success — still two iterations but the matching set is stable.


2. _behaviors Field Declared Non-Volatile but Used with Volatile.Read

File: TUnit.Mocks/Setup/MethodSetup.cs:17

_behaviors is private List<IBehavior>? _behaviors (no volatile keyword), yet GetNextBehavior accesses it via Volatile.Read(ref _behaviors). This is technically valid in .NET — Volatile.Read/Write provide the acquire/release fences regardless of whether the field is declared volatile. However, it creates a maintenance trap: a future developer sees a plain field and might write if (_behaviors is null) without the volatile read, introducing a data race. Either declare the field volatile (consistent with _singleBehavior) or add a comment explaining the intentional pattern.


3. AddBehavior Transition Window: Silent Invariant

File: TUnit.Mocks/Setup/MethodSetup.cs:66-83

The promotion from single-behavior to list works correctly given the .NET memory model (release fence on Volatile.Write(ref _singleBehavior, null) orders the _behaviors store before the null write, and the acquire fence on the reader's Volatile.Read(ref _singleBehavior) ensures it sees _behaviors as non-null after seeing null). This is correct but non-obvious. The comment says:

"write list before clearing single so concurrent lock-free readers always find at least one path"

This is good, but it's the why without the how. The ordering guarantee relies on Volatile.Write acting as a release fence for ALL preceding stores, including the unlabeled _behaviors = ... assignment. A reader who sees null from Volatile.Read(ref _singleBehavior) is guaranteed to see the updated _behaviors because of the implicit acquire fence from that same read. Worth spelling this out so the invariant isn't accidentally broken.
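The invariant being described can be restated in code. The following is a self-contained sketch of the release/acquire pairing, with illustrative names — it mirrors the `_singleBehavior`/`_behaviors` discussion but is not the actual MethodSetup source:

```csharp
using System.Collections.Generic;
using System.Threading;

// Sketch of the promotion invariant: the volatile write to _singleBehavior
// is a release fence for ALL preceding stores, so a reader that observes
// null is guaranteed to see the fully built _behaviors list.
sealed class BehaviorHolder<T> where T : class
{
    private T? _singleBehavior;
    private List<T>? _behaviors;
    private readonly object _lock = new();

    public void Add(T behavior)
    {
        lock (_lock)
        {
            var single = _singleBehavior;
            if (single is null && _behaviors is null)
            {
                Volatile.Write(ref _singleBehavior, behavior);
            }
            else if (single is not null)
            {
                // Promote: plain store of the list FIRST, then the volatile
                // (release) write clearing the single slot publishes it.
                _behaviors = new List<T> { single, behavior };
                Volatile.Write(ref _singleBehavior, null);
            }
            else
            {
                _behaviors!.Add(behavior);
            }
        }
    }

    public T? First()
    {
        var single = Volatile.Read(ref _singleBehavior); // acquire fence
        if (single is not null)
            return single;                               // lock-free fast path
        var list = Volatile.Read(ref _behaviors);        // visible if null seen
        if (list is null)
            return null;
        lock (_lock) { return list.Count > 0 ? list[0] : null; }
    }
}
```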


4. O(N) Snapshot Rebuild Per AddSetup Call

File: TUnit.Mocks/MockEngine.cs:142-148

var snapshot = new Dictionary<int, MethodSetup[]>(dict.Count);
foreach (var kvp in dict)
{
    snapshot[kvp.Key] = kvp.Value.ToArray();
}
_setupsSnapshot = snapshot;

Every AddSetup call rebuilds the entire snapshot: iterating all member keys and copying each list to an array. This is O(M × N) for M member IDs and N setups per member. For typical mock usage (< 20 setups) this is fine, but if someone configures a mock with hundreds of parameterized setups, setup cost grows quadratically.

A targeted improvement: only rebuild the arrays for the specific memberId being updated, and otherwise shallow-copy the existing snapshot dictionary:

// Only rebuild the affected member's array, shallow-copy the rest
var prev = _setupsSnapshot;
var snapshot = prev is null
    ? new Dictionary<int, MethodSetup[]>()
    : new Dictionary<int, MethodSetup[]>(prev);
snapshot[setup.MemberId] = list.ToArray();
_setupsSnapshot = snapshot;

This keeps setup O(N) per member instead of O(total setups).


5. Reset() Inconsistency: Volatile vs. Direct Assignment

File: TUnit.Mocks/MockEngine.cs:527

_callHistory = new ConcurrentQueue<CallRecord>();
Volatile.Write(ref _autoTrackValues, null);

_callHistory is declared volatile, so the direct assignment IS a volatile write — but it looks inconsistent next to all the Volatile.Write(ref ...) calls below it. This is harmless but could confuse readers into thinking it's a non-synchronized write. Either use Volatile.Write(ref _callHistory, new ConcurrentQueue<CallRecord>()) for consistency, or add a comment noting that assignment to a volatile field is implicitly a volatile write.


6. FormatExpectedCall Still Uses LINQ

File: TUnit.Mocks/Verification/CallVerificationBuilder.cs:127-130

The PR avoids LINQ in WasCalled but leaves FormatExpectedCall using _matchers.Select(m => m.Describe()). Since this is on the error path only, it's not a correctness concern — just noting it for completeness if the goal is consistent style.


Summary

The core optimization strategy is sound and the memory model reasoning in the PR description is correct. Issues #1 and #4 are the ones I'd most want addressed before merge: #1 for correctness and #4 for scalability. Issues #2, #3, and #5 are maintainability concerns that could be addressed in a follow-up.

Overall the direction is good — eliminating lock contention on the hot invocation path while correctly relegating the stateful path to the locked code is exactly the right architectural choice.


- Combine CountMatches + MarkMatchingCallsVerified into single
  CountMatchingCalls(calls, markVerified) to avoid concurrent-mutation
  window between two separate passes
- Declare _singleBehavior and _behaviors as volatile to match usage
  pattern and prevent maintenance trap (removes Volatile.Read/Write)
- Expand AddBehavior comment to explain memory model guarantee: volatile
  write to _singleBehavior acts as release fence ensuring _behaviors is
  visible to lock-free readers
- Optimize AddSetup snapshot rebuild: shallow-copy existing dictionary
  and only re-array the affected member (O(N) per member, not O(total))
- Add comment on _callHistory assignment in Reset() clarifying volatile
  field semantics

Contributor

@claude claude bot left a comment


Code Review: perf/optimize-mocks-hot-paths (Round 3)

The third commit (7857cae) directly addresses all of the concerns raised in the two previous review rounds. Here's the status:

Previous Issues — All Addressed ✅

  • CountAndMarkVerified marking before assertion check — ✅ Fixed: CountMatchingCalls(calls, markVerified: false/true) with correct ordering
  • Lock-free path breaking state machine serialization — ✅ Fixed: _hasStatefulSetups flag routes to FindMatchingSetupLocked
  • O(N×M) snapshot rebuild per AddSetup — ✅ Fixed: shallow-copy + per-member array rebuild only
  • _behaviors non-volatile maintenance trap — ✅ Fixed: declared volatile
  • AddBehavior memory model comment too thin — ✅ Fixed: release fence semantics explained
  • Reset() direct assignment vs Volatile.Write inconsistency — ✅ Fixed: comment clarifies volatile field semantics

One Remaining Minor Note

File: TUnit.Mocks/Verification/CallVerificationBuilder.cs, WasCalled error path

The second commit message says "Revert cold-path LINQ→loop changes (FormatCall, FormatExpectedCall)" but the actualCallDescriptions loop in WasCalled (the exception-building branch) wasn't reverted:

var actualCallDescriptions = new List<string>(allCallsForMember.Count);
foreach (var c in allCallsForMember)
{
    actualCallDescriptions.Add(c.FormatCall());
}

This is only reached when an assertion fails, making it a cold path like FormatCall/FormatExpectedCall. The pre-sized List<string> is marginally better than Select().ToList(), but if the goal was to revert all cold-path LINQ changes for idiom consistency, this one was missed. It's harmless either way — not a blocker.


Architecture Assessment

The overall design is sound and correct:

  • The copy-on-write snapshot pattern for FindMatchingSetup is the right abstraction: it makes the fast path fully lock-free while keeping state-machine setups on the serialized path via a clean boolean gate. This is better than the alternative of double-checked locking or try-optimistic/retry patterns.
  • The _singleBehavior fast path in MethodSetup has correct memory ordering: the volatile write to _singleBehavior = null acts as the release fence that makes _behaviors visible to concurrent readers, which the comment now explains well.
  • The _hasStatefulSetups flag is a clean way to avoid the overhead of checking each setup's nullable fields on every invocation while retaining full correctness for the state-machine use case.

This is ready to merge. The performance improvements are well-motivated, the correctness concerns from previous rounds are resolved, and the thread-safety reasoning is solid.

Missed in the earlier cold-path LINQ revert — this is only reached on
assertion failure so the Select().ToList() is fine for readability.

Contributor

@claude claude bot left a comment


Code Review: perf/optimize-mocks-hot-paths (Round 4 — Final)

This is the fourth round of review. All concerns from previous rounds have been addressed.

Previous Issues Status

All previous issues are resolved:

  • CountAndMarkVerified marking before assertion check — Fixed in commit 2: CountMatchingCalls with markVerified param and correct ordering.
  • Lock-free path breaking state machine serialization — Fixed in commit 2: _hasStatefulSetups flag routes to FindMatchingSetupLocked.
  • O(N×M) snapshot rebuild per AddSetup — Fixed in commit 3: shallow-copy + per-member array rebuild only.
  • _behaviors non-volatile maintenance trap — Fixed in commit 3: declared volatile.
  • AddBehavior memory model comment too thin — Fixed in commit 3: release fence semantics explained.
  • Reset() direct assignment vs Volatile.Write inconsistency — Fixed in commit 3: comment clarifies volatile field semantics.
  • actualCallDescriptions loop left as manual after LINQ revert — Fixed in commit 4: reverted to idiomatic LINQ on error path.

No New Issues Found

The final diff is clean. The cumulative changes are well-structured:

  • Copy-on-write snapshot for FindMatchingSetup is the right architecture: makes the invocation hot path fully lock-free while correctly serializing state-machine setups via a clean _hasStatefulSetups boolean gate. No double-checked locking, no try-optimistic/retry complexity.
  • _singleBehavior fast path in MethodSetup has correct Volatile ordering, now well-commented. The release fence on the volatile write to _singleBehavior = null guarantees _behaviors visibility to concurrent readers.
  • Eager _callHistory initialization eliminates LazyInitializer overhead per invocation with no correctness tradeoff.
  • CountMatchingCalls(calls, markVerified: bool) is a clean solution: single method, correct ordering (count without marking, then assert, then mark on success), no intermediate allocation.

This is ready to merge. All correctness and maintainability concerns have been resolved across the four review iterations, and the thread-safety reasoning is solid throughout.

@thomhurst thomhurst enabled auto-merge (squash) March 30, 2026 09:56
@thomhurst thomhurst disabled auto-merge March 30, 2026 10:00
@thomhurst thomhurst merged commit 4444004 into main Mar 30, 2026
15 of 16 checks passed
@thomhurst thomhurst deleted the perf/optimize-mocks-hot-paths branch March 30, 2026 11:01