perf: Cache reflection in PipelineHost.DisposeAsync #1802

Merged

thomhurst merged 1 commit into main from fix-1473 on Jan 2, 2026

Conversation

@thomhurst
Owner

Summary

  • Cache the PropertyInfo lookup for the Disposables property using a ConcurrentDictionary<Type, PropertyInfo?> keyed by service provider type
  • Avoids repeated reflection calls on every DisposeAsync invocation
  • Uses static lambda to prevent closure allocation

Fixes #1473
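A minimal C# sketch of the pattern the bullets above describe — a `ConcurrentDictionary<Type, PropertyInfo?>` keyed by the provider's runtime type, populated with a static lambda. The `PipelineHost` shape and `GetDisposables` helper here are illustrative assumptions, not the merged code:

```csharp
using System;
using System.Collections.Concurrent;
using System.Reflection;

public sealed class PipelineHost
{
    // One entry per concrete service provider type; null is cached too,
    // so a provider type without a "Disposables" property is only probed once.
    private static readonly ConcurrentDictionary<Type, PropertyInfo?> DisposablesCache = new();

    public IServiceProvider Services { get; init; } = default!;

    public object? GetDisposables()
    {
        // Static lambda: no captured variables, so no closure allocation
        // on the lookup path.
        var property = DisposablesCache.GetOrAdd(
            Services.GetType(),
            static t => t.GetProperty(
                "Disposables",
                BindingFlags.Instance | BindingFlags.NonPublic));

        return property?.GetValue(Services);
    }
}
```

Because the dictionary is static, the reflection cost is paid once per provider type for the process lifetime, not once per `DisposeAsync` call.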

Test plan

  • Build passes (dotnet build)
  • Existing tests continue to pass
  • No behavior change, just performance improvement

🤖 Generated with Claude Code

Cache the PropertyInfo lookup for the Disposables property using a
ConcurrentDictionary keyed by Type. This avoids repeated reflection
calls on every DisposeAsync invocation.

Fixes #1473

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Copilot AI review requested due to automatic review settings January 2, 2026 17:26
@thomhurst
Owner Author

Summary

Caches reflection lookup for the service provider's internal Disposables property to improve performance on repeated pipeline executions.

Critical Issues

None found ✅

Suggestions

Consider thread-safety of the cached PropertyInfo usage: while the ConcurrentDictionary safely caches the PropertyInfo lookup, the actual reflection access (GetValue(Services)) happens outside the cache. This is fine, since PropertyInfo.GetValue is thread-safe, and because the cache is keyed by the provider's runtime type, a Services instance of a different service provider type on a later call simply resolves its own cache entry.

Reflection on internal implementation details: this code relies on accessing a private "Disposables" property of the service provider implementation. The caching improves performance, but the underlying approach is fragile: if the Microsoft.Extensions.DependencyInjection internals change, this breaks. Consider:

  • Is there a public API to enumerate disposables?
  • Could this be handled through a custom service scope implementation instead?
  • Worth adding a comment explaining why this reflection is necessary
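As a sketch of the second bullet, a decorating provider could record disposables itself at resolution time, removing the need to reflect into DI internals. The `TrackingServiceProvider` name and shape below are hypothetical, not part of this PR:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical decorator: records every disposable service it hands out,
// so callers can enumerate them without touching container internals.
// Note: the wrapped container still owns actual disposal of its services;
// this wrapper only provides visibility.
public sealed class TrackingServiceProvider : IServiceProvider
{
    private readonly IServiceProvider _inner;
    private readonly List<object> _disposables = new();

    public TrackingServiceProvider(IServiceProvider inner) => _inner = inner;

    public IReadOnlyList<object> Disposables
    {
        get { lock (_disposables) return _disposables.ToArray(); }
    }

    public object? GetService(Type serviceType)
    {
        var service = _inner.GetService(serviceType);
        if (service is IDisposable or IAsyncDisposable)
        {
            lock (_disposables) _disposables.Add(service);
        }
        return service;
    }
}
```

The trade-off is an extra virtual call and a lock on every resolution, versus the PR's approach of paying reflection cost once per provider type.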

These are minor observations; the caching implementation itself is solid, and the static lambda prevents allocations as intended.

Verdict

APPROVE - No critical issues, solid performance optimization


Copilot AI left a comment


Pull request overview

This PR introduces a performance optimization to the PipelineHost.DisposeAsync method by caching reflection lookups using a ConcurrentDictionary. The change avoids repeated reflection calls when accessing the internal Disposables property of the service provider.

Key changes:

  • Added a static ConcurrentDictionary<Type, PropertyInfo?> to cache property reflection lookups
  • Modified DisposeAsync to use GetOrAdd with a static lambda to retrieve cached PropertyInfo
  • Extracted Services.GetType() call to a local variable to improve readability
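On the static-lambda point in the list above: C#'s `static` lambda modifier makes any capture a compile-time error, so the `GetOrAdd` value factory cannot silently allocate a closure per call. A small contrast using a toy cache rather than the PR's code:

```csharp
using System;
using System.Collections.Concurrent;

public static class LambdaDemo
{
    private static readonly ConcurrentDictionary<string, int> Cache = new();

    public static int WithClosure(string key, int bonus)
    {
        // Captures `bonus`: the compiler emits a closure object, allocated
        // on every call before GetOrAdd even checks the cache.
        return Cache.GetOrAdd(key, k => k.Length + bonus);
    }

    public static int WithStaticLambda(string key)
    {
        // `static` forbids capture; the delegate can be cached and reused,
        // so the hot path allocates nothing extra.
        return Cache.GetOrAdd(key, static k => k.Length);
    }
}
```

When a factory genuinely needs outside state, the `GetOrAdd(key, factory, arg)` overload threads a state argument through a `static` lambda instead of capturing it.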

@thomhurst thomhurst merged commit 0a08e00 into main Jan 2, 2026
17 of 18 checks passed
@thomhurst thomhurst deleted the fix-1473 branch January 2, 2026 17:39


Development

Successfully merging this pull request may close these issues.

Performance: PipelineHost.DisposeAsync uses reflection on every call

2 participants