
Major client performance refactor #489

Merged · 4 commits into grpc:master on Sep 27, 2019

Conversation

@JamesNK (Member) commented on Sep 4, 2019

  • Cache Uri and call scope for methods on channel
  • Hardcode GrpcCall logger name to avoid reflection
  • Reduce async state machines allocated
  • Switch from Task to ValueTask where possible (see the sketch after this list)
  • Reuse System.Version
  • Optimize content-type validation
  • Optimize fields on GrpcCall
  • Create PushUnaryContent and PushStreamContent
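
As a rough illustration of the Task → ValueTask item above (the sketch below uses hypothetical names and is not the PR's actual code): when a result is usually available synchronously, returning ValueTask<T> avoids allocating a Task<T> on the hot path, while the slow path still wraps a regular Task<T>.

```csharp
using System.Threading.Tasks;

internal sealed class CachedValueSketch
{
    private string? _cachedValue;

    // Task-based version: Task.FromResult allocates a new Task<string>
    // even when the value is already cached.
    public Task<string> GetValueTaskAsync()
        => _cachedValue != null ? Task.FromResult(_cachedValue) : LoadValueAsync();

    // ValueTask-based version: the cached hit is wrapped in a struct,
    // so the synchronous path allocates nothing.
    public ValueTask<string> GetValueValueTaskAsync()
        => _cachedValue != null
            ? new ValueTask<string>(_cachedValue)
            : new ValueTask<string>(LoadValueAsync());

    private async Task<string> LoadValueAsync()
    {
        await Task.Yield(); // stand-in for real async work
        return _cachedValue = "value";
    }
}
```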

@JamesNK (Member Author) commented on Sep 5, 2019

Before:

          Method |     Mean |     Error |    StdDev |      Op/s | Gen 0/1k Op | Gen 1/1k Op | Gen 2/1k Op | Allocated Memory/Op |
---------------- |---------:|----------:|----------:|----------:|------------:|------------:|------------:|--------------------:|
 HandleCallAsync | 8.806 us | 0.0673 us | 0.0630 us | 113,559.3 |      1.3733 |           - |           - |             5.66 KB |

After:

          Method |     Mean |     Error |    StdDev |      Op/s | Gen 0/1k Op | Gen 1/1k Op | Gen 2/1k Op | Allocated Memory/Op |
---------------- |---------:|----------:|----------:|----------:|------------:|------------:|------------:|--------------------:|
 HandleCallAsync | 6.023 us | 0.1185 us | 0.1499 us | 166,036.8 |      1.0681 |           - |           - |             4.38 KB |

@jtattermusch (Contributor) left a comment:

I skimmed through the PR and it generally looks OK, but I'd still like @JunTaoLuo to give a final LGTM after reviewing it in more detail.

@@ -40,6 +37,9 @@ public sealed class GrpcChannel : ChannelBase, IDisposable
{
internal const int DefaultMaxReceiveMessageSize = 1024 * 1024 * 4; // 4 MB

private readonly ConcurrentDictionary<IMethod, GrpcMethodInfo> _methodInfoCache;
Contributor commented on this line:

Not sure I recall this correctly, but ConcurrentDictionary tends to allocate even in situations where one wouldn't expect it, so if the purpose of the cache is to avoid allocating new GrpcMethodInfo instances, this might only be a partial solution.

JamesNK (Member Author) replied:

Could you be more specific?

I cached the delegate used with GetOrAdd in a field _createMethodInfoFunc so it is allocated once per channel, rather than once per call. Other than that I didn't notice additional allocations.
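
For context, a minimal sketch of the pattern being described (simplified names; GrpcMethodInfo below is a stand-in for the real internal type, not the PR's exact code): the GetOrAdd factory delegate is created once in the constructor and stored in a field, so per-call lookups reuse it instead of allocating a new delegate or closure every time.

```csharp
using System;
using System.Collections.Concurrent;
using Grpc.Core; // IMethod

internal sealed class MethodInfoCacheSketch
{
    private readonly ConcurrentDictionary<IMethod, GrpcMethodInfo> _methodInfoCache
        = new ConcurrentDictionary<IMethod, GrpcMethodInfo>();

    // Allocated once per channel; reused by every GetOrAdd call.
    private readonly Func<IMethod, GrpcMethodInfo> _createMethodInfoFunc;

    public MethodInfoCacheSketch()
    {
        _createMethodInfoFunc = CreateMethodInfo;
    }

    public GrpcMethodInfo GetCachedGrpcMethodInfo(IMethod method)
    {
        // An inline lambda capturing `this` or locals would allocate a new
        // delegate on every call; the cached field avoids that.
        return _methodInfoCache.GetOrAdd(method, _createMethodInfoFunc);
    }

    private GrpcMethodInfo CreateMethodInfo(IMethod method)
    {
        // Hypothetical: build the per-method data (call Uri, log scope) once.
        return new GrpcMethodInfo(new Uri("https://localhost" + method.FullName));
    }
}

// Simplified stand-in for the library's internal GrpcMethodInfo.
internal sealed class GrpcMethodInfo
{
    public GrpcMethodInfo(Uri callUri) => CallUri = callUri;
    public Uri CallUri { get; }
}
```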

@JunTaoLuo (Contributor) left a comment:

Still digesting some of the GrpcCall changes. Will finish the review tomorrow.

Outdated review threads (resolved):
  • src/Grpc.Net.Client/GrpcChannel.cs
  • src/Grpc.Net.Client/Internal/HttpClientCallInvoker.cs
@JunTaoLuo (Contributor) left a comment:

Looks clean!

@JamesNK merged commit ca6cb66 into grpc:master on Sep 27, 2019