The reported Wait + Read numbers were observed with the protobuf message cleared (by calling Clear()) between calls. I assumed that skipping Clear() on the request message might help; however, not calling Clear() before passing the message to RequestAsyncCall degraded performance further.
Version 2:
Wait + Read (avg): 193443us
I don't think gRPC behaves differently based on the proto messages. Serialization and deserialization are done by protobuf, so this is more likely protobuf behaving differently depending on the message definition.
What version of gRPC and what language are you using?
v1.24.0, C++
What operating system (Linux, Windows,...) and version?
Linux, Ubuntu 18.04.1
What runtime / compiler are you using (e.g. python version or version of gcc)
gcc version 7.4.0
What did you do?
Let's assume the following two message formats for specifying requests:
Version 1:
Version 2:
Version 1 contains the repeated bytes field directly in the main message, whereas Version 2 places the bytes field inside a nested message.
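The actual .proto definitions are not included in this report, so here is a minimal sketch consistent with the description; apart from raw_contents, the message and field names, and the field numbers, are assumptions:

```proto
// Version 1: the repeated bytes field lives directly in the request message.
message RequestV1 {
  repeated bytes raw_contents = 1;
}

// Version 2: the same repeated bytes field sits inside a nested submessage.
message Payload {
  repeated bytes raw_contents = 1;
}

message RequestV2 {
  Payload payload = 1;
}
```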
I have a client that keeps sending the same request message repeatedly. The server uses the C++ async API: as in the async server example, we call the corresponding Request* function to register the RPC and then call Next on the completion queue to detect activity.
With a large raw_contents size (77MB), I observe a significant average slowdown of 28ms between registering the request and detecting the activity on the completion queue when moving from message format Version 1 to Version 2. The client generates requests at the same rate in both cases. My guess is that gRPC is suboptimal at reusing a protobuf request message that has large nested submessages.
What did you expect to see?
Similar performance for the two message formats. The C++ async gRPC stack should reuse the message efficiently.
What did you see instead?
An average slowdown of 28ms to read the message from the wire.
Version 1:
Wait + Read (avg): 118036.41362179487us
Version 2:
Wait + Read (avg): 146169.65345528454us
Anything else we should know about your project / environment?