
Serialization API - efficiency #30

Closed
JamesNK opened this issue Jan 13, 2019 · 39 comments
Labels: google.protobuf · grpc.core · needs design (design work required before implementation) · question (further information is requested)

@JamesNK (Member) commented Jan 13, 2019

Consider providing an extension point allowing users to configure serialization libraries other than Google.Protobuf.

Grpc.Core already offers an abstraction layer that we can use.

This issue is for discussing improvements to that abstraction and to Google.Protobuf (the default serialization layer).

Start from #30 (comment)

@jtattermusch (Contributor)

Consider providing an extension point allowing users to configure serialization libraries other than Google.Protobuf.

Right now Google.Protobuf is used for:

  • Determining the methods via its service descriptor

As noted in #21 (comment) this would not work when using other serialization formats.

  • Serialization/deserialization between .NET messages and protobuf wire format

Two design questions:

I think Grpc.AspNetCore should not have a dependency on Google.Protobuf, as gRPC's philosophy is to be serialization-format agnostic. We do provide easy-to-use Google.Protobuf support as the default choice, but in most implementations we do not take a direct dependency on Google.Protobuf. Support for other serialization formats (FlatBuffers, Bond, Thrift and many others) should be provided as add-on packages (provided by the community) and should not be part of the core libraries, as there are many serialization formats out there and each has its own specifics.

  • Do we want Grpc.AspNetCore.* to support serialization libraries other than Google.Protobuf? For example, a C# version of FlatBuffers. Providing an interface like SignalR's IHubProtocol would allow Google.Protobuf's serialization logic to be hidden behind an interface, and let users configure their own implementation.

We should make it possible to use other serialization formats and expect users to do that at some point. I'd follow the pattern used by Grpc.Core: The only thing that Grpc.Core currently assumes is that a serialized message will be an instance of a class and that one defines a "Marshaller" that knows how to serialize/deserialize (invoking the protobuf-specific serialization/deserialization is part of the generated code right now: https://github.com/grpc/grpc/blob/b6d8957b0d78f768b858698ba6a79296db448bb2/src/csharp/Grpc.Examples/MathGrpc.cs#L30)
To use a different serialization format, one needs to write their own codegen and possibly their own version of Grpc.Tools package for build integration.

@JamesNK (Member, Author) commented Jan 14, 2019

https://github.com/grpc/grpc/blob/master/src/csharp/Grpc.Core/Marshaller.cs

An issue with the current Marshaller pattern is it operates on byte arrays. We'd want to have a pattern that doesn't require allocating an array and buffering to memory. For best performance we'd ideally read directly from the request and write directly to the response.
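To make the concern concrete, here is a hedged sketch of what an allocation-free marshaller shape could look like. The `BufferMarshaller` name and its delegates are invented for illustration; they are not part of Grpc.Core. The idea is simply that reading is done from a `ReadOnlySequence<byte>` and writing goes to an `IBufferWriter<byte>`, so no intermediate `byte[]` is required.

```csharp
using System;
using System.Buffers;
using System.Text;

// Hypothetical shape (names invented for illustration): delegates that read from a
// ReadOnlySequence<byte> and write to an IBufferWriter<byte>, avoiding byte[] allocations.
public sealed class BufferMarshaller<T>
{
    public Action<T, IBufferWriter<byte>> Serializer { get; }
    public Func<ReadOnlySequence<byte>, T> Deserializer { get; }

    public BufferMarshaller(Action<T, IBufferWriter<byte>> serializer,
                            Func<ReadOnlySequence<byte>, T> deserializer)
    {
        Serializer = serializer;
        Deserializer = deserializer;
    }
}

public static class BufferMarshallerDemo
{
    // A trivial UTF-8 string marshaller round-trip using the shape above.
    public static string RoundTrip(string value)
    {
        var marshaller = new BufferMarshaller<string>(
            (v, writer) => writer.Write(Encoding.UTF8.GetBytes(v)),
            sequence => Encoding.UTF8.GetString(sequence.ToArray()));

        var buffer = new ArrayBufferWriter<byte>();
        marshaller.Serializer(value, buffer);
        return marshaller.Deserializer(new ReadOnlySequence<byte>(buffer.WrittenMemory));
    }
}
```

In a real server, the `IBufferWriter<byte>` would be the response pipe and the `ReadOnlySequence<byte>` would come straight from the request pipe, rather than the `ArrayBufferWriter` used in this demo.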

For serialization extensibility, do you expect compatible serializer libraries to always generate a BindService method? Would you recommend BindService as the way Grpc.AspNetCore.Server discovers services and acquires marshallers?

https://github.com/grpc/grpc/blob/b6d8957b0d78f768b858698ba6a79296db448bb2/src/csharp/Grpc.Examples/MathGrpc.cs#L279-L300

If the answer is yes, I think there are two changes we would want to make:

  1. Update Grpc.Tools to output a BindService method that doesn't take an instance and add a new AddMethod overload, ServiceBinderBase.AddMethod<TRequest, TResponse>(Method<TRequest, TResponse>), that it would call. This relates to BindService should not require a concrete instance of the GRPC service implementation  #21

    Once we can use BindService then we should only depend on Grpc.Core types. Swapping the serialization would only involve changing the marshaller in generated code.

/// <summary>Register service method implementations with a service binder. Useful when customizing the service binding logic.
/// Note: this method is part of an experimental API that can change or be removed without any prior notice.</summary>
/// <param name="serviceBinder">Service methods will be bound by calling <c>AddMethod</c> on this object.</param>
public static void BindService(grpc::ServiceBinderBase serviceBinder)
{
  serviceBinder.AddMethod(__Method_Div);
  serviceBinder.AddMethod(__Method_DivMany);
  serviceBinder.AddMethod(__Method_Fib);
  serviceBinder.AddMethod(__Method_Sum);
}
  2. Update Marshaller with Serializer/Deserializer properties that allow message data to be written and read without allocating byte arrays.

(1) is something to figure out now. It will be the main way Grpc.AspNetCore.Server integrates with Grpc.Core so it is something to lock down.

(2) is largely an API detail, and is something that doesn't need to be figured out right away. I'd like to learn more about how gRPC and ASP.NET Core interact before deciding what the best signature would be. ASP.NET Core will soon expose System.IO.Pipelines in HTTP2 and it will provide the best opportunity to maximize performance.

@jtattermusch (Contributor)

https://github.com/grpc/grpc/blob/master/src/csharp/Grpc.Core/Marshaller.cs

An issue with the current Marshaller pattern is it operates on byte arrays. We'd want to have a pattern that doesn't require allocating an array and buffering to memory. For best performance we'd ideally read directly from the request and write directly to the response.

I am aware of that limitation and I've had a solution in progress for a while:
The idea is to switch from the "simple marshaller" (which converts a class into a byte array and back) to a "contextual marshaller", which also takes a context that allows multiple ways of accessing the payload (e.g. accessing the data slice by slice via ReadOnlySpan, or using a rented buffer).

Grpc.Core already uses contextual serializers internally (but they delegate to the old-style byte array conversions for now).
grpc/grpc#16367
grpc/grpc#17167

Serialization/DeserializationContext is not populated yet with the advanced members, but I have a PR in progress. A taste of new DeserializationContext (not yet merged):
https://github.com/grpc/grpc/pull/16371/files#diff-22501e836d94b70164ce50d8b0c0c4e5

Btw, this effort started before we came up with the plan for Grpc.AspNetCore, so it would be good to review if this design works well with the planned APIs for Asp.NET Core pipeline access to requests/responses - let's create a separate issue for this review?

For serialization extensibility, do you expect compatible serializer libraries to always generate a BindService method? Would you recommend BindService as the way Grpc.AspNetCore.Server discovers services and acquires marshallers?

https://github.com/grpc/grpc/blob/b6d8957b0d78f768b858698ba6a79296db448bb2/src/csharp/Grpc.Examples/MathGrpc.cs#L279-L300

If the answer is yes, I think there are two changes we would want to make:

I think the answer is Yes.

  1. Update Grpc.Tools to output a BindService method that doesn't take an instance and add a new AddMethod overload, ServiceBinderBase.AddMethod<TRequest, TResponse>(Method<TRequest, TResponse>), that it would call. This relates to BindService should not require a concrete instance of the GRPC service implementation  #21
    Once we can use BindService then we should only depend on Grpc.Core types. Swapping the serialization would only involve changing the marshaller in generated code.
/// <summary>Register service method implementations with a service binder. Useful when customizing the service binding logic.
/// Note: this method is part of an experimental API that can change or be removed without any prior notice.</summary>
/// <param name="serviceBinder">Service methods will be bound by calling <c>AddMethod</c> on this object.</param>
public static void BindService(grpc::ServiceBinderBase serviceBinder)
{
  serviceBinder.AddMethod(__Method_Div);
  serviceBinder.AddMethod(__Method_DivMany);
  serviceBinder.AddMethod(__Method_Fib);
  serviceBinder.AddMethod(__Method_Sum);
}
  2. Update Marshaller with Serializer/Deserializer properties that allow message data to be written and read without allocating byte arrays.

(1) is something to figure out now. It will be the main way Grpc.AspNetCore.Server integrates with Grpc.Core so it is something to lock down.

I think we can add that soon. If you want, you can also create a PR yourself, here is an example PR that added ServiceBinderBase: https://github.com/grpc/grpc/pull/17157/commits

(2) is largely an API detail, and is something that doesn't need to be figured out right away. I'd like to learn more about how gRPC and ASP.NET Core interact before deciding what the best signature would be. ASP.NET Core will soon expose System.IO.Pipelines in HTTP2 and it will provide the best opportunity to maximize performance.

This is what ContextualSerializer/ContextualDeserializer can help with (see above). Let's review in a separate issue to make sure the (De)SerializerContext has methods that play well with both Grpc.Core and Grpc.AspNetCore (which will use System.IO.Pipelines).

@JamesNK (Member, Author) commented Jan 15, 2019

I think we can add that soon. If you want, you can also create a PR yourself, here is an example PR that added ServiceBinderBase: grpc/grpc/pull/17157/commits

Sure, we'll create a PR for it.

This is what ContextualSerializer/ContextualDeserializer can help with (see above). Let's review in a separate issue to make sure the (De)SerializerContext has methods that play well with both Grpc.Core and Grpc.AspNetCore (which will use System.IO.Pipelines).

Great. ASP.NET Core currently doesn't expose pipelines, but it will come in the next preview of ASP.NET Core 3.0. That shouldn't be too far away. We'll look at whether the contextual serializers are the right shape then.

FYI @davidfowl

@JamesNK added the question label on Jan 15, 2019
@davidfowl (Contributor)

We'll need another implementation to vet the abstraction. We can try using the other popular .NET protobuf library as a test https://www.nuget.org/packages/protobuf-net/.

@jtattermusch (Contributor)

I'm not sure that protobuf-net supports proto3 (which is the recommended version of the .proto format and the only one supported by Google.Protobuf for C#).
An alternative would be to try with Flatbuffers https://github.com/google/flatbuffers/tree/master/net/FlatBuffers that have the advantage of zero-overhead serialization/deserialization so could be useful for performance testing.

But in terms of vetting the abstraction, I think just the ability to expose a "generic" service (e.g. a method handler that gives access to the actual binary payloads instead of deserialized messages) would be good enough. E.g. here: https://github.com/grpc/grpc/blob/1064c6f663a2735fa2f4230dd8d278ae66d344b7/src/csharp/Grpc.IntegrationTesting/ServerRunners.cs#L92
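The "generic" service idea can be sketched with a passthrough marshaller whose message type is just `byte[]`, so a handler sees the raw request payload and returns raw response bytes (the linked ServerRunners code uses essentially this trick). The service and method names below are illustrative, not from the source:

```csharp
using Grpc.Core;

// A passthrough marshaller: serialization and deserialization are identity functions,
// so handlers registered with these Method<byte[], byte[]> descriptors work directly
// on the wire payload bytes. Names are illustrative.
public static class RawService
{
    public static readonly Marshaller<byte[]> RawMarshaller =
        new Marshaller<byte[]>(bytes => bytes, bytes => bytes);

    public static readonly Method<byte[], byte[]> RawUnaryMethod =
        new Method<byte[], byte[]>(
            MethodType.Unary,
            "benchmark.GenericService",
            "RawCall",
            RawMarshaller,
            RawMarshaller);
}
```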

@JamesNK changed the title from "Extension point for supporting other serialization libraries" to "Serialization API - efficiency" on Jan 24, 2019
@JamesNK added the needs design label on Jan 24, 2019
@JamesNK (Member, Author) commented Jan 25, 2019

I've renamed this issue. Grpc.AspNetCore should use Grpc.Core's abstraction.

This is what ContextualSerializer/ContextualDeserializer can help with (see above). Let's review in a separate issue to make sure the (De)SerializerContext has methods that play well with both Grpc.Core and Grpc.AspNetCore (which will use System.IO.Pipelines).

I think we should have a meeting about this: @jtattermusch, @davidfowl, @JunTaoLuo and I.

My high-level thoughts after some research:

  1. Google.Protobuf serialization is based around Stream. Unless we want to make large changes to that code base, or switch to another serializer, the Grpc.Core API will need logic to interop with streams.
  2. Grpc.Server's most efficient way to write: write directly to Response.BodyPipe, an IBufferWriter<byte>.
    • It will be simple to create a Stream implementation that writes to an IBufferWriter<byte>.
    • Getting the message length will need to be part of the marshaller API. It needs to be written first.
    • Google.Protobuf writing is sync. An async flush can happen after the message has been written to the buffer writer.
  3. Grpc.Server's most efficient way to read: Request.BodyPipe.ReadAsync() to get the message header with the message length, then read the complete message into a ReadOnlySequence<byte>.
    • It will be simple to create a Stream implementation that wraps a ReadOnlySequence<byte>.
    • Google.Protobuf reading is sync from the stream wrapping the ReadOnlySequence<byte>. Async happens when reading the request into the sequence.
  4. Grpc.Tools codegen output needs to be updated.
    • Today ContextualSerializer/ContextualDeserializer use the array methods.
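The read flow in point 3 can be sketched with `System.IO.Pipelines`. This is a simplified illustration, not the actual Grpc.AspNetCore.Server code: it buffers until the 5-byte gRPC message header (1-byte compression flag + 4-byte big-endian length) and the full payload are available, then the payload is exposed as a `ReadOnlySequence<byte>` (copied to an array here only for demonstration; real code would hand the sequence to the deserializer, and would also handle compression and size limits).

```csharp
using System;
using System.Buffers;
using System.Buffers.Binary;
using System.IO.Pipelines;
using System.Threading.Tasks;

public static class MessageReader
{
    // Sketch of the read path: loop on ReadAsync until the header and the whole
    // length-prefixed message are buffered, then slice the payload out.
    public static async Task<byte[]> ReadMessageAsync(PipeReader reader)
    {
        while (true)
        {
            ReadResult result = await reader.ReadAsync();
            ReadOnlySequence<byte> buffer = result.Buffer;

            if (buffer.Length >= 5)
            {
                byte[] header = new byte[5];
                buffer.Slice(0, 5).CopyTo(header);
                int length = (int)BinaryPrimitives.ReadUInt32BigEndian(header.AsSpan(1));

                if (buffer.Length >= 5 + length)
                {
                    ReadOnlySequence<byte> payload = buffer.Slice(5, length);
                    byte[] message = payload.ToArray(); // a real deserializer would parse the sequence directly
                    reader.AdvanceTo(payload.End);
                    return message;
                }
            }

            // Not enough data yet: consume nothing, mark everything as examined so
            // the next ReadAsync waits for more data to arrive.
            reader.AdvanceTo(buffer.Start, buffer.End);

            if (result.IsCompleted)
            {
                throw new InvalidOperationException("Connection ended mid-message.");
            }
        }
    }
}
```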

@jtattermusch (Contributor)

I've renamed this issue. Grpc.AspNetCore should use Grpc.Core's abstraction.

This is what ContextualSerializer/ContextualDeserializer can help with (see above). Let's review in a separate issue to make sure the (De)SerializerContext has methods that play well with both Grpc.Core and Grpc.AspNetCore (which will use System.IO.Pipelines).

I think we should have a meeting about this: @jtattermusch, @davidfowl, @JunTaoLuo and I.

My high-level thoughts after some research:

  1. Google.Protobuf serialization is based around Stream. Unless we want to make large changes to that code base, or switch to another serializer, then the Grpc.Core API will need logic to interop with streams

A bit more detail here:
The two main classes are https://github.com/protocolbuffers/protobuf/blob/master/csharp/src/Google.Protobuf/CodedInputStream.cs and CodedOutputStream. All the operations are actually implemented over a byte array (including when reading from or writing to a stream); the streams are only used to read/write the next chunk of data once the buffer gets full. Using a stream is not necessary: one can provide their own buffer directly (that doesn't really help us in our situation, but it's good to know). The problem with using Span<> instead of a byte array here is that all the parsing state is kept in the CodedInputStream instance (and a Span<> can't be stored there, since spans can only live on the stack).

  2. Grpc.Server's most efficient way to write: write directly to Response.BodyPipe, an IBufferWriter<byte>.

    • It will be simple to create a Stream implementation that writes to an IBufferWriter<byte>.
    • Getting the message length will need to be part of the marshaller API. It needs to be written first.
    • Google.Protobuf writing is sync. An async flush can happen after the message has been written to the buffer writer.
  3. Grpc.Server's most efficient way to read: Request.BodyPipe.ReadAsync() to get the message header with the message length, then read the complete message into a ReadOnlySequence<byte>.

    • It will be simple to create a Stream implementation that wraps a ReadOnlySequence<byte>.
    • Google.Protobuf reading is sync from the stream wrapping the ReadOnlySequence<byte>. Async happens when reading the request into the sequence.
  4. Grpc.Tools codegen output needs to be updated.

    • Today ContextualSerializer/ContextualDeserializer use the array methods.
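The "Stream implementation that writes to an IBufferWriter<byte>" claim is easy to verify with a sketch. This minimal write-only adapter lets a Stream-based API like CodedOutputStream target a pipe without an intermediate `byte[]` (simplified: no async overrides, no argument validation):

```csharp
using System;
using System.Buffers;
using System.IO;

// Minimal write-only Stream over an IBufferWriter<byte> (illustrative sketch).
public sealed class BufferWriterStream : Stream
{
    private readonly IBufferWriter<byte> _writer;

    public BufferWriterStream(IBufferWriter<byte> writer) => _writer = writer;

    // Each Write copies the caller's bytes into buffer space rented from the writer.
    public override void Write(byte[] buffer, int offset, int count)
        => _writer.Write(buffer.AsSpan(offset, count));

    public override bool CanWrite => true;
    public override bool CanRead => false;
    public override bool CanSeek => false;
    public override void Flush() { }

    // Everything else is unsupported for a forward-only writer.
    public override long Length => throw new NotSupportedException();
    public override long Position
    {
        get => throw new NotSupportedException();
        set => throw new NotSupportedException();
    }
    public override int Read(byte[] buffer, int offset, int count) => throw new NotSupportedException();
    public override long Seek(long offset, SeekOrigin origin) => throw new NotSupportedException();
    public override void SetLength(long value) => throw new NotSupportedException();
}
```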

@JunTaoLuo (Contributor)

Let's go over this in our next weekly sync.

@JamesNK (Member, Author) commented Jan 26, 2019

Hmm, yeah I looked at the implementation of CodedInputStream and CodedOutputStream. They are built around byte arrays, either as the final place for data or a buffer to then write to a stream. Reading and writing directly with pipelines isn't possible with the current Google.Protobuf API. Unfortunately those types are both sealed.

@mkosieradzki

@JamesNK https://github.com/mkosieradzki/protobuf/tree/spans-pr-rebased - as far as I remember - this specific branch in my forked repo contains implementation that is compatible with pipelines and is pretty much optimized, but it modifies both codegen and the sdk.

It should also be significantly faster, because it uses aggressive inlining instead of virtual calls.

@JamesNK (Member, Author) commented Jan 27, 2019

Thanks for the link.

The CodedOutputStream changes are understandable just from reading the code. Nice job!

CodedInputStream is more complicated. I want to debug it at runtime to understand it fully.

+1 on discussing this at sync next week. Getting performance to 100% is a lower priority than functionality for the first preview, but a plan forward would be good 👍

@davidfowl (Contributor)

After playing with this for a little bit, we can continue to use the marshaller for now but we need some changes to it. Instead of taking a byte[] it needs to take a Span<byte> or ReadOnlySequence<byte> or both. Being a byte[] today forces the caller to allocate which is unfortunate.

@JamesNK (Member, Author) commented Feb 5, 2019

ReadOnlySequence<byte> for reading. IBufferWriter<byte> for writing.

If you look at the new pipelines code you can see the exact points where the marshaller could slot in and use those types seamlessly.
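One possible shape for that write-side slot-in, as a hedged sketch rather than the actual implementation: because a protobuf message can report its serialized size up front, the 5-byte header can be written before the payload and a single async flush happens at the end. Here `serialize` is a stand-in for the synchronous protobuf serializer; real code would also handle compression and buffers larger than a single `GetMemory` segment.

```csharp
using System;
using System.Buffers.Binary;
using System.IO.Pipelines;
using System.Threading.Tasks;

public static class MessageWriter
{
    // Sketch: write the gRPC header (compression flag + big-endian length), let the
    // serializer fill the payload slice, then do the only async step (the flush).
    public static ValueTask<FlushResult> WriteMessageAsync(
        PipeWriter writer, int messageLength, Action<Memory<byte>> serialize)
    {
        Memory<byte> memory = writer.GetMemory(5 + messageLength);
        memory.Span[0] = 0; // not compressed
        BinaryPrimitives.WriteUInt32BigEndian(memory.Span.Slice(1, 4), (uint)messageLength);
        serialize(memory.Slice(5, messageLength));
        writer.Advance(5 + messageLength);
        return writer.FlushAsync();
    }
}
```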

@JamesNK (Member, Author) commented Feb 5, 2019

We need to consider targeting for this feature. IBufferWriter<byte> only ships in the box with new versions of netcoreapp. Google.Protobuf will either need to target netcoreapp2.x or (more likely) have netstandard2.0 target that brings in the System.Memory package.

Because Grpc.Tools supports older versions of .NET, codegen will still continue to output serialization logic for byte[] by default, and there would be an opt-in flag on <Protobuf> to enable the new serialization. Our templates can have this flag set by default.

@neuecc commented Feb 6, 2019

Interesting project.
I've already implemented a gRPC-based framework that does not require .proto files and uses MessagePack instead of Protocol Buffers:
https://github.com/Cysharp/MagicOnion
It is therefore desirable for the serialization layer not to depend on proto.

I'm also the author of MessagePack-CSharp; @AArnott is currently working on Span support and other improvements: AArnott/MessagePack-CSharp#9

// byte[] based formatter(ver 1.x) to span based formatter(ver 2.x)
public interface IMessagePackFormatter<T>
{
    void Serialize(IBufferWriter<byte> writer, T value, IFormatterResolver resolver);
    T Deserialize(ref ReadOnlySequence<byte> byteSequence, IFormatterResolver resolver);
}

If you change the marshaller, this can be supported directly.
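As a hedged sketch of that wiring: assuming the contextual serialization contexts end up exposing `IBufferWriter<byte>` and `ReadOnlySequence<byte>` (via `GetBufferWriter()`/`Complete()` and `PayloadAsReadOnlySequence()`, as Grpc.Core eventually shipped), a formatter with the interface quoted above could plug into a `Marshaller<T>` directly. This assumes Grpc.Core plus the `IMessagePackFormatter<T>`/`IFormatterResolver` types from the snippet above; it is not code from either library.

```csharp
using System.Buffers;
using Grpc.Core;

// Illustrative adapter from a span/sequence-based formatter to a contextual Marshaller.
public static class FormatterMarshaller
{
    public static Marshaller<T> Create<T>(IMessagePackFormatter<T> formatter, IFormatterResolver resolver)
    {
        return new Marshaller<T>(
            (value, context) =>
            {
                // Serialize straight into the transport's buffer writer.
                formatter.Serialize(context.GetBufferWriter(), value, resolver);
                context.Complete();
            },
            context =>
            {
                // Deserialize straight from the buffered request payload.
                var sequence = context.PayloadAsReadOnlySequence();
                return formatter.Deserialize(ref sequence, resolver);
            });
    }
}
```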

@davidfowl (Contributor)

@mkosieradzki Are you still willing to help out here?

@mkosieradzki

@davidfowl yes. What kind of help is needed? My protobuf fork should be compatible with those requirements.

@JamesNK (Member, Author) commented Feb 25, 2019

We're going to look at perf some more now that a lot of basic functionality is working.

@mkosieradzki I think your PR goes a long way to adding the right features to protobuf. Are you able to rebase it on the latest version of google/protobuf?

@mkosieradzki

@JamesNK I will try to revisit and rebase it soon. If I fail to find some time during week, I will definitely do this on the upcoming Saturday.

@JamesNK (Member, Author) commented Feb 25, 2019

That would be great. I'm going to put together some microbenchmarks for serialization/deserialization performance.

I'm also going to see about getting benchmarks in the ASP.NET Core and protobuf perf environments. That should track performance over time.

@mkosieradzki commented Feb 26, 2019

@JamesNK I started going through the merge; it's not that easy, because it seems that legacy proto2 support was added to the C# version after the initial PR was created. So it's not only a merge, but also re-implementing parts of proto2 support.

Please let me know whether you need my code updated to the latest branch because you need features like proto2 support (I see scenarios related to gRPC where this will be useful), or just to have the latest version. That will let me prioritize my work.

I also need to know whether the intention is to have this code merged back into protobuf (in which case we need to go back with @jtattermusch to the painful PR process), or whether a working span-friendly fork is enough.

For the PoC purposes - this branch should be good enough: https://github.com/mkosieradzki/protobuf/tree/spans-pr-rebased

@jtattermusch (Contributor) commented Mar 4, 2019

Before proceeding to implementation, let's agree on the API design (the actual implementation is a lot of code, which can easily obscure the important concepts).

Here's some proposal for the parsing API (to be iterated upon):

    // TODO: class name TBD (CodedInputReader, CodedInputContext, CodedInputByRefReader, WireFormatReader)
    public ref struct CodedInputReader
    {
        // to avoid accessing first slice from inputSequence all the time,
        // also to allow reading from single span.
        ReadOnlySpan<byte> immediateBuffer;

        ReadOnlySequence<byte> inputSequence;  // struct!

        // maybe we need bufferPos to avoid incrementing totalBytesRetired all the time..
        // int bufferPos;
        
        public CodedInputReader(ReadOnlySpan<byte> buffer)
        {
            this.immediateBuffer = buffer;
            this.inputSequence = default;
        }

        public CodedInputReader(ReadOnlySequence<byte> input)
        {
            this.immediateBuffer = default;
            this.inputSequence = input;
        }

        // TODO: add constructor that allows reading from a single span, but calls a refill callback
        // if the span has been exhausted, to allow supplying the next buffer slice?
        // (= same as a read-only sequence, but allows supplying buffer slices lazily)

        // internal state of coded input stream
        private uint lastTag;
        private uint nextTag;
        private bool hasNextTag;
        private int totalBytesRetired;
        private int currentLimit;  // initialize to maxInt!
        private int recursionDepth;

        // parsing limits
        private readonly int recursionLimit; 
        private readonly int sizeLimit;

        // exposed methods that copy potentially large amount of data:
        // TODO: figure out an opt-in way to avoid copying and/or allocation
        // ReadBytes()
        // ReadString();
    }
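To make the ref struct approach concrete, here is a hedged sketch of what one read method on such a reader could look like. `VarintReader` and its simplified position handling are invented for illustration (the real proposal keeps more state, and protobuf permits longer varint encodings than handled here); the point is that parsing happens directly over the current span while the parser state lives in the struct's fields on the stack.

```csharp
using System;

// Illustrative ref struct reader: decodes protobuf base-128 varints from a span.
public ref struct VarintReader
{
    private ReadOnlySpan<byte> immediateBuffer;
    private int position;

    public VarintReader(ReadOnlySpan<byte> buffer)
    {
        immediateBuffer = buffer;
        position = 0;
    }

    public uint ReadRawVarint32()
    {
        uint result = 0;
        // Up to 5 bytes of 7 payload bits each; the high bit marks continuation.
        for (int shift = 0; shift < 32; shift += 7)
        {
            byte b = immediateBuffer[position++];
            result |= (uint)(b & 0x7F) << shift;
            if ((b & 0x80) == 0)
                return result;
        }
        throw new InvalidOperationException("Malformed varint.");
    }
}
```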

@davidfowl (Contributor) commented Mar 4, 2019

// TODO: add constructor that allows reading from a single span, but calls a refill callback
// if the span has been exhausted, to allow supplying the next buffer slice?
// (= same as a read-only sequence, but allows supplying buffer slices lazily)

You could but I wouldn't start with that.

We may also want to make it possible to return a state object like the Utf8JsonReader does:

https://github.com/dotnet/corefx/blob/cc42bb95b0877a653d4f556a0839d261ea3563b7/src/System.Text.Json/src/System/Text/Json/Reader/Utf8JsonReader.cs#L125-L145

@ahsonkhan Can you lend your experience here with the JsonReader?

See the other state as well:

https://github.com/dotnet/corefx/blob/cc42bb95b0877a653d4f556a0839d261ea3563b7/src/System.Text.Json/src/System/Text/Json/Reader/Utf8JsonReader.cs#L28-L46

@ahsonkhan

    // exposed methods that copy potentially large amount of data:
    // TODO: figure out an opt-in way to avoid copying and/or allocation
    // ReadBytes()
    // ReadString()

You could expose the raw input data slices similar to what the Utf8JsonReader does and let the caller transcode themselves:
https://github.com/dotnet/corefx/blob/cc42bb95b0877a653d4f556a0839d261ea3563b7/src/System.Text.Json/src/System/Text/Json/Reader/Utf8JsonReader.cs#L83-L104

And if you are slicing ReadOnlySequence you likely want to track the SequencePositions as well:
https://github.com/dotnet/corefx/blob/cc42bb95b0877a653d4f556a0839d261ea3563b7/src/System.Text.Json/src/System/Text/Json/Reader/Utf8JsonReader.cs#L42-L45

We may also want to make it possible to return a state object like the Utf8JsonReader does

I am not too familiar with the types of input CodedInputReader would accept, but when it comes to returning/accepting a state for partial reading (and to support re-entrancy), you may also want an isFinalBlock bool.

Additionally, you would likely need to know if you are at the end or not:

private bool IsLastSpan => _isFinalBlock && (_isSingleSegment || _isLastSegment);

private int totalBytesRetired;

Is this the amount of bytes read from the input so far?

@jtattermusch (Contributor)

    // exposed methods that copy potentially large amount of data:
    // TODO: figure out an opt-in way to avoid copying and/or allocation
    // ReadBytes()
    // ReadString()

You could expose the raw input data slices similar to what the Utf8JsonReader does and let the caller transcode themselves:
https://github.com/dotnet/corefx/blob/cc42bb95b0877a653d4f556a0839d261ea3563b7/src/System.Text.Json/src/System/Text/Json/Reader/Utf8JsonReader.cs#L83-L104

That sounds like a good start, but in our case the caller will be the Google.Protobuf assembly itself, so we would also need to figure out an alternative way of exposing the ByteString in the message API (this would be an advanced use case for very performance-oriented users, I assume).

And if you are slicing ReadOnlySequence you likely want to track the SequencePositions as well:
https://github.com/dotnet/corefx/blob/cc42bb95b0877a653d4f556a0839d261ea3563b7/src/System.Text.Json/src/System/Text/Json/Reader/Utf8JsonReader.cs#L42-L45

We may also want to make it possible to return a state object like the Utf8JsonReader does

I am not too familiar with the types of input CodedInputReader would accept, but when it comes to returning/accepting a state for partial reading (and to support re-entrancy), you may also want an isFinalBlock bool.

Additionally, you would likely need to know if you are at the end or not:

private bool IsLastSpan => _isFinalBlock && (_isSingleSegment || _isLastSegment);

I like the idea of being able to save the parsing state using *State and making the parser re-entrant, but the problem is that, unlike JSON parsing, it's not enough to just remember the parsing state with a few booleans and a bit stack: the parser recursively descends into parsing different protobuf messages, and we would need to remember the exact IMessage instances on the stack (which probably involves allocating some data structure to remember them), so it might end up being too complex.

private int totalBytesRetired;

Is this the amount of bytes read from the input so far?

See comment here:
https://github.com/protocolbuffers/protobuf/blob/2d0183ab58706a919f138e6920e33e3b76eb62f6/csharp/src/Google.Protobuf/CodedInputStream.cs#L105

@jtattermusch (Contributor)

We'd also need to prototype the solution for maintaining backwards compatibility.
Here's what we talked about offline:

  • only make the fast span-based parsing available starting from netstandardX.Y (exact version TBD, the minimum requirement for System.Memory is netstandard1.1)
  • there will still only be a single runtime (the Google.Protobuf assembly), but there will be multiple target frameworks (right now we have net45 and netstandard1.0, we'll probably need to add netstandard2.0 and perhaps netstandard1.1).
  • to get access to Span, ReadOnlySequence and IBufferWriter, Google.Protobuf will need to depend on System.Memory (but that's not possible for netstandard1.0 we're currently targeting)
  • there will be a codegen option to enable generation of the methods needed for fast parsing. (Alternative: can we make this work by always generating the same code and using conditional compilation? That would require being able to detect the compiler version using #if, as C# 7.2 is needed to compile the ref structs in the generated code.)
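The multi-targeting described above might look roughly like this in the Google.Protobuf project file. This is an illustrative sketch only: the exact TFM set, the System.Memory version, and the `GOOGLE_PROTOBUF_SPAN_PARSING` constant name are placeholders, not the real build configuration.

```xml
<!-- Illustrative sketch, not the actual Google.Protobuf build configuration. -->
<PropertyGroup>
  <TargetFrameworks>net45;netstandard1.0;netstandard2.0</TargetFrameworks>
</PropertyGroup>

<!-- Only the netstandard2.0 target gets System.Memory and the span-based parser. -->
<ItemGroup Condition="'$(TargetFramework)' == 'netstandard2.0'">
  <PackageReference Include="System.Memory" Version="4.5.2" />
</ItemGroup>
<PropertyGroup Condition="'$(TargetFramework)' == 'netstandard2.0'">
  <DefineConstants>$(DefineConstants);GOOGLE_PROTOBUF_SPAN_PARSING</DefineConstants>
</PropertyGroup>
```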

@jtattermusch (Contributor)

CC @jskeet

@JamesNK (Member, Author) commented Mar 5, 2019

When used with gRPC the protobuf serializer doesn't need to be re-entrant. gRPC will prefix the message with its size in bytes so it is simple to read from the request pipeline until we have all the necessary data to deserialize the message.

@davidfowl (Contributor)

Which is why we have a max buffer size 😄

@JamesNK (Member, Author) commented Oct 6, 2020

This is DONE with gRPC 2.32.0 and Protobuf 3.13.0 🎉 🥳 🎈 🍰

@JamesNK closed this as completed on Oct 6, 2020
@zgramana commented Oct 7, 2020

@JamesNK What's a good entry point for grokking the final design (e.g. docs, examples, tests, etc.)?

This issue was closed.