Faster instance creation for ValuesJsonConverter #119

Merged
RehanSaeed merged 3 commits into RehanSaeed:master from Turnerj:fast-object-construction on Dec 31, 2019

Conversation

@Turnerj (Collaborator) commented on Dec 24, 2019

It turns out Activator.CreateInstance is slow relative to the rest of the serialization process.

[Profiler screenshot before the change]
In that image, you can see it taking a little over 30% of the deserialization time.

Using a technique from JSON.NET (System.Text.Json does something similar), the idea is to create and cache a custom constructor delegate, avoiding the overhead of finding and invoking the constructor through reflection on every call.

[Profiler screenshot after the change]
In this image, we can see a large drop in time spent in the System.Private.CoreLib module, with the line's share dropping to ~7% of the deserialization time.
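For illustration, here is a minimal sketch of the compiled-constructor-delegate idea, assuming a parameterless constructor (the names are illustrative, not the exact code in this PR):

```csharp
using System;
using System.Linq.Expressions;

public static class TypeActivator
{
    // Compile a delegate that invokes the type's parameterless constructor directly,
    // so instance creation avoids the per-call reflection cost of Activator.CreateInstance.
    public static Func<object> CreateConstructor(Type type)
    {
        var newExpression = Expression.New(type);
        var castToObject = Expression.Convert(newExpression, typeof(object));
        return Expression.Lambda<Func<object>>(castToObject).Compile();
    }
}
```

Compiling the expression is expensive the first time, but the resulting delegate is close to the cost of a direct `new`, which is where the drop in the profiler comes from.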

With benchmarks, we can see the difference in performance and allocations as well. Specifically with the Book benchmark:

Before

| Method | Runtime | Mean | Error | StdDev | Median | Min | Max | Gen 0 | Gen 1 | Gen 2 | Allocated |
|-------------|---------------|---------:|---------:|---------:|---------:|---------:|---------:|--------:|------:|------:|----------:|
| Serialize | .NET 4.7.2 | 205.1 us | 13.57 us | 28.33 us | 190.9 us | 183.1 us | 267.9 us | 15.3809 | - | - | 47.49 KB |
| Deserialize | .NET 4.7.2 | 361.1 us | 6.54 us | 5.47 us | 360.4 us | 350.1 us | 371.7 us | 34.6680 | - | - | 107.3 KB |
| Serialize | .NET Core 3.1 | 177.1 us | 3.10 us | 2.59 us | 177.2 us | 173.7 us | 181.0 us | 15.3809 | - | - | 47.27 KB |
| Deserialize | .NET Core 3.1 | 290.0 us | 4.08 us | 3.82 us | 290.0 us | 281.4 us | 295.7 us | 33.6914 | - | - | 104 KB |

After

| Method | Runtime | Mean | Error | StdDev | Median | Min | Max | Gen 0 | Gen 1 | Gen 2 | Allocated |
|-------------|---------------|---------:|---------:|---------:|---------:|---------:|---------:|--------:|------:|------:|----------:|
| Serialize | .NET 4.7.2 | 206.4 us | 13.75 us | 27.47 us | 191.3 us | 182.7 us | 261.2 us | 15.3809 | - | - | 47.49 KB |
| Deserialize | .NET 4.7.2 | 176.3 us | 1.98 us | 1.85 us | 176.9 us | 172.7 us | 178.4 us | 26.6113 | - | - | 81.84 KB |
| Serialize | .NET Core 3.1 | 181.5 us | 3.14 us | 2.93 us | 181.5 us | 174.0 us | 185.6 us | 15.3809 | - | - | 47.27 KB |
| Deserialize | .NET Core 3.1 | 155.5 us | 1.48 us | 1.38 us | 155.4 us | 152.7 us | 158.1 us | 25.6348 | - | - | 79.1 KB |

As you can see, deserialization times are cut in half while also allocating ~24% less.

The constructor delegate cache is built per object type (e.g. `OneOrMany<string>` or `Values<string, Uri>`), with every type getting an entry. For a one-off deserialization of a type this would actually use more memory and likely take longer, since a constructor delegate has to be compiled for every type on every property being deserialized. The benefits of this change grow with the number of deserializations performed. A rough sketch of such a cache is shown below.
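As a hedged sketch of what a per-type cache might look like (a `ConcurrentDictionary` keyed by type; illustrative rather than the PR's exact shape):

```csharp
using System;
using System.Collections.Concurrent;
using System.Linq.Expressions;

public static class ConstructorCache
{
    // One compiled delegate per closed type, e.g. OneOrMany<string> or Values<string, Uri>.
    private static readonly ConcurrentDictionary<Type, Func<object>> Constructors =
        new ConcurrentDictionary<Type, Func<object>>();

    // The first deserialization of a given type pays the expression-compilation cost;
    // every later deserialization reuses the cached delegate.
    public static object CreateInstance(Type type) =>
        Constructors.GetOrAdd(
            type,
            t => Expression.Lambda<Func<object>>(
                Expression.Convert(Expression.New(t), typeof(object))).Compile())();
}
```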


Regarding the update to the benchmark runtime moniker: I found a strange performance-measurement issue when the project file targets .NET Core 3.1 but the runtime moniker targets 3.0 and 3.0 isn't installed (.NET Core appeared to perform far worse than .NET Framework). I figure we should just make the project target and the benchmark runtime moniker the same.

@Turnerj added the enhancement label on Dec 24, 2019
@Turnerj (Collaborator, Author) commented on Dec 24, 2019

As far as deserialization goes, there are probably two more pieces of low-hanging fruit in the library:

  1. Optimising the IEnumerable<object> path in Values so it doesn't allocate four lists when they aren't all necessary (On-demand List allocation in Values<> type #120)

  2. Optimising the single-item path for Values and OneOrMany when coming from ValuesJsonConverter. Currently, to keep itself simple, the converter allocates an array even for a single item. In OneOrMany, for example, it then allocates a second array for filtering - an acceptable compromise for many items, but one more allocation than necessary for a single item. Really, both types could do with a single-object constructor, though I need to see whether it is worth it.

@RehanSaeed (Owner) commented on Dec 30, 2019

I discovered the same thing some time ago. I also discovered that the generic new() constraint uses Activator.CreateInstance under the hood, so generic code can be sped up too. I ended up using this implementation, which the great Jon Skeet actually pointed me at:

https://github.com/Dotnet-Boxed/Framework/blob/master/Source/Boxed.Mapping/Factory.cs

Very similar to yours. One difference is that instead of storing the delegates in a collection keyed by type, it uses a static field on a generic class. However, we can't always do that in our case because we have constructor parameters. I raised an issue in corefx at the time asking for the new() generic constraint to be sped up by avoiding Activator.CreateInstance, but it hasn't gained much traction.
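Roughly, the static-field-per-generic-type variant looks like this (a sketch of the general pattern rather than the actual Factory.cs source):

```csharp
using System;
using System.Linq.Expressions;

// The compiled delegate lives in a static field of a generic class, so the "cache lookup"
// is just a static field read per closed generic type rather than a dictionary lookup.
public static class Factory<T>
    where T : new()
{
    public static readonly Func<T> CreateInstance =
        Expression.Lambda<Func<T>>(Expression.New(typeof(T))).Compile();
}
```

Usage is then just e.g. `var book = Factory<Book>.CreateInstance();`. As noted, this only works when the type can be created without constructor arguments, which is why it doesn't fit every case here.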

@Turnerj (Collaborator, Author) commented on Dec 31, 2019

I've moved the object activation code out to a separate class like you suggested and extended it a little to support custom parameter types (which will be useful for a future PR around faster single-item deserialization).
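Something along these lines, with illustrative names (not necessarily the exact shape this PR ended up with): a delegate compiled for a constructor that takes a single argument of a known parameter type.

```csharp
using System;
using System.Linq.Expressions;

public static class ClassActivator
{
    // Compile a delegate for a constructor taking one argument of parameterType,
    // e.g. to build a Values/OneOrMany instance directly from a single deserialized item.
    public static Func<object, object> CreateConstructor(Type type, Type parameterType)
    {
        var constructor = type.GetConstructor(new[] { parameterType })
            ?? throw new MissingMethodException(type.FullName, ".ctor");

        var argument = Expression.Parameter(typeof(object), "argument");
        var body = Expression.Convert(
            Expression.New(constructor, Expression.Convert(argument, parameterType)),
            typeof(object));

        return Expression.Lambda<Func<object, object>>(body, argument).Compile();
    }
}
```

The cached delegate can then be invoked on the hot path without any further reflection.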

@RehanSaeed RehanSaeed merged commit 83452a4 into RehanSaeed:master Dec 31, 2019
@RehanSaeed (Owner) commented:
Thanks for that.

@Turnerj deleted the fast-object-construction branch on March 12, 2020