
Reduce memory usage when reading payloads that span multiple packets. #249

Merged · 1 commit merged into mysql-net:master on Apr 26, 2017

Conversation

@bgrainger (Member) commented Apr 24, 2017

Change `ref ArraySegment<byte>` to `ArraySegmentHolder<byte>` so that a mutated `ArraySegment<byte>` can be passed back out of a lambda function.
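The pattern can be sketched as follows. This is a hypothetical illustration, not the actual MySqlConnector implementation; `BufferDemo` and `EnsureSize` are invented names. The key point is that C# forbids capturing a `ref` parameter in a lambda (error CS1628), while an ordinary reference to a mutable holder object can be captured and mutated freely:

```csharp
using System;

// Sketch of the holder described above; the real MySqlConnector type may differ.
// Wrapping the segment in a mutable reference type lets a lambda (or an async
// continuation) publish a resized buffer back to the caller, which a
// `ref ArraySegment<byte>` parameter cannot do.
public sealed class ArraySegmentHolder<T>
{
	public ArraySegment<T> ArraySegment { get; set; }
}

public static class BufferDemo
{
	// Ensures the holder's buffer is at least `minimumSize` bytes, growing it
	// inside a lambda; this compiles because `holder` is an ordinary reference,
	// not a `ref` parameter.
	public static void EnsureSize(ArraySegmentHolder<byte> holder, int minimumSize)
	{
		Action grow = () =>
		{
			var segment = holder.ArraySegment;
			if (segment.Count < minimumSize)
			{
				var newBuffer = new byte[minimumSize];
				if (segment.Array != null)
					Buffer.BlockCopy(segment.Array, segment.Offset, newBuffer, 0, segment.Count);
				holder.ArraySegment = new ArraySegment<byte>(newBuffer);
			}
		};
		grow();
	}
}
```

Had `EnsureSize` taken a `ref ArraySegment<byte>` instead, the lambda above would fail to compile with CS1628 ("Cannot use ref or out parameter inside an anonymous method or lambda expression").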

When run with larger BLOBs (i.e., > 16 MiB, the size of one packet), the test results are as follows. ReadBlobsNew allocates 60% less data and becomes 16% faster; the other tests are essentially unchanged.

Before

| Method | Mean | StdDev | StdErr | Min | Q1 | Median | Q3 | Max | Op/s | Gen 0 | Gen 1 | Gen 2 | Allocated |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| ReadBlobsOldAsync | 183,667.8394 us | 1,161.7614 us | 322.2146 us | 181,856.7220 us | 182,956.8441 us | 183,361.9487 us | 184,116.7797 us | 185,937.5694 us | 5.44 | - | - | - | 67.01 MB |
| ReadBlobsOldSync | 180,624.3700 us | 1,573.1979 us | 406.1979 us | 178,565.8162 us | 179,487.9914 us | 180,454.4471 us | 181,738.6010 us | 184,863.7147 us | 5.54 | 16.6667 | 16.6667 | 16.6667 | 67.01 MB |
| ReadBlobsNewAsync | 202,842.5955 us | 1,238.6008 us | 319.8054 us | 200,103.8980 us | 202,173.3350 us | 202,988.8227 us | 203,793.3685 us | 204,539.6862 us | 4.93 | 1275.0000 | 1275.0000 | 1275.0000 | 167.76 MB |
| ReadBlobsNewSync | 200,136.9124 us | 675.7203 us | 187.4111 us | 198,914.1884 us | 199,651.0397 us | 200,025.8022 us | 200,667.8334 us | 201,531.4505 us | 5 | 1391.6667 | 1391.6667 | 1391.6667 | 167.67 MB |

After

| Method | Mean | StdErr | StdDev | Median | Min | Q1 | Q3 | Max | Op/s | Gen 0 | Gen 1 | Gen 2 | Allocated |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| ReadBlobsOldAsync | 181,221.9569 us | 769.7803 us | 2,981.3462 us | 179,935.0513 us | 177,621.6681 us | 179,229.9717 us | 183,292.2712 us | 188,012.1440 us | 5.52 | - | - | - | 67.01 MB |
| ReadBlobsOldSync | 182,072.6833 us | 516.8273 us | 2,001.6635 us | 181,741.0179 us | 178,817.1828 us | 180,375.7257 us | 183,730.9112 us | 185,683.7819 us | 5.49 | - | - | - | 67.01 MB |
| ReadBlobsNewAsync | 168,934.5490 us | 358.2424 us | 1,387.4668 us | 168,436.2817 us | 167,306.9631 us | 167,929.1449 us | 169,611.1639 us | 172,052.8617 us | 5.92 | 829.1667 | 829.1667 | 829.1667 | 67.12 MB |
| ReadBlobsNewSync | 166,960.6247 us | 264.5874 us | 989.9953 us | 167,157.0712 us | 165,231.1244 us | 166,155.5539 us | 167,565.4565 us | 168,570.1445 us | 5.99 | 170.8333 | 170.8333 | 170.8333 | 67.01 MB |

@bgrainger force-pushed the performance branch 3 times, most recently from 3117357 to 9714b8b on April 24, 2017 17:48
@bgrainger (Member, Author) commented Apr 26, 2017

Since the cache is now owned by the MySqlSession, it could keep alive up to `MySqlConnectionStringBuilder.MaximumPoolSize` × maximum payload size bytes.
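As a back-of-the-envelope illustration of that worst case (the pool size and payload size below are assumed values for the sketch, not figures taken from this PR):

```csharp
// Illustrative arithmetic only: if every pooled session has cached a buffer
// sized to the largest payload it ever read, the pool retains their sum.
long maximumPoolSize = 100;                   // e.g. MySqlConnectionStringBuilder.MaximumPoolSize
long maximumPayloadBytes = 64L * 1024 * 1024; // assume the largest BLOB read was 64 MiB
long worstCaseCachedBytes = maximumPoolSize * maximumPayloadBytes;
// 100 sessions × 64 MiB = 6,400 MiB potentially kept alive by idle sessions
```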

In general, this shouldn't be a problem, because idle sessions will eventually be evicted from the pool (to free client and server resources). However, depending on the workload, it is possible to get into a scenario where many large buffers are cached longer than necessary. It's more likely, though, that the caching will improve overall performance for applications that retrieve a lot of large BLOBs from a database.

Finally, it should be noted that Connector/NET has a similar per-connection buffer (MySqlConnection → NativeDriver → MySqlStream → MySqlPacket → MemoryStream), so this is not introducing a new problem specific to this implementation. (It should also be noted that in unscientific local testing, this implementation had 30% fewer bytes in all heaps than Connector/NET when opening many connections, reading a large BLOB on each, and keeping all the connections open.)

@bgrainger bgrainger merged commit 44cb625 into mysql-net:master Apr 26, 2017
@bgrainger bgrainger deleted the performance branch April 26, 2017 06:44
@bgrainger (Member, Author) commented

Shipped in 0.19.1.
