
Reduce allocations by using pooled memory and recycling memory streams #694

Closed
stebet opened this issue Jan 23, 2020 · 9 comments

@stebet
Collaborator

stebet commented Jan 23, 2020

The RabbitMQ client currently allocates a lot of unnecessary memory and incurs significant GC overhead as a result.

I'm currently working on a PR to reduce these allocations, and I will probably introduce some *Async overloads as well, since they help reduce lock contention.
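The pooled-memory part of that looks roughly like this (a minimal sketch using ArrayPool<byte>.Shared, not the actual PR code):

```csharp
using System;
using System.Buffers;

class PooledWriteExample
{
    static void WriteFrame(int payloadSize)
    {
        // Rent a buffer of at least payloadSize bytes from the shared pool
        // instead of allocating a fresh byte[] for every frame.
        byte[] buffer = ArrayPool<byte>.Shared.Rent(payloadSize);
        try
        {
            // ... serialize the frame into buffer[0..payloadSize) ...
        }
        finally
        {
            // Return the buffer so later frames reuse the same memory.
            ArrayPool<byte>.Shared.Return(buffer);
        }
    }
}
```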

Here is the current progress I have made with a test app that opens two connections. One connection bulk-publishes 50,000 messages in 100-message batches, each message containing just a 4-byte integer as payload. The other connection receives those messages, so the test mostly measures the frame serialization/deserialization overhead.
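For reference, the publishing side of the test app is shaped roughly like this (illustrative only; the queue name and batching details are assumptions, not the actual benchmark code):

```csharp
using System;
using RabbitMQ.Client;

class PublishBenchmark
{
    static void Main()
    {
        var factory = new ConnectionFactory { HostName = "localhost" };
        using var connection = factory.CreateConnection();
        using var channel = connection.CreateModel();
        channel.QueueDeclare("perf-test", durable: false, exclusive: false, autoDelete: true);

        // 500 batches x 100 messages = 50,000 messages,
        // each carrying a 4-byte integer payload.
        for (int batch = 0; batch < 500; batch++)
        {
            var publishBatch = channel.CreateBasicPublishBatch();
            for (int i = 0; i < 100; i++)
            {
                byte[] payload = BitConverter.GetBytes(batch * 100 + i);
                publishBatch.Add("", "perf-test", false, null, payload);
            }
            publishBatch.Publish();
        }
    }
}
```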

Before:
[screenshot: profiler results]

After:
[screenshot: profiler results]

I'll follow up this issue with discussions and link the PR once it's ready for review.

@stebet
Collaborator Author

stebet commented Jan 23, 2020

Some explanations:
The "before" numbers use the current NuGet release of the RabbitMQ.Client library.
The "after" numbers use code based on the latest master branch, which will become the 6.0 release.

@lukebakken lukebakken added this to the 6.0.0 milestone Jan 23, 2020
@lukebakken lukebakken self-assigned this Jan 23, 2020
@michaelklishin
Member

So a 25% reduction in this specific run. Looks promising!

@lechu445

Related to #452

@stebet
Collaborator Author

stebet commented Jan 24, 2020

I've got System.IO.Pipelines working on the socket connection as well and will push the PR soon for review and testing. The improvements are quite impressive; I'll add details and screenshots later :)
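For context, the receive side with Pipelines ends up shaped roughly like this (a minimal sketch, not the PR code; TryParseFrame is a hypothetical stand-in for the client's frame parser):

```csharp
using System;
using System.Buffers;
using System.IO.Pipelines;
using System.Net.Sockets;
using System.Threading.Tasks;

class PipelineReceiver
{
    public static async Task ReadLoopAsync(Socket socket)
    {
        // The PipeReader manages a pool of reusable segments internally,
        // so receiving from the socket no longer allocates a byte[] per read.
        PipeReader reader = PipeReader.Create(new NetworkStream(socket));
        while (true)
        {
            ReadResult result = await reader.ReadAsync();
            ReadOnlySequence<byte> buffer = result.Buffer;

            // Consume as many complete frames as the buffer currently holds.
            while (TryParseFrame(ref buffer))
            {
            }

            // Report what was consumed and examined so the pipe can
            // recycle the consumed segments.
            reader.AdvanceTo(buffer.Start, buffer.End);

            if (result.IsCompleted)
            {
                break;
            }
        }
        await reader.CompleteAsync();
    }

    // Hypothetical stand-in for the client's frame parser: slices one
    // complete frame off the front of the buffer and returns true, or
    // returns false if the buffer does not yet hold a complete frame.
    private static bool TryParseFrame(ref ReadOnlySequence<byte> buffer) => false;
}
```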

@stebet
Collaborator Author

stebet commented Jan 25, 2020

So here's where I'm at. Similar scenario as above: one sender, one receiver. Bulk-send 50,000 messages in 500 batches of 100 messages, but now with 512-byte, 4 KB, and 16 KB payloads.

Before

512-byte payload

[screenshot]

4 KB payload

[screenshot]

16 KB payload

[screenshot]

After (using pooled arrays, recyclable memory streams, and System.IO.Pipelines)

512-byte payload

[screenshot]

4 KB payload

[screenshot]

16 KB payload

[screenshot]
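For reference, the recyclable memory streams come from the Microsoft.IO.RecyclableMemoryStream package; the usage pattern is roughly the following (a minimal sketch, not the actual PR code):

```csharp
using System.IO;
using Microsoft.IO;

class RecyclableStreamExample
{
    // A single shared manager owns the pooled blocks for the process.
    private static readonly RecyclableMemoryStreamManager MemoryManager =
        new RecyclableMemoryStreamManager();

    public static void SerializePayload(byte[] payload)
    {
        // GetStream() hands out a MemoryStream backed by pooled blocks;
        // disposing it returns the blocks to the pool instead of leaving
        // garbage proportional to the payload size.
        using (MemoryStream stream = MemoryManager.GetStream("outbound-frame"))
        {
            stream.Write(payload, 0, payload.Length);
            // ... hand the stream contents to the socket writer ...
        }
    }
}
```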

@stebet
Collaborator Author

stebet commented Jan 29, 2020

More progress :)

512-byte payloads

[screenshot]

4 KB payloads

[screenshot]

16 KB payloads

[screenshot]

To summarize:

- 512-byte payloads: down from 556 MB to 238 MB (57% reduction)
- 4 KB payloads: down from 1.96 GB to 411 MB (79% reduction)
- 16 KB payloads: down from 7.14 GB to 1.00 GB (86% reduction)

What's left
To finish up the PR, I need to get all the tests passing, which will require a little refactoring around catching the exceptions/errors: now that the Pipelines take care of reading and writing the sockets, those errors are a little harder to reach and parse (see the sketch below). Once that's ready, I'll submit the PR for further work and discussion on what APIs (if any) might need to change.
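A rough sketch of what that error surfacing might look like (illustrative only; HandleConnectionFault is a hypothetical stand-in for the client's shutdown path, not an actual API):

```csharp
using System;
using System.IO.Pipelines;
using System.Threading.Tasks;

class ConnectionLoop
{
    public static async Task RunAsync(PipeReader reader)
    {
        try
        {
            while (true)
            {
                // Socket faults now surface as exceptions thrown from
                // ReadAsync rather than directly from Socket.Receive.
                ReadResult result = await reader.ReadAsync();
                // ... parse frames from result.Buffer here ...
                reader.AdvanceTo(result.Buffer.Start, result.Buffer.End);
                if (result.IsCompleted)
                {
                    break;
                }
            }
        }
        catch (Exception ex)
        {
            // Hypothetical: route the pipe-level failure into the
            // connection-shutdown handling the existing tests assert on.
            HandleConnectionFault(ex);
        }
        finally
        {
            await reader.CompleteAsync();
        }
    }

    private static void HandleConnectionFault(Exception ex)
    {
        // Hypothetical stand-in for the client's shutdown/error handling.
    }
}
```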

@michaelklishin
Member

@stebet impressive 💪

@lukebakken lukebakken modified the milestones: 6.0.0, 7.0.0 Feb 6, 2020
@lukebakken
Contributor

This will be addressed by either #706 or #707.

@lukebakken
Contributor

#1445 appears to be the "final word" on this issue. Closing.
