
Dynamically allocate write buffer if needed. (ready for merge) #324

Merged
merged 5 commits into from
Dec 12, 2023

Conversation

thedevop
Collaborator

No description provided.

@thedevop thedevop changed the title Dynamically allocate write buffer if needed Dynamically allocate write buffer if needed. (NOT for merge) Oct 28, 2023
@x20080406
Contributor

x20080406 commented Oct 28, 2023

I've merged this PR into my project and will test it in my environment. I set up 120 clients, each sending 2,500 messages, and divided the 120 clients into four groups, with every group subscribing to all of the messages. In other words, the broker will send 1,200,000 messages to the 120 clients. I'll observe it for one day.

By the way, I've changed the bconn to "*bufio.Reader"

@x20080406
Contributor

x20080406 commented Oct 29, 2023


It's been 20 hours, and the new code has been running well in the test environment. The 120 clients have sent 19.4 billion messages to the broker, and the four groups of subscribers have received 95.8 billion messages from the broker with no data loss. I set my buffer size to 10,485,760. The CPU usage of the broker process holds at around 3800%, and memory usage is approximately 6.5 GB. The new code doesn't show a significant difference in performance compared to the previous code that used bufio. I look forward to someone simulating a scenario with a large number of client connections.

@mochi-co
Collaborator

mochi-co commented Nov 2, 2023

This is super fascinating, and while I've been very busy with work, I have been following it and I'm interested to see how it goes. I don't have any environments for testing at the sort of scale you are both using, so I'm very appreciative of the feedback and statistics!

@coveralls

coveralls commented Nov 23, 2023

Pull Request Test Coverage Report for Build 7159499807

  • 24 of 31 (77.42%) changed or added relevant lines in 1 file are covered.
  • No unchanged relevant lines lost coverage.
  • Overall coverage decreased (-0.1%) to 98.79%

Changes Missing Coverage:
  File        Covered Lines   Changed/Added Lines   %
  clients.go  24              31                    77.42%

Totals:
  Change from base Build 7076132597: -0.1%
  Covered Lines: 5470
  Relevant Lines: 5537

💛 - Coveralls

@thedevop
Collaborator Author

I made the following changes:

  1. Changed bconn from a ReadWriter to a Reader.
  2. Bytes written are counted when data is written to the write buffer. Writing to the buffer will never result in an error (unless the system is out of memory, in which case the app will be killed anyway).
  3. Errors are reported when writing to the wire.

With this, we don't have to make complicated changes to the BytesSent/PacketsSent metrics or the OnPacketSent hook.

Although my environment has tens of millions of clients, it's mostly client->server messages with very little server->client traffic, so I can't evaluate the true impact either.

@thedevop thedevop changed the title Dynamically allocate write buffer if needed. (NOT for merge) Dynamically allocate write buffer if needed. (ready for merge) Dec 9, 2023
@thedevop
Collaborator Author

thedevop commented Dec 9, 2023

@mochi-co , in the perf tests we've seen at least a 5% improvement in publishing rate for busy clients. It is ready to review/merge.

Collaborator

@mochi-co mochi-co left a comment

Generally this looks good to me - easy to understand code, nice and clean, and solves a problem. Love it! Approved

Member

@werbenhu werbenhu left a comment

@thedevop It makes sense to me. If the packet size exceeds ClientNetWriteBufferSize (default 2KB), the data is sent directly to the connection. Smaller packets share a write buffer, which reduces the frequency of write I/O. Excellent work! Approved.

@thedevop
Collaborator Author

@mochi-co , I don't think this will reach 6 reviewers, so when you have a chance, can you please merge? I have some follow-up changes that aren't related to this buffer but can improve the publishing rate, such as reducing payload writes during encoding, and using sync.Pool to reduce memory allocations/GCs during packet encoding.
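The sync.Pool idea mentioned above could look something like this. A minimal sketch only; the pool, the toy header format, and `encodePacket` are assumptions for illustration, not the follow-up PR's actual code:

```go
package main

import (
	"bytes"
	"fmt"
	"sync"
)

// encodePool reuses scratch buffers across packet encodes instead of
// allocating a fresh buffer per packet, cutting allocations and GC pressure.
var encodePool = sync.Pool{
	New: func() any { return new(bytes.Buffer) },
}

// encodePacket borrows a buffer, encodes a toy packet into it, and copies
// the result out before returning the buffer to the pool.
func encodePacket(payload []byte) []byte {
	buf := encodePool.Get().(*bytes.Buffer)
	buf.Reset()
	defer encodePool.Put(buf) // make the buffer reusable afterwards

	buf.WriteByte(0x30)               // e.g. a PUBLISH-style header byte
	buf.WriteByte(byte(len(payload))) // toy 1-byte remaining-length field
	buf.Write(payload)

	// Copy out: the pooled buffer's backing array is reused after Put.
	out := make([]byte, buf.Len())
	copy(out, buf.Bytes())
	return out
}

func main() {
	pkt := encodePacket([]byte("hello"))
	fmt.Println(len(pkt)) // 2 header bytes + 5 payload bytes = 7
}
```

The key caveat with pooled buffers is that callers must copy data out (or finish using it) before the buffer is returned, since the pool may hand the same backing array to another goroutine.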

@mochi-co
Collaborator

Yup looks good to me, just haven't had a chance - I'll merge and release now!

@mochi-co mochi-co merged commit dc4eecd into mochi-mqtt:main Dec 12, 2023
3 checks passed
@mochi-co
Collaborator

Released in v2.4.3, thank you @thedevop!
