Enhancement/performance optimizations - phase 2 #481
Merged: tkountis merged 29 commits into hazelcast:master from puzpuzpuz:enhancement/output-queue-optimization on Jul 18, 2019
Conversation
puzpuzpuz force-pushed the enhancement/output-queue-optimization branch from 591534b to 4754e9f on June 15, 2019 06:05
…moves redundant conversions)
Also includes minor renaming
puzpuzpuz force-pushed the enhancement/output-queue-optimization branch from 7d4f3d6 to 0c2fd65 on June 16, 2019 18:28
puzpuzpuz force-pushed the enhancement/output-queue-optimization branch from 11b6c09 to eda965e on June 17, 2019 16:54
puzpuzpuz force-pushed the enhancement/output-queue-optimization branch from 6582e92 to 4ea7f6c on June 19, 2019 07:12
puzpuzpuz changed the title from "[WIP] Enhancement/performance optimizations - phase 2" to "Enhancement/performance optimizations - phase 2" on Jun 19, 2019
puzpuzpuz force-pushed the enhancement/output-queue-optimization branch from 4ea7f6c to accb6b3 on June 19, 2019 15:24
tkountis approved these changes on Jul 16, 2019:
LGTM. Good work!
mdumandag approved these changes on Jul 17, 2019:
Added a few minor comments. Looks good.
harunalpak pushed a commit to harunalpak/hazelcast-nodejs-client that referenced this pull request on Dec 8, 2022: Implementation of output queue for socket writes
Includes the following:

Automated pipelining

The idea is based on the write queue implemented in the DataStax Node.js Driver for Apache Cassandra (thanks @tkountis for finding this optimization and implementing the initial PoC). This optimization provides a significant throughput improvement (+25-30%) in read scenarios. However, throughput slightly decreases in write scenarios (-13-23%). In real-world scenarios (mixed, with more reads than writes) the benefit should still be significant.

The main benefit of this approach, compared with dedicated pipeline/batch operation APIs, is that there is no explicit API for client users. Library users don't need to change their application logic (and thus their source code) in order to benefit from automated pipelining. Whenever operations are started within the same event loop phase, the PipelinedWriter object will try to send their payloads in a batch.

The current threshold value for PipelinedWriter is set to 8 KB (as in DataStax's write queue). I tried higher values but saw no difference, so I ended up with the same default.

There is one important pitfall related to PipelinedWriter's behavior. As it concatenates buffers for multiple operations and flushes them in a single socket.write() call, the underlying runtime (network stack/kernel/OS) may send the first parts of the payload over the network earlier than later parts. So, payloads of the first operations within the batch may be sent over the network before the socket.write() callback is executed. As a result, the response for such an operation may arrive in socket.on('data') earlier than the promise resolve() calls for the last operations in the batch. In the case of the Hazelcast Node.js client library this behavior is acceptable: an operation's write promise (the one passed into the PipelinedWriter#write method) is used only to chain an error handler (see InvocationService#invoke*).
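The batching behavior described above can be sketched roughly as follows. This is a minimal illustration, not the client's actual implementation: the internal field names, the socket-like interface with write(buffer, callback), and the use of setImmediate for end-of-phase flushing are all assumptions; only the PipelinedWriter/write names and the 8 KB threshold come from the PR text.

```javascript
// Minimal sketch of automated pipelining (illustrative, not the real client code).
class PipelinedWriter {
  constructor(socket, threshold = 8192) { // 8 KB threshold, as in the PR
    this.socket = socket;
    this.threshold = threshold;
    this.queue = [];        // writes queued within the current event loop phase
    this.queuedBytes = 0;
    this.scheduled = false;
  }

  // Returns a promise used only to chain an error handler, as described above.
  write(buffer) {
    return new Promise((resolve, reject) => {
      this.queue.push({ buffer, resolve, reject });
      this.queuedBytes += buffer.length;
      if (this.queuedBytes >= this.threshold) {
        this.flush(); // batch is large enough: flush immediately
      } else if (!this.scheduled) {
        this.scheduled = true;
        // coalesce all writes started within the same event loop phase
        setImmediate(() => this.flush());
      }
    });
  }

  flush() {
    if (this.queue.length === 0) return;
    const batch = this.queue;
    this.queue = [];
    this.queuedBytes = 0;
    this.scheduled = false;
    // concatenate all queued payloads and issue a single socket.write() call
    const payload = Buffer.concat(batch.map((item) => item.buffer));
    this.socket.write(payload, (err) => {
      for (const item of batch) {
        err ? item.reject(err) : item.resolve();
      }
    });
  }
}
```

Note that this sketch also exhibits the pitfall discussed above: all per-operation promises settle in the single write callback, which may run after early parts of the payload have already reached the wire.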
Socket reads optimization

This one removes buffer allocation and copying where possible in socket.on('data') event handling, including the case when the payload is received within 2+ chunks (note: by default Node uses 64 KB for the TCP read chunk size). In that case, the FrameReader object caches chunks in an internal array and concatenates them only when enough data has been received.
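The chunk-caching idea can be sketched like this. Everything except the FrameReader name is a simplifying assumption: a hypothetical 4-byte little-endian length prefix stands in for the Hazelcast binary protocol's real frame layout, and the method names are illustrative.

```javascript
// Sketch of lazy chunk concatenation (illustrative frame format, not the
// actual Hazelcast binary protocol).
class FrameReader {
  constructor() {
    this.chunks = [];     // received chunks, cached without copying
    this.totalBytes = 0;
  }

  // Called from the socket.on('data') handler; no allocation happens here.
  append(chunk) {
    this.chunks.push(chunk);
    this.totalBytes += chunk.length;
  }

  // Returns the next complete frame's payload, or null if more data is
  // needed. Chunks are concatenated only once enough data has arrived.
  read() {
    if (this.totalBytes < 4) return null; // length prefix not complete yet
    const data = this.chunks.length === 1
      ? this.chunks[0]
      : Buffer.concat(this.chunks);
    const frameLength = data.readUInt32LE(0); // total length incl. prefix
    if (data.length < frameLength) return null; // wait for more chunks
    this.chunks = [data.subarray(frameLength)]; // keep any trailing bytes
    this.totalBytes = this.chunks[0].length;
    return data.subarray(4, frameLength);
  }
}
```

The point of the design is that Buffer.concat runs at most once per completed frame rather than once per received chunk.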
Benchmarks

You may see benchmark results for one of the intermediate commits within this PR here. I'm going to perform measurements for the latest commit later.
Further Optimizations

A couple of PRDs were created as a result of the work on this PR: