
[ROCKETMQ-91] Reduce lock granularity for putMessage to get better performance #57

Closed

Conversation

dongeforever (Member):

Motivation

CommitLog.putMessage holds a lock around its core logic:

lockForPutMessage();
try {
    // ... encode the message and write it to the PageCache
} finally {
    releasePutMessageLock();
}

The logic inside the lock consists of two main operations:
1. encode the message
2. write it to the PageCache

The first operation (encoding the message) can be moved out of the lock to achieve better performance; a rough sketch of the idea follows.
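
This is a minimal illustration rather than the exact patch: encodeMessage and appendEncoded are hypothetical helpers standing in for the real encode and append paths.

// Minimal sketch, not the exact patch: encode outside the lock, append inside it.
public PutMessageResult putMessage(final MessageExtBrokerInner msg) {
    // 1. Encode the message outside the lock (CPU-bound, can run concurrently).
    //    encodeMessage is an illustrative helper, not a method in the original code.
    ByteBuffer encoded = encodeMessage(msg);

    // 2. Only the write to the PageCache is serialized.
    lockForPutMessage();
    try {
        MappedFile mappedFile = this.mappedFileQueue.getLastMappedFile();
        // appendEncoded is also illustrative; the real append path goes through
        // an AppendMessageCallback on the mapped file.
        AppendMessageResult result = appendEncoded(mappedFile, encoded);
        return new PutMessageResult(PutMessageStatus.PUT_OK, result);
    } finally {
        releasePutMessageLock();
    }
}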

Tests

Env

Linux, 24 cores, 48 GB memory, SSD
sendMessageThreadPoolNums was set to 5.
Five async producers were used for the stress test.
50-byte messages, sent one by one.

Results

Before the change: about 140,000 TPS (14w)
After the change: about 160,000 TPS (16w)
With the store layer commented out (testing only the network layer): about 170,000 TPS (17w)

The bottleneck now is in the network layer.

@zhouxinyu @vongosling @shroman, please review.

@dongeforever dongeforever changed the title [ROCKETMQ-91] Reduce lock granularity for putMessage [ROCKETMQ-91] Reduce lock granularity for putMessage to get better performance Feb 10, 2017
coveralls commented Feb 10, 2017

Coverage increased (+0.3%) to 31.614% when pulling b0e8f42 on dongeforever:up_tps into 7677758 on apache:master.

@@ -567,6 +577,11 @@ public PutMessageResult putMessage(final MessageExtBrokerInner msg) {
         MappedFile unlockMappedFile = null;
         MappedFile mappedFile = this.mappedFileQueue.getLastMappedFile();
 
+        //maybe need to wrap the exception
shroman (Contributor):
When encoding fails, let's notify the user of the failure with a PutMessageResult. There's nothing else we can do, I think.

Member:
Agree with @shroman .
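
A minimal sketch of that suggestion, assuming the encode step sits just before the lock; the PutMessageStatus value used here (MESSAGE_ILLEGAL) is an assumption, and the patch may choose a different status.

// Sketch of the suggested handling (assumed status value, illustrative encode helper):
// surface an encode failure to the caller as a PutMessageResult instead of
// letting a RuntimeException escape from putMessage.
ByteBuffer encoded;
try {
    encoded = encodeMessage(msg);
} catch (Exception e) {
    log.warn("putMessage: failed to encode message", e);
    return new PutMessageResult(PutMessageStatus.MESSAGE_ILLEGAL, null);
}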

@@ -1154,7 +1249,7 @@ public AppendMessageResult doAppend(final long fileFromOffset, final ByteBuffer
 
             if (propertiesLength > Short.MAX_VALUE) {
                 log.warn("putMessage message properties length too long. length={}", propertiesData.length);
-                return new AppendMessageResult(AppendMessageStatus.PROPERTIES_SIZE_EXCEEDED);
+                throw new RuntimeException("PROPERTIES_SIZE_EXCEEDED");
Contributor:
With this change, PROPERTIES_SIZE_EXCEEDED won't be caught in the following switch.
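
For context, a rough view of the caller-side handling this refers to (simplified from the putMessage flow; the real switch also handles other cases such as END_OF_FILE):

// The AppendMessageStatus returned by doAppend is translated to a PutMessageStatus
// in a switch like this, so a status that is thrown as a RuntimeException instead
// of being returned never reaches the mapping below.
AppendMessageResult result = mappedFile.appendMessage(msg, this.appendMessageCallback);
switch (result.getStatus()) {
    case PUT_OK:
        break;
    case MESSAGE_SIZE_EXCEEDED:
    case PROPERTIES_SIZE_EXCEEDED:
        return new PutMessageResult(PutMessageStatus.MESSAGE_ILLEGAL, result);
    case UNKNOWN_ERROR:
    default:
        return new PutMessageResult(PutMessageStatus.UNKNOWN_ERROR, result);
}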

@vongosling (Member):

LGTM

@coveralls commented:

Coverage increased (+0.2%) to 38.626% when pulling 0076cdb on dongeforever:up_tps into c0fe02e on apache:develop.


@asfgit asfgit closed this in a4ada22 Sep 20, 2017
lizhanhui pushed a commit to lizhanhui/rocketmq that referenced this pull request Dec 10, 2020