Message rate out with batches is counting batches/s #466
Fixed in #462
Sorry for the confusion. I think I found the issue and the change needed. Will test the change tomorrow and raise a PR.
jai1 pushed a commit to jai1/pulsar that referenced this issue on Jun 15, 2017
jai1 added a commit that referenced this issue on Jun 15, 2017
hangc0276 pushed a commit to hangc0276/pulsar that referenced this issue on May 26, 2021
### Motivation

There are some problems with produce performance. The main problem is the pending produce queue. It holds multiple pending produce requests (`PendingProduce`), which wait until the `PersistentTopic` is ready and the `MemoryRecords` is encoded. First, the `PendingProduce`s wait on different futures of `PersistentTopic`. Second, encoding `MemoryRecords` is fast, so moving it to another thread is unnecessary and can add performance overhead.

### Modifications

1. Encode `MemoryRecords` in the same thread as `handleProduceRequest`.
2. Check whether the `CompletableFuture<PersistentTopic>` is done.
   - If it is done, publish the messages directly without pushing the pending produce requests to the queue.
   - Otherwise, reuse the previous `CompletableFuture<PersistentTopic>`. This is handled by `PendingTopicFutures`, which chains onto the previous `CompletableFuture<PersistentTopic>` via `thenApply` or `exceptionally`.
3. Add tests for `PendingTopicFutures`.
4. In `handleProduceRequest`, use a map from partition to response instead of a map from partition to response future.
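The fast-path check in point 2 can be sketched as below. This is a hypothetical simplification (the class name mirrors `PendingTopicFutures`, but the real Pulsar/KoP implementation differs; `String` stands in for `PersistentTopic`): if the topic future is already complete, publish inline with no queueing; otherwise chain onto the previous future with `thenApply` so requests stay ordered.

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.Consumer;

// Hypothetical sketch of the "check if the future is done" fast path.
public class PendingTopicFuturesSketch {
    // Stands in for CompletableFuture<PersistentTopic>.
    private CompletableFuture<String> currentFuture;

    public PendingTopicFuturesSketch(CompletableFuture<String> topicFuture) {
        this.currentFuture = topicFuture;
    }

    public void addListener(Consumer<String> publish) {
        if (currentFuture.isDone() && !currentFuture.isCompletedExceptionally()) {
            // Fast path: topic is ready, publish directly without queueing.
            publish.accept(currentFuture.join());
        } else {
            // Slow path: chain onto the previous future so that pending
            // publishes run in arrival order once the topic is ready.
            currentFuture = currentFuture.thenApply(topic -> {
                publish.accept(topic);
                return topic;
            });
        }
    }
}
```

One publish registered before the future completes runs when it completes; any publish added afterwards takes the direct path.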
Expected behavior
When publishing with transparent batching the message rate in is correctly
displayed as msg/s, but the rate out is just counting the batches.
The same thing happens with the backlog count: it should display an estimate of the
number of messages in the backlog rather than the number of batch entries.
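The fix described above amounts to incrementing the out-rate counter by the number of messages inside each dispatched batch entry instead of by one per entry. A minimal sketch (hypothetical class and method names; in Pulsar the batch size would come from the entry's batch metadata):

```java
import java.util.concurrent.atomic.LongAdder;

// Hypothetical rate accounting: count messages, not batch entries.
class RateOutSketch {
    private final LongAdder msgOut = new LongAdder();
    private final LongAdder bytesOut = new LongAdder();

    // numMessagesInBatch comes from the entry's metadata;
    // it is 1 for a non-batched entry.
    void recordEntryDispatched(int numMessagesInBatch, long entrySizeBytes) {
        msgOut.add(numMessagesInBatch);   // was effectively msgOut.add(1), i.e. batches/s
        bytesOut.add(entrySizeBytes);
    }

    long totalMessagesOut() {
        return msgOut.sum();
    }
}
```

Dispatching one batch of ten messages and one plain message should then report eleven messages out, not two.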