
[broker]Optimize topicMaxMessageSize with topic local cache. #12830

Merged · 2 commits merged into apache:master on Nov 30, 2021

Conversation

Jason918
Contributor

@Jason918 Jason918 commented Nov 16, 2021

Motivation

Currently, we update the value of Topic#topicMaxMessageSize at an interval of maxMessageSizeCheckIntervalInSeconds (default: 60s).
So when this value is changed, there is a delay of up to 60s (by default) before it takes effect.
We can reduce this latency by updating the value via notification instead.

Further, we can avoid the blank updates (policy re-reads that find nothing changed) in isExceedMaximumMessageSize.
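A hypothetical sketch (not the actual Pulsar code) of the interval-based refresh this PR removes: the cached limit is only re-read once the check interval has elapsed, which is where the up-to-60s delay comes from.

```java
import java.util.function.IntSupplier;

// Hypothetical sketch (not the actual Pulsar code) of the interval-based refresh:
// the cached limit is only re-read after the check interval elapses, so a policy
// change can take up to 60s (the default interval) to be observed.
class TopicMaxSizeCache {
    private final long checkIntervalMs;
    private long lastCheckTimestamp = 0;
    private int cachedMaxMessageSize;

    TopicMaxSizeCache(long checkIntervalMs, int initialMax) {
        this.checkIntervalMs = checkIntervalMs;
        this.cachedMaxMessageSize = initialMax;
    }

    // Returns true if the message is larger than the (possibly stale) cached limit.
    boolean isExceedMaximumMessageSize(int size, long nowMs, IntSupplier policyReader) {
        if (nowMs - lastCheckTimestamp > checkIntervalMs) {
            // Re-reads the policy on every interval, even when nothing changed.
            cachedMaxMessageSize = policyReader.getAsInt();
            lastCheckTimestamp = nowMs;
        }
        return cachedMaxMessageSize > 0 && size > cachedMaxMessageSize;
    }
}
```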

Modifications

  1. Move topicMaxMessageSize into HierarchyTopicPolicies, and update it via an onUpdate listener, like other topic policies.
  2. Deprecate maxMessageSizeCheckIntervalInSeconds, since it is no longer used.
  3. Remove lastTopicMaxMessageSizeCheckTimeStamp and topicMaxMessageSizeCheckIntervalMs, which are no longer used.
  4. Move the method registerTopicPolicyListener up to the AbstractTopic level, since topicMaxMessageSize also affects non-persistent topics.
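A minimal sketch of the listener-driven approach in modifications 1 and 4 (hypothetical names, not the real Pulsar classes): the topic registers a policy listener once, and the cached value is refreshed the moment a change notification arrives, with no polling interval.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Hypothetical sketch of listener-driven policy updates (not the real Pulsar classes).
class TopicPoliciesServiceSketch {
    private final List<Consumer<Integer>> listeners = new ArrayList<>();

    void registerTopicPolicyListener(Consumer<Integer> listener) {
        listeners.add(listener);
    }

    // Invoked when a topic-policy change notification arrives.
    void notifyMaxMessageSizeChanged(int newMax) {
        listeners.forEach(l -> l.accept(newMax));
    }
}

class AbstractTopicSketch {
    volatile int topicMaxMessageSize;

    AbstractTopicSketch(TopicPoliciesServiceSketch service, int initial) {
        this.topicMaxMessageSize = initial;
        // onUpdate: the cached value changes immediately, with no interval delay.
        service.registerTopicPolicyListener(v -> topicMaxMessageSize = v);
    }
}
```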

Verifying this change

  • Make sure that the change passes the CI checks.

This change added tests and can be verified as follows:


  • org.apache.pulsar.broker.admin.TopicPoliciesTest#testTopicMaxMessageSize, with an added test case for non-persistent topics.
  • Added org.apache.pulsar.broker.admin.TopicPoliciesTest#testTopicPolicyInitValue to cover the case where a topic loads old settings.

Does this pull request potentially affect one of the following parts:

If yes was chosen, please highlight the changes

  • Dependencies (does it add or upgrade a dependency): (no)
  • The public API: (no)
  • The schema: (no)
  • The default values of configurations: (no)
  • The wire protocol: (no)
  • The rest endpoints: (no)
  • The admin cli options: (no)
  • Anything that affects deployment: (no)

Documentation

Check the box below and label this PR (if you have committer privilege).

Need to update docs?

  • doc

Docs updated in the comments.
Also checked the old configuration doc: maxMessageSizeCheckIntervalInSeconds is not documented there, and there is no need to add it since it is deprecated.

@github-actions github-actions bot added the doc Your PR contains doc changes, no matter whether the changes are in markdown or code files. label Nov 16, 2021
@Jason918
Contributor Author

/pulsarbot run-failure-checks


@Jason918
Contributor Author

@codelipenghui @315157973 Please help take a look.

Contributor

@codelipenghui codelipenghui left a comment


I think you missed the updateTopicPolicyByBrokerConfig to apply the default broker configuration?

@codelipenghui codelipenghui added this to the 2.10.0 milestone Nov 22, 2021
@Jason918
Contributor Author

I think you missed the updateTopicPolicyByBrokerConfig to apply the default broker configuration?

Not really. You mean ServiceConfiguration#maxMessageSize, which takes effect in the Netty RPC layer.
See

brokerConf.getMaxMessageSize() + Commands.MESSAGE_SIZE_FRAME_PADDING, 0, 4, 0, 4));
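The quoted fragment configures Netty's LengthFieldBasedFrameDecoder(maxFrameLength, 0, 4, 0, 4), i.e. frames carry a 4-byte big-endian length prefix and anything over the broker-level limit is rejected at decode time. A dependency-free sketch of that size check (a hypothetical helper, not the Netty implementation):

```java
import java.nio.ByteBuffer;

// Hypothetical, dependency-free sketch of the check a length-field frame decoder
// performs (standing in for Netty's LengthFieldBasedFrameDecoder(max, 0, 4, 0, 4)):
// peek at the 4-byte big-endian length prefix and flag frames over the limit.
class FrameSizeCheck {
    static boolean frameTooLong(ByteBuffer buf, int maxFrameLength) {
        if (buf.remaining() < 4) {
            return false; // not enough bytes buffered yet to read the length field
        }
        int frameLength = buf.getInt(buf.position()); // absolute get: peek, don't consume
        return frameLength > maxFrameLength;
    }
}
```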


@codelipenghui
Contributor

Yes, but I think we should keep it consistent in the local topic policy cache. And as a next step, we can apply the PolicyHierarchyValue to get the topic policy.

@Jason918
Contributor Author

Yes, but I think we should keep it consistent in the local topic policy cache. And as a next step, we can apply the PolicyHierarchyValue to get the topic policy.

Great point. Added. And it's now easier to find out which policies have broker/ns/topic-level settings.

@Jason918
Contributor Author

/pulsarbot run-failure-checks

Contributor

@codelipenghui codelipenghui left a comment


Could you please show me where the PolicyHierarchyValue.brokerValue is updated?

@Jason918
Contributor Author

Could you please show me where the PolicyHierarchyValue.brokerValue is updated?

In the last line of the method updateTopicPolicyByBrokerConfig.
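The hierarchy discussed above can be sketched as a three-level value (hypothetical names, modeled on the PolicyHierarchyValue idea): the most specific configured level wins, and the broker default set from updateTopicPolicyByBrokerConfig is the final fallback.

```java
// Hypothetical sketch of a three-level policy value (modeled on the
// PolicyHierarchyValue idea, not the real Pulsar class): the most specific
// configured level wins, and the broker default is the final fallback.
class PolicyHierarchySketch {
    private volatile Integer topicValue;
    private volatile Integer namespaceValue;
    private volatile Integer brokerValue;

    void updateTopicValue(Integer v)     { topicValue = v; }
    void updateNamespaceValue(Integer v) { namespaceValue = v; }
    void updateBrokerValue(Integer v)    { brokerValue = v; }

    // Resolution order: topic > namespace > broker.
    Integer get() {
        if (topicValue != null)     return topicValue;
        if (namespaceValue != null) return namespaceValue;
        return brokerValue;
    }
}
```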

delete ServiceConfiguration#maxMessageSizeCheckIntervalInSeconds

set broker level value for TopicMaxMessageSize

fix test failure

Fix tests

fix test

use DataProvider in testTopicMaxMessageSize
@codelipenghui codelipenghui merged commit 1b20a97 into apache:master Nov 30, 2021
zeo1995 pushed a commit to zeo1995/pulsar that referenced this pull request Dec 1, 2021
fxbing pushed a commit to fxbing/pulsar that referenced this pull request Dec 19, 2021
Labels
doc Your PR contains doc changes, no matter whether the changes are in markdown or code files.
3 participants