
HBASE-25167 Normalizer support for hot config reloading#2523

Merged
ndimiduk merged 1 commit into apache:master from ndimiduk:25167-normalizer-config-hot-reload
Oct 30, 2020

Conversation

@ndimiduk (Member) commented Oct 9, 2020

Wire up the ConfigurationObserver chain for RegionNormalizerManager. The following configuration keys support hot-reloading:

  • hbase.normalizer.throughput.max_bytes_per_sec
  • hbase.normalizer.split.enabled
  • hbase.normalizer.merge.enabled
  • hbase.normalizer.min.region.count
  • hbase.normalizer.merge.min_region_age.days
  • hbase.normalizer.merge.min_region_size.mb

Note that support for hbase.normalizer.period is not provided here. Support would need to be implemented generally for the Chore subsystem.
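
For context, a minimal sketch of what wiring up the chain looks like, assuming a simplified shape of RegionNormalizerManager (the collaborator fields here are illustrative, not the exact patch): the manager implements ConfigurationObserver and forwards the reloaded Configuration to the components it owns, which then re-parse the keys listed above.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.conf.ConfigurationObserver;

// Simplified illustration only: the real RegionNormalizerManager owns more
// collaborators and registers itself with the master's configuration machinery.
public class RegionNormalizerManagerSketch implements ConfigurationObserver {

  private final ConfigurationObserver workerObserver;     // e.g. the normalizer worker / rate limiter
  private final ConfigurationObserver normalizerObserver; // e.g. SimpleRegionNormalizer

  public RegionNormalizerManagerSketch(ConfigurationObserver workerObserver,
      ConfigurationObserver normalizerObserver) {
    this.workerObserver = workerObserver;
    this.normalizerObserver = normalizerObserver;
  }

  @Override
  public void onConfigurationChange(Configuration conf) {
    // Propagate the freshly reloaded configuration down the chain so each
    // component re-parses only the keys it cares about.
    workerObserver.onConfigurationChange(conf);
    normalizerObserver.onConfigurationChange(conf);
  }
}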

/** Ensure configuration changes are applied atomically. */
private final ReadWriteLock configUpdateLock = new ReentrantReadWriteLock();
@GuardedBy("configUpdateLock") private Configuration conf;
Contributor

This is cool. I believe this will help us avoid partial config update!

Contributor

Indeed! Wondering if we can ensure all implementors of ConfigurationObserver start using such a lock for atomic updates of non-final fields (not as part of this Jira, of course :) )

Member Author

I'm not sure about "all" uses; it just seemed prudent for this one. I'm also not sure our static analysis tools honor this GuardedBy annotation; I have a TODO to track this down and see whether it's really supported.

Contributor

I learned about GuardedBy for the first time on this PR, so I am also not sure whether the static analysis tools need extra work to enforce it.

Member Author

It seems SpotBugs does have a bug description for the improper use of GuardedBy, https://spotbugs.readthedocs.io/en/latest/bugDescriptions.html#is-field-not-guarded-against-concurrent-access-is-field-not-guarded. I wonder if it's actually implemented ;)
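
For reference, a minimal sketch of the access pattern the annotation documents (the @GuardedBy flavor and class shape here are assumptions, not the patch itself): every access to conf happens while holding configUpdateLock, and an unguarded access is what a checker honoring the annotation, such as the SpotBugs IS_FIELD_NOT_GUARDED pattern linked above, ought to flag.

import java.util.concurrent.locks.ReentrantReadWriteLock;
import javax.annotation.concurrent.GuardedBy; // assumption: any @GuardedBy flavor; the patch may use a different package
import org.apache.hadoop.conf.Configuration;

class GuardedByExample {
  private final ReentrantReadWriteLock configUpdateLock = new ReentrantReadWriteLock();

  @GuardedBy("configUpdateLock")
  private Configuration conf;

  Configuration guardedRead() {
    configUpdateLock.readLock().lock();
    try {
      return conf; // access under the declared guard: fine
    } finally {
      configUpdateLock.readLock().unlock();
    }
  }

  Configuration unguardedRead() {
    // Access without holding the guard: this is what a checker that honors
    // @GuardedBy should flag.
    return conf;
  }
}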

@ndimiduk ndimiduk requested a review from anoopsjohn October 12, 2020 23:24
@ndimiduk (Member Author)

I think the failure in TestAsyncTable#testCheckAndIncrement is unrelated. Anyway, it would be nicer if that test printed the exception with the underlying failure instead of just assertTrue(result.isSuccess()).

@virajjasani (Contributor) commented Oct 13, 2020

We used to have two types of hot config reload:

  1. _switch commands (e.g. balancer_switch), which use a znode to store the on/off value; even after a master failover, the new active master can read the value from the znode and stay up-to-date despite a stale value in hbase-site.xml.
  2. ConfigurationObserver with an implementation of onConfigurationChange(), which requires the operator to update the new config value on all desired servers, e.g. all masters (active + backup), and then use the config reload commands update_config or update_all_config.

Since I haven't used PropagatingConfigurationObserver (looking forward to digging deeper), one question: does it also require updating configs on disk so that, after a master failover, the next active master reloads the config from disk, or is there some rendezvous like ZK under the hood that makes the on-disk change unnecessary?
I am also not a fan of adding another new znode; I'm just trying to understand the crux of hot config reload and its impact when a stale config value is present on a backup master.

@ndimiduk (Member Author)

@virajjasani to the best of my knowledge, the ConfigurationObserver classes facilitate reloading values from the configuration on disk. I think the workflow is: (1) the operator writes new config values to disk, then (2) the operator triggers a configuration reload in the running process. As far as I know, there's no ZK involved in this stuff. Certainly, there's no new znode defined by this patch.

FYI, the only "configuration" we store in ZK, as far as I know, is for replication. The _switch commands are simple booleans only.
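
To make step (2) concrete, a minimal sketch of triggering the reload programmatically via the Admin API (connection details and the ServerName values are placeholders); the update_config / update_all_config shell commands drive the same server-side reload:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class ReloadConfigExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Admin admin = connection.getAdmin()) {
      // Ask every server in the cluster to re-read its local hbase-site.xml.
      admin.updateConfiguration();
      // Or target a single server, e.g. the active master (host/port/startcode are placeholders).
      admin.updateConfiguration(ServerName.valueOf("master-host.example.com", 16000, 1602000000000L));
    }
  }
}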

@virajjasani (Contributor) commented Oct 14, 2020

The _switch commands are simple booleans only.

That's true, the _switch commands store just a boolean, so that's not a big concern. What I do like about the znode approach is that the operator doesn't have to update on-disk configs on a live cluster (specifically, if the cluster is already going through some sort of disaster, an overwhelmed operator might forget to update the disk config on some server). On the other hand, it adds a new znode dependency.
As for the rate limiter, since we expect changes to the configs controlling how many bytes' worth of regions get normalized under the rate limit, I think the current approach should be fine.
Thanks, taking a look.

/**
 * Return this instance's configured value for {@value #MERGE_MIN_REGION_SIZE_MB_KEY}.
 */
public int getMergeMinRegionSizeMb() {
Contributor

Last 3 getters should have @VisibleForTesting?

Member Author

In light of HBASE-24640, I didn't want to introduce any new uses of this annotation.



final Lock readLock = configUpdateLock.readLock();
final Lock writeLock = configUpdateLock.writeLock();
writeLock.lock(); // "a writer can acquire the read lock, but not vice-versa."
readLock.lock();
Contributor

Why is readLock.lock() called here? Thought writeLock is enough.

Contributor

I thought the same initially, but then realized that we don't want half-updated configs when we read them with getConf() and make half-correct decisions. Plus, since this is an operator-triggered action rather than a self-triggered one, it happens only once in a while (in case you are more worried about threads reading the conf getting blocked on the readLock).

Still, let's wait for @ndimiduk's response in case I've missed some improvement here.

Member Author

Why is readLock.lock() called here? Thought writeLock is enough.

Initially I thought so too, but the docs on ReadWriteLock never explicitly say that the write lock is exclusive with respect to both readers and writers, only that it is exclusive. The best I can find is from Java Concurrency in Practice, which says (emphasis mine):

... Mutual exclusion is a conservative locking strategy that prevents writer/writer and writer/reader overlap, but also prevents reader/reader overlap. In many cases, data structures are “read-mostly”—they are mutable and are sometimes modified, but most accesses involve only reading. In these cases, it would be nice to relax the locking requirements to allow multiple readers to access the data structure at once. As long as each thread is guaranteed an up-to-date view of the data and no other thread modifies the data while the readers are viewing it, there will be no problems. This is what read-write locks allow: a resource can be accessed by multiple readers or a single writer at a time, but not both.

So I think you're correct @huaxiangsun, the write lock should be sufficient.

Contributor

As long as each thread is guaranteed an up-to-date view of the data and no other thread modifies the data while the readers are viewing it, there will be no problems.

Don't we need readLock() + writeLock() to achieve this? How else do we ensure writers aren't updating while readers are reading the conf?

Member Author

To me, the key is the next sentence: "This is what read-write locks allow: a resource can be accessed by multiple readers or a single writer at a time, but not both." That means the thread taking the writeLock has exclusive access, so the writeLock should be enough.

Member Author

But test output suggests otherwise :)

Member Author

Or I just did it wrong.

@ndimiduk ndimiduk force-pushed the 25167-normalizer-config-hot-reload branch from 557c6c9 to ef7269f on October 21, 2020 18:24
@ndimiduk (Member Author)

Rebase and remove taking the ReadLock from SimpleRegionNormalizer#setConf.
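
Roughly the shape after this change, as a sketch (the keys come from the PR description, but defaults and field names are placeholders, not the exact diff): new values are parsed before locking, and only the write lock is taken for the swap, since holding it already excludes readers.

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;
import org.apache.hadoop.conf.Configuration;

class SetConfSketch {
  private final ReadWriteLock configUpdateLock = new ReentrantReadWriteLock();
  private Configuration conf;
  private boolean splitEnabled;
  private boolean mergeEnabled;

  public void setConf(final Configuration newConf) {
    if (newConf == null) {
      return;
    }
    // Parse outside the critical section; defaults here are placeholders.
    final boolean newSplitEnabled = newConf.getBoolean("hbase.normalizer.split.enabled", true);
    final boolean newMergeEnabled = newConf.getBoolean("hbase.normalizer.merge.enabled", true);

    // The write lock is exclusive: no thread can hold the read lock concurrently,
    // so the extra readLock.lock() from the earlier revision is unnecessary.
    final Lock writeLock = configUpdateLock.writeLock();
    writeLock.lock();
    try {
      this.conf = newConf;
      this.splitEnabled = newSplitEnabled;
      this.mergeEnabled = newMergeEnabled;
    } finally {
      writeLock.unlock();
    }
  }
}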


@ndimiduk ndimiduk force-pushed the 25167-normalizer-config-hot-reload branch 2 times, most recently from e1a733f to 8a71147 on October 21, 2020 22:52

mergeMinRegionSizeMb = parseMergeMinRegionSizeMb(conf);

final Lock writeLock = configUpdateLock.writeLock();
writeLock.lock();
Contributor

Stumbled upon this change. I think there is a much simpler way to achieve this without locks, with fewer lines of code (and cleaner): factor all of the configs into a single object (with appropriate getters and setters if needed), something like,

static class NormalizerConfig {
  private final Configuration conf;
  private final boolean splitEnabled;
  private final boolean mergeEnabled;
  private final Period mergeMinRegionAge;
  private final int mergeMinRegionSizeMb;
  .......

  static NormalizerConfig parseFromConfig(Configuration conf) { ... }
}

private NormalizerConfig normalizerConf;

public void setConf(final Configuration conf) {
  normalizerConf = NormalizerConfig.parseFromConfig(conf);
}

public boolean isSplitEnabled() {
  return normalizerConf.isSplitEnabled();
}

Reference assignment is atomic, so even if multiple threads call setConf(conf), each thread runs its own parseFromConfig() in its own context, constructs the whole object, and the reference assignment works cleanly. On the reader side, the value is returned from whichever reference is in use at that point (e.g. isSplitEnabled() above).

The advantage of using the locks is the memory ordering they enforce in methods like isSplitEnabled(): we essentially block until the reference is updated. I don't think that is a requirement here, because we don't guarantee callers of these methods (like computePlansForTable()) that they will see the latest config while an update is in progress (we can't guarantee that level of ordering anyway). The point is that the approach above gets rid of most of the code and is still not racy. WDYT?
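
A self-contained version of that sketch for reference, with the reference made volatile so readers are also guaranteed to see a fully constructed snapshot (class name, keys, and defaults here are illustrative, not the merged code):

import java.time.Period;
import org.apache.hadoop.conf.Configuration;

class AtomicConfigSwapSketch {

  /** Immutable snapshot of normalizer settings, re-parsed once per reload. */
  static final class NormalizerConfig {
    private final boolean splitEnabled;
    private final Period mergeMinRegionAge;

    private NormalizerConfig(Configuration conf) {
      // Keys come from the PR description; defaults are placeholders.
      this.splitEnabled = conf.getBoolean("hbase.normalizer.split.enabled", true);
      this.mergeMinRegionAge =
          Period.ofDays(conf.getInt("hbase.normalizer.merge.min_region_age.days", 3));
    }

    static NormalizerConfig parseFromConfig(Configuration conf) {
      return new NormalizerConfig(conf);
    }

    boolean isSplitEnabled() {
      return splitEnabled;
    }
  }

  // volatile guarantees visibility of the new snapshot; the reference swap itself is atomic.
  private volatile NormalizerConfig normalizerConf = NormalizerConfig.parseFromConfig(new Configuration());

  public void setConf(final Configuration conf) {
    normalizerConf = NormalizerConfig.parseFromConfig(conf);
  }

  public boolean isSplitEnabled() {
    return normalizerConf.isSplitEnabled();
  }
}

Each reload builds a complete new snapshot, so readers see either the old config or the new one, never a mix.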

Member Author

Sure @bharathv, I think that's a nice suggestion. I agree that the strict ordering guarantees provided by explicit locking are not needed here. I've pushed a new commit that unwinds the locking and uses atomic instance assignment as you suggest. Let me know what you think.

Contributor

This is cool, much simpler!


@ndimiduk ndimiduk force-pushed the 25167-normalizer-config-hot-reload branch from 8a71147 to c8ecefb on October 29, 2020 18:54
@Apache-HBase

🎊 +1 overall

Vote Subsystem Runtime Comment
+0 🆗 reexec 0m 44s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 0s No case conflicting files found.
+1 💚 hbaseanti 0m 0s Patch does not have any anti-patterns.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
_ master Compile Tests _
+0 🆗 mvndep 0m 32s Maven dependency ordering for branch
+1 💚 mvninstall 4m 9s master passed
+1 💚 checkstyle 1m 46s master passed
+1 💚 spotbugs 3m 6s master passed
_ Patch Compile Tests _
+0 🆗 mvndep 0m 13s Maven dependency ordering for patch
+1 💚 mvninstall 3m 51s the patch passed
+1 💚 checkstyle 0m 24s hbase-common: The patch generated 0 new + 0 unchanged - 1 fixed = 0 total (was 1)
+1 💚 checkstyle 1m 15s The patch passed checkstyle in hbase-server
+1 💚 whitespace 0m 0s The patch has no whitespace issues.
+1 💚 hadoopcheck 19m 48s Patch does not cause any errors with Hadoop 3.1.2 3.2.1 3.3.0.
+1 💚 spotbugs 3m 35s the patch passed
_ Other Tests _
+1 💚 asflicense 0m 23s The patch does not generate ASF License warnings.
48m 30s
Subsystem Report/Notes
Docker Client=19.03.13 Server=19.03.13 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2523/7/artifact/yetus-general-check/output/Dockerfile
GITHUB PR #2523
Optional Tests dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle
uname Linux db187a2cc5e7 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/hbase-personality.sh
git revision master / 259fe19
Max. process+thread count 94 (vs. ulimit of 30000)
modules C: hbase-common hbase-server U: .
Console output https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2523/7/console
versions git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) spotbugs=3.1.12
Powered by Apache Yetus 0.11.1 https://yetus.apache.org

This message was automatically generated.

@bharathv (Contributor) left a comment

lgtm. Thanks.

Wire up the `ConfigurationObserver` chain for
`RegionNormalizerManager`. The following configuration keys support
hot-reloading:
 * hbase.normalizer.throughput.max_bytes_per_sec
 * hbase.normalizer.split.enabled
 * hbase.normalizer.merge.enabled
 * hbase.normalizer.min.region.count
 * hbase.normalizer.merge.min_region_age.days
 * hbase.normalizer.merge.min_region_size.mb

Note that support for `hbase.normalizer.period` is not provided
here. Support would need to be implemented generally for the `Chore`
subsystem.
@ndimiduk ndimiduk force-pushed the 25167-normalizer-config-hot-reload branch from c8ecefb to fe89fb0 on October 29, 2020 21:03
@Apache-HBase

🎊 +1 overall

Vote Subsystem Runtime Comment
+0 🆗 reexec 0m 26s Docker mode activated.
-0 ⚠️ yetus 0m 3s Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck
_ Prechecks _
_ master Compile Tests _
+0 🆗 mvndep 0m 21s Maven dependency ordering for branch
+1 💚 mvninstall 4m 12s master passed
+1 💚 compile 1m 33s master passed
+1 💚 shadedjars 6m 41s branch has no errors when building our shaded downstream artifacts.
+1 💚 javadoc 1m 7s master passed
_ Patch Compile Tests _
+0 🆗 mvndep 0m 16s Maven dependency ordering for patch
+1 💚 mvninstall 4m 4s the patch passed
+1 💚 compile 1m 30s the patch passed
+1 💚 javac 1m 30s the patch passed
+1 💚 shadedjars 6m 39s patch has no errors when building our shaded downstream artifacts.
+1 💚 javadoc 1m 5s the patch passed
_ Other Tests _
+1 💚 unit 1m 46s hbase-common in the patch passed.
+1 💚 unit 136m 6s hbase-server in the patch passed.
168m 10s
Subsystem Report/Notes
Docker Client=19.03.13 Server=19.03.13 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2523/7/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
GITHUB PR #2523
Optional Tests javac javadoc unit shadedjars compile
uname Linux a2a9b0555579 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/hbase-personality.sh
git revision master / 259fe19
Default Java 2020-01-14
Test Results https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2523/7/testReport/
Max. process+thread count 3733 (vs. ulimit of 30000)
modules C: hbase-common hbase-server U: .
Console output https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2523/7/console
versions git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f)
Powered by Apache Yetus 0.11.1 https://yetus.apache.org

This message was automatically generated.

@Apache-HBase

🎊 +1 overall

Vote Subsystem Runtime Comment
+0 🆗 reexec 0m 27s Docker mode activated.
-0 ⚠️ yetus 0m 4s Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck
_ Prechecks _
_ master Compile Tests _
+0 🆗 mvndep 0m 23s Maven dependency ordering for branch
+1 💚 mvninstall 3m 44s master passed
+1 💚 compile 1m 19s master passed
+1 💚 shadedjars 6m 28s branch has no errors when building our shaded downstream artifacts.
+1 💚 javadoc 1m 1s master passed
_ Patch Compile Tests _
+0 🆗 mvndep 0m 17s Maven dependency ordering for patch
+1 💚 mvninstall 3m 28s the patch passed
+1 💚 compile 1m 24s the patch passed
+1 💚 javac 1m 24s the patch passed
+1 💚 shadedjars 6m 31s patch has no errors when building our shaded downstream artifacts.
+1 💚 javadoc 1m 0s the patch passed
_ Other Tests _
+1 💚 unit 1m 32s hbase-common in the patch passed.
+1 💚 unit 139m 42s hbase-server in the patch passed.
169m 47s
Subsystem Report/Notes
Docker Client=19.03.13 Server=19.03.13 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2523/7/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
GITHUB PR #2523
Optional Tests javac javadoc unit shadedjars compile
uname Linux d3193a0f0a64 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/hbase-personality.sh
git revision master / 259fe19
Default Java 1.8.0_232
Test Results https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2523/7/testReport/
Max. process+thread count 4393 (vs. ulimit of 30000)
modules C: hbase-common hbase-server U: .
Console output https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2523/7/console
versions git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f)
Powered by Apache Yetus 0.11.1 https://yetus.apache.org

This message was automatically generated.

@Apache-HBase

🎊 +1 overall

Vote Subsystem Runtime Comment
+0 🆗 reexec 0m 55s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 0s No case conflicting files found.
+1 💚 hbaseanti 0m 0s Patch does not have any anti-patterns.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
_ master Compile Tests _
+0 🆗 mvndep 0m 14s Maven dependency ordering for branch
+1 💚 mvninstall 3m 50s master passed
+1 💚 checkstyle 1m 53s master passed
+1 💚 spotbugs 3m 1s master passed
_ Patch Compile Tests _
+0 🆗 mvndep 0m 13s Maven dependency ordering for patch
+1 💚 mvninstall 4m 18s the patch passed
+1 💚 checkstyle 0m 25s hbase-common: The patch generated 0 new + 0 unchanged - 1 fixed = 0 total (was 1)
+1 💚 checkstyle 1m 13s The patch passed checkstyle in hbase-server
+1 💚 whitespace 0m 0s The patch has no whitespace issues.
+1 💚 hadoopcheck 20m 23s Patch does not cause any errors with Hadoop 3.1.2 3.2.1 3.3.0.
+1 💚 spotbugs 3m 32s the patch passed
_ Other Tests _
+1 💚 asflicense 0m 22s The patch does not generate ASF License warnings.
49m 22s
Subsystem Report/Notes
Docker Client=19.03.13 Server=19.03.13 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2523/8/artifact/yetus-general-check/output/Dockerfile
GITHUB PR #2523
Optional Tests dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle
uname Linux 29e758b2d216 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/hbase-personality.sh
git revision master / 12d0397
Max. process+thread count 94 (vs. ulimit of 30000)
modules C: hbase-common hbase-server U: .
Console output https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2523/8/console
versions git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) spotbugs=3.1.12
Powered by Apache Yetus 0.11.1 https://yetus.apache.org

This message was automatically generated.

@Apache-HBase

🎊 +1 overall

Vote Subsystem Runtime Comment
+0 🆗 reexec 0m 27s Docker mode activated.
-0 ⚠️ yetus 0m 4s Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck
_ Prechecks _
_ master Compile Tests _
+0 🆗 mvndep 0m 26s Maven dependency ordering for branch
+1 💚 mvninstall 3m 23s master passed
+1 💚 compile 1m 18s master passed
+1 💚 shadedjars 6m 35s branch has no errors when building our shaded downstream artifacts.
+1 💚 javadoc 0m 58s master passed
_ Patch Compile Tests _
+0 🆗 mvndep 0m 16s Maven dependency ordering for patch
+1 💚 mvninstall 3m 28s the patch passed
+1 💚 compile 1m 19s the patch passed
+1 💚 javac 1m 19s the patch passed
+1 💚 shadedjars 6m 34s patch has no errors when building our shaded downstream artifacts.
+1 💚 javadoc 0m 57s the patch passed
_ Other Tests _
+1 💚 unit 1m 23s hbase-common in the patch passed.
+1 💚 unit 140m 12s hbase-server in the patch passed.
169m 47s
Subsystem Report/Notes
Docker Client=19.03.13 Server=19.03.13 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2523/8/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
GITHUB PR #2523
Optional Tests javac javadoc unit shadedjars compile
uname Linux dc74a265b670 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/hbase-personality.sh
git revision master / 12d0397
Default Java 1.8.0_232
Test Results https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2523/8/testReport/
Max. process+thread count 5122 (vs. ulimit of 30000)
modules C: hbase-common hbase-server U: .
Console output https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2523/8/console
versions git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f)
Powered by Apache Yetus 0.11.1 https://yetus.apache.org

This message was automatically generated.

@Apache-HBase

🎊 +1 overall

Vote Subsystem Runtime Comment
+0 🆗 reexec 0m 39s Docker mode activated.
-0 ⚠️ yetus 0m 2s Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck
_ Prechecks _
_ master Compile Tests _
+0 🆗 mvndep 0m 14s Maven dependency ordering for branch
+1 💚 mvninstall 4m 52s master passed
+1 💚 compile 1m 50s master passed
+1 💚 shadedjars 8m 21s branch has no errors when building our shaded downstream artifacts.
+1 💚 javadoc 1m 13s master passed
_ Patch Compile Tests _
+0 🆗 mvndep 0m 17s Maven dependency ordering for patch
+1 💚 mvninstall 4m 58s the patch passed
+1 💚 compile 1m 53s the patch passed
+1 💚 javac 1m 53s the patch passed
+1 💚 shadedjars 8m 29s patch has no errors when building our shaded downstream artifacts.
+1 💚 javadoc 1m 16s the patch passed
_ Other Tests _
+1 💚 unit 2m 1s hbase-common in the patch passed.
+1 💚 unit 144m 3s hbase-server in the patch passed.
182m 40s
Subsystem Report/Notes
Docker Client=19.03.13 Server=19.03.13 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2523/8/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
GITHUB PR #2523
Optional Tests javac javadoc unit shadedjars compile
uname Linux e229dac2f5b8 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/hbase-personality.sh
git revision master / 12d0397
Default Java 2020-01-14
Test Results https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2523/8/testReport/
Max. process+thread count 3894 (vs. ulimit of 30000)
modules C: hbase-common hbase-server U: .
Console output https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2523/8/console
versions git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f)
Powered by Apache Yetus 0.11.1 https://yetus.apache.org

This message was automatically generated.

@ndimiduk ndimiduk merged commit d790bde into apache:master Oct 30, 2020
@ndimiduk ndimiduk deleted the 25167-normalizer-config-hot-reload branch October 30, 2020 17:42
ndimiduk added a commit to ndimiduk/hbase that referenced this pull request Oct 30, 2020
Wire up the `ConfigurationObserver` chain for
`RegionNormalizerManager`. The following configuration keys support
hot-reloading:
 * hbase.normalizer.throughput.max_bytes_per_sec
 * hbase.normalizer.split.enabled
 * hbase.normalizer.merge.enabled
 * hbase.normalizer.min.region.count
 * hbase.normalizer.merge.min_region_age.days
 * hbase.normalizer.merge.min_region_size.mb

Note that support for `hbase.normalizer.period` is not provided
here. Support would need to be implemented generally for the `Chore`
subsystem.

Signed-off-by: Bharath Vissapragada <bharathv@apache.org>
Signed-off-by: Viraj Jasani <vjasani@apache.org>
Signed-off-by: Aman Poonia <aman.poonia.29@gmail.com>
ndimiduk added a commit that referenced this pull request Oct 30, 2020
Wire up the `ConfigurationObserver` chain for
`RegionNormalizerManager`. The following configuration keys support
hot-reloading:
 * hbase.normalizer.throughput.max_bytes_per_sec
 * hbase.normalizer.split.enabled
 * hbase.normalizer.merge.enabled
 * hbase.normalizer.min.region.count
 * hbase.normalizer.merge.min_region_age.days
 * hbase.normalizer.merge.min_region_size.mb

Note that support for `hbase.normalizer.period` is not provided
here. Support would need to be implemented generally for the `Chore`
subsystem.

Signed-off-by: Bharath Vissapragada <bharathv@apache.org>
Signed-off-by: Viraj Jasani <vjasani@apache.org>
Signed-off-by: Aman Poonia <aman.poonia.29@gmail.com>
clarax pushed a commit to clarax/hbase that referenced this pull request Nov 15, 2020
Wire up the `ConfigurationObserver` chain for
`RegionNormalizerManager`. The following configuration keys support
hot-reloading:
 * hbase.normalizer.throughput.max_bytes_per_sec
 * hbase.normalizer.split.enabled
 * hbase.normalizer.merge.enabled
 * hbase.normalizer.min.region.count
 * hbase.normalizer.merge.min_region_age.days
 * hbase.normalizer.merge.min_region_size.mb

Note that support for `hbase.normalizer.period` is not provided
here. Support would need to be implemented generally for the `Chore`
subsystem.

Signed-off-by: Bharath Vissapragada <bharathv@apache.org>
Signed-off-by: Viraj Jasani <vjasani@apache.org>
Signed-off-by: Aman Poonia <aman.poonia.29@gmail.com>