HDFS-15281. ZKFC ignores dfs.namenode.rpc-bind-host and uses dfs.namenode.rpc-address to bind to host address #1964
Conversation
💔 -1 overall
This message was automatically generated.
The patch looks good to me overall; I left some comments and will review this again. Thanks for the patch!
import org.apache.hadoop.conf.Configuration;
import static org.apache.hadoop.hdfs.DFSConfigKeys.*;


nit: this can be one blank line.
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class TestDFSZKFCRespectsBindHostKeys {
Alternatively, this test can go into TestDFSZKFailoverController? We may hopefully reuse the existing setup and shutdown methods there.
Fixed. I moved the test cases to TestDFSZKFailoverController.
    .setIpcPort(ServerSocketUtil.getPort(10022, 100))));
// ZKFC should not bind the wildcard address by default.
try {
nit: remove the blank lines within the try clause? The statements seem related. It's fine if you prefer keeping them.
nit: line length is usually 80 characters. You can check the checkstyle report from the QA comment, e.g. https://builds.apache.org/job/hadoop-multibranch/job/PR-1964/2/artifact/out/diff-checkstyle-root.txt
Fixed
DFSZKFailoverController zkfc = DFSZKFailoverController.create(
    conf);

assertThat("Bind address not expected to be wildcard by default.",
nit: assertEquals is simpler.
assertThat("Bind address not expected to be wildcard by default.",
LOCALHOST_SERVER_ADDRESS, zkfc.getRpcAddressToBindTo().getHostString());
Fixed.
.addNN(new MiniDFSNNTopology.NNConf("nn2")
    .setIpcPort(ServerSocketUtil.getPort(10022, 100))));
// ZKFC should not bind the wildcard address by default.
try {
If you like, you can use try-with on MiniDFSCluster, e.g.
try (MiniDFSCluster cluster = new MiniDFSCluster.Builder...) {
  // some stuff using cluster
}
and later
try (MiniDFSCluster cluster = new MiniDFSCluster.Builder...) {
  // some stuff using cluster
}
without name conflicts and with no need to reset cluster to null in between.
But if you move this to TestDFSZKFailoverController, we can split this test method into two methods and reuse the mini cluster defined there. I'm fine either way.
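For reference, the try-with-resources shape suggested above can be sketched like this. MiniDFSCluster implements AutoCloseable in recent Hadoop versions (worth verifying on your branch); the FakeCluster class below is a hypothetical stand-in used only so the pattern can be shown without the full mini-cluster harness:

```java
// Sketch of the try-with-resources pattern from the review comment.
// FakeCluster is a hypothetical stand-in for MiniDFSCluster, whose
// close() (assumed to delegate to shutdown()) runs even on exceptions.
public class TryWithResourcesSketch {

    static class FakeCluster implements AutoCloseable {
        boolean closed = false;

        void waitActive() {
            // stand-in for MiniDFSCluster#waitActive()
        }

        @Override
        public void close() {
            closed = true;  // a real cluster would shut down here
        }
    }

    static boolean runOnce() {
        FakeCluster c = new FakeCluster();
        try (FakeCluster cluster = c) {
            cluster.waitActive();
            // ... assertions using cluster ...
        }
        // No finally block and no resetting the variable to null:
        // close() has already been called when we reach this point.
        return c.closed;
    }

    public static void main(String[] args) {
        System.out.println(runOnce());  // true: the cluster was auto-closed
    }
}
```

A second try block later in the same method can declare its own `cluster` variable the same way, which is the "and later" case in the comment above.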
Went with the first suggestion.
 */
package org.apache.hadoop.hdfs.tools;

import static org.hamcrest.core.IsNot.not;
nit: not used?
Fixed
LOG.info("Testing with " + DFS_NAMENODE_RPC_BIND_HOST_KEY);

// Tell NN to bind the wildcard address.
conf.set(DFS_NAMENODE_RPC_BIND_HOST_KEY, WILDCARD_ADDRESS);
I'm thinking, since we care more about the service RPC binding host, this config should set only that:
conf.set(DFS_NAMENODE_SERVICE_RPC_BIND_HOST_KEY, WILDCARD_ADDRESS);
Is this agreed?
Fixed.
/** Given a configuration get the bind host that could be used by ZKFC.
 * We derive it from NN service rpc bind host or NN rpc bind host.
 */
protected String getZkfcServerBindHost(Configuration conf) {
This actually can be private static.
- First, it can be static because it does not refer to any fields of this ZKFC object.
- Other methods are protected, I guess, because they override parent class ZKFailoverController methods. When overriding, we cannot change the scope to a smaller one (aka weaker access privilege), so there we keep the protected keyword.
Fixed
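The fallback the new method implements (prefer the NN service RPC bind host, then fall back to the NN RPC bind host) can be sketched in isolation like this. A plain Map stands in for Hadoop's Configuration, the method is left package-private static so the sketch is testable (in DFSZKFailoverController it would be private static, as suggested), and the property-name strings are assumptions that should be checked against DFSConfigKeys:

```java
import java.util.HashMap;
import java.util.Map;

public class ZkfcBindHostSketch {
    // Property names as discussed in this PR; verify against DFSConfigKeys.
    static final String SERVICE_RPC_BIND_HOST = "dfs.namenode.servicerpc-bind-host";
    static final String RPC_BIND_HOST = "dfs.namenode.rpc-bind-host";

    // In DFSZKFailoverController this can be private static: it reads no
    // instance state and overrides nothing in ZKFailoverController.
    static String getZkfcServerBindHost(Map<String, String> conf) {
        String host = conf.get(SERVICE_RPC_BIND_HOST);
        if (host == null || host.isEmpty()) {
            host = conf.get(RPC_BIND_HOST);
        }
        return host;  // may be null; the caller then binds the RPC address
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put(RPC_BIND_HOST, "0.0.0.0");
        System.out.println(getZkfcServerBindHost(conf));  // 0.0.0.0

        conf.put(SERVICE_RPC_BIND_HOST, "127.0.0.1");
        System.out.println(getZkfcServerBindHost(conf));  // 127.0.0.1
    }
}
```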
Looks good to me overall. May need a clean QA run before committing. Test failures seem unrelated, but worth another look. We can also search recent builds to see if they are happening without this patch.
@@ -321,6 +321,7 @@ private void initHM() {

protected void initRPC() throws IOException {
  InetSocketAddress bindAddr = getRpcAddressToBindTo();
  LOG.info("ZKFC RpcServer binding to " + bindAddr);
Use the logger format with {}
Fixed
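The fix here is the SLF4J parameterized form, LOG.info("ZKFC RpcServer binding to {}", bindAddr);, which avoids string concatenation when the log level is disabled. As a stand-alone illustration of the {} substitution semantics, here is a hypothetical helper (not the real SLF4J MessageFormatter):

```java
public class BraceFormatSketch {
    // Replaces each "{}" with the next argument, mimicking SLF4J's
    // parameterized logging style: LOG.info("binding to {}", bindAddr);
    static String format(String pattern, Object... args) {
        StringBuilder sb = new StringBuilder();
        int argIdx = 0, from = 0, at;
        while ((at = pattern.indexOf("{}", from)) >= 0 && argIdx < args.length) {
            sb.append(pattern, from, at).append(args[argIdx++]);
            from = at + 2;  // skip past the "{}" placeholder
        }
        return sb.append(pattern.substring(from)).toString();
    }

    public static void main(String[] args) {
        // Prints: ZKFC RpcServer binding to 0.0.0.0:8019
        System.out.println(format("ZKFC RpcServer binding to {}", "0.0.0.0:8019"));
    }
}
```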
@@ -125,7 +128,21 @@ static int getZkfcPort(Configuration conf) {
  return conf.getInt(DFSConfigKeys.DFS_HA_ZKFC_PORT_KEY,
      DFSConfigKeys.DFS_HA_ZKFC_PORT_DEFAULT);
}

/** Given a configuration get the bind host that could be used by ZKFC.
Use standard javadoc with a line break, and add the @param and @return tags.
It kind of comes from HAUtil but let's make it right here.
Fixed. Added javadoc
...-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSZKFailoverController.java
.../hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSZKFCRespectsBindHostKeys.java
I'm a little confused by the @hadoop-yetus report... it looks like it is checking old code.
Yes, same here. CC: @aajisaka @hadoop-yetus. Basically, after the author updated the PR, Yetus was still generating reports using the old patch / commits. One way to confirm is that:
I have just uploaded the patch diff too, in case that helps kick off a better build.
…ost address (#1964) Contributed by Dhiraj Hegde. Signed-off-by: Mingliang Liu <liuml07@apache.org> Signed-off-by: Inigo Goiri <inigoiri@apache.org>
Details of the issue are here: https://issues.apache.org/jira/browse/HDFS-15281
NOTICE
Please create an issue in ASF JIRA before opening a pull request, and set the title of the pull request to start with the corresponding JIRA issue number (e.g. HADOOP-XXXXX. Fix a typo in YYY.)
For more details, please see https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute