HBASE-24055 Make AsyncFSWAL can run on EC cluster #1437
Conversation
🎊 +1 overall
🎊 +1 overall
🎊 +1 overall
OK, good. All green. Can I get a review? @saintstack @ndimiduk I think this should be included in 2.3. @infraio Do you think this should also go into 2.2.x?
Changes look good. `wal` package tests in general all pass. This one specifically passes without triggering the assumption on JDK11+Hadoop3, BUT it does trigger the assumption and skip on JDK8+Hadoop2. I believe this is to be expected, as the EC feature added in HDFS-7285 has a fixVersion of 3.0.0-alpha1.
```java
try {
  return CreateFlag.valueOf("SHOULD_REPLICATE");
} catch (IllegalArgumentException e) {
  LOG.debug("can not find SHOULD_REPLICATE flag, should be hadoop 2.x", e);
}
```
Should this be logged at `warn` instead?
I originally made all the adaptor code log at WARN, but that proved annoying for most users, so we changed them all to debug...
This happens every time we open a file in hadoop2? Or just once on class loading? (Looks like the latter).
This is how we test if the SHOULD_REPLICATE flag is available? It is not available if hadoop2? If so, can we have a comment on the method to this effect? The log is a little confusing. Suggest: "SHOULD_REPLICATE is not available; this is a problem if we are running on hadoop3 (it's expected if hadoop2)".
Only once when loading.
All the other log messages are just like this, so do not want to only change this one. Let me add some comments on the field.
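The pattern settled on above — probing for an enum constant once at class-initialization time and caching the result in a static field — can be sketched with plain JDK types. This is an illustration only, not HBase code: `FlagProbe` is a hypothetical name, and a JDK enum (`StandardCopyOption`) stands in for hadoop's `CreateFlag` so the sketch compiles without hadoop on the classpath.

```java
import java.nio.file.StandardCopyOption;

// Hypothetical sketch of the capability probe discussed above. On hadoop 2
// the SHOULD_REPLICATE constant does not exist, so CreateFlag.valueOf throws
// IllegalArgumentException; the result of the probe is cached in a static
// final field so the lookup (and the debug log) happens only once, at class
// load, never per file open.
public class FlagProbe {

  // Resolved exactly once, when this class is loaded.
  private static final StandardCopyOption SHOULD_REPLICATE = loadShouldReplicateFlag();

  private static StandardCopyOption loadShouldReplicateFlag() {
    try {
      // valueOf throws IllegalArgumentException if the constant is absent.
      return StandardCopyOption.valueOf("ATOMIC_MOVE");
    } catch (IllegalArgumentException e) {
      // Constant missing on this runtime: remember that by caching null.
      return null;
    }
  }

  public static boolean shouldReplicateAvailable() {
    return SHOULD_REPLICATE != null;
  }

  public static void main(String[] args) {
    System.out.println(shouldReplicateAvailable()); // true
  }
}
```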
```diff
@@ -502,8 +524,8 @@ private static FanOutOneBlockAsyncDFSOutput createOutput(DistributedFileSystem d
   try {
     stat = FILE_CREATOR.create(namenode, src,
       FsPermission.getFileDefault().applyUMask(FsPermission.getUMask(conf)), clientName,
-      new EnumSetWritable<>(overwrite ? EnumSet.of(CREATE, OVERWRITE) : EnumSet.of(CREATE)),
-      createParent, replication, blockSize, CryptoProtocolVersion.supported());
+      getCreateFlags(overwrite), createParent, replication, blockSize,
```
nice cleanup.
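For readers following the diff, here is a rough, hypothetical sketch of what a `getCreateFlags(overwrite)` helper could look like. The real method body is not shown in this thread; a local `Flag` enum stands in for hadoop's `CreateFlag`, and the `EnumSetWritable` wrapper is omitted so the sketch is self-contained.

```java
import java.util.EnumSet;

// Hypothetical sketch only; names are illustrative, not the actual HBase code.
public class CreateFlagsSketch {

  enum Flag { CREATE, OVERWRITE, SHOULD_REPLICATE }

  // In the real code this would be the constant probed once at class load
  // (null on hadoop 2, where SHOULD_REPLICATE does not exist).
  private static final Flag SHOULD_REPLICATE = Flag.SHOULD_REPLICATE;

  static EnumSet<Flag> getCreateFlags(boolean overwrite) {
    EnumSet<Flag> flags =
      overwrite ? EnumSet.of(Flag.CREATE, Flag.OVERWRITE) : EnumSet.of(Flag.CREATE);
    if (SHOULD_REPLICATE != null) {
      // Ask the NameNode for a plain replicated file even when the WAL
      // directory sits under an erasure-coding policy.
      flags.add(SHOULD_REPLICATE);
    }
    return flags;
  }

  public static void main(String[] args) {
    System.out.println(getCreateFlags(true)); // [CREATE, OVERWRITE, SHOULD_REPLICATE]
  }
}
```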
And +1 for branch-2.3
Yes, EC is only available in hadoop 3.x.
Any other concerns? @saintstack
Nit below: comments and log format. Else looks good to me.
🎊 +1 overall
💔 -1 overall
💔 -1 overall
Signed-off-by: Nick Dimiduk <ndimiduk@apache.org> Signed-off-by: stack <stack@apache.org>