
HDFS-14617 - Improve fsimage load time by writing sub-sections to the fsimage index #1028

Merged
merged 11 commits into apache:trunk from sodonnel:HDFS-14617-FSImage on Aug 23, 2019

Conversation

@sodonnel
Contributor

sodonnel commented Jun 28, 2019

Initial code I used to perform my benchmarks for the above change. There are two parts to the change:

  1. The code used to write the sub-sections to the image, which is fairly simple.

  2. The code used to process the sub-sections in parallel. As much as possible this reuses the original code in multiple threads, but the inodeDirectory code needed to be moved around a bit to avoid synchronization issues.

The only sections with parallel loading so far are inode and inodeDirectory.

So far, there are no new tests for this feature. That is something I need to look into further if we agree this is a good change to move forward with.
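For context, the general shape of the parallel loading (a rough sketch, not the patch itself; loadSection, subSections and numThreads are placeholder names) is to submit one task per recorded sub-section and wait for them all before moving on to the next stage:

    // Rough sketch of the parallel sub-section loading pattern (placeholder names).
    ExecutorService pool = Executors.newFixedThreadPool(numThreads);
    CountDownLatch latch = new CountDownLatch(subSections.size());
    for (FileSummary.Section sub : subSections) {
      pool.submit(() -> {
        try {
          // Each sub-section records an offset and length within the parent section,
          // so a worker can open its own stream and decode that slice independently.
          loadSection(sub);
        } catch (IOException e) {
          LOG.error("Error loading sub-section", e);
        } finally {
          latch.countDown();
        }
      });
    }
    try {
      latch.await();   // all sub-sections of a stage must finish before the next stage starts
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }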

@hadoop-yetus

hadoop-yetus commented Jun 28, 2019

💔 -1 overall

Vote Subsystem Runtime Comment
0 reexec 34 Docker mode activated.
_ Prechecks _
+1 dupname 0 No case conflicting files found.
+1 @author 0 The patch does not contain any @author tags.
+1 test4tests 0 The patch appears to include 1 new or modified test files.
_ trunk Compile Tests _
+1 mvninstall 1085 trunk passed
+1 compile 159 trunk passed
+1 checkstyle 153 trunk passed
+1 mvnsite 152 trunk passed
+1 shadedclient 1391 branch has no errors when building and testing our client artifacts.
+1 javadoc 122 trunk passed
0 spotbugs 297 Used deprecated FindBugs config; considering switching to SpotBugs.
+1 findbugs 288 trunk passed
_ Patch Compile Tests _
+1 mvninstall 170 the patch passed
+1 compile 125 the patch passed
+1 javac 125 the patch passed
-0 checkstyle 151 hadoop-hdfs-project/hadoop-hdfs: The patch generated 12 new + 563 unchanged - 3 fixed = 575 total (was 566)
+1 mvnsite 88 the patch passed
+1 whitespace 0 The patch has no whitespace issues.
+1 shadedclient 678 patch has no errors when building and testing our client artifacts.
-1 javadoc 51 hadoop-hdfs-project_hadoop-hdfs generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)
-1 findbugs 165 hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)
_ Other Tests _
-1 unit 4796 hadoop-hdfs in the patch failed.
+1 asflicense 38 The patch does not generate ASF License warnings.
9606
Reason Tests
FindBugs module:hadoop-hdfs-project/hadoop-hdfs
ins is null guaranteed to be dereferenced in org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Loader$1.run() on exception path Dereferenced at FSImageFormatPBINode.java:[line 240]
Failed junit tests hadoop.hdfs.server.datanode.TestDirectoryScanner
hadoop.tools.TestHdfsConfigFields
hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks
hadoop.hdfs.server.namenode.ha.TestBootstrapStandbyWithQJM
hadoop.hdfs.web.TestWebHdfsTimeouts
Subsystem Report/Notes
Docker Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1028/1/artifact/out/Dockerfile
GITHUB PR #1028
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle
uname Linux eea40b3522ee 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality personality/hadoop.sh
git revision trunk / 4a21224
Default Java 1.8.0_212
checkstyle https://builds.apache.org/job/hadoop-multibranch/job/PR-1028/1/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
javadoc https://builds.apache.org/job/hadoop-multibranch/job/PR-1028/1/artifact/out/diff-javadoc-javadoc-hadoop-hdfs-project_hadoop-hdfs.txt
findbugs https://builds.apache.org/job/hadoop-multibranch/job/PR-1028/1/artifact/out/new-findbugs-hadoop-hdfs-project_hadoop-hdfs.html
unit https://builds.apache.org/job/hadoop-multibranch/job/PR-1028/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
Test Results https://builds.apache.org/job/hadoop-multibranch/job/PR-1028/1/testReport/
Max. process+thread count 4001 (vs. ulimit of 5500)
modules C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs
Console output https://builds.apache.org/job/hadoop-multibranch/job/PR-1028/1/console
versions git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1
Powered by Apache Yetus 0.10.0 http://yetus.apache.org

This message was automatically generated.

public static final String DFS_IMAGE_PARALLEL_THREADS_KEY =
"dfs.image.parallel.threads";
public static final int DFS_IMAGE_PARALLEL_THREADS_DEFAULT = 4;

@Hexiaoqiao

Hexiaoqiao Jul 9, 2019

Contributor

IIUC, the thread count should not be greater than the number of target sections, otherwise the remaining threads will go unused (or cause other issues). Is it worth warning about this configuration limit?
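One way to address this (just a sketch, not the patch itself; subSections is a placeholder for the list of recorded sub-sections, and conf/LOG are assumed from the surrounding class) would be to clamp the configured thread count and warn when it is reduced:

    // Hypothetical guard: never start more threads than there are sub-sections to load.
    int configured = conf.getInt(DFSConfigKeys.DFS_IMAGE_PARALLEL_THREADS_KEY,
        DFSConfigKeys.DFS_IMAGE_PARALLEL_THREADS_DEFAULT);
    int threads = Math.min(configured, subSections.size());
    if (threads < configured) {
      LOG.warn("{} is set to {} but only {} sub-sections exist; using {} threads",
          DFSConfigKeys.DFS_IMAGE_PARALLEL_THREADS_KEY, configured,
          subSections.size(), threads);
    }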

LOG.error("{} exceptions occurred loading INodes", exceptions.size());
throw exceptions.get(0);
}
// TODO - should we fail if total_loaded != total_expected?

@Hexiaoqiao

Hexiaoqiao Jul 9, 2019

Contributor

+1

@sodonnel

sodonnel Aug 14, 2019

Author Contributor

The latest version I pushed removes this TODO and causes the load to fail if the number loaded != number expected.
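Roughly, that check amounts to something like the following (a sketch; expectedInodes is an assumed name for the count recorded in the section header, while totalLoaded comes from the quoted code below):

    // Hypothetical post-load sanity check: fail the image load if the INode count is off.
    if (totalLoaded.get() != expectedInodes) {
      throw new IOException("Expected to load " + expectedInodes
          + " inodes, but loaded " + totalLoaded.get());
    }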

LOG.info("Interrupted waiting for countdown latch");
}
if (exceptions.size() != 0) {
LOG.error("{} exceptions occurred loading INodes", exceptions.size());

@Hexiaoqiao

Hexiaoqiao Jul 9, 2019

Contributor

Won't some exceptions be invisible to users? It will not be easy to find the root cause if several exceptions occur but the core exception is not the one at the head of the list.

@sodonnel

sodonnel Jul 24, 2019

Author Contributor

All exceptions will be logged by the executor here:

          public void run() {
            try {
              totalLoaded.addAndGet(loadINodesInSection(ins, null));
              prog.setCount(Phase.LOADING_FSIMAGE, currentStep,
                  totalLoaded.get());
            } catch (Exception e) {
              LOG.error("An exception occurred loading INodes in parallel", e);
              exceptions.add(new IOException(e));
            } finally {
            ...

It is fairly likely that if there are problems, all the exceptions will share the same cause, so throwing just the first one, while ensuring all of them are logged, should make debugging possible.

The alternative is to create a wrapper exception that all the exceptions get stored into, and then throw that. However, the caller would need to catch it and log each contained exception anyway, and those exceptions are already logged, so it would only duplicate the information in the logs.

inodeLoader.loadINodeDirectorySection(in);
stageSubSections = getSubSectionsOfName(
subSections, SectionName.INODE_DIR_SUB);
if (loadInParallel && stageSubSections.size() > 0) {

@Hexiaoqiao

Hexiaoqiao Jul 9, 2019

Contributor

Would you like to add some unit tests to cover serial loading and parallel loading for both the old and new fsimage formats?

@sodonnel

sodonnel Jul 24, 2019

Author Contributor

Yes, I need to figure out how to add some tests for this overall change.

@jojochuang

jojochuang Aug 8, 2019

Contributor

You probably need to put a sample fsimage in the old format in the resource directory (hadoop-hdfs-project/hadoop-hdfs/src/test/resources/) and load it in a unit test.

@sodonnel

sodonnel Aug 14, 2019

Author Contributor

I have a unit test which ensures a parallel image can be created and loaded. It would be fairly easy to create another test which generates a non-parallel image, validates that it is non-parallel, and then attempts to load it with parallel loading enabled. Do you think that would cover what we need?
I also have a test that enables both parallel loading and compression, and then verifies the parallel part is not used, as compression disables it.
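For reference, a minimal sketch of that kind of round-trip test (it assumes the standard MiniDFSCluster test helpers and the config constants from this patch; it is not the actual test code):

    // Hypothetical round-trip: enable parallel loading, checkpoint, then restart
    // the NameNode so the newly written image is loaded back in.
    Configuration conf = new Configuration();
    conf.setBoolean(DFSConfigKeys.DFS_IMAGE_PARALLEL_LOAD_KEY, true);
    conf.setInt(DFSConfigKeys.DFS_IMAGE_PARALLEL_THREADS_KEY, 4);
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
    try {
      DistributedFileSystem fs = cluster.getFileSystem();
      for (int i = 0; i < 100; i++) {
        fs.mkdirs(new Path("/dir-" + i));
      }
      fs.setSafeMode(HdfsConstants.SafeModeAction.SAFEMODE_ENTER);
      fs.saveNamespace();            // writes an fsimage that should contain sub-sections
      fs.setSafeMode(HdfsConstants.SafeModeAction.SAFEMODE_LEAVE);
      cluster.restartNameNode();     // forces the image to be loaded again
      assertTrue(cluster.getFileSystem().exists(new Path("/dir-0")));
    } finally {
      cluster.shutdown();
    }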

@sodonnel sodonnel force-pushed the sodonnel:HDFS-14617-FSImage branch from 88a0f06 to a426bb6 Jul 24, 2019
@hadoop-yetus

hadoop-yetus commented Jul 25, 2019

💔 -1 overall

Vote Subsystem Runtime Comment
0 reexec 43 Docker mode activated.
_ Prechecks _
+1 dupname 0 No case conflicting files found.
+1 @author 0 The patch does not contain any @author tags.
+1 test4tests 0 The patch appears to include 3 new or modified test files.
_ trunk Compile Tests _
+1 mvninstall 1221 trunk passed
+1 compile 60 trunk passed
+1 checkstyle 47 trunk passed
+1 mvnsite 68 trunk passed
+1 shadedclient 777 branch has no errors when building and testing our client artifacts.
+1 javadoc 50 trunk passed
0 spotbugs 190 Used deprecated FindBugs config; considering switching to SpotBugs.
+1 findbugs 186 trunk passed
_ Patch Compile Tests _
+1 mvninstall 63 the patch passed
+1 compile 59 the patch passed
+1 javac 59 the patch passed
+1 checkstyle 45 hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 599 unchanged - 3 fixed = 599 total (was 602)
+1 mvnsite 66 the patch passed
-1 whitespace 0 The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply
+1 xml 1 The patch has no ill-formed XML file.
+1 shadedclient 731 patch has no errors when building and testing our client artifacts.
+1 javadoc 47 the patch passed
+1 findbugs 176 the patch passed
_ Other Tests _
-1 unit 5869 hadoop-hdfs in the patch failed.
+1 asflicense 44 The patch does not generate ASF License warnings.
9617
Reason Tests
Failed junit tests hadoop.hdfs.server.datanode.TestDataNodeECN
hadoop.hdfs.TestWriteReadStripedFile
hadoop.hdfs.TestAclsEndToEnd
hadoop.hdfs.TestLargeBlock
hadoop.hdfs.TestFileLengthOnClusterRestart
hadoop.hdfs.TestPread
hadoop.hdfs.TestDFSStartupVersions
hadoop.hdfs.TestErasureCodingPolicies
hadoop.hdfs.TestErasureCodingPoliciesWithRandomECPolicy
hadoop.hdfs.server.datanode.TestIncrementalBrVariations
hadoop.hdfs.TestFileCorruption
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure
hadoop.hdfs.TestLease
hadoop.hdfs.web.TestWebHdfsTimeouts
hadoop.hdfs.TestReadStripedFileWithDecodingCorruptData
hadoop.hdfs.server.datanode.TestDirectoryScanner
hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks
Subsystem Report/Notes
Docker Client=19.03.0 Server=19.03.0 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1028/2/artifact/out/Dockerfile
GITHUB PR #1028
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml
uname Linux 626eaff03e11 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality personality/hadoop.sh
git revision trunk / b41ef61
Default Java 1.8.0_212
whitespace https://builds.apache.org/job/hadoop-multibranch/job/PR-1028/2/artifact/out/whitespace-eol.txt
unit https://builds.apache.org/job/hadoop-multibranch/job/PR-1028/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
Test Results https://builds.apache.org/job/hadoop-multibranch/job/PR-1028/2/testReport/
Max. process+thread count 4598 (vs. ulimit of 5500)
modules C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs
Console output https://builds.apache.org/job/hadoop-multibranch/job/PR-1028/2/console
versions git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1
Powered by Apache Yetus 0.10.0 http://yetus.apache.org

This message was automatically generated.

@hadoop-yetus

hadoop-yetus commented Jul 26, 2019

💔 -1 overall

Vote Subsystem Runtime Comment
0 reexec 42 Docker mode activated.
_ Prechecks _
+1 dupname 0 No case conflicting files found.
+1 @author 0 The patch does not contain any @author tags.
+1 test4tests 0 The patch appears to include 3 new or modified test files.
_ trunk Compile Tests _
+1 mvninstall 1091 trunk passed
+1 compile 58 trunk passed
+1 checkstyle 46 trunk passed
+1 mvnsite 64 trunk passed
+1 shadedclient 723 branch has no errors when building and testing our client artifacts.
+1 javadoc 49 trunk passed
0 spotbugs 164 Used deprecated FindBugs config; considering switching to SpotBugs.
+1 findbugs 162 trunk passed
_ Patch Compile Tests _
+1 mvninstall 61 the patch passed
+1 compile 58 the patch passed
+1 javac 58 the patch passed
+1 checkstyle 43 hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 598 unchanged - 3 fixed = 598 total (was 601)
+1 mvnsite 64 the patch passed
-1 whitespace 0 The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply
+1 xml 1 The patch has no ill-formed XML file.
+1 shadedclient 695 patch has no errors when building and testing our client artifacts.
+1 javadoc 47 the patch passed
+1 findbugs 175 the patch passed
_ Other Tests _
+1 unit 4891 hadoop-hdfs in the patch passed.
+1 asflicense 36 The patch does not generate ASF License warnings.
8356
Subsystem Report/Notes
Docker Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1028/3/artifact/out/Dockerfile
GITHUB PR #1028
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml
uname Linux f18a7584f2eb 4.4.0-157-generic #185-Ubuntu SMP Tue Jul 23 09:17:01 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality personality/hadoop.sh
git revision trunk / c0a0c35
Default Java 1.8.0_212
whitespace https://builds.apache.org/job/hadoop-multibranch/job/PR-1028/3/artifact/out/whitespace-eol.txt
Test Results https://builds.apache.org/job/hadoop-multibranch/job/PR-1028/3/testReport/
Max. process+thread count 4383 (vs. ulimit of 5500)
modules C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs
Console output https://builds.apache.org/job/hadoop-multibranch/job/PR-1028/3/console
versions git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1
Powered by Apache Yetus 0.10.0 http://yetus.apache.org

This message was automatically generated.

@hadoop-yetus

hadoop-yetus commented Aug 2, 2019

💔 -1 overall

Vote Subsystem Runtime Comment
0 reexec 45 Docker mode activated.
_ Prechecks _
+1 dupname 0 No case conflicting files found.
+1 @author 0 The patch does not contain any @author tags.
+1 test4tests 0 The patch appears to include 3 new or modified test files.
_ trunk Compile Tests _
+1 mvninstall 1070 trunk passed
+1 compile 56 trunk passed
+1 checkstyle 45 trunk passed
+1 mvnsite 62 trunk passed
+1 shadedclient 745 branch has no errors when building and testing our client artifacts.
+1 javadoc 50 trunk passed
0 spotbugs 175 Used deprecated FindBugs config; considering switching to SpotBugs.
+1 findbugs 172 trunk passed
_ Patch Compile Tests _
+1 mvninstall 58 the patch passed
+1 compile 57 the patch passed
+1 javac 57 the patch passed
+1 checkstyle 42 hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 603 unchanged - 3 fixed = 603 total (was 606)
+1 mvnsite 61 the patch passed
-1 whitespace 0 The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply
+1 xml 3 The patch has no ill-formed XML file.
+1 shadedclient 710 patch has no errors when building and testing our client artifacts.
+1 javadoc 46 the patch passed
+1 findbugs 174 the patch passed
_ Other Tests _
-1 unit 4896 hadoop-hdfs in the patch failed.
+1 asflicense 33 The patch does not generate ASF License warnings.
8373
Reason Tests
Failed junit tests hadoop.hdfs.server.datanode.TestLargeBlockReport
hadoop.hdfs.server.namenode.snapshot.TestSnapshotRename
hadoop.hdfs.server.namenode.TestStartup
hadoop.hdfs.server.namenode.snapshot.TestSnapshotDeletion
hadoop.hdfs.server.namenode.TestFSEditLogLoader
hadoop.hdfs.server.namenode.TestStripedINodeFile
hadoop.hdfs.TestSetrepIncreasing
hadoop.hdfs.server.namenode.TestFsck
hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics
hadoop.hdfs.server.namenode.TestReencryption
hadoop.hdfs.server.namenode.TestXAttrConfigFlag
hadoop.hdfs.server.balancer.TestBalancerRPCDelay
hadoop.hdfs.TestDFSStartupVersions
hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots
hadoop.hdfs.server.namenode.TestNamenodeRetryCache
hadoop.hdfs.server.namenode.TestAuditLogs
hadoop.hdfs.TestDFSStorageStateRecovery
Subsystem Report/Notes
Docker Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1028/4/artifact/out/Dockerfile
GITHUB PR #1028
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml
uname Linux 7df7a602cde5 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality personality/hadoop.sh
git revision trunk / 17e8cf5
Default Java 1.8.0_212
whitespace https://builds.apache.org/job/hadoop-multibranch/job/PR-1028/4/artifact/out/whitespace-eol.txt
unit https://builds.apache.org/job/hadoop-multibranch/job/PR-1028/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
Test Results https://builds.apache.org/job/hadoop-multibranch/job/PR-1028/4/testReport/
Max. process+thread count 4361 (vs. ulimit of 5500)
modules C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs
Console output https://builds.apache.org/job/hadoop-multibranch/job/PR-1028/4/console
versions git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1
Powered by Apache Yetus 0.10.0 http://yetus.apache.org

This message was automatically generated.

Contributor

jojochuang left a comment

Thanks Stephen and Xiaoqiao.
Patch looks good to me and I am only able to make a few useless comments.

if (parallelEnabled) {
if (compressionEnabled) {
LOG.warn("Parallel Image loading is not supported when {} is set to" +
" true. Parallel loading will be disabled.",

@jojochuang

jojochuang Aug 3, 2019

Contributor

Please update hdfs-default.xml to include this caveat.

@sodonnel

sodonnel Aug 13, 2019

Author Contributor

I have added a note to hdfs-default.xml for this.

boolean loadInParallel =
conf.getBoolean(DFSConfigKeys.DFS_IMAGE_PARALLEL_LOAD_KEY,
DFSConfigKeys.DFS_IMAGE_PARALLEL_LOAD_DEFAULT);
// TODO - check for compression and if enabled disable parallel

@jojochuang

jojochuang Aug 3, 2019

Contributor

This is currently checked when saving the fsimage, but not when loading it.

@sodonnel

sodonnel Aug 13, 2019

Author Contributor

I have addressed this TODO and also refactored the code a little to allow reuse between the saving and loading sections.
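The shared check ends up looking roughly like this (a sketch only; the method name enableParallelLoad is assumed, and the config constants are the ones shown elsewhere in this diff):

    // Hypothetical shared helper used by both the saver and the loader:
    // parallel handling is only enabled when requested and compression is off.
    private static boolean enableParallelLoad(Configuration conf, boolean compressionEnabled) {
      boolean parallelEnabled = conf.getBoolean(
          DFSConfigKeys.DFS_IMAGE_PARALLEL_LOAD_KEY,
          DFSConfigKeys.DFS_IMAGE_PARALLEL_LOAD_DEFAULT);
      if (parallelEnabled && compressionEnabled) {
        LOG.warn("Parallel image loading is not supported when compression is" +
            " enabled. Parallel loading will be disabled.");
        return false;
      }
      return parallelEnabled;
    }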

@jojochuang

Contributor

jojochuang commented Aug 3, 2019

And we should make sure oiv tool works with this change. We can file another jira to address the oiv issue.

@Hexiaoqiao

Contributor

Hexiaoqiao commented Aug 4, 2019

And we should make sure oiv tool works with this change. We can file another jira to address the oiv issue.

@jojochuang Thanks for your suggestions. The current patch @sodonnel submitted does not seem to cover the OIV tools, and I agree that we should file another JIRA to track that. I would like to file it, and I can help follow up on it if @sodonnel does not have time. FYI.
On another note, I wonder if we should supply some other tooling, or a compatible upgrade path, to cover the case where parallel loading fails, so users can fall back to the old serial loading even with the new fsimage format. I do not doubt the parallel loading feature; I just want to give users more choice.
Consider the case where parallel loading fails due to some exception with the new fsimage format: the user could have no proper way to fall back, even by using the OIV tool to convert to another format.

if (child.isFile()) {
inodeList.add(child);
}
if (inodeList.size() >= 1000) {

@jojochuang

jojochuang Aug 8, 2019

Contributor

Please define the value as a constant (final static)

@sodonnel

sodonnel Aug 14, 2019

Author Contributor

Done.

}

void loadINodeDirectorySection(InputStream in) throws IOException {
final List<INodeReference> refList = parent.getLoaderContext()
.getRefList();
ArrayList<INode> inodeList = new ArrayList<>();

@jojochuang

jojochuang Aug 8, 2019

Contributor

I think the performance benefit of the batch update outweighs the cache locality benefit.
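For context, the batching under discussion looks roughly like this (a sketch with assumed names such as children and addToParent; it is not the patch code): children are buffered locally and attached to the parent in batches rather than one at a time.

    // Hypothetical batching sketch: buffer children locally and attach them to the
    // parent directory one batch at a time, instead of one synchronized call per child.
    ArrayList<INode> inodeList = new ArrayList<>();
    for (INode child : children) {       // 'children' stands in for the decoded child inodes
      if (child.isFile()) {
        inodeList.add(child);
      }
      if (inodeList.size() >= 1000) {
        addToParent(parent, inodeList);  // assumed helper: one lock acquisition per batch
        inodeList.clear();
      }
    }
    addToParent(parent, inodeList);      // flush any remaining children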

if (loadInParallel) {
executorService = Executors.newFixedThreadPool(
conf.getInt(DFSConfigKeys.DFS_IMAGE_PARALLEL_THREADS_KEY,
DFSConfigKeys.DFS_IMAGE_PARALLEL_THREADS_DEFAULT));

@jojochuang

jojochuang Aug 8, 2019

Contributor

Nice to have: log an info message stating that parallel image loading is enabled, and the number of threads used.

@sodonnel

sodonnel Aug 14, 2019

Author Contributor

I added a log message here, and I validate that the thread count setting is not less than 1; otherwise it is reset to the default (4). I also pulled the code that performs this check and creates the executor into a private method, to reduce the noise in the already too long loadInternal method.
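Something along these lines (a sketch; the method name is assumed, and conf/LOG come from the surrounding class):

    // Hypothetical helper: validate the configured thread count and build the executor.
    private ExecutorService getParallelExecutorService() {
      int threads = conf.getInt(DFSConfigKeys.DFS_IMAGE_PARALLEL_THREADS_KEY,
          DFSConfigKeys.DFS_IMAGE_PARALLEL_THREADS_DEFAULT);
      if (threads < 1) {
        LOG.warn("{} is set to {}. It must be at least 1. Falling back to the default of {}",
            DFSConfigKeys.DFS_IMAGE_PARALLEL_THREADS_KEY, threads,
            DFSConfigKeys.DFS_IMAGE_PARALLEL_THREADS_DEFAULT);
        threads = DFSConfigKeys.DFS_IMAGE_PARALLEL_THREADS_DEFAULT;
      }
      LOG.info("Parallel image loading is enabled with {} threads", threads);
      return Executors.newFixedThreadPool(threads);
    }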

}
if (inodeThreshold <= 0) {
LOG.warn("{} is set to {}. It must be greater than zero. Setting to" +
"default of {}",

@jojochuang

jojochuang Aug 8, 2019

Contributor

" default"
^^ missing an extra space

@sodonnel

sodonnel Aug 14, 2019

Author Contributor

Fixed.

@hadoop-yetus

hadoop-yetus commented Aug 8, 2019

💔 -1 overall

Vote Subsystem Runtime Comment
0 reexec 46 Docker mode activated.
_ Prechecks _
+1 dupname 0 No case conflicting files found.
+1 @author 0 The patch does not contain any @author tags.
+1 test4tests 0 The patch appears to include 3 new or modified test files.
_ trunk Compile Tests _
+1 mvninstall 1057 trunk passed
+1 compile 60 trunk passed
+1 checkstyle 48 trunk passed
+1 mvnsite 63 trunk passed
+1 shadedclient 766 branch has no errors when building and testing our client artifacts.
+1 javadoc 54 trunk passed
0 spotbugs 165 Used deprecated FindBugs config; considering switching to SpotBugs.
+1 findbugs 162 trunk passed
_ Patch Compile Tests _
+1 mvninstall 61 the patch passed
+1 compile 50 the patch passed
+1 javac 50 the patch passed
+1 checkstyle 43 hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 604 unchanged - 3 fixed = 604 total (was 607)
+1 mvnsite 63 the patch passed
-1 whitespace 0 The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply
+1 xml 1 The patch has no ill-formed XML file.
+1 shadedclient 683 patch has no errors when building and testing our client artifacts.
+1 javadoc 47 the patch passed
+1 findbugs 166 the patch passed
_ Other Tests _
-1 unit 4923 hadoop-hdfs in the patch failed.
+1 asflicense 33 The patch does not generate ASF License warnings.
8368
Reason Tests
Failed junit tests hadoop.hdfs.TestDFSClientRetries
hadoop.hdfs.TestReadStripedFileWithDNFailure
hadoop.hdfs.server.datanode.TestLargeBlockReport
hadoop.hdfs.TestGetFileChecksum
hadoop.hdfs.server.datanode.TestDirectoryScanner
hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA
Subsystem Report/Notes
Docker Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1028/5/artifact/out/Dockerfile
GITHUB PR #1028
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml
uname Linux 3a6dda50bb5b 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality personality/hadoop.sh
git revision trunk / 70b4617
Default Java 1.8.0_212
whitespace https://builds.apache.org/job/hadoop-multibranch/job/PR-1028/5/artifact/out/whitespace-eol.txt
unit https://builds.apache.org/job/hadoop-multibranch/job/PR-1028/5/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
Test Results https://builds.apache.org/job/hadoop-multibranch/job/PR-1028/5/testReport/
Max. process+thread count 4970 (vs. ulimit of 5500)
modules C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs
Console output https://builds.apache.org/job/hadoop-multibranch/job/PR-1028/5/console
versions git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1
Powered by Apache Yetus 0.10.0 http://yetus.apache.org

This message was automatically generated.

@hadoop-yetus

hadoop-yetus commented Aug 8, 2019

💔 -1 overall

Vote Subsystem Runtime Comment
0 reexec 117 Docker mode activated.
_ Prechecks _
+1 dupname 1 No case conflicting files found.
+1 @author 0 The patch does not contain any @author tags.
+1 test4tests 0 The patch appears to include 3 new or modified test files.
_ trunk Compile Tests _
+1 mvninstall 1324 trunk passed
+1 compile 65 trunk passed
+1 checkstyle 55 trunk passed
+1 mvnsite 69 trunk passed
+1 shadedclient 906 branch has no errors when building and testing our client artifacts.
+1 javadoc 52 trunk passed
0 spotbugs 171 Used deprecated FindBugs config; considering switching to SpotBugs.
+1 findbugs 169 trunk passed
_ Patch Compile Tests _
+1 mvninstall 66 the patch passed
+1 compile 59 the patch passed
+1 javac 59 the patch passed
+1 checkstyle 48 hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 604 unchanged - 3 fixed = 604 total (was 607)
+1 mvnsite 65 the patch passed
-1 whitespace 0 The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply
+1 xml 1 The patch has no ill-formed XML file.
+1 shadedclient 816 patch has no errors when building and testing our client artifacts.
+1 javadoc 52 the patch passed
+1 findbugs 206 the patch passed
_ Other Tests _
-1 unit 6198 hadoop-hdfs in the patch failed.
+1 asflicense 34 The patch does not generate ASF License warnings.
10362
Reason Tests
Failed junit tests hadoop.hdfs.TestWriteReadStripedFile
hadoop.hdfs.server.datanode.TestLargeBlockReport
hadoop.hdfs.web.TestWebHDFSAcl
hadoop.hdfs.TestErasureCodingPolicies
Subsystem Report/Notes
Docker Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1028/6/artifact/out/Dockerfile
GITHUB PR #1028
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml
uname Linux c0559e722233 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality personality/hadoop.sh
git revision trunk / 397a563
Default Java 1.8.0_222
whitespace https://builds.apache.org/job/hadoop-multibranch/job/PR-1028/6/artifact/out/whitespace-eol.txt
unit https://builds.apache.org/job/hadoop-multibranch/job/PR-1028/6/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
Test Results https://builds.apache.org/job/hadoop-multibranch/job/PR-1028/6/testReport/
Max. process+thread count 2717 (vs. ulimit of 5500)
modules C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs
Console output https://builds.apache.org/job/hadoop-multibranch/job/PR-1028/6/console
versions git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1
Powered by Apache Yetus 0.10.0 http://yetus.apache.org

This message was automatically generated.

@sodonnel sodonnel force-pushed the sodonnel:HDFS-14617-FSImage branch from b71ec0f to fb27e28 Aug 13, 2019
@hadoop-yetus

hadoop-yetus commented Aug 14, 2019

💔 -1 overall

Vote Subsystem Runtime Comment
0 reexec 106 Docker mode activated.
_ Prechecks _
+1 dupname 0 No case conflicting files found.
+1 @author 0 The patch does not contain any @author tags.
+1 test4tests 0 The patch appears to include 3 new or modified test files.
_ trunk Compile Tests _
+1 mvninstall 2405 trunk passed
+1 compile 184 trunk passed
+1 checkstyle 137 trunk passed
+1 mvnsite 102 trunk passed
+1 shadedclient 1000 branch has no errors when building and testing our client artifacts.
+1 javadoc 59 trunk passed
0 spotbugs 198 Used deprecated FindBugs config; considering switching to SpotBugs.
+1 findbugs 194 trunk passed
_ Patch Compile Tests _
+1 mvninstall 71 the patch passed
+1 compile 66 the patch passed
+1 javac 66 the patch passed
+1 checkstyle 60 hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 604 unchanged - 3 fixed = 604 total (was 607)
+1 mvnsite 88 the patch passed
+1 whitespace 0 The patch has no whitespace issues.
+1 xml 2 The patch has no ill-formed XML file.
+1 shadedclient 829 patch has no errors when building and testing our client artifacts.
+1 javadoc 49 the patch passed
+1 findbugs 187 the patch passed
_ Other Tests _
-1 unit 5034 hadoop-hdfs in the patch failed.
+1 asflicense 34 The patch does not generate ASF License warnings.
10597
Reason Tests
Failed junit tests hadoop.hdfs.tools.TestDFSZKFailoverController
Subsystem Report/Notes
Docker Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1028/7/artifact/out/Dockerfile
GITHUB PR #1028
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml
uname Linux efadfb1ed578 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality personality/hadoop.sh
git revision trunk / 9691117
Default Java 1.8.0_222
unit https://builds.apache.org/job/hadoop-multibranch/job/PR-1028/7/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
Test Results https://builds.apache.org/job/hadoop-multibranch/job/PR-1028/7/testReport/
Max. process+thread count 3338 (vs. ulimit of 5500)
modules C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs
Console output https://builds.apache.org/job/hadoop-multibranch/job/PR-1028/7/console
versions git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1
Powered by Apache Yetus 0.10.0 http://yetus.apache.org

This message was automatically generated.

@sodonnel

Contributor Author

sodonnel commented Aug 14, 2019

And we should make sure oiv tool works with this change. We can file another jira to address the oiv issue.

I checked OIV, and it can load images that have the parallel sub-sections in the image index with no problems, and it does not produce any warnings. The reason is that this change simply adds additional sections to the image index, so we still have:

INODE START_OFFSET LENGTH
  INODE_SUB START_OFFSET LENGTH
  INODE_SUB START_OFFSET LENGTH
  INODE_SUB START_OFFSET LENGTH
  ...
INODE_DIR START_OFFSET LENGTH
  INODE_DIR_SUB START_OFFSET LENGTH
  INODE_DIR_SUB START_OFFSET LENGTH
  INODE_DIR_SUB START_OFFSET LENGTH
  ...

This means that if a loader looks for certain sections, it does not matter which other sections are present, provided it ignores them. In the case of the OIV "delimited" processor, it uses this pattern:

    for (FileSummary.Section section : sections) {
      if (SectionName.fromString(section.getName()) == SectionName.INODE) {
        fin.getChannel().position(section.getOffset());
        is = FSImageUtil.wrapInputStreamForCompression(conf,
            summary.getCodec(), new BufferedInputStream(new LimitInputStream(
                fin, section.getLength())));
        outputINodes(is);
      }
    }

It loops over all the sections in the "FileSummary index", looking for the ones it wants (INODE in the above example), and ignores all others.

In the case of the XML processor, which is probably the most important one, it works in a very similar way to how the namenode loads the image. It loops over all sections and uses a switch statement to process the sections it is interested in, skipping the others:

     for (FileSummary.Section s : sections) {
        fin.getChannel().position(s.getOffset());
        InputStream is = FSImageUtil.wrapInputStreamForCompression(conf,
            summary.getCodec(), new BufferedInputStream(new LimitInputStream(
                fin, s.getLength())));

        SectionName sectionName = SectionName.fromString(s.getName());
        if (sectionName == null) {
          throw new IOException("Unrecognized section " + s.getName());
        }
        switch (sectionName) {
        case NS_INFO:
          dumpNameSection(is);
          break;
        case STRING_TABLE:
          loadStringTable(is);
          break;
        case ERASURE_CODING:
          dumpErasureCodingSection(is);
          break;
        case INODE:
          dumpINodeSection(is);
          break;
        case INODE_REFERENCE:
          dumpINodeReferenceSection(is);
          break;
	  
        <snipped>
	
        default:
          break;
        }
      }
      out.print("</fsimage>\n");
    }

Note the default clause, where it does nothing if it encounters a section name it does not expect.

I tested running the other processors (File Distribution, DetectCorruption and Web) and they all worked with no issues.

Two future improvements, which we could do in new JIRAs, are:

  1. Make the ReverseXML processor write out the sub-section headers so it creates a parallel enabled image (if the relevant settings are enabled)

  2. Investigate allowing OIV to process the image in parallel if it has the sub-sections in the index and parallel is enabled.

@jojochuang

Contributor

jojochuang commented Aug 14, 2019

manually trigger a precommit rebuild

@hadoop-yetus

hadoop-yetus commented Aug 14, 2019

💔 -1 overall

Vote Subsystem Runtime Comment
0 reexec 43 Docker mode activated.
_ Prechecks _
+1 dupname 1 No case conflicting files found.
+1 @author 0 The patch does not contain any @author tags.
+1 test4tests 0 The patch appears to include 3 new or modified test files.
_ trunk Compile Tests _
+1 mvninstall 1066 trunk passed
+1 compile 54 trunk passed
+1 checkstyle 48 trunk passed
+1 mvnsite 62 trunk passed
+1 shadedclient 714 branch has no errors when building and testing our client artifacts.
+1 javadoc 47 trunk passed
0 spotbugs 157 Used deprecated FindBugs config; considering switching to SpotBugs.
+1 findbugs 156 trunk passed
_ Patch Compile Tests _
+1 mvninstall 55 the patch passed
+1 compile 51 the patch passed
+1 javac 51 the patch passed
-0 checkstyle 48 hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 605 unchanged - 3 fixed = 606 total (was 608)
+1 mvnsite 59 the patch passed
+1 whitespace 0 The patch has no whitespace issues.
+1 xml 2 The patch has no ill-formed XML file.
+1 shadedclient 685 patch has no errors when building and testing our client artifacts.
+1 javadoc 47 the patch passed
+1 findbugs 164 the patch passed
_ Other Tests _
-1 unit 6384 hadoop-hdfs in the patch failed.
+1 asflicense 42 The patch does not generate ASF License warnings.
9775
Reason Tests
Failed junit tests hadoop.hdfs.server.mover.TestMover
hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy
hadoop.hdfs.qjournal.client.TestQJMWithFaults
hadoop.hdfs.server.namenode.TestNamenodeCapacityReport
hadoop.hdfs.TestFileChecksum
hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes
hadoop.hdfs.server.namenode.ha.TestBootstrapAliasmap
hadoop.hdfs.TestErasureCodingPolicyWithSnapshotWithRandomECPolicy
hadoop.hdfs.server.namenode.TestFsck
hadoop.hdfs.TestDFSStripedOutputStreamWithRandomECPolicy
hadoop.hdfs.TestDistributedFileSystemWithECFileWithRandomECPolicy
hadoop.hdfs.TestErasureCodingPoliciesWithRandomECPolicy
hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks
hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy
hadoop.hdfs.server.mover.TestStorageMover
hadoop.hdfs.TestDFSStripedOutputStream
hadoop.hdfs.server.balancer.TestBalancer
Subsystem Report/Notes
Docker Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1028/8/artifact/out/Dockerfile
GITHUB PR #1028
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml
uname Linux bd793cb1fcaf 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality personality/hadoop.sh
git revision trunk / c720441
Default Java 1.8.0_222
checkstyle https://builds.apache.org/job/hadoop-multibranch/job/PR-1028/8/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
unit https://builds.apache.org/job/hadoop-multibranch/job/PR-1028/8/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
Test Results https://builds.apache.org/job/hadoop-multibranch/job/PR-1028/8/testReport/
Max. process+thread count 3969 (vs. ulimit of 5500)
modules C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs
Console output https://builds.apache.org/job/hadoop-multibranch/job/PR-1028/8/console
versions git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1
Powered by Apache Yetus 0.10.0 http://yetus.apache.org

This message was automatically generated.

@Hexiaoqiao

Contributor

Hexiaoqiao commented Aug 15, 2019

@sodonnel For the OIV tools, I got a different result based on branch-2.7, IIRC. Unfortunately I did not dig into it deeply at that time. I would like to test it again later and report back if I hit any exceptions.

Two future improvements we could do in a new Jiras, are:
Make the ReverseXML processor write out the sub-section headers so it creates a parallel enabled image (if the relevant settings are enabled)
Investigate allowing OIV to process the image in parallel if it has the sub-sections in the index and parallel is enabled.

+1. Thanks @sodonnel

@jojochuang

Contributor

jojochuang commented Aug 16, 2019

+1 from me. I've reviewed it several times and I think this is good. Will let it sit for a few days for other folks to comment on.

@hadoop-yetus

hadoop-yetus commented Aug 16, 2019

💔 -1 overall

Vote Subsystem Runtime Comment
0 reexec 39 Docker mode activated.
_ Prechecks _
+1 dupname 0 No case conflicting files found.
+1 @author 0 The patch does not contain any @author tags.
+1 test4tests 0 The patch appears to include 3 new or modified test files.
_ trunk Compile Tests _
+1 mvninstall 1034 trunk passed
+1 compile 52 trunk passed
+1 checkstyle 42 trunk passed
+1 mvnsite 59 trunk passed
+1 shadedclient 689 branch has no errors when building and testing our client artifacts.
+1 javadoc 47 trunk passed
0 spotbugs 156 Used deprecated FindBugs config; considering switching to SpotBugs.
+1 findbugs 153 trunk passed
_ Patch Compile Tests _
+1 mvninstall 60 the patch passed
+1 compile 54 the patch passed
+1 javac 54 the patch passed
-0 checkstyle 43 hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 605 unchanged - 3 fixed = 606 total (was 608)
+1 mvnsite 61 the patch passed
+1 whitespace 0 The patch has no whitespace issues.
+1 xml 2 The patch has no ill-formed XML file.
+1 shadedclient 678 patch has no errors when building and testing our client artifacts.
+1 javadoc 44 the patch passed
+1 findbugs 163 the patch passed
_ Other Tests _
-1 unit 5039 hadoop-hdfs in the patch failed.
-1 asflicense 31 The patch generated 1 ASF License warnings.
8340
Reason Tests
Failed junit tests hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean
hadoop.hdfs.server.balancer.TestBalancer
hadoop.hdfs.server.datanode.TestDirectoryScanner
Subsystem Report/Notes
Docker Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1028/9/artifact/out/Dockerfile
GITHUB PR #1028
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml
uname Linux 0e586a10e79b 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality personality/hadoop.sh
git revision trunk / e356e4f
Default Java 1.8.0_222
checkstyle https://builds.apache.org/job/hadoop-multibranch/job/PR-1028/9/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
unit https://builds.apache.org/job/hadoop-multibranch/job/PR-1028/9/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
Test Results https://builds.apache.org/job/hadoop-multibranch/job/PR-1028/9/testReport/
asflicense https://builds.apache.org/job/hadoop-multibranch/job/PR-1028/9/artifact/out/patch-asflicense-problems.txt
Max. process+thread count 4960 (vs. ulimit of 5500)
modules C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs
Console output https://builds.apache.org/job/hadoop-multibranch/job/PR-1028/9/console
versions git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1
Powered by Apache Yetus 0.10.0 http://yetus.apache.org

This message was automatically generated.

@Hexiaoqiao

Contributor

Hexiaoqiao commented Aug 17, 2019

+1 (non-binding) from me.

@hadoop-yetus

hadoop-yetus commented Aug 20, 2019

💔 -1 overall

Vote Subsystem Runtime Comment
0 reexec 39 Docker mode activated.
_ Prechecks _
+1 dupname 0 No case conflicting files found.
+1 @author 0 The patch does not contain any @author tags.
+1 test4tests 0 The patch appears to include 3 new or modified test files.
_ trunk Compile Tests _
+1 mvninstall 1045 trunk passed
+1 compile 54 trunk passed
+1 checkstyle 47 trunk passed
+1 mvnsite 58 trunk passed
+1 shadedclient 724 branch has no errors when building and testing our client artifacts.
+1 javadoc 49 trunk passed
0 spotbugs 158 Used deprecated FindBugs config; considering switching to SpotBugs.
+1 findbugs 155 trunk passed
_ Patch Compile Tests _
+1 mvninstall 58 the patch passed
+1 compile 52 the patch passed
+1 javac 52 the patch passed
-0 checkstyle 45 hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 605 unchanged - 3 fixed = 606 total (was 608)
+1 mvnsite 57 the patch passed
+1 whitespace 0 The patch has no whitespace issues.
+1 xml 1 The patch has no ill-formed XML file.
+1 shadedclient 711 patch has no errors when building and testing our client artifacts.
+1 javadoc 50 the patch passed
+1 findbugs 160 the patch passed
_ Other Tests _
-1 unit 4953 hadoop-hdfs in the patch failed.
+1 asflicense 30 The patch does not generate ASF License warnings.
8342
Reason Tests
Failed junit tests hadoop.hdfs.server.namenode.TestReencryptionWithKMS
hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean
hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency
hadoop.hdfs.server.namenode.TestNameNodeAcl
hadoop.hdfs.server.namenode.TestFsck
Subsystem Report/Notes
Docker Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1028/10/artifact/out/Dockerfile
GITHUB PR #1028
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml
uname Linux 4d1a3f250407 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality personality/hadoop.sh
git revision trunk / 094d736
Default Java 1.8.0_222
checkstyle https://builds.apache.org/job/hadoop-multibranch/job/PR-1028/10/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
unit https://builds.apache.org/job/hadoop-multibranch/job/PR-1028/10/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
Test Results https://builds.apache.org/job/hadoop-multibranch/job/PR-1028/10/testReport/
Max. process+thread count 4337 (vs. ulimit of 5500)
modules C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs
Console output https://builds.apache.org/job/hadoop-multibranch/job/PR-1028/10/console
versions git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1
Powered by Apache Yetus 0.10.0 http://yetus.apache.org

This message was automatically generated.

@hadoop-yetus

hadoop-yetus commented Aug 22, 2019

💔 -1 overall

Vote Subsystem Runtime Comment
0 reexec 77 Docker mode activated.
_ Prechecks _
+1 dupname 0 No case conflicting files found.
+1 @author 0 The patch does not contain any @author tags.
+1 test4tests 0 The patch appears to include 3 new or modified test files.
_ trunk Compile Tests _
+1 mvninstall 1262 trunk passed
+1 compile 59 trunk passed
+1 checkstyle 59 trunk passed
+1 mvnsite 74 trunk passed
+1 shadedclient 862 branch has no errors when building and testing our client artifacts.
+1 javadoc 50 trunk passed
0 spotbugs 163 Used deprecated FindBugs config; considering switching to SpotBugs.
+1 findbugs 162 trunk passed
_ Patch Compile Tests _
+1 mvninstall 58 the patch passed
+1 compile 54 the patch passed
+1 javac 54 the patch passed
-0 checkstyle 44 hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 604 unchanged - 3 fixed = 605 total (was 607)
+1 mvnsite 63 the patch passed
+1 whitespace 0 The patch has no whitespace issues.
+1 xml 1 The patch has no ill-formed XML file.
+1 shadedclient 810 patch has no errors when building and testing our client artifacts.
+1 javadoc 59 the patch passed
+1 findbugs 195 the patch passed
_ Other Tests _
-1 unit 7026 hadoop-hdfs in the patch failed.
+1 asflicense 46 The patch does not generate ASF License warnings.
11006
Reason Tests
Failed junit tests hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy
hadoop.hdfs.TestBlockStoragePolicy
Subsystem Report/Notes
Docker Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1028/11/artifact/out/Dockerfile
GITHUB PR #1028
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml
uname Linux a03fe679fbae 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality personality/hadoop.sh
git revision trunk / 69ddb36
Default Java 1.8.0_212
checkstyle https://builds.apache.org/job/hadoop-multibranch/job/PR-1028/11/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
unit https://builds.apache.org/job/hadoop-multibranch/job/PR-1028/11/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
Test Results https://builds.apache.org/job/hadoop-multibranch/job/PR-1028/11/testReport/
Max. process+thread count 2739 (vs. ulimit of 5500)
modules C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs
Console output https://builds.apache.org/job/hadoop-multibranch/job/PR-1028/11/console
versions git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1
Powered by Apache Yetus 0.10.0 http://yetus.apache.org

This message was automatically generated.

@jojochuang jojochuang merged commit b67812e into apache:trunk Aug 23, 2019
@@ -883,6 +883,22 @@
public static final String DFS_IMAGE_TRANSFER_CHUNKSIZE_KEY = "dfs.image.transfer.chunksize";
public static final int DFS_IMAGE_TRANSFER_CHUNKSIZE_DEFAULT = 64 * 1024;

@xiaoxiaopan118

xiaoxiaopan118 Aug 30, 2019

Could you add a comment/annotation here? e.g.
// NameNode fsimage start parallel
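For illustration, the suggestion would amount to something like this (just a sketch of where such a comment could go, using the keys shown in the diff above):

    // Parallel fsimage loading settings (HDFS-14617)
    public static final String DFS_IMAGE_PARALLEL_THREADS_KEY =
        "dfs.image.parallel.threads";
    public static final int DFS_IMAGE_PARALLEL_THREADS_DEFAULT = 4;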

arp7 pushed a commit to arp7/hadoop that referenced this pull request Sep 18, 2019
…fsimage index (apache#1028). Contributed by  Stephen O'Donnell.

Reviewed-by: He Xiaoqiao <hexiaoqiao@apache.org>
(cherry picked from commit b67812e)

Change-Id: Ib5b171d1431ecbebade9c730bc9744dc68e6128d
smengcl pushed a commit to smengcl/hadoop that referenced this pull request Oct 8, 2019
…fsimage index (apache#1028). Contributed by  Stephen O'Donnell.

Ref: CDH-80870

Reviewed-by: He Xiaoqiao <hexiaoqiao@apache.org>
(cherry picked from commit b67812e)
Change-Id: I5ecc31d20ff930df7642fcce43abc81bb96bdefe
shanthoosh pushed a commit to shanthoosh/hadoop that referenced this pull request Oct 15, 2019
… used to merge PRs (see Contributor's Corner doc) (apache#1028)
shanthoosh pushed a commit to shanthoosh/hadoop that referenced this pull request Oct 15, 2019
…o longer used to merge PRs (see Contributor's Corner doc) (apache#1028)" (apache#1062)

This reverts commit 7aa6195.

Some committers still use this merge script instead of merging directly
from GitHub, so adding this back for now. We can remove this merge
script once everyone moves to the GitHub flow.
amahussein pushed a commit to amahussein/hadoop that referenced this pull request Oct 29, 2019
…fsimage index (apache#1028). Contributed by  Stephen O'Donnell.

Reviewed-by: He Xiaoqiao <hexiaoqiao@apache.org>