
HDDS-1054. List Multipart uploads in a bucket #1277

Merged: 9 commits merged into apache:trunk on Sep 19, 2019

Conversation

@elek (Member) commented Aug 11, 2019

This Jira is to implement listing of the in-progress multipart uploads in a bucket in Ozone.

https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadListMPUpload.html

See: https://issues.apache.org/jira/browse/HDDS-1054

@elek added the ozone label Aug 11, 2019
@bharatviswa504 (Contributor) commented:

In the list, we have different parameters like key-marker, max-uploads, and a few other parameters.
Are we planning to do this in a new Jira?

@apache apache deleted a comment from hadoop-yetus Aug 23, 2019
@elek (Member, Author) commented Aug 27, 2019

In the list, we have different parameters like key-marker, max-uploads, and a few other parameters.
Are we planning to do this in a new Jira?

Yes. Pagination is not yet implemented. This is the minimal implementation needed to support the Docker registry. I will create another Jira to add all the pagination magic.

@anuengineer (Contributor) commented:

We need to rebase this patch since we have removed the Rest Client. Thanks.

metadataManager.getOpenKeyTable();

OmKeyInfo omKeyInfo = openKeyTable.get(upload.getDbKey());
Contributor:

Here we are reading openKeyTable only to get the creation time. If we had this information in omMultipartKeyInfo, we could avoid the DB calls to openKeyTable.

To do this, we can set creationTime in OmMultipartKeyInfo during initiateMultipartUpload. That way, we can get all the required information from the MultipartKeyInfo table.

Also, StorageClass is missing from the returned OmMultipartUpload, even though listMultipartUploads shows StorageClass information. For this, we can return replicationType and, depending on its value, set StorageClass in the listMultipartUploads response.
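The suggestion above can be sketched as follows; this is a simplified, hypothetical illustration (class and field names such as MultipartKeyInfo and multipartInfoTable are stand-ins, not the real Ozone types). Recording creationTime once at initiate time means the listing path never needs a second lookup in openKeyTable.

```java
// Hypothetical, simplified sketch: store creationTime in the multipart info
// record at initiate time so listing avoids a second table read.
import java.util.Map;
import java.util.TreeMap;

class MultipartListingSketch {
  // Simplified stand-in for OmMultipartKeyInfo, with creationTime included.
  static final class MultipartKeyInfo {
    final String uploadId;
    final long creationTime; // set once, when the upload is initiated

    MultipartKeyInfo(String uploadId, long creationTime) {
      this.uploadId = uploadId;
      this.creationTime = creationTime;
    }
  }

  // Stand-in for the multipart info table, keyed by /volume/bucket/key.
  private final Map<String, MultipartKeyInfo> multipartInfoTable = new TreeMap<>();

  void initiateMultipartUpload(String dbKey, String uploadId) {
    // creationTime is persisted here, so listing needs no openKeyTable lookup.
    multipartInfoTable.put(dbKey,
        new MultipartKeyInfo(uploadId, System.currentTimeMillis()));
  }

  long creationTimeOf(String dbKey) {
    return multipartInfoTable.get(dbKey).creationTime;
  }
}
```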

Member Author:

You are 100% right, but it seems to be a bigger change. Let's do it in https://issues.apache.org/jira/browse/HDDS-2131

(On the other hand, I added audit + metrics support because they were just a few lines.)

public OmMultipartUploadList listMultipartUploads(String volumeName,
    String bucketName, String prefix) throws OMException {
  Preconditions.checkNotNull(volumeName);
  Preconditions.checkNotNull(bucketName);
Contributor:

prefix should also not be null, as prefix is required in ListMultipartUploadRequest in the proto.

Also, here we are using "+" for concatenation, so if we pass null for prefix, the key becomes /volume/bucket/null. The method below is called by getMultipartUploadKeys:

public static String getDbKey(String volume, String bucket, String key) {
  return OM_KEY_PREFIX + volume + OM_KEY_PREFIX + bucket +
      OM_KEY_PREFIX + key;
}
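A minimal runnable sketch of this concern: Java's "+" concatenation converts a null operand into the literal string "null", so an unguarded getDbKey builds a bogus key. OM_KEY_PREFIX is assumed to be "/" here, and getDbKeyChecked is the hypothetical guarded variant along the lines the reviewer suggests.

```java
// Sketch of the null-prefix concern: Java string concatenation converts a
// null operand to the literal "null". getDbKey mirrors the snippet above;
// getDbKeyChecked is a hypothetical guarded variant.
import java.util.Objects;

class DbKeySketch {
  static final String OM_KEY_PREFIX = "/"; // assumed value, for illustration

  // Unguarded version: a null key is silently concatenated as "null".
  static String getDbKey(String volume, String bucket, String key) {
    return OM_KEY_PREFIX + volume + OM_KEY_PREFIX + bucket +
        OM_KEY_PREFIX + key;
  }

  // Guarded version: fail fast instead of building a bogus key.
  static String getDbKeyChecked(String volume, String bucket, String key) {
    Objects.requireNonNull(key, "prefix/key must not be null");
    return getDbKey(volume, bucket, key);
  }
}
```

For example, getDbKey("vol1", "bucket1", null) returns "/vol1/bucket1/null", which would silently seek to the wrong position in the table.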

@bharatviswa504 (Contributor) commented Sep 3, 2019

When I run the command, I see the output below; it does not show bucketName, keyMarker, and the other fields.

bash-4.2$ aws s3api --endpoint http://s3g:9878 list-multipart-uploads --bucket b1234 --prefix mpu
{
    "Uploads": [
        {
            "Initiator": {
                "DisplayName": "Not Supported", 
                "ID": "NOT-SUPPORTED"
            }, 
            "Initiated": "2019-09-03T18:39:35.916Z", 
            "UploadId": "24eea7f4-52db-4a0f-978a-f06cb7a57657-102730037717565440", 
            "StorageClass": "STANDARD", 
            "Key": "mpukey", 
            "Owner": {
                "DisplayName": "Not Supported", 
                "ID": "NOT-SUPPORTED"
            }
        }, 
        {
            "Initiator": {
                "DisplayName": "Not Supported", 
                "ID": "NOT-SUPPORTED"
            }, 
            "Initiated": "2019-09-03T18:39:37.816Z", 
            "UploadId": "81c0e5c2-db11-4b11-a5f7-81f48bdbfb04-102730037842083841", 
            "StorageClass": "STANDARD", 
            "Key": "mpukey1", 
            "Owner": {
                "DisplayName": "Not Supported", 
                "ID": "NOT-SUPPORTED"
            }
        }, 
        {
            "Initiator": {
                "DisplayName": "Not Supported", 
                "ID": "NOT-SUPPORTED"
            }, 
            "Initiated": "2019-09-03T18:39:39.259Z", 
            "UploadId": "4aab75b8-1954-4e8a-a658-0d403bcbc42f-102730037936717826", 
            "StorageClass": "STANDARD", 
            "Key": "mpukey2", 
            "Owner": {
                "DisplayName": "Not Supported", 
                "ID": "NOT-SUPPORTED"
            }
        }
    ]
}

@bharatviswa504 (Contributor) left a comment:

I have a few comments in place.
Can we open Jiras for all the TODOs in this Jira, for tracking purposes?

  1. Audit support for the new method.
  2. Pagination support and support for the other query parameters.
  3. If replication type will not be handled in this Jira, please open a Jira for that one too.

OmMultipartUpload.getDbKey(volumeName, bucketName, prefix);
iterator.seek(prefixKey);

while (iterator.hasNext()) {
Contributor:

Here, we also need to consider the table cache, now that the HA/non-HA code paths are merged (HDDS-1909).

Member Author:

Can you please help me understand how it should be done? What about when an MPU is finished and deleted? How is that cached? I don't think I can return (cached values + DB values), because of the deletions.
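For what it's worth, one conventional way to handle the deletion problem is for the cache to carry explicit tombstones. The sketch below is illustrative, not the real Ozone TableCache API: a cache value of Optional.empty() marks a key as deleted (MPU completed or aborted), so the merged view drops it instead of resurrecting the DB row.

```java
// Illustrative sketch (not the real TableCache API): merge DB contents with
// a cache whose Optional.empty() values act as delete tombstones.
import java.util.Map;
import java.util.Optional;
import java.util.TreeMap;

class CacheMergeSketch {
  static Map<String, String> mergedView(Map<String, String> dbTable,
      Map<String, Optional<String>> cache) {
    Map<String, String> view = new TreeMap<>(dbTable);
    for (Map.Entry<String, Optional<String>> e : cache.entrySet()) {
      if (e.getValue().isPresent()) {
        view.put(e.getKey(), e.getValue().get()); // created/updated in cache
      } else {
        view.remove(e.getKey()); // tombstone: MPU was completed or aborted
      }
    }
    return view;
  }
}
```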

@apache apache deleted a comment from hadoop-yetus Sep 12, 2019
@elek (Member, Author) commented Sep 14, 2019

I have a few comments in place.
Can we open Jiras for all the TODOs in this Jira, for tracking purposes?

1. Audit support for the new method.

2. Pagination support and support for the other query parameters.

3. If replication type will not be handled in this Jira, please open a Jira for that one too.

Sure. Audit + replication type support are added to this patch. I created HDDS-2130 for the pagination support.

@hadoop-yetus commented:

💔 -1 overall

Vote Subsystem Runtime Comment
0 reexec 1208 Docker mode activated.
_ Prechecks _
+1 dupname 1 No case conflicting files found.
+1 @author 0 The patch does not contain any @author tags.
+1 test4tests 0 The patch appears to include 3 new or modified test files.
_ trunk Compile Tests _
0 mvndep 77 Maven dependency ordering for branch
-1 mvninstall 32 hadoop-ozone in trunk failed.
-1 compile 19 hadoop-ozone in trunk failed.
+1 checkstyle 100 trunk passed
+1 mvnsite 0 trunk passed
+1 shadedclient 937 branch has no errors when building and testing our client artifacts.
-1 javadoc 13 hadoop-hdds in trunk failed.
-1 javadoc 15 hadoop-ozone in trunk failed.
0 spotbugs 159 Used deprecated FindBugs config; considering switching to SpotBugs.
-1 findbugs 23 hadoop-ozone in trunk failed.
_ Patch Compile Tests _
0 mvndep 23 Maven dependency ordering for patch
-1 mvninstall 31 hadoop-ozone in the patch failed.
-1 compile 22 hadoop-ozone in the patch failed.
-1 cc 22 hadoop-ozone in the patch failed.
-1 javac 22 hadoop-ozone in the patch failed.
-0 checkstyle 24 hadoop-hdds: The patch generated 5 new + 9 unchanged - 1 fixed = 14 total (was 10)
-0 checkstyle 71 hadoop-ozone: The patch generated 373 new + 2410 unchanged - 15 fixed = 2783 total (was 2425)
+1 mvnsite 0 the patch passed
+1 whitespace 1 The patch has no whitespace issues.
+1 shadedclient 726 patch has no errors when building and testing our client artifacts.
-1 javadoc 14 hadoop-hdds in the patch failed.
-1 javadoc 15 hadoop-ozone in the patch failed.
-1 findbugs 23 hadoop-ozone in the patch failed.
_ Other Tests _
-1 unit 163 hadoop-hdds in the patch failed.
-1 unit 25 hadoop-ozone in the patch failed.
+1 asflicense 29 The patch does not generate ASF License warnings.
4264
Reason Tests
Failed junit tests hadoop.ozone.container.ozoneimpl.TestOzoneContainer
hadoop.ozone.container.keyvalue.TestKeyValueContainer
Subsystem Report/Notes
Docker Client=19.03.2 Server=19.03.2 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/11/artifact/out/Dockerfile
GITHUB PR #1277
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc
uname Linux 86494a8d4639 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality personality/hadoop.sh
git revision trunk / 6a9f7ca
Default Java 1.8.0_222
mvninstall https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/11/artifact/out/branch-mvninstall-hadoop-ozone.txt
compile https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/11/artifact/out/branch-compile-hadoop-ozone.txt
javadoc https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/11/artifact/out/branch-javadoc-hadoop-hdds.txt
javadoc https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/11/artifact/out/branch-javadoc-hadoop-ozone.txt
findbugs https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/11/artifact/out/branch-findbugs-hadoop-ozone.txt
mvninstall https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/11/artifact/out/patch-mvninstall-hadoop-ozone.txt
compile https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/11/artifact/out/patch-compile-hadoop-ozone.txt
cc https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/11/artifact/out/patch-compile-hadoop-ozone.txt
javac https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/11/artifact/out/patch-compile-hadoop-ozone.txt
checkstyle https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/11/artifact/out/diff-checkstyle-hadoop-hdds.txt
checkstyle https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/11/artifact/out/diff-checkstyle-hadoop-ozone.txt
javadoc https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/11/artifact/out/patch-javadoc-hadoop-hdds.txt
javadoc https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/11/artifact/out/patch-javadoc-hadoop-ozone.txt
findbugs https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/11/artifact/out/patch-findbugs-hadoop-ozone.txt
unit https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/11/artifact/out/patch-unit-hadoop-hdds.txt
unit https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/11/artifact/out/patch-unit-hadoop-ozone.txt
Test Results https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/11/testReport/
Max. process+thread count 306 (vs. ulimit of 5500)
modules C: hadoop-hdds/common hadoop-ozone/client hadoop-ozone/common hadoop-ozone/dist hadoop-ozone/ozone-manager hadoop-ozone/s3gateway U: .
Console output https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/11/console
versions git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1
Powered by Apache Yetus 0.10.0 http://yetus.apache.org

This message was automatically generated.

@elek (Member, Author) commented Sep 16, 2019

/retest

@elek (Member, Author) commented Sep 16, 2019

/retest

@elek (Member, Author) commented Sep 16, 2019

/retest

public static S3StorageType fromReplicationType(
    ReplicationType replicationType) {
  if (replicationType == ReplicationType.STAND_ALONE) {
    return S3StorageType.REDUCED_REDUNDANCY;
Contributor:

Just a question; I think this predates even this patch.
Previously we used STAND_ALONE for replication factor one; now we use RATIS with factors one and three. So I think we need to change this code, right?

Member Author:

Sounds reasonable, but it requires a change to the protobuf (the factor is not available from the message). It seemed to be a bigger change, but I added it to this patch (sorry, now it's another commit to review :-P).
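The factor-based mapping discussed here might look something like the sketch below. The enum values and the method signature are illustrative assumptions, not necessarily the committed code: RATIS with factor ONE takes over the old STAND_ALONE role and maps to REDUCED_REDUNDANCY, while factor THREE maps to STANDARD.

```java
// Hedged sketch: derive the S3 StorageClass from the replication factor
// instead of relying only on the legacy STAND_ALONE type. Names illustrative.
class StorageTypeSketch {
  enum ReplicationType { RATIS, STAND_ALONE }
  enum ReplicationFactor { ONE, THREE }
  enum S3StorageType { STANDARD, REDUCED_REDUNDANCY }

  static S3StorageType fromTypeAndFactor(ReplicationType type,
      ReplicationFactor factor) {
    // Factor one (or the legacy STAND_ALONE type) means reduced redundancy;
    // everything else keeps the STANDARD mapping.
    if (type == ReplicationType.STAND_ALONE
        || factor == ReplicationFactor.ONE) {
      return S3StorageType.REDUCED_REDUNDANCY;
    }
    return S3StorageType.STANDARD;
  }
}
```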

@bharatviswa504 (Contributor) left a comment:

Thank you @elek for the updated PR.
Mostly looks good to me; a few minor comments/questions.
Also, there are a few acceptance test failures. They mostly look unrelated, but can you verify them?

@bharatviswa504 (Contributor) commented:

/retest

int nextMarker, boolean truncate) {
  this.replicationType = type;
  this.replicationFactor = factor;


whitespace:end of line

@hadoop-yetus commented:

💔 -1 overall

Vote Subsystem Runtime Comment
0 reexec 43 Docker mode activated.
_ Prechecks _
+1 dupname 1 No case conflicting files found.
+1 @author 0 The patch does not contain any @author tags.
+1 test4tests 0 The patch appears to include 4 new or modified test files.
_ trunk Compile Tests _
0 mvndep 48 Maven dependency ordering for branch
-1 mvninstall 33 hadoop-ozone in trunk failed.
-1 compile 23 hadoop-ozone in trunk failed.
+1 checkstyle 124 trunk passed
+1 mvnsite 0 trunk passed
+1 shadedclient 1003 branch has no errors when building and testing our client artifacts.
+1 javadoc 164 trunk passed
0 spotbugs 184 Used deprecated FindBugs config; considering switching to SpotBugs.
-1 findbugs 26 hadoop-ozone in trunk failed.
_ Patch Compile Tests _
0 mvndep 32 Maven dependency ordering for patch
-1 mvninstall 35 hadoop-ozone in the patch failed.
-1 compile 27 hadoop-ozone in the patch failed.
-1 cc 27 hadoop-ozone in the patch failed.
-1 javac 27 hadoop-ozone in the patch failed.
-0 checkstyle 31 hadoop-hdds: The patch generated 5 new + 9 unchanged - 1 fixed = 14 total (was 10)
-0 checkstyle 97 hadoop-ozone: The patch generated 448 new + 2401 unchanged - 92 fixed = 2849 total (was 2493)
+1 mvnsite 0 the patch passed
-1 whitespace 0 The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply
+1 shadedclient 765 patch has no errors when building and testing our client artifacts.
-1 javadoc 91 hadoop-ozone generated 9 new + 249 unchanged - 7 fixed = 258 total (was 256)
-1 findbugs 28 hadoop-ozone in the patch failed.
_ Other Tests _
-1 unit 247 hadoop-hdds in the patch failed.
-1 unit 31 hadoop-ozone in the patch failed.
+1 asflicense 35 The patch does not generate ASF License warnings.
3706
Reason Tests
Failed junit tests hadoop.hdds.scm.container.placement.algorithms.TestSCMContainerPlacementRackAware
Subsystem Report/Notes
Docker Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/12/artifact/out/Dockerfile
GITHUB PR #1277
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc
uname Linux 4392a4c17837 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality personality/hadoop.sh
git revision trunk / c474e24
Default Java 1.8.0_222
mvninstall https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/12/artifact/out/branch-mvninstall-hadoop-ozone.txt
compile https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/12/artifact/out/branch-compile-hadoop-ozone.txt
findbugs https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/12/artifact/out/branch-findbugs-hadoop-ozone.txt
mvninstall https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/12/artifact/out/patch-mvninstall-hadoop-ozone.txt
compile https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/12/artifact/out/patch-compile-hadoop-ozone.txt
cc https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/12/artifact/out/patch-compile-hadoop-ozone.txt
javac https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/12/artifact/out/patch-compile-hadoop-ozone.txt
checkstyle https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/12/artifact/out/diff-checkstyle-hadoop-hdds.txt
checkstyle https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/12/artifact/out/diff-checkstyle-hadoop-ozone.txt
whitespace https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/12/artifact/out/whitespace-eol.txt
javadoc https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/12/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
findbugs https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/12/artifact/out/patch-findbugs-hadoop-ozone.txt
unit https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/12/artifact/out/patch-unit-hadoop-hdds.txt
unit https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/12/artifact/out/patch-unit-hadoop-ozone.txt
Test Results https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/12/testReport/
Max. process+thread count 534 (vs. ulimit of 5500)
modules C: hadoop-hdds/common hadoop-ozone/client hadoop-ozone/common hadoop-ozone/dist hadoop-ozone/ozone-manager hadoop-ozone/s3gateway U: .
Console output https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/12/console
versions git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1
Powered by Apache Yetus 0.10.0 http://yetus.apache.org

This message was automatically generated.

@bharatviswa504 (Contributor) left a comment:

LGTM. I have one minor comment.

@hadoop-yetus commented:

💔 -1 overall

Vote Subsystem Runtime Comment
0 reexec 3434 Docker mode activated.
_ Prechecks _
+1 dupname 2 No case conflicting files found.
+1 @author 0 The patch does not contain any @author tags.
+1 test4tests 0 The patch appears to include 4 new or modified test files.
_ trunk Compile Tests _
0 mvndep 44 Maven dependency ordering for branch
-1 mvninstall 31 hadoop-ozone in trunk failed.
-1 compile 22 hadoop-ozone in trunk failed.
+1 checkstyle 125 trunk passed
+1 mvnsite 0 trunk passed
+1 shadedclient 897 branch has no errors when building and testing our client artifacts.
-1 javadoc 47 hadoop-ozone in trunk failed.
0 spotbugs 164 Used deprecated FindBugs config; considering switching to SpotBugs.
-1 findbugs 27 hadoop-ozone in trunk failed.
_ Patch Compile Tests _
0 mvndep 29 Maven dependency ordering for patch
-1 mvninstall 33 hadoop-ozone in the patch failed.
-1 compile 25 hadoop-ozone in the patch failed.
-1 cc 25 hadoop-ozone in the patch failed.
-1 javac 25 hadoop-ozone in the patch failed.
-0 checkstyle 29 hadoop-hdds: The patch generated 6 new + 8 unchanged - 2 fixed = 14 total (was 10)
-0 checkstyle 96 hadoop-ozone: The patch generated 370 new + 2478 unchanged - 15 fixed = 2848 total (was 2493)
+1 mvnsite 0 the patch passed
+1 whitespace 0 The patch has no whitespace issues.
+1 shadedclient 660 patch has no errors when building and testing our client artifacts.
-1 javadoc 51 hadoop-ozone in the patch failed.
-1 findbugs 27 hadoop-ozone in the patch failed.
_ Other Tests _
+1 unit 249 hadoop-hdds in the patch passed.
-1 unit 29 hadoop-ozone in the patch failed.
+1 asflicense 32 The patch does not generate ASF License warnings.
6687
Subsystem Report/Notes
Docker Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/13/artifact/out/Dockerfile
GITHUB PR #1277
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc
uname Linux 68f3b34ec47c 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality personality/hadoop.sh
git revision trunk / 1029060
Default Java 1.8.0_222
mvninstall https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/13/artifact/out/branch-mvninstall-hadoop-ozone.txt
compile https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/13/artifact/out/branch-compile-hadoop-ozone.txt
javadoc https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/13/artifact/out/branch-javadoc-hadoop-ozone.txt
findbugs https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/13/artifact/out/branch-findbugs-hadoop-ozone.txt
mvninstall https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/13/artifact/out/patch-mvninstall-hadoop-ozone.txt
compile https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/13/artifact/out/patch-compile-hadoop-ozone.txt
cc https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/13/artifact/out/patch-compile-hadoop-ozone.txt
javac https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/13/artifact/out/patch-compile-hadoop-ozone.txt
checkstyle https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/13/artifact/out/diff-checkstyle-hadoop-hdds.txt
checkstyle https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/13/artifact/out/diff-checkstyle-hadoop-ozone.txt
javadoc https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/13/artifact/out/patch-javadoc-hadoop-ozone.txt
findbugs https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/13/artifact/out/patch-findbugs-hadoop-ozone.txt
unit https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/13/artifact/out/patch-unit-hadoop-ozone.txt
Test Results https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/13/testReport/
Max. process+thread count 498 (vs. ulimit of 5500)
modules C: hadoop-hdds/common hadoop-ozone/client hadoop-ozone/common hadoop-ozone/dist hadoop-ozone/ozone-manager hadoop-ozone/s3gateway U: .
Console output https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/13/console
versions git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1
Powered by Apache Yetus 0.10.0 http://yetus.apache.org

This message was automatically generated.

@bharatviswa504 (Contributor) commented:

+1 LGTM.
Thank You @elek for the contribution.
I will commit this to the trunk.

@bharatviswa504 bharatviswa504 merged commit da1c67e into apache:trunk Sep 19, 2019
amahussein pushed a commit to amahussein/hadoop that referenced this pull request Oct 29, 2019
RogPodge pushed a commit to RogPodge/hadoop that referenced this pull request Mar 25, 2020