
HADOOP-18344. Upgrade AWS SDK to 1.12.262 #4637

Merged

Conversation

steveloughran
Contributor

@steveloughran steveloughran commented Jul 26, 2022

Description of PR

How was this patch tested?

Object store test run in progress. The first run had two failures I'd not seen before:

  • landsat test timeout
  • OOM in a test writing data. I'd set my connector to use bytebuffer for block buffering and it ran out. But why now? Is it the SDK update, or just upload delays triggering it?

@steveloughran
Contributor Author

Test failures

[ERROR] Errors: 
[ERROR]   ITestS3AFailureHandling.testMultiObjectDeleteLargeNumKeys » OutOfMemory Direct...
[ERROR]   ITestS3ADeleteFilesOneByOne.testBulkRenameAndDelete » OutOfMemory Direct buffe...
[ERROR]   ITestS3SelectLandsat.testSelectSeekFullLandsat:427->AbstractS3SelectTest.seek:701 » TestTimedOut
[INFO] 
  • OOM in HADOOP-17937; ITestS3ADeleteFilesOneByOne.testBulkRenameAndDelete. This shows we would benefit from lazy creation of buffers.
  • ITestS3AStorageClass.testCreateAndCopyObjectWithStorageClassGlacier
[ERROR] Failures: 
[ERROR]   ITestS3AStorageClass.testCreateAndCopyObjectWithStorageClassGlacier:129->assertObjectHasStorageClass:215 [Storage class of object s3a://stevel-london/fork-0001/test/testCreateAndCopyObjectWithStorageClassGlacier/file1] 
Expecting:
 <null>
to be equal to:
 <"glacier">
ignoring case considerations
[ERROR]   ITestS3AStorageClass.testCreateAndCopyObjectWithStorageClassReducedRedundancy:104->assertObjectHasStorageClass:215 [Storage class of object s3a://stevel-london/fork-0001/test/testCreateAndCopyObjectWithStorageClassReducedRedundancy/file1] 
Expecting:
 <null>
to be equal to:
 <"reduced_redundancy">
ignoring case considerations

I will need to see if this is a regression. I believe I have disabled setting the storage class.

* [ERROR] testSelectSeekFullLandsat(org.apache.hadoop.fs.s3a.select.ITestS3SelectLandsat)  Time elapsed: 600.013 s  <<< ERROR!
org.junit.runners.model.TestTimedOutException: test timed out after 600000 milliseconds
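The "lazy creation of buffers" idea above can be sketched roughly. This is an illustrative stand-alone example, not the S3A connector's actual block classes (the class and method names here are hypothetical): allocate the direct ByteBuffer only on first write, so blocks that are created but never written consume no direct memory.

```java
import java.nio.ByteBuffer;

/**
 * Sketch of lazy direct-buffer allocation. The buffer is allocated on
 * first write rather than eagerly at block creation, so idle blocks
 * hold no direct memory. Names are hypothetical, not S3A's own.
 */
public class LazyBlockBuffer {
    private final int capacity;
    private ByteBuffer buffer; // null until first write

    public LazyBlockBuffer(int capacity) {
        this.capacity = capacity;
    }

    /** Allocate on demand instead of in the constructor. */
    private ByteBuffer buffer() {
        if (buffer == null) {
            buffer = ByteBuffer.allocateDirect(capacity);
        }
        return buffer;
    }

    public boolean isAllocated() {
        return buffer != null;
    }

    public void write(byte[] data, int off, int len) {
        buffer().put(data, off, len);
    }

    public int dataSize() {
        return buffer == null ? 0 : buffer.position();
    }

    public static void main(String[] args) {
        LazyBlockBuffer block = new LazyBlockBuffer(1024);
        System.out.println("allocated before write: " + block.isAllocated());
        block.write(new byte[]{1, 2, 3}, 0, 3);
        System.out.println("allocated after write: " + block.isAllocated());
        System.out.println("data size: " + block.dataSize());
    }
}
```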

@steveloughran
Contributor Author

CLI tests good, but they are all logging a warning about an unset storage class. Filed HADOOP-18371.

@steveloughran
Contributor Author

Setup failure in ILoadTestS3ABulkDeleteThrottling; filed HADOOP-18372.

[INFO] Running org.apache.hadoop.fs.s3a.scale.ILoadTestS3ABulkDeleteThrottling
[ERROR] Tests run: 12, Failures: 12, Errors: 0, Skipped: 0, Time elapsed: 10.114 s <<< FAILURE! - in org.apache.hadoop.fs.s3a.scale.ILoadTestS3ABulkDeleteThrottling
[ERROR] test_010_Reset[bulk-delete-aws-retry=false-requests=100-size=250](org.apache.hadoop.fs.s3a.scale.ILoadTestS3ABulkDeleteThrottling)  Time elapsed: 2.291 s  <<< FAILURE!
java.lang.AssertionError: 
[page size] 
Expecting actual not to be null
        at org.apache.hadoop.fs.s3a.scale.ILoadTestS3ABulkDeleteThrottling.setup(ILoadTestS3ABulkDeleteThrottling.java:175)

@steveloughran
Contributor Author

The test failures occur on trunk even without the SDK upgrade:

[ERROR] Failures: 
[ERROR]   ITestS3AStorageClass.testCreateAndCopyObjectWithStorageClassGlacier:129->assertObjectHasStorageClass:215 [Storage class of object s3a://stevel-london/fork-0003/test/testCreateAndCopyObjectWithStorageClassGlacier/file1] 
Expecting:
 <null>
to be equal to:
 <"glacier">
ignoring case considerations
[ERROR]   ITestS3AStorageClass.testCreateAndCopyObjectWithStorageClassReducedRedundancy:104->assertObjectHasStorageClass:215 [Storage class of object s3a://stevel-london/fork-0003/test/testCreateAndCopyObjectWithStorageClassReducedRedundancy/file1] 
Expecting:
 <null>
to be equal to:
 <"reduced_redundancy">
ignoring case considerations
[ERROR] Errors: 
[ERROR]   ITestS3ADeleteFilesOneByOne.testBulkRenameAndDelete » OutOfMemory Direct buffe...
[ERROR]   ITestS3SelectLandsat.testSelectSeekFullLandsat:427->AbstractS3SelectTest.seek:701 » TestTimedOut
[INFO] 

@ahmarsuhail
Contributor

ILoadTestS3ABulkDeleteThrottling (also failing without the upgrade) is fixed in #4642

@steveloughran
Contributor Author

The storage class tests all pass when I switch away from bytebuffer as the upload buffering; the OOM goes away too.
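For reference, the buffering mode in question is controlled by the documented S3A option `fs.s3a.fast.upload.buffer`; a minimal config fragment switching off direct ByteBuffers (values `disk`, `array`, and `bytebuffer` are the documented choices; `disk` is the default):

```xml
<!-- Buffer blocks to local disk rather than direct ByteBuffers. -->
<property>
  <name>fs.s3a.fast.upload.buffer</name>
  <value>disk</value>
</property>
```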

@steveloughran
Contributor Author

steveloughran commented Jul 27, 2022

CSE-KMS mode is failing, but as I've not tried it before, I can't blame the SDK.

bin/hadoop fs -copyFromLocal -t 10  share/hadoop/tools/lib/hadoop-openstack-3.4.0-SNAPSHOT.jar $BUCKET/

2022-07-27 15:12:50,709 [main] WARN  s3a.S3AFileSystem (S3AFileSystem.java:createRequestFactory(1004)) - Unknown storage class property fs.s3a.create.storage.class: ; falling back to default storage class
2022-07-27 15:12:51,258 [main] WARN  s3.AmazonS3EncryptionClientV2 (AmazonS3EncryptionClientV2.java:warnOnLegacyCryptoMode(409)) - The S3 Encryption Client is configured to read encrypted data with legacy encryption modes through the CryptoMode setting. If you don't have objects encrypted with these legacy modes, you should disable support for them to enhance security. See https://docs.aws.amazon.com/general/latest/gr/aws_sdk_cryptography.html
2022-07-27 15:12:51,258 [main] WARN  s3.AmazonS3EncryptionClientV2 (AmazonS3EncryptionClientV2.java:warnOnRangeGetsEnabled(401)) - The S3 Encryption Client is configured to support range get requests. Range gets do not provide authenticated encryption properties even when used with an authenticated mode (AES-GCM). See https://docs.aws.amazon.com/general/latest/gr/aws_sdk_cryptography.html
2022-07-27 15:12:51,259 [main] INFO  s3a.DefaultS3ClientFactory (LogExactlyOnce.java:info(44)) - S3 client-side encryption enabled: Ignore S3-CSE Warnings.
2022-07-27 15:12:51,269 [main] INFO  impl.DirectoryPolicyImpl (DirectoryPolicyImpl.java:getDirectoryPolicy(189)) - Directory markers will be kept
2022-07-27 15:12:52,738 [main] WARN  s3a.S3AInstrumentation (S3AInstrumentation.java:close(1532)) - Closing output stream statistics while data is still marked as pending upload in OutputStreamStatistics{counters=((stream_write_total_time=0) (object_put_request=0) (stream_write_queue_duration=0) (object_multipart_initiated=0) (stream_write_bytes=125289) (stream_write_block_uploads=1) (stream_write_exceptions_completing_upload=0) (object_multipart_aborted.failures=0) (multipart_upload_completed.failures=0) (committer_magic_marker_put.failures=0) (op_abort.failures=0) (committer_magic_marker_put=0) (object_put_request.failures=0) (action_executor_acquired.failures=0) (op_abort=0) (object_multipart_initiated.failures=0) (op_hsync=0) (object_multipart_aborted=0) (action_executor_acquired=1) (stream_write_total_data=169) (multipart_upload_completed=0) (stream_write_exceptions=0) (op_hflush=0));
gauges=((stream_write_block_uploads_pending=1) (stream_write_block_uploads_data_pending=125120));
minimums=((action_executor_acquired.failures.min=-1) (object_multipart_initiated.min=-1) (committer_magic_marker_put.min=-1) (object_multipart_aborted.min=-1) (op_abort.min=-1) (multipart_upload_completed.failures.min=-1) (multipart_upload_completed.min=-1) (object_put_request.failures.min=-1) (object_multipart_aborted.failures.min=-1) (committer_magic_marker_put.failures.min=-1) (op_abort.failures.min=-1) (object_multipart_initiated.failures.min=-1) (action_executor_acquired.min=0) (object_put_request.min=-1));
maximums=((op_abort.max=-1) (object_multipart_aborted.max=-1) (object_multipart_initiated.failures.max=-1) (committer_magic_marker_put.failures.max=-1) (object_multipart_initiated.max=-1) (multipart_upload_completed.max=-1) (action_executor_acquired.max=0) (multipart_upload_completed.failures.max=-1) (object_put_request.max=-1) (op_abort.failures.max=-1) (object_multipart_aborted.failures.max=-1) (action_executor_acquired.failures.max=-1) (committer_magic_marker_put.max=-1) (object_put_request.failures.max=-1));
means=((object_multipart_initiated.failures.mean=(samples=0, sum=0, mean=0.0000)) (committer_magic_marker_put.failures.mean=(samples=0, sum=0, mean=0.0000)) (object_put_request.mean=(samples=0, sum=0, mean=0.0000)) (multipart_upload_completed.mean=(samples=0, sum=0, mean=0.0000)) (committer_magic_marker_put.mean=(samples=0, sum=0, mean=0.0000)) (op_abort.failures.mean=(samples=0, sum=0, mean=0.0000)) (action_executor_acquired.failures.mean=(samples=0, sum=0, mean=0.0000)) (object_put_request.failures.mean=(samples=0, sum=0, mean=0.0000)) (multipart_upload_completed.failures.mean=(samples=0, sum=0, mean=0.0000)) (object_multipart_aborted.mean=(samples=0, sum=0, mean=0.0000)) (object_multipart_aborted.failures.mean=(samples=0, sum=0, mean=0.0000)) (object_multipart_initiated.mean=(samples=0, sum=0, mean=0.0000)) (op_abort.mean=(samples=0, sum=0, mean=0.0000)) (action_executor_acquired.mean=(samples=1, sum=0, mean=0.0000)));
, blocksActive=0, blockUploadsCompleted=0, blocksAllocated=1, blocksReleased=1, blocksActivelyAllocated=0, transferDuration=0 ms, totalUploadDuration=0 ms, effectiveBandwidth=0.0 bytes/s}
2022-07-27 15:12:52,819 [main] DEBUG shell.Command (Command.java:displayError(476)) - copyFromLocal failure
org.apache.hadoop.fs.s3a.AWSBadRequestException: Writing Object on hadoop-openstack-3.4.0-SNAPSHOT.jar._COPYING_: com.amazonaws.services.kms.model.NotFoundException: Invalid arn eu-west-2 (Service: AWSKMS; Status Code: 400; Error Code: NotFoundException; Request ID: dcedb31f-e4aa-4a70-8321-d131211558c8; Proxy: null):NotFoundException: Invalid arn eu-west-2 (Service: AWSKMS; Status Code: 400; Error Code: NotFoundException; Request ID: dcedb31f-e4aa-4a70-8321-d131211558c8; Proxy: null)
        at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:241)
        at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:124)
        at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$4(Invoker.java:376)
        at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:468)
        at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:372)
        at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:347)
        at org.apache.hadoop.fs.s3a.WriteOperationHelper.retry(WriteOperationHelper.java:205)
        at org.apache.hadoop.fs.s3a.WriteOperationHelper.putObject(WriteOperationHelper.java:570)
        at org.apache.hadoop.fs.s3a.S3ABlockOutputStream.lambda$putObject$0(S3ABlockOutputStream.java:595)
        at org.apache.hadoop.thirdparty.com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
        at org.apache.hadoop.thirdparty.com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:69)
        at org.apache.hadoop.thirdparty.com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
        at org.apache.hadoop.util.SemaphoredDelegatingExecutor$RunnableWithPermitRelease.run(SemaphoredDelegatingExecutor.java:225)
        at org.apache.hadoop.util.SemaphoredDelegatingExecutor$RunnableWithPermitRelease.run(SemaphoredDelegatingExecutor.java:225)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:750)
Caused by: com.amazonaws.services.kms.model.NotFoundException: Invalid arn eu-west-2 (Service: AWSKMS; Status Code: 400; Error Code: NotFoundException; Request ID: dcedb31f-e4aa-4a70-8321-d131211558c8; Proxy: null)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1879)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleServiceErrorResponse(AmazonHttpClient.java:1418)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1387)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1157)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:814)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:781)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:755)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:715)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:697)
        at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:561)
        at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:541)
        at com.amazonaws.services.kms.AWSKMSClient.doInvoke(AWSKMSClient.java:8153)
        at com.amazonaws.services.kms.AWSKMSClient.invoke(AWSKMSClient.java:8120)
        at com.amazonaws.services.kms.AWSKMSClient.invoke(AWSKMSClient.java:8109)
        at com.amazonaws.services.kms.AWSKMSClient.executeGenerateDataKey(AWSKMSClient.java:3624)
        at com.amazonaws.services.kms.AWSKMSClient.generateDataKey(AWSKMSClient.java:3593)
        at com.amazonaws.services.s3.internal.crypto.v2.S3CryptoModuleBase.buildContentCryptoMaterial(S3CryptoModuleBase.java:533)
        at com.amazonaws.services.s3.internal.crypto.v2.S3CryptoModuleBase.newContentCryptoMaterial(S3CryptoModuleBase.java:481)
        at com.amazonaws.services.s3.internal.crypto.v2.S3CryptoModuleBase.createContentCryptoMaterial(S3CryptoModuleBase.java:447)
        at com.amazonaws.services.s3.internal.crypto.v2.S3CryptoModuleBase.putObjectUsingMetadata(S3CryptoModuleBase.java:160)
        at com.amazonaws.services.s3.internal.crypto.v2.S3CryptoModuleBase.putObjectSecurely(S3CryptoModuleBase.java:156)
        at com.amazonaws.services.s3.AmazonS3EncryptionClientV2.putObject(AmazonS3EncryptionClientV2.java:236)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$putObjectDirect$18(S3AFileSystem.java:2837)
        at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDurationOfSupplier(IOStatisticsBinding.java:651)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.putObjectDirect(S3AFileSystem.java:2834)
        at org.apache.hadoop.fs.s3a.WriteOperationHelper.lambda$putObject$7(WriteOperationHelper.java:573)
        at org.apache.hadoop.fs.store.audit.AuditingFunctions.lambda$withinAuditSpan$0(AuditingFunctions.java:62)
        at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:122)
        ... 15 more
copyFromLocal: Writing Object on hadoop-openstack-3.4.0-SNAPSHOT.jar._COPYING_: com.amazonaws.services.kms.model.NotFoundException: Invalid arn eu-west-2 (Service: AWSKMS; Status Code: 400; Error Code: NotFoundException; Request ID: dcedb31f-e4aa-4a70-8321-d131211558c8; Proxy: null):NotFoundException: Invalid arn eu-west-2 (Service: AWSKMS; Status Code: 400; Error Code: NotFoundException; Request ID: dcedb31f-e4aa-4a70-8321-d131211558c8; Proxy: null)

SSE-KMS works, so it could be key config/permissions.
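One plausible reading of the `Invalid arn eu-west-2` error above is that the encryption key setting held only a region string rather than a full KMS key ARN. A hedged sketch of what a working CSE-KMS config would look like, using the documented `fs.s3a.encryption.*` options (`ACCOUNT-ID` and `KEY-ID` are placeholders):

```xml
<!-- CSE-KMS wants a resolvable KMS key, e.g. a full key ARN,
     not a bare region name. Placeholders: ACCOUNT-ID, KEY-ID. -->
<property>
  <name>fs.s3a.encryption.algorithm</name>
  <value>CSE-KMS</value>
</property>
<property>
  <name>fs.s3a.encryption.key</name>
  <value>arn:aws:kms:eu-west-2:ACCOUNT-ID:key/KEY-ID</value>
</property>
```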

Fixes CVE-2018-7489 in shaded jackson.

+Add more commands in testing.md
 to the CLI tests needed when qualifying
 a release

Change-Id: If8020ad581d290f0cbe322184e860b1e3f4aeffe
@hadoop-yetus

💔 -1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 1m 3s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 0s No case conflicting files found.
+0 🆗 codespell 0m 1s codespell was not available.
+0 🆗 detsecrets 0m 1s detect-secrets was not available.
+0 🆗 xmllint 0m 1s xmllint was not available.
+0 🆗 markdownlint 0m 1s markdownlint was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
-1 ❌ test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
_ trunk Compile Tests _
+0 🆗 mvndep 14m 52s Maven dependency ordering for branch
+1 💚 mvninstall 28m 10s trunk passed
+1 💚 compile 24m 36s trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
+1 💚 compile 20m 46s trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 mvnsite 2m 41s trunk passed
+1 💚 javadoc 2m 24s trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
+1 💚 javadoc 2m 27s trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 shadedclient 117m 43s branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+0 🆗 mvndep 0m 50s Maven dependency ordering for patch
+1 💚 mvninstall 1m 0s the patch passed
+1 💚 compile 24m 6s the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
+1 💚 javac 24m 6s the patch passed
+1 💚 compile 22m 46s the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 javac 22m 46s the patch passed
-1 ❌ blanks 0m 0s /blanks-eol.txt The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply
+1 💚 mvnsite 2m 19s the patch passed
+1 💚 javadoc 1m 57s the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
+1 💚 javadoc 2m 9s the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 shadedclient 32m 22s patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 💚 unit 1m 3s hadoop-project in the patch passed.
+1 💚 unit 3m 21s hadoop-aws in the patch passed.
+1 💚 asflicense 1m 17s The patch does not generate ASF License warnings.
207m 35s
Subsystem Report/Notes
Docker ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4637/2/artifact/out/Dockerfile
GITHUB PR #4637
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell detsecrets xmllint markdownlint
uname Linux 4d5009939df0 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / 9800dd2f1d5ee5d6fd728aae748f6dd7d7897faa
Default Java Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4637/2/testReport/
Max. process+thread count 719 (vs. ulimit of 5500)
modules C: hadoop-project hadoop-tools/hadoop-aws U: .
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4637/2/console
versions git=2.25.1 maven=3.6.3
Powered by Apache Yetus 0.14.0 https://yetus.apache.org

This message was automatically generated.

@apache apache deleted a comment from hadoop-yetus Jul 27, 2022
remove a space at an EOL

Change-Id: I819e210f988b58281440d6834aea4203e20ccee1
@mukund-thakur
Contributor

Okay, looks good. Running the tests; once complete, I will give a +1.

@hadoop-yetus

💔 -1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 1m 5s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 0s No case conflicting files found.
+0 🆗 codespell 0m 0s codespell was not available.
+0 🆗 detsecrets 0m 0s detect-secrets was not available.
+0 🆗 xmllint 0m 0s xmllint was not available.
+0 🆗 markdownlint 0m 0s markdownlint was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
-1 ❌ test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
_ trunk Compile Tests _
+0 🆗 mvndep 15m 4s Maven dependency ordering for branch
+1 💚 mvninstall 28m 2s trunk passed
+1 💚 compile 26m 24s trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
+1 💚 compile 21m 47s trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 mvnsite 2m 21s trunk passed
+1 💚 javadoc 1m 52s trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
+1 💚 javadoc 2m 9s trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 shadedclient 119m 5s branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+0 🆗 mvndep 0m 47s Maven dependency ordering for patch
+1 💚 mvninstall 1m 3s the patch passed
+1 💚 compile 22m 27s the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
+1 💚 javac 22m 27s the patch passed
+1 💚 compile 20m 55s the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 javac 20m 55s the patch passed
-1 ❌ blanks 0m 0s /blanks-eol.txt The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply
+1 💚 mvnsite 2m 39s the patch passed
+1 💚 javadoc 2m 21s the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
+1 💚 javadoc 2m 28s the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 shadedclient 33m 5s patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 💚 unit 1m 9s hadoop-project in the patch passed.
+1 💚 unit 3m 22s hadoop-aws in the patch passed.
+1 💚 asflicense 1m 37s The patch does not generate ASF License warnings.
206m 49s
Subsystem Report/Notes
Docker ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4637/3/artifact/out/Dockerfile
GITHUB PR #4637
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell detsecrets xmllint markdownlint
uname Linux 0a1dfd256d92 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / 7e1b620
Default Java Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4637/3/testReport/
Max. process+thread count 648 (vs. ulimit of 5500)
modules C: hadoop-project hadoop-tools/hadoop-aws U: .
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4637/3/console
versions git=2.25.1 maven=3.6.3
Powered by Apache Yetus 0.14.0 https://yetus.apache.org

This message was automatically generated.

Contributor

@mukund-thakur mukund-thakur left a comment


LGTM +1, test run successful with my configs in us-west-1.

# expect the iostatistics object_list_request value to be O(directories)
bin/hadoop fs -ls -R $BUCKET/

# expect the iostatistics object_list_request and op_get_content_summary values to be 1
Contributor


Yes, there is an extra space here at the end, as pointed out by Yetus.

Contributor Author


Fixed that last night; will push and then merge.

@steveloughran steveloughran merged commit 58ed621 into apache:trunk Jul 28, 2022
asfgit pushed a commit that referenced this pull request Jul 28, 2022
Update LICENSE-binary with the new AWS SDK version.
Followup to #4637.

Contributed by Steve Loughran
asfgit pushed a commit that referenced this pull request Jul 28, 2022
Fixes CVE-2018-7489 in shaded jackson.

+Add more commands in testing.md
 to the CLI tests needed when qualifying
 a release

Contributed by Steve Loughran
asfgit pushed a commit that referenced this pull request Jul 28, 2022
Fixes CVE-2018-7489 in shaded jackson.

+Add more commands in testing.md
 to the CLI tests needed when qualifying
 a release

Contributed by Steve Loughran
@steveloughran
Contributor Author

Aah, forgot that LICENSE-binary change. Pushed up a followup, and for the backports merged the two changes.

@hadoop-yetus

💔 -1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 0m 56s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 0s No case conflicting files found.
+0 🆗 codespell 0m 0s codespell was not available.
+0 🆗 detsecrets 0m 0s detect-secrets was not available.
+0 🆗 xmllint 0m 0s xmllint was not available.
+0 🆗 markdownlint 0m 0s markdownlint was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
-1 ❌ test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
_ trunk Compile Tests _
+0 🆗 mvndep 15m 18s Maven dependency ordering for branch
+1 💚 mvninstall 26m 2s trunk passed
+1 💚 compile 24m 50s trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
+1 💚 compile 22m 13s trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 mvnsite 2m 36s trunk passed
+1 💚 javadoc 2m 23s trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
+1 💚 javadoc 2m 8s trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 shadedclient 117m 3s branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+0 🆗 mvndep 0m 47s Maven dependency ordering for patch
+1 💚 mvninstall 0m 59s the patch passed
+1 💚 compile 25m 15s the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
+1 💚 javac 25m 15s the patch passed
+1 💚 compile 22m 42s the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 javac 22m 42s the patch passed
+1 💚 blanks 0m 0s The patch has no blanks issues.
+1 💚 mvnsite 2m 12s the patch passed
+1 💚 javadoc 1m 48s the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
+1 💚 javadoc 1m 59s the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 shadedclient 30m 32s patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 💚 unit 1m 8s hadoop-project in the patch passed.
+1 💚 unit 3m 20s hadoop-aws in the patch passed.
+1 💚 asflicense 1m 35s The patch does not generate ASF License warnings.
206m 30s
Subsystem Report/Notes
Docker ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4637/4/artifact/out/Dockerfile
GITHUB PR #4637
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell detsecrets xmllint markdownlint
uname Linux 58a8f285c8fd 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / e67b4e8
Default Java Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4637/4/testReport/
Max. process+thread count 698 (vs. ulimit of 5500)
modules C: hadoop-project hadoop-tools/hadoop-aws U: .
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4637/4/console
versions git=2.25.1 maven=3.6.3
Powered by Apache Yetus 0.14.0 https://yetus.apache.org

This message was automatically generated.

HarshitGupta11 pushed a commit to HarshitGupta11/hadoop that referenced this pull request Nov 28, 2022
Fixes CVE-2018-7489 in shaded jackson.

+Add more commands in testing.md
 to the CLI tests needed when qualifying
 a release

Contributed by Steve Loughran
HarshitGupta11 pushed a commit to HarshitGupta11/hadoop that referenced this pull request Nov 28, 2022
Update LICENSE-binary with the new AWS SDK version.
Followup to apache#4637.

Contributed by Steve Loughran