
HADOOP-18068. upgrade AWS SDK to 1.12.132 #3864

Merged
merged 2 commits into from
Jan 18, 2022

Conversation

@steveloughran (Contributor) commented Jan 5, 2022

Description of PR

Move to the latest AWS SDK.

How was this patch tested?

itests without S3Guard:

 -Dparallel-tests -DtestsThreadCount=6  -Dmarkers=keep -Dscale

full manual qualification as covered in testing doc
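For reference, a sketch of how those flags are typically passed, assuming the standard hadoop-aws integration-test workflow described in the testing doc (the module path and the auth-keys.xml bucket binding are assumptions, not stated in this PR):

```shell
# Hypothetical invocation: run the S3A integration tests with the
# flags quoted above. Assumes the target bucket is configured in
# hadoop-tools/hadoop-aws/src/test/resources/auth-keys.xml.
cd hadoop-tools/hadoop-aws
mvn verify -Dparallel-tests -DtestsThreadCount=6 -Dmarkers=keep -Dscale
```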

For code changes:

  • Does the title of this PR start with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
  • Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation?
  • If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under ASF 2.0?
  • If applicable, have you updated the LICENSE, LICENSE-binary, NOTICE-binary files?

@steveloughran (Contributor, author) commented

Yetus seems to have blocked; will rebase and resubmit.

@steveloughran (Contributor, author) commented

Dependencies

Dependencies look good; nothing unexpected is being pulled in.

[INFO] +- org.apache.hadoop:hadoop-common:test-jar:tests:3.4.0-SNAPSHOT:test
[INFO] +- com.amazonaws:aws-java-sdk-bundle:jar:1.12.132:compile
[INFO] +- org.assertj:assertj-core:jar:3.12.2:test

The bundle is now 264 MB, up from 226 MB. That is huge, but it lets us dodge all the classpath problems we used to get from version incompatibilities in things like JSON parsing, the HTTP client, etc.

Test run with -Dparallel-tests -DtestsThreadCount=6 -Dmarkers=keep -Dscale

ITestS3AInputStreamPerformance output shows a bug in the throughput calculation; no new errors from the SDK though.

2022-01-05 17:32:29,352 [JUnit-testReadWithNormalPolicy] INFO  scale.ITestS3AInputStreamPerformance (ITestS3AInputStreamPerformance.java:executeSeekReadSequence(437)) - Effective bandwidth 3.478002 MB/S

Test run with -Dparallel-tests -DtestsThreadCount=6 -Dmarkers=delete -Ddynamo -Ds3guard

[ERROR] Failures: 
[ERROR]   ITestS3AContractRootDir>AbstractContractRootDirectoryTest.testRecursiveRootListing:268->Assert.assertTrue:42->Assert.fail:89 treewalk vs listFiles(/, true) mismatch: between 
  "s3a://stevel-london/file.txt"
] and 
  "s3a://stevel-london/dir-no-trailing/file2"
  "s3a://stevel-london/file.txt"
  "s3a://stevel-london/p1/p2/file"
]
[INFO] 
[ERROR] Tests run: 151, Failures: 1, Errors: 0, Skipped: 82

Test run with -Dparallel-tests -DtestsThreadCount=6 -Dmarkers=delete -Ddynamo -Ds3guard -Dscale

This time ITestS3AContractRootDir was happy.

Manual command-line tests

All commands were executed against AWS London with no problems. As S3Guard was disabled, I skipped some of the S3Guard-specific commands.

~/P/R/hadoop-3.4.0-SNAPSHOT bin/hadoop s3guard bucket-info -markers aware $BUCKET
2022-01-06 13:37:48,612 [main] INFO  Configuration.deprecation (Configuration.java:logDeprecation(1459)) - fs.s3a.server-side-encryption.key is deprecated. Instead, use fs.s3a.encryption.key
2022-01-06 13:37:48,615 [main] INFO  Configuration.deprecation (Configuration.java:logDeprecation(1459)) - fs.s3a.server-side-encryption-algorithm is deprecated. Instead, use fs.s3a.encryption.algorithm
2022-01-06 13:37:49,759 [main] INFO  impl.DirectoryPolicyImpl (DirectoryPolicyImpl.java:getDirectoryPolicy(189)) - Directory markers will be kept
Filesystem s3a://stevel-london
Location: eu-west-2
Filesystem s3a://stevel-london is not using S3Guard

S3A Client
        Signing Algorithm: fs.s3a.signing-algorithm=(unset)
        Endpoint: fs.s3a.endpoint=(unset)
        Encryption: fs.s3a.encryption.algorithm=SSE-KMS
        Input seek policy: fs.s3a.experimental.input.fadvise=normal
        Change Detection Source: fs.s3a.change.detection.source=etag
        Change Detection Mode: fs.s3a.change.detection.mode=server

S3A Committers
        The "magic" committer is supported in the filesystem
        S3A Committer factory class: mapreduce.outputcommitter.factory.scheme.s3a=org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory
        S3A Committer name: fs.s3a.committer.name=magic
        Store magic committer integration: fs.s3a.committer.magic.enabled=true

Security
        Delegation token support is disabled

Security
        The directory marker policy is "keep"
        Available Policies: delete, keep, authoritative
        Authoritative paths: fs.s3a.authoritative.path=
        The S3A connector is compatible with buckets where directory markers are not deleted
2022-01-06 13:37:50,622 [main] INFO  statistics.IOStatisticsLogging (IOStatisticsLogging.java:logIOStatisticsAtLevel(269)) - IOStatistics: counters=((audit_request_execution=1)
(audit_span_creation=2)
(store_exists_probe=1)
(store_io_request=2));

gauges=();

minimums=((store_exists_probe.min=842));

maximums=((store_exists_probe.max=842));

means=((store_exists_probe.mean=(samples=1, sum=842, mean=842.0000)));

I'm using the old encryption properties because I still want test runs against older builds to be encrypted. Maybe I should set both the old and new properties just to get rid of the deprecation warning.
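A minimal sketch of what that would look like, assuming the property names from the deprecation warnings above (this fragment is illustrative, not part of the patch):

```xml
<!-- Hypothetical core-site.xml fragment: set both the deprecated and
     current encryption properties so old and new builds behave the same. -->
<property>
  <name>fs.s3a.server-side-encryption-algorithm</name>
  <value>SSE-KMS</value>
</property>
<property>
  <name>fs.s3a.encryption.algorithm</name>
  <value>SSE-KMS</value>
</property>
```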

And a storediag command:

 bin/hadoop jar $CLOUDSTORE storediag $BUCKET 

Store Diagnostics for stevel (auth:SIMPLE) on stevel-mbp1376/127.0.0.1
======================================================================


Diagnostics for filesystem s3a://stevel-london/
===============================================

S3A FileSystem Connector
ASF Filesystem Connector to Amazon S3 Storage and compatible stores
https://hadoop.apache.org/docs/current/hadoop-aws/tools/hadoop-aws/index.html

Hadoop information
==================

  Hadoop 3.4.0-SNAPSHOT
  Compiled by stevel on 2022-01-05T18:07Z
  Compiled with protoc 3.7.1
  From source with checksum e249b256a31f772e4220f92668721349

Determining OS version
======================

Darwin stevel-mbp1376 21.2.0 Darwin Kernel Version 21.2.0: Sun Nov 28 20:28:54 PST 2021; root:xnu-8019.61.5~1/RELEASE_X86_64 x86_64

Selected System Properties
==========================

aws.accessKeyId = (unset)
aws.secretKey = (unset)
aws.sessionToken = (unset)
aws.region = (unset)
com.amazonaws.regions.RegionUtils.fileOverride = (unset)
com.amazonaws.regions.RegionUtils.disableRemote = (unset)
com.amazonaws.sdk.disableCertChecking = (unset)
com.amazonaws.sdk.ec2MetadataServiceEndpointOverride = (unset)
com.amazonaws.sdk.enableDefaultMetrics = (unset)
com.amazonaws.sdk.enableInRegionOptimizedMode = (unset)
com.amazonaws.sdk.enableThrottledRetry = (unset)
com.amazonaws.services.s3.disableImplicitGlobalClients = (unset)
com.amazonaws.services.s3.enableV4 = (unset)
com.amazonaws.services.s3.enforceV4 = (unset)

Environment Variables
=====================

AWS_ACCESS_KEY_ID = (unset)
AWS_ACCESS_KEY = (unset)
AWS_SECRET_KEY = (unset)
AWS_SECRET_ACCESS_KEY = (unset)
AWS_SESSION_TOKEN = (unset)
AWS_REGION = (unset)
AWS_S3_US_EAST_1_REGIONAL_ENDPOINT = (unset)
AWS_CBOR_DISABLE = (unset)
AWS_CONTAINER_CREDENTIALS_RELATIVE_URI = (unset)
AWS_CONTAINER_CREDENTIALS_FULL_URI = (unset)
AWS_CONTAINER_AUTHORIZATION_TOKEN = (unset)
AWS_EC2_METADATA_DISABLED = "true"
AWS_EC2_METADATA_SERVICE_ENDPOINT = (unset)
AWS_MAX_ATTEMPTS = (unset)
AWS_RETRY_MODE = (unset)
HADOOP_CONF_DIR = "/Users/stevel/Projects/Releases/hadoop-3.4.0-SNAPSHOT/etc/hadoop"
HADOOP_CREDSTORE_PASSWORD = (unset)
HADOOP_HEAPSIZE = (unset)
HADOOP_HEAPSIZE_MIN = (unset)
HADOOP_HOME = "/Users/stevel/Projects/Releases/hadoop-3.4.0-SNAPSHOT"
HADOOP_LOG_DIR = (unset)
HADOOP_OPTIONAL_TOOLS = "hadoop-azure,hadoop-aws,hadoop-openstack"
HADOOP_OPTS = "-Djava.net.preferIPv4Stack=true  -Dyarn.log.dir=/Users/stevel/Projects/Releases/hadoop-3.4.0-SNAPSHOT/logs -Dyarn.log.file=hadoop.log -Dyarn.home.dir=/Users/stevel/Projects/Releases/hadoop-3.4.0-SNAPSHOT -Dyarn.root.logger=INFO,console -Dhadoop.log.dir=/Users/stevel/Projects/Releases/hadoop-3.4.0-SNAPSHOT/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/Users/stevel/Projects/Releases/hadoop-3.4.0-SNAPSHOT -Dhadoop.id.str=stevel -Dhadoop.root.logger=INFO,console -Dhadoop.policy.file=hadoop-policy.xml -Dhadoop.security.logger=INFO,NullAppender"
HADOOP_SHELL_SCRIPT_DEBUG = (unset)
HADOOP_TOKEN = (unset)
HADOOP_TOKEN_FILE_LOCATION = (unset)
HADOOP_TOOLS_HOME = (unset)
HADOOP_TOOLS_OPTIONS = (unset)
HADOOP_YARN_HOME = "/Users/stevel/Projects/Releases/hadoop-3.4.0-SNAPSHOT"
HDP_VERSION = (unset)
JAVA_HOME = "/Library/Java/JavaVirtualMachines/temurin-8.jdk/Contents/Home"
LOCAL_DIRS = (unset)
OPENSSL_ROOT_DIR = "/usr/local/opt/openssl/"
PYSPARK_DRIVER_PYTHON = (unset)
SPARK_HOME = (unset)
SPARK_CONF_DIR = (unset)
SPARK_SCALA_VERSION = (unset)
YARN_CONF_DIR = (unset)

Security
========

Security Enabled: false
Keytab login: false
Ticket login: false
Current user: stevel (auth:SIMPLE)
Token count: 0

Hadoop Options
==============

fs.defaultFS = "file:///" [core-default.xml]
2022-01-06 13:41:51,986 [main] INFO  Configuration.deprecation (Configuration.java:logDeprecation(1459)) - fs.default.name is deprecated. Instead, use fs.defaultFS
fs.default.name = "file:///" 
fs.permissions.umask-mode = "022" [core-default.xml]
fs.trash.classname = (unset)
fs.trash.interval = "0" [core-default.xml]
fs.trash.checkpoint.interval = "0" [core-default.xml]
hadoop.tmp.dir = "/tmp/hadoop-stevel" [core-default.xml]
hdp.version = (unset)
yarn.resourcemanager.address = "0.0.0.0:8032" [yarn-default.xml]
yarn.resourcemanager.principal = (unset)
yarn.resourcemanager.webapp.address = "0.0.0.0:8088" [yarn-default.xml]
yarn.resourcemanager.webapp.https.address = "0.0.0.0:8090" [yarn-default.xml]
mapreduce.input.fileinputformat.list-status.num-threads = "1" [mapred-default.xml]
mapreduce.jobtracker.kerberos.principal = (unset)
mapreduce.job.hdfs-servers.token-renewal.exclude = (unset)
mapreduce.application.framework.path = (unset)
fs.iostatistics.logging.level = "info" [core-site.xml]

Security Options
================

dfs.data.transfer.protection = (unset)
hadoop.http.authentication.simple.anonymous.allowed = "true" [core-default.xml]
hadoop.http.authentication.type = "simple" [core-default.xml]
hadoop.kerberos.min.seconds.before.relogin = "60" [core-default.xml]
hadoop.kerberos.keytab.login.autorenewal.enabled = "false" [core-default.xml]
hadoop.security.authentication = "simple" [core-default.xml]
hadoop.security.authorization = "false" [core-default.xml]
hadoop.security.credential.provider.path = (unset)
hadoop.security.credstore.java-keystore-provider.password-file = (unset)
hadoop.security.credential.clear-text-fallback = "true" [core-default.xml]
hadoop.security.key.provider.path = (unset)
hadoop.security.crypto.jceks.key.serialfilter = (unset)
hadoop.rpc.protection = "authentication" [core-default.xml]
hadoop.tokens = (unset)
hadoop.token.files = (unset)

Selected Configuration Options
==============================


fs.s3a.session.token = (unset)
fs.s3a.server-side-encryption-algorithm = "SSE-KMS" [fs.s3a.bucket.stevel-london.server-side-encryption-algorithm via [core-site.xml]]
fs.s3a.server-side-encryption.key = "ar*********************************************************************3bf6" [75] [fs.s3a.bucket.stevel-london.server-side-encryption.key via [core-site.xml]]
fs.s3a.encryption-algorithm = (unset)
fs.s3a.encryption.key = (unset)
fs.s3a.aws.credentials.provider = "
    org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider,
    org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider,
    com.amazonaws.auth.EnvironmentVariableCredentialsProvider,
    org.apache.hadoop.fs.s3a.auth.IAMInstanceCredentialsProvider
  " [core-default.xml]
fs.s3a.endpoint = (unset)
fs.s3a.endpoint.region = (unset)
fs.s3a.signing-algorithm = (unset)
fs.s3a.acl.default = (unset)
fs.s3a.attempts.maximum = "20" [core-default.xml]
fs.s3a.authoritative.path = (unset)
fs.s3a.block.size = "32M" [core-default.xml]
fs.s3a.bucket.probe = "0" [core-site.xml]
fs.s3a.buffer.dir = "/tmp/hadoop-stevel/s3a" [core-default.xml]
fs.s3a.bulk.delete.page.size = (unset)
fs.s3a.change.detection.source = "etag" [core-default.xml]
fs.s3a.change.detection.mode = "server" [core-default.xml]
fs.s3a.change.detection.version.required = "true" [core-default.xml]
fs.s3a.connection.ssl.enabled = "true" [core-default.xml]
fs.s3a.connection.maximum = "256" [core-site.xml]
fs.s3a.connection.establish.timeout = "5000" [core-site.xml]
fs.s3a.connection.request.timeout = "5s" [core-site.xml]
fs.s3a.connection.timeout = "5000" [core-site.xml]
fs.s3a.custom.signers = (unset)
fs.s3a.directory.marker.retention = "keep" [fs.s3a.bucket.stevel-london.directory.marker.retention via [core-site.xml]]
fs.s3a.downgrade.syncable.exceptions = "false" [core-site.xml]
fs.s3a.etag.checksum.enabled = "false" [core-default.xml]
fs.s3a.experimental.input.fadvise = (unset)
fs.s3a.experimental.aws.s3.throttling = (unset)
fs.s3a.experimental.optimized.directory.operations = (unset)
fs.s3a.fast.buffer.size = (unset)
fs.s3a.fast.upload.buffer = "disk" [core-default.xml]
fs.s3a.fast.upload.active.blocks = "8" [core-site.xml]
fs.s3a.impl.disable.cache = (unset)
fs.s3a.list.version = "2" [core-default.xml]
fs.s3a.max.total.tasks = "128" [core-site.xml]
fs.s3a.multipart.size = "32M" [core-site.xml]
fs.s3a.multiobjectdelete.enable = "true" [core-default.xml]
fs.s3a.multipart.purge = "false" [core-site.xml]
fs.s3a.multipart.purge.age = "3600000" [core-site.xml]
fs.s3a.paging.maximum = "5000" [core-default.xml]
fs.s3a.path.style.access = "false" [core-site.xml]
fs.s3a.proxy.host = (unset)
fs.s3a.proxy.port = (unset)
fs.s3a.proxy.username = (unset)
fs.s3a.proxy.password = (unset)
fs.s3a.proxy.domain = (unset)
fs.s3a.proxy.workstation = (unset)
fs.s3a.rename.raises.exceptions = "true" [core-site.xml]
fs.s3a.readahead.range = "524288" [core-site.xml]
fs.s3a.retry.limit = "7" [core-default.xml]
fs.s3a.retry.interval = "500ms" [core-default.xml]
fs.s3a.retry.throttle.limit = "20" [core-default.xml]
fs.s3a.retry.throttle.interval = "100ms" [core-default.xml]
fs.s3a.ssl.channel.mode = "default_jsse" [core-default.xml]
fs.s3a.s3.client.factory.impl = (unset)
fs.s3a.threads.max = "128" [core-site.xml]
fs.s3a.threads.keepalivetime = "60" [core-default.xml]
fs.s3a.user.agent.prefix = (unset)
fs.s3a.assumed.role.arn = "arn:aws:iam::152813717728:role/stevel-assumed-role" [core-site.xml]
fs.s3a.assumed.role.sts.endpoint = "sts.eu-west-2.amazonaws.com" [core-site.xml]
fs.s3a.assumed.role.sts.endpoint.region = "eu-west-2" [core-site.xml]
fs.s3a.assumed.role.session.name = (unset)
fs.s3a.assumed.role.session.duration = "12h" [core-site.xml]
fs.s3a.assumed.role.credentials.provider = "org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider" [core-default.xml]
fs.s3a.assumed.role.policy = (unset)
fs.s3a.metadatastore.impl = "org.apache.hadoop.fs.s3a.s3guard.NullMetadataStore" [core-default.xml]
fs.s3a.metadatastore.authoritative = "false" [core-default.xml]
fs.s3a.metadatastore.authoritative.dir.ttl = (unset)
fs.s3a.metadatastore.fail.on.write.error = "true" [core-default.xml]
fs.s3a.metadatastore.metadata.ttl = "15m" [core-default.xml]
fs.s3a.s3guard.consistency.retry.interval = "2s" [core-default.xml]
fs.s3a.s3guard.consistency.retry.limit = "7" [core-default.xml]
fs.s3a.s3guard.ddb.table = (unset)
fs.s3a.s3guard.ddb.region = "eu-west-2" [fs.s3a.bucket.stevel-london.s3guard.ddb.region via [core-site.xml]]
fs.s3a.s3guard.ddb.background.sleep = "25ms" [core-default.xml]
fs.s3a.s3guard.ddb.max.retries = "9" [core-default.xml]
fs.s3a.s3guard.ddb.table.capacity.read = "0" [core-default.xml]
fs.s3a.s3guard.ddb.table.capacity.write = "0" [core-default.xml]
fs.s3a.s3guard.ddb.table.create = "false" [core-default.xml]
fs.s3a.s3guard.ddb.throttle.retry.interval = "100ms" [core-default.xml]
fs.s3a.s3guard.local.max_records = (unset)
fs.s3a.s3guard.local.ttl = (unset)
fs.s3a.committer.name = "magic" [core-site.xml]
fs.s3a.committer.magic.enabled = "true" [core-default.xml]
fs.s3a.committer.staging.abort.pending.uploads = (unset)
fs.s3a.committer.staging.conflict-mode = "append" [core-default.xml]
fs.s3a.committer.staging.tmp.path = "tmp/staging" [core-default.xml]
fs.s3a.committer.threads = "8" [core-default.xml]
fs.s3a.committer.staging.unique-filenames = "false" [core-site.xml]
mapreduce.outputcommitter.factory.scheme.s3a = "org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory" [mapred-default.xml]
mapreduce.fileoutputcommitter.marksuccessfuljobs = (unset)
fs.s3a.delegation.token.binding = (unset)
fs.s3a.delegation.token.secondary.bindings = (unset)
fs.s3a.audit.referrer.enabled = (unset)
fs.s3a.audit.referrer.filter = (unset)
fs.s3a.audit.reject.out.of.span.operations = "true" [core-site.xml]
fs.s3a.audit.request.handlers = (unset)
fs.s3a.audit.service.classname = (unset)

Required Classes
================

All these classes must be on the classpath

class: org.apache.hadoop.fs.s3a.S3AFileSystem
       file:/Users/stevel/Projects/Releases/hadoop-3.4.0-SNAPSHOT/share/hadoop/tools/lib/hadoop-aws-3.4.0-SNAPSHOT.jar
class: com.amazonaws.services.s3.AmazonS3
       file:/Users/stevel/Projects/Releases/hadoop-3.4.0-SNAPSHOT/share/hadoop/tools/lib/aws-java-sdk-bundle-1.12.132.jar
class: com.amazonaws.ClientConfiguration
       file:/Users/stevel/Projects/Releases/hadoop-3.4.0-SNAPSHOT/share/hadoop/tools/lib/aws-java-sdk-bundle-1.12.132.jar
class: java.lang.System

Optional Classes
================

These classes are needed in some versions of Hadoop.
And/or for optional features to work.

class: com.amazonaws.services.dynamodbv2.AmazonDynamoDB
       file:/Users/stevel/Projects/Releases/hadoop-3.4.0-SNAPSHOT/share/hadoop/tools/lib/aws-java-sdk-bundle-1.12.132.jar
class: com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient
       file:/Users/stevel/Projects/Releases/hadoop-3.4.0-SNAPSHOT/share/hadoop/tools/lib/aws-java-sdk-bundle-1.12.132.jar
class: com.fasterxml.jackson.annotation.JacksonAnnotation
       file:/Users/stevel/Projects/Releases/hadoop-3.4.0-SNAPSHOT/share/hadoop/common/lib/jackson-annotations-2.13.0.jar
class: com.fasterxml.jackson.core.JsonParseException
       file:/Users/stevel/Projects/Releases/hadoop-3.4.0-SNAPSHOT/share/hadoop/common/lib/jackson-core-2.13.0.jar
class: com.fasterxml.jackson.databind.ObjectMapper
       file:/Users/stevel/Projects/Releases/hadoop-3.4.0-SNAPSHOT/share/hadoop/common/lib/jackson-databind-2.13.0.jar
class: org.joda.time.Interval
       Not found on classpath: org.joda.time.Interval
class: org.apache.hadoop.fs.s3a.s3guard.S3Guard
       file:/Users/stevel/Projects/Releases/hadoop-3.4.0-SNAPSHOT/share/hadoop/tools/lib/hadoop-aws-3.4.0-SNAPSHOT.jar
class: org.apache.hadoop.fs.s3a.commit.staging.StagingCommitter
       file:/Users/stevel/Projects/Releases/hadoop-3.4.0-SNAPSHOT/share/hadoop/tools/lib/hadoop-aws-3.4.0-SNAPSHOT.jar
class: org.apache.hadoop.fs.s3a.commit.magic.MagicS3GuardCommitter
       file:/Users/stevel/Projects/Releases/hadoop-3.4.0-SNAPSHOT/share/hadoop/tools/lib/hadoop-aws-3.4.0-SNAPSHOT.jar
class: org.apache.hadoop.fs.s3a.Invoker
       file:/Users/stevel/Projects/Releases/hadoop-3.4.0-SNAPSHOT/share/hadoop/tools/lib/hadoop-aws-3.4.0-SNAPSHOT.jar
class: org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider
       file:/Users/stevel/Projects/Releases/hadoop-3.4.0-SNAPSHOT/share/hadoop/tools/lib/hadoop-aws-3.4.0-SNAPSHOT.jar
class: org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider
       file:/Users/stevel/Projects/Releases/hadoop-3.4.0-SNAPSHOT/share/hadoop/tools/lib/hadoop-aws-3.4.0-SNAPSHOT.jar
class: org.apache.hadoop.fs.s3a.auth.delegation.S3ADelegationTokens
       file:/Users/stevel/Projects/Releases/hadoop-3.4.0-SNAPSHOT/share/hadoop/tools/lib/hadoop-aws-3.4.0-SNAPSHOT.jar
class: com.amazonaws.services.s3.model.SelectObjectContentRequest
       file:/Users/stevel/Projects/Releases/hadoop-3.4.0-SNAPSHOT/share/hadoop/tools/lib/aws-java-sdk-bundle-1.12.132.jar
class: org.apache.hadoop.fs.s3a.select.SelectInputStream
       file:/Users/stevel/Projects/Releases/hadoop-3.4.0-SNAPSHOT/share/hadoop/tools/lib/hadoop-aws-3.4.0-SNAPSHOT.jar
class: org.apache.hadoop.fs.s3a.impl.RenameOperation
       file:/Users/stevel/Projects/Releases/hadoop-3.4.0-SNAPSHOT/share/hadoop/tools/lib/hadoop-aws-3.4.0-SNAPSHOT.jar
class: org.apache.hadoop.fs.s3a.impl.NetworkBinding
       file:/Users/stevel/Projects/Releases/hadoop-3.4.0-SNAPSHOT/share/hadoop/tools/lib/hadoop-aws-3.4.0-SNAPSHOT.jar
class: org.apache.hadoop.fs.s3a.impl.DirectoryPolicy
       file:/Users/stevel/Projects/Releases/hadoop-3.4.0-SNAPSHOT/share/hadoop/tools/lib/hadoop-aws-3.4.0-SNAPSHOT.jar
class: org.apache.hadoop.fs.s3a.audit.AuditManagerS3A
       file:/Users/stevel/Projects/Releases/hadoop-3.4.0-SNAPSHOT/share/hadoop/tools/lib/hadoop-aws-3.4.0-SNAPSHOT.jar
class: org.apache.knox.gateway.cloud.idbroker.s3a.IDBDelegationTokenBinding
       Not found on classpath: org.apache.knox.gateway.cloud.idbroker.s3a.IDBDelegationTokenBinding
class: org.wildfly.openssl.OpenSSLProvider
       file:/Users/stevel/Projects/Releases/hadoop-3.4.0-SNAPSHOT/share/hadoop/tools/lib/wildfly-openssl-1.0.7.Final.jar

At least one optional class was missing -the filesystem client *may* still work

S3A Config validation
=====================

Buffer configuration option fs.s3a.buffer.dir = /tmp/hadoop-stevel/s3a
Temporary files created in /tmp/hadoop-stevel/s3a

Configuration options with prefix fs.s3a.ext.
=============================================


Locating implementation class for Filesystem scheme s3a://
==========================================================

FileSystem for s3a:// is: org.apache.hadoop.fs.s3a.S3AFileSystem
Loaded from: file:/Users/stevel/Projects/Releases/hadoop-3.4.0-SNAPSHOT/share/hadoop/tools/lib/hadoop-aws-3.4.0-SNAPSHOT.jar

Endpoints
=========

Attempting to list and connect to public service endpoints,
without any authentication credentials. 
This is just testing the reachability of the URLs.
If the request fails with any network error it is likely
to be configuration problem with address, proxy, etc%n
If it is some authentication error, then don't worry so much%n-look for the results of the filesystem operations

Endpoint: https://stevel-london.s3.amazonaws.com/
=================================================

Canonical hostname s3-w.eu-west-2.amazonaws.com
  IP address 52.95.148.69
Proxy: none

Connecting to https://stevel-london.s3.amazonaws.com/

Response: 403 : Forbidden
HTTP response 403 from https://stevel-london.s3.amazonaws.com/: Forbidden
Using proxy: false 
Transfer-Encoding: chunked
null: HTTP/1.1 403 Forbidden
Server: AmazonS3
x-amz-request-id: 8YZH01Y5KCVP5TM9
x-amz-id-2: 0XTVOvJ5Z7p1FiMmGomfymm3sqqNp7Qbf0n8dcshA8npKgA72d5M4mY90P1Quvv/+jGojQw7KcU=
Date: Thu, 06 Jan 2022 13:41:52 GMT
x-amz-bucket-region: eu-west-2
Content-Type: application/xml

<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>8YZH01Y5KCVP5TM9</RequestId><HostId>0XTVOvJ5Z7p1FiMmGomfymm3sqqNp7Qbf0n8dcshA8npKgA72d5M4mY90P1Quvv/+jGojQw7KcU=</HostId></Error>


Endpoint: https://sts.eu-west-2.amazonaws.com/
==============================================

Canonical hostname 52.94.48.43
  IP address 52.94.48.43
Proxy: none

Connecting to https://sts.eu-west-2.amazonaws.com/

Response: 200 : OK
HTTP response 200 from https://sts.eu-west-2.amazonaws.com/: OK
Using proxy: false 
Transfer-Encoding: chunked
null: HTTP/1.1 200 OK
X-Cache: Miss from cloudfront
Server: Server
X-Content-Type-Options: nosniff
X-Amz-Cf-Pop: IAD79-C1
Permissions-Policy: interest-cohort=()
Connection: keep-alive
Last-Modified: Mon, 03 Jan 2022 20:29:37 GMT
x-amz-rid: NFFCXAKR69RKGD9S6BWE
Date: Thu, 06 Jan 2022 13:41:55 GMT
Via: 1.1 39174a6a452e175e6e614ff396a4ca4f.cloudfront.net (CloudFront)
X-Frame-Options: SAMEORIGIN
Strict-Transport-Security: max-age=300
X-Amz-Cf-Id: 0aTRgu2cx13Puh4L0m8tfbZ68KFe821nDGOxRGXpcu1VTyFiXE30DA==
Vary: accept-encoding,Content-Type,Accept-Encoding,X-Amzn-CDN-Cache,X-Amzn-AX-Treatment,User-Agent
Set-Cookie: aws_lang=en; Domain=.amazon.com; Path=/,aws-priv=eyJ2IjoxLCJldSI6MSwic3QiOjB9; Version=1; Comment="Anonymous cookie for privacy regulations"; Domain=.aws.amazon.com; Max-Age=31536000; Expires=Fri, 06-Jan-2023 13:41:55 GMT; Path=/
x-amz-id-1: NFFCXAKR69RKGD9S6BWE
X-XSS-Protection: 1; mode=block
Content-Security-Policy-Report-Only: default-src *; connect-src *; font-src * data:; frame-src *; img-src * data:; media-src *; object-src *; script-src 'unsafe-eval' 'unsafe-inline' *; style-src 'unsafe-inline' *; report-uri https://prod-us-west-2.csp-report.marketing.aws.dev/submit
Content-Type: text/html;charset=UTF-8

<!doctype html>
<html class="no-js aws-lng-en_US aws-with-target" lang="en-US" data-static-assets="https://a0.awsstatic.com" data-js-version="1.0.413" data-css-version="1.0.400">
 <head> 
  <meta http-equiv="Content-Security-Policy" content="default-src 'self' data: https://a0.awsstatic.com; connect-src 'self' https://112-tzm-766.mktoresp.com https://112-tzm-766.mktoutil.com https://a0.awsstatic.com https://a0.p.awsstatic.com https://a1.awsstatic.com https://amazonwebservices.d2.sc.omtrdc.net https://amazonwebservicesinc.tt.omtrdc.net https://api.regional-table.region-services.aws.a2z.com https://api.us-west-2.prod.pricing.aws.a2z.com https://aws.amazon.com https://aws.demdex.net https://b0.p.awsstatic.com https://c0.b0.p.awsstatic.com https://calculator.aws https://cm.everesttech.net https://d0.awsstatic.com https://d1.awsstatic.com https://d1fgizr415o1r6.cloudfront.net https://d3borx6sfvnesb.cloudfront.net https://dc.ads.linkedin.com https://dftu77xade0tc.cloudfront.net https://dpm.demdex.net https://fls-na

WARNING: this unauthenticated operation was not rejected.
 This may mean the store is world-readable.
 Check this by pasting https://sts.eu-west-2.amazonaws.com/ into your browser

Test filesystem s3a://stevel-london/
====================================

Trying some operations against the filesystem
Starting with some read operations, then trying to write
2022-01-06 13:41:55,180 [main] INFO  diag.StoreDiag (StoreDurationInfo.java:<init>(56)) - Starting: Creating filesystem for s3a://stevel-london/
2022-01-06 13:41:55,201 [main] INFO  Configuration.deprecation (Configuration.java:logDeprecation(1459)) - fs.s3a.server-side-encryption.key is deprecated. Instead, use fs.s3a.encryption.key
2022-01-06 13:41:55,201 [main] INFO  Configuration.deprecation (Configuration.java:logDeprecation(1459)) - fs.s3a.server-side-encryption-algorithm is deprecated. Instead, use fs.s3a.encryption.algorithm
2022-01-06 13:41:55,983 [main] INFO  impl.DirectoryPolicyImpl (DirectoryPolicyImpl.java:getDirectoryPolicy(189)) - Directory markers will be kept
2022-01-06 13:41:55,985 [main] INFO  diag.StoreDiag (StoreDurationInfo.java:close(115)) - Creating filesystem for s3a://stevel-london/: duration 0:00:807
S3AFileSystem{uri=s3a://stevel-london, workingDir=s3a://stevel-london/user/stevel, inputPolicy=normal, partSize=33554432, enableMultiObjectsDelete=true, maxKeys=5000, readAhead=524288, blockSize=33554432, multiPartThreshold=134217728, s3EncryptionAlgorithm='SSE_KMS', blockFactory=org.apache.hadoop.fs.s3a.S3ADataBlocks$DiskBlockFactory@2db2cd5, auditManager=Service ActiveAuditManagerS3A in state ActiveAuditManagerS3A: STARTED, auditor=LoggingAuditor{ID='b09ad642-5323-4ebd-b67a-3870d6efbeed', headerEnabled=true, rejectOutOfSpan=true}}, metastore=NullMetadataStore, authoritativeStore=false, authoritativePath=[], useListV1=false, magicCommitter=true, boundedExecutor=BlockingThreadPoolExecutorService{SemaphoredDelegatingExecutor{permitCount=384, available=384, waiting=0}, activeCount=0}, unboundedExecutor=java.util.concurrent.ThreadPoolExecutor@70e659aa[Running, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0], credentials=AWSCredentialProviderList[refcount= 1: [TemporaryAWSCredentialsProvider, SimpleAWSCredentialsProvider, EnvironmentVariableCredentialsProvider, org.apache.hadoop.fs.s3a.auth.IAMInstanceCredentialsProvider@615f972], delegation tokens=disabled, DirectoryMarkerRetention{policy='keep'}, instrumentation {S3AInstrumentation{}}, ClientSideEncryption=false}
Implementation class class org.apache.hadoop.fs.s3a.S3AFileSystem

Path Capabilities
=================

fs.capability.etags.available   true
fs.capability.etags.preserved.in.rename false
fs.capability.paths.checksums   false
fs.capability.multipart.uploader        true
fs.capability.outputstream.abortable    true
fs.s3a.capability.magic.committer       true
fs.s3a.capability.select.sql    true
fs.s3a.capability.directory.marker.aware        true
fs.s3a.capability.directory.marker.policy.keep  true
fs.s3a.capability.directory.marker.policy.delete        false
fs.s3a.capability.directory.marker.policy.authoritative false
fs.s3a.capability.directory.marker.action.keep  true
fs.s3a.capability.directory.marker.action.delete        false
2022-01-06 13:41:55,993 [main] INFO  diag.StoreDiag (StoreDurationInfo.java:<init>(56)) - Starting: GetFileStatus s3a://stevel-london/
root entry S3AFileStatus{path=s3a://stevel-london/; isDirectory=true; modification_time=0; access_time=0; owner=stevel; group=stevel; permission=rwxrwxrwx; isSymlink=false; hasAcl=false; isEncrypted=true; isErasureCoded=false} isEmptyDirectory=UNKNOWN eTag=null versionId=null
2022-01-06 13:41:56,027 [main] INFO  diag.StoreDiag (StoreDurationInfo.java:close(115)) - GetFileStatus s3a://stevel-london/: duration 0:00:034
2022-01-06 13:41:56,027 [main] INFO  diag.StoreDiag (StoreDurationInfo.java:<init>(56)) - Starting: First 25 entries of listStatus(s3a://stevel-london/)
s3a://stevel-london/ : scanned 0 entries
2022-01-06 13:41:56,601 [main] INFO  diag.StoreDiag (StoreDurationInfo.java:close(115)) - First 25 entries of listStatus(s3a://stevel-london/): duration 0:00:574
2022-01-06 13:41:56,602 [main] INFO  diag.StoreDiag (StoreDurationInfo.java:<init>(56)) - Starting: First 25 entries of listFiles(s3a://stevel-london/)
Files listing provided by: FunctionRemoteIterator{FileStatusListingIterator[Object listing iterator against s3a://stevel-london/; listing count 1; isTruncated=false; counters=((object_continue_list_request=0) (object_continue_list_request.failures=0) (object_list_request.failures=0) (object_list_request=1));
gauges=();
minimums=((object_continue_list_request.min=-1) (object_list_request.failures.min=-1) (object_continue_list_request.failures.min=-1) (object_list_request.min=50));
maximums=((object_list_request.max=50) (object_continue_list_request.max=-1) (object_continue_list_request.failures.max=-1) (object_list_request.failures.max=-1));
means=((object_continue_list_request.mean=(samples=0, sum=0, mean=0.0000)) (object_list_request.failures.mean=(samples=0, sum=0, mean=0.0000)) (object_continue_list_request.failures.mean=(samples=0, sum=0, mean=0.0000)) (object_list_request.mean=(samples=1, sum=50, mean=50.0000)));
]}
2022-01-06 13:41:56,663 [main] INFO  diag.StoreDiag (StoreDurationInfo.java:close(115)) - First 25 entries of listFiles(s3a://stevel-london/): duration 0:00:061

Security and Delegation Tokens
==============================

Security is disabled
Filesystem s3a://stevel-london does not/is not configured to issue delegation tokens (at least while security is disabled)
2022-01-06 13:41:56,663 [main] INFO  diag.StoreDiag (StoreDurationInfo.java:<init>(56)) - Starting: probe for a directory which does not yet exist s3a://stevel-london/dir-893a3615-18ac-4f2d-8ec7-d9501e019921
2022-01-06 13:41:56,738 [main] INFO  diag.StoreDiag (StoreDurationInfo.java:close(115)) - probe for a directory which does not yet exist s3a://stevel-london/dir-893a3615-18ac-4f2d-8ec7-d9501e019921: duration 0:00:075

Filesystem Write Operations
===========================

2022-01-06 13:41:56,738 [main] INFO  diag.StoreDiag (StoreDurationInfo.java:<init>(56)) - Starting: creating a directory s3a://stevel-london/dir-893a3615-18ac-4f2d-8ec7-d9501e019921
2022-01-06 13:41:56,925 [main] INFO  diag.StoreDiag (StoreDurationInfo.java:close(115)) - creating a directory s3a://stevel-london/dir-893a3615-18ac-4f2d-8ec7-d9501e019921: duration 0:00:187
2022-01-06 13:41:56,926 [main] INFO  diag.StoreDiag (StoreDurationInfo.java:<init>(56)) - Starting: create directory s3a://stevel-london/dir-893a3615-18ac-4f2d-8ec7-d9501e019921
2022-01-06 13:41:56,994 [main] INFO  diag.StoreDiag (StoreDurationInfo.java:close(115)) - create directory s3a://stevel-london/dir-893a3615-18ac-4f2d-8ec7-d9501e019921: duration 0:00:067
2022-01-06 13:41:56,994 [main] INFO  diag.StoreDiag (StoreDurationInfo.java:<init>(56)) - Starting: probing path s3a://stevel-london/dir-893a3615-18ac-4f2d-8ec7-d9501e019921/file
2022-01-06 13:41:57,060 [main] INFO  diag.StoreDiag (StoreDurationInfo.java:close(115)) - probing path s3a://stevel-london/dir-893a3615-18ac-4f2d-8ec7-d9501e019921/file: duration 0:00:066
2022-01-06 13:41:57,060 [main] INFO  diag.StoreDiag (StoreDurationInfo.java:<init>(56)) - Starting: creating a file s3a://stevel-london/dir-893a3615-18ac-4f2d-8ec7-d9501e019921/file
Output stream summary: FSDataOutputStream{wrappedStream=S3ABlockOutputStream{WriteOperationHelper {bucket=stevel-london}, blockSize=33554432 Statistics=counters=((object_multipart_aborted=0) (stream_write_total_time=0) (op_hsync=0) (stream_write_exceptions_completing_upload=0) (stream_write_total_data=7) (action_executor_acquired=1) (op_abort.failures=0) (stream_write_exceptions=0) (op_hflush=0) (op_abort=0) (action_executor_acquired.failures=0) (stream_write_queue_duration=0) (multipart_upload_completed=0) (stream_write_block_uploads=1) (object_multipart_aborted.failures=0) (multipart_upload_completed.failures=0) (stream_write_bytes=7));
gauges=((stream_write_block_uploads_pending=1) (stream_write_block_uploads_data_pending=0));
minimums=((object_multipart_aborted.failures.min=-1) (multipart_upload_completed.failures.min=-1) (op_abort.min=-1) (action_executor_acquired.min=0) (op_abort.failures.min=-1) (action_executor_acquired.failures.min=-1) (object_multipart_aborted.min=-1) (multipart_upload_completed.min=-1));
maximums=((object_multipart_aborted.max=-1) (multipart_upload_completed.failures.max=-1) (op_abort.max=-1) (action_executor_acquired.failures.max=-1) (multipart_upload_completed.max=-1) (op_abort.failures.max=-1) (object_multipart_aborted.failures.max=-1) (action_executor_acquired.max=0));
means=((op_abort.failures.mean=(samples=0, sum=0, mean=0.0000)) (action_executor_acquired.mean=(samples=1, sum=0, mean=0.0000)) (action_executor_acquired.failures.mean=(samples=0, sum=0, mean=0.0000)) (multipart_upload_completed.failures.mean=(samples=0, sum=0, mean=0.0000)) (object_multipart_aborted.mean=(samples=0, sum=0, mean=0.0000)) (op_abort.mean=(samples=0, sum=0, mean=0.0000)) (object_multipart_aborted.failures.mean=(samples=0, sum=0, mean=0.0000)) (multipart_upload_completed.mean=(samples=0, sum=0, mean=0.0000)));
}}
2022-01-06 13:41:57,291 [main] INFO  diag.StoreDiag (StoreDurationInfo.java:close(115)) - creating a file s3a://stevel-london/dir-893a3615-18ac-4f2d-8ec7-d9501e019921/file: duration 0:00:230
2022-01-06 13:41:57,291 [main] INFO  diag.StoreDiag (StoreDurationInfo.java:<init>(56)) - Starting: Listing  s3a://stevel-london/dir-893a3615-18ac-4f2d-8ec7-d9501e019921
2022-01-06 13:41:57,331 [main] INFO  diag.StoreDiag (StoreDurationInfo.java:close(115)) - Listing  s3a://stevel-london/dir-893a3615-18ac-4f2d-8ec7-d9501e019921: duration 0:00:040
2022-01-06 13:41:57,331 [main] INFO  diag.StoreDiag (StoreDurationInfo.java:<init>(56)) - Starting: Reading a file s3a://stevel-london/dir-893a3615-18ac-4f2d-8ec7-d9501e019921/file
input stream summary: org.apache.hadoop.fs.FSDataInputStream@26f3d90c: S3AInputStream{s3a://stevel-london/dir-893a3615-18ac-4f2d-8ec7-d9501e019921/file wrappedStream=closed read policy=normal pos=7 nextReadPos=7 contentLength=7 contentRangeStart=0 contentRangeFinish=7 remainingInCurrentRequest=0 ChangeTracker{ETagChangeDetectionPolicy mode=Server, revisionId='c96643be2404fdc0054404530b198c21'}
StreamStatistics{counters=((stream_read_bytes_discarded_in_close=0) (stream_read_operations_incomplete=0) (stream_read_close_operations=1) (stream_read_fully_operations=0) (stream_read_opened=1) (stream_read_seek_backward_operations=0) (stream_read_closed=1) (stream_read_operations=1) (stream_read_bytes_backwards_on_seek=0) (stream_read_seek_bytes_discarded=0) (action_http_get_request.failures=0) (stream_read_seek_policy_changed=1) (stream_aborted=0) (stream_read_unbuffered=0) (stream_read_seek_operations=0) (stream_read_total_bytes=7) (stream_read_exceptions=0) (stream_read_version_mismatches=0) (stream_read_seek_forward_operations=0) (stream_read_bytes_discarded_in_abort=0) (stream_read_bytes=7) (action_http_get_request=1) (stream_read_seek_bytes_skipped=0));
gauges=((stream_read_gauge_input_policy=0));
minimums=((action_http_get_request.min=45) (action_http_get_request.failures.min=-1));
maximums=((action_http_get_request.max=45) (action_http_get_request.failures.max=-1));
means=((action_http_get_request.mean=(samples=1, sum=45, mean=45.0000)) (action_http_get_request.failures.mean=(samples=0, sum=0, mean=0.0000)));
}}
2022-01-06 13:41:57,432 [main] INFO  diag.StoreDiag (StoreDurationInfo.java:close(115)) - Reading a file s3a://stevel-london/dir-893a3615-18ac-4f2d-8ec7-d9501e019921/file: duration 0:00:101
2022-01-06 13:41:57,432 [main] INFO  diag.StoreDiag (StoreDurationInfo.java:<init>(56)) - Starting: Renaming file s3a://stevel-london/dir-893a3615-18ac-4f2d-8ec7-d9501e019921/file under s3a://stevel-london/dir-893a3615-18ac-4f2d-8ec7-d9501e019921/subdir
2022-01-06 13:41:58,446 [main] INFO  diag.StoreDiag (StoreDurationInfo.java:close(115)) - Renaming file s3a://stevel-london/dir-893a3615-18ac-4f2d-8ec7-d9501e019921/file under s3a://stevel-london/dir-893a3615-18ac-4f2d-8ec7-d9501e019921/subdir: duration 0:01:014
2022-01-06 13:41:58,447 [main] INFO  diag.StoreDiag (StoreDurationInfo.java:<init>(56)) - Starting: probing path s3a://stevel-london/dir-893a3615-18ac-4f2d-8ec7-d9501e019921/subdir/subfile
2022-01-06 13:41:58,595 [main] INFO  diag.StoreDiag (StoreDurationInfo.java:close(115)) - probing path s3a://stevel-london/dir-893a3615-18ac-4f2d-8ec7-d9501e019921/subdir/subfile: duration 0:00:148
2022-01-06 13:41:58,595 [main] INFO  diag.StoreDiag (StoreDurationInfo.java:<init>(56)) - Starting: delete dir s3a://stevel-london/dir-893a3615-18ac-4f2d-8ec7-d9501e019921/subdir2
2022-01-06 13:41:58,854 [main] INFO  diag.StoreDiag (StoreDurationInfo.java:close(115)) - delete dir s3a://stevel-london/dir-893a3615-18ac-4f2d-8ec7-d9501e019921/subdir2: duration 0:00:259
2022-01-06 13:41:58,854 [main] INFO  diag.StoreDiag (StoreDurationInfo.java:<init>(56)) - Starting: probing path s3a://stevel-london/dir-893a3615-18ac-4f2d-8ec7-d9501e019921/subdir2
2022-01-06 13:41:58,921 [main] INFO  diag.StoreDiag (StoreDurationInfo.java:close(115)) - probing path s3a://stevel-london/dir-893a3615-18ac-4f2d-8ec7-d9501e019921/subdir2: duration 0:00:067
2022-01-06 13:41:58,921 [main] INFO  diag.StoreDiag (StoreDurationInfo.java:<init>(56)) - Starting: delete directory s3a://stevel-london/dir-893a3615-18ac-4f2d-8ec7-d9501e019921
2022-01-06 13:41:59,019 [main] INFO  diag.StoreDiag (StoreDurationInfo.java:close(115)) - delete directory s3a://stevel-london/dir-893a3615-18ac-4f2d-8ec7-d9501e019921: duration 0:00:098
2022-01-06 13:41:59,023 [main] INFO  statistics.IOStatisticsLogging (IOStatisticsLogging.java:logIOStatisticsAtLevel(269)) - IOStatistics: counters=((action_executor_acquired=1)
(action_http_get_request=1)
(action_http_head_request=17)
(audit_request_execution=50)
(audit_span_creation=18)
(directories_created=2)
(directories_deleted=1)
(files_copied=2)
(files_copied_bytes=14)
(files_created=1)
(files_deleted=4)
(object_bulk_delete_request=2)
(object_copy_requests=2)
(object_delete_objects=5)
(object_delete_request=2)
(object_list_request=21)
(object_metadata_request=17)
(object_put_bytes=7)
(object_put_request=3)
(object_put_request_completed=3)
(op_create=1)
(op_delete=2)
(op_get_file_status=6)
(op_get_file_status.failures=4)
(op_list_files=2)
(op_list_status=1)
(op_mkdirs=2)
(op_open=1)
(op_rename=2)
(store_io_request=51)
(stream_read_bytes=7)
(stream_read_close_operations=1)
(stream_read_closed=1)
(stream_read_opened=1)
(stream_read_operations=1)
(stream_read_seek_policy_changed=1)
(stream_read_total_bytes=7)
(stream_write_block_uploads=1)
(stream_write_bytes=7)
(stream_write_total_data=14));

gauges=((stream_write_block_uploads_pending=1));

minimums=((action_executor_acquired.min=0)
(action_http_get_request.min=45)
(action_http_head_request.min=27)
(object_bulk_delete_request.min=44)
(object_delete_request.min=38)
(object_list_request.min=32)
(object_put_request.min=73)
(op_create.min=41)
(op_delete.min=38)
(op_get_file_status.failures.min=66)
(op_get_file_status.min=3)
(op_list_files.min=40)
(op_list_status.min=571)
(op_mkdirs.min=185)
(op_rename.min=399));

maximums=((action_executor_acquired.max=0)
(action_http_get_request.max=45)
(action_http_head_request.max=110)
(object_bulk_delete_request.max=61)
(object_delete_request.max=41)
(object_list_request.max=558)
(object_put_request.max=167)
(op_create.max=41)
(op_delete.max=87)
(op_get_file_status.failures.max=148)
(op_get_file_status.max=67)
(op_list_files.max=53)
(op_list_status.max=571)
(op_mkdirs.max=186)
(op_rename.max=428));

means=((action_executor_acquired.mean=(samples=1, sum=0, mean=0.0000))
(action_http_get_request.mean=(samples=1, sum=45, mean=45.0000))
(action_http_head_request.mean=(samples=17, sum=629, mean=37.0000))
(object_bulk_delete_request.mean=(samples=2, sum=105, mean=52.5000))
(object_delete_request.mean=(samples=2, sum=79, mean=39.5000))
(object_list_request.mean=(samples=21, sum=1344, mean=64.0000))
(object_put_request.mean=(samples=3, sum=360, mean=120.0000))
(op_create.mean=(samples=1, sum=41, mean=41.0000))
(op_delete.mean=(samples=2, sum=125, mean=62.5000))
(op_get_file_status.failures.mean=(samples=4, sum=355, mean=88.7500))
(op_get_file_status.mean=(samples=2, sum=70, mean=35.0000))
(op_list_files.mean=(samples=2, sum=93, mean=46.5000))
(op_list_status.mean=(samples=1, sum=571, mean=571.0000))
(op_mkdirs.mean=(samples=2, sum=371, mean=185.5000))
(op_rename.mean=(samples=2, sum=827, mean=413.5000)));

JVM: memory=89262960

I'm actually going to add that to the list of commands to run during validation, because it could potentially surface a problem from the command line.

@hadoop-yetus

💔 -1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 1m 4s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 0s No case conflicting files found.
+0 🆗 codespell 0m 1s codespell was not available.
+0 🆗 shelldocs 0m 1s Shelldocs was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
-1 ❌ test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
_ trunk Compile Tests _
+0 🆗 mvndep 13m 3s Maven dependency ordering for branch
+1 💚 mvninstall 22m 1s trunk passed
+1 💚 compile 22m 19s trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04
+1 💚 compile 19m 33s trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10
+1 💚 mvnsite 25m 48s trunk passed
+1 💚 javadoc 8m 4s trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04
+1 💚 javadoc 8m 5s trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10
+1 💚 shadedclient 35m 22s branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+0 🆗 mvndep 0m 44s Maven dependency ordering for patch
+1 💚 mvninstall 21m 30s the patch passed
+1 💚 compile 21m 38s the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04
+1 💚 javac 21m 38s the patch passed
+1 💚 compile 19m 34s the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10
+1 💚 javac 19m 34s the patch passed
+1 💚 blanks 0m 0s The patch has no blanks issues.
+1 💚 mvnsite 21m 22s the patch passed
+1 💚 shellcheck 0m 0s No new issues.
+1 💚 xml 0m 1s The patch has no ill-formed XML file.
+1 💚 javadoc 7m 58s the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04
+1 💚 javadoc 8m 3s the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10
+1 💚 shadedclient 36m 32s patch has no errors when building and testing our client artifacts.
_ Other Tests _
-1 ❌ unit 1130m 1s /patch-unit-root.txt root in the patch passed.
+1 💚 asflicense 1m 49s The patch does not generate ASF License warnings.
1395m 6s
Reason Tests
Failed junit tests hadoop.fs.s3a.TestArnResource
hadoop.hdfs.rbfbalance.TestRouterDistCpProcedure
hadoop.ipc.TestIPC
hadoop.yarn.csi.client.TestCsiClient
Subsystem Report/Notes
Docker ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3864/1/artifact/out/Dockerfile
GITHUB PR #3864
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell xml shellcheck shelldocs
uname Linux a50c2b179459 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / 6868238c575d531b760a0109ecfd03ecba8b1055
Default Java Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10
Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3864/1/testReport/
Max. process+thread count 2873 (vs. ulimit of 5500)
modules C: hadoop-project . U: .
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3864/1/console
versions git=2.25.1 maven=3.6.3 shellcheck=0.7.0
Powered by Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org

This message was automatically generated.

@hadoop-yetus

💔 -1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 1m 3s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 0s No case conflicting files found.
+0 🆗 codespell 0m 1s codespell was not available.
+0 🆗 shelldocs 0m 1s Shelldocs was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
-1 ❌ test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
_ trunk Compile Tests _
+0 🆗 mvndep 12m 48s Maven dependency ordering for branch
+1 💚 mvninstall 23m 10s trunk passed
+1 💚 compile 23m 43s trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04
+1 💚 compile 20m 17s trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10
+1 💚 mvnsite 26m 41s trunk passed
+1 💚 javadoc 8m 23s trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04
+1 💚 javadoc 8m 35s trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10
+1 💚 shadedclient 39m 3s branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+0 🆗 mvndep 0m 45s Maven dependency ordering for patch
+1 💚 mvninstall 24m 6s the patch passed
+1 💚 compile 23m 19s the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04
+1 💚 javac 23m 19s the patch passed
+1 💚 compile 20m 0s the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10
+1 💚 javac 20m 0s the patch passed
+1 💚 blanks 0m 0s The patch has no blanks issues.
+1 💚 mvnsite 22m 13s the patch passed
+1 💚 shellcheck 0m 0s No new issues.
+1 💚 xml 0m 1s The patch has no ill-formed XML file.
+1 💚 javadoc 8m 13s the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04
+1 💚 javadoc 8m 12s the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10
+1 💚 shadedclient 38m 9s patch has no errors when building and testing our client artifacts.
_ Other Tests _
-1 ❌ unit 951m 18s /patch-unit-root.txt root in the patch failed.
+0 🆗 asflicense 0m 51s ASF License check generated no output?
1230m 14s
Reason Tests
Failed junit tests hadoop.tools.TestFileBasedCopyListing
hadoop.mapreduce.TestMapReduceLazyOutput
hadoop.mapreduce.TestMRJobClient
hadoop.fs.http.client.TestHttpFSFileSystemLocalFileSystem
hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes
hadoop.ipc.TestIPC
Subsystem Report/Notes
Docker ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3864/2/artifact/out/Dockerfile
GITHUB PR #3864
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell xml shellcheck shelldocs
uname Linux f347eb64278e 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / ef8d5059d5d68c411d5cd511e4d5e252107f33fe
Default Java Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10
Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3864/2/testReport/
Max. process+thread count 3102 (vs. ulimit of 5500)
modules C: hadoop-project . U: .
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3864/2/console
versions git=2.25.1 maven=3.6.3 shellcheck=0.7.0
Powered by Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org

This message was automatically generated.

@hadoop-yetus

💔 -1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 16m 47s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 0s No case conflicting files found.
+0 🆗 codespell 0m 0s codespell was not available.
+0 🆗 markdownlint 0m 0s markdownlint was not available.
+0 🆗 shelldocs 0m 0s Shelldocs was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
-1 ❌ test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
_ trunk Compile Tests _
+0 🆗 mvndep 13m 5s Maven dependency ordering for branch
+1 💚 mvninstall 21m 35s trunk passed
+1 💚 compile 22m 52s trunk passed with JDK Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04
+1 💚 compile 19m 30s trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 mvnsite 25m 48s trunk passed
+1 💚 javadoc 8m 0s trunk passed with JDK Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04
+1 💚 javadoc 8m 10s trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 shadedclient 35m 18s branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+0 🆗 mvndep 0m 41s Maven dependency ordering for patch
+1 💚 mvninstall 22m 6s the patch passed
+1 💚 compile 21m 53s the patch passed with JDK Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04
+1 💚 javac 21m 53s the patch passed
+1 💚 compile 19m 34s the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 javac 19m 34s the patch passed
+1 💚 blanks 0m 0s The patch has no blanks issues.
+1 💚 mvnsite 21m 16s the patch passed
+1 💚 shellcheck 0m 0s No new issues.
+1 💚 xml 0m 2s The patch has no ill-formed XML file.
+1 💚 javadoc 7m 56s the patch passed with JDK Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04
+1 💚 javadoc 8m 6s the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 shadedclient 36m 41s patch has no errors when building and testing our client artifacts.
_ Other Tests _
-1 ❌ unit 1115m 2s /patch-unit-root.txt root in the patch passed.
+1 💚 asflicense 1m 52s The patch does not generate ASF License warnings.
1396m 50s
Reason Tests
Failed junit tests hadoop.fs.s3a.TestArnResource
hadoop.hdfs.rbfbalance.TestRouterDistCpProcedure
hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes
hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand
hadoop.ipc.TestIPC
hadoop.yarn.csi.client.TestCsiClient
hadoop.yarn.server.router.clientrm.TestFederationClientInterceptor
Subsystem Report/Notes
Docker ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3864/3/artifact/out/Dockerfile
GITHUB PR #3864
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell xml markdownlint shellcheck shelldocs
uname Linux d044f3599823 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / 09d4d7bf3f21928d4294ad9e6693453d1e072a09
Default Java Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3864/3/testReport/
Max. process+thread count 3158 (vs. ulimit of 5500)
modules C: hadoop-project hadoop-tools/hadoop-aws . U: .
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3864/3/console
versions git=2.25.1 maven=3.6.3 shellcheck=0.7.0
Powered by Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org

This message was automatically generated.

@steveloughran
Contributor Author

I can't get the reports, but I must consider hadoop.fs.s3a.TestArnResource to be a regression.

I don't see it locally, but my test setup does include ARN bindings... maybe that is a factor. Will explore.

@hadoop-yetus

💔 -1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 0m 59s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 0s No case conflicting files found.
+0 🆗 codespell 0m 0s codespell was not available.
+0 🆗 markdownlint 0m 0s markdownlint was not available.
+0 🆗 shelldocs 0m 0s Shelldocs was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
-1 ❌ test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
_ trunk Compile Tests _
+0 🆗 mvndep 13m 10s Maven dependency ordering for branch
+1 💚 mvninstall 21m 41s trunk passed
+1 💚 compile 22m 30s trunk passed with JDK Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04
+1 💚 compile 19m 36s trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 mvnsite 25m 53s trunk passed
+1 💚 javadoc 8m 2s trunk passed with JDK Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04
+1 💚 javadoc 8m 5s trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 shadedclient 35m 8s branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+0 🆗 mvndep 0m 42s Maven dependency ordering for patch
+1 💚 mvninstall 22m 15s the patch passed
+1 💚 compile 21m 36s the patch passed with JDK Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04
+1 💚 javac 21m 36s the patch passed
+1 💚 compile 19m 31s the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 javac 19m 31s the patch passed
+1 💚 blanks 0m 0s The patch has no blanks issues.
+1 💚 mvnsite 21m 20s the patch passed
+1 💚 shellcheck 0m 0s No new issues.
+1 💚 xml 0m 1s The patch has no ill-formed XML file.
+1 💚 javadoc 7m 56s the patch passed with JDK Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04
+1 💚 javadoc 8m 7s the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 shadedclient 36m 33s patch has no errors when building and testing our client artifacts.
_ Other Tests _
-1 ❌ unit 1121m 43s /patch-unit-root.txt root in the patch passed.
+1 💚 asflicense 1m 57s The patch does not generate ASF License warnings.
1387m 24s
Reason Tests
Failed junit tests hadoop.fs.s3a.TestArnResource
hadoop.hdfs.rbfbalance.TestRouterDistCpProcedure
hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes
hadoop.ipc.TestIPC
hadoop.yarn.csi.client.TestCsiClient
hadoop.yarn.server.router.clientrm.TestFederationClientInterceptor
Subsystem Report/Notes
Docker ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3864/4/artifact/out/Dockerfile
GITHUB PR #3864
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell xml markdownlint shellcheck shelldocs
uname Linux f9dfce82791f 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / fa14e37ac55ff47197a77d2bd29494b2d0173bda
Default Java Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3864/4/testReport/
Max. process+thread count 2992 (vs. ulimit of 5500)
modules C: hadoop-project hadoop-tools/hadoop-aws . U: .
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3864/4/console
versions git=2.25.1 maven=3.6.3 shellcheck=0.7.0
Powered by Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org

This message was automatically generated.

@steveloughran
Contributor Author

Regressions in the test. Unsure why I'm not seeing this:

[INFO] Running org.apache.hadoop.fs.s3a.TestArnResource
[ERROR] Tests run: 4, Failures: 3, Errors: 0, Skipped: 0, Time elapsed: 0.85 s <<< FAILURE! - in org.apache.hadoop.fs.s3a.TestArnResource
[ERROR] parseAccessPointFromArn(org.apache.hadoop.fs.s3a.TestArnResource)  Time elapsed: 0.604 s  <<< FAILURE!
org.junit.ComparisonFailure: Endpoint does not match expected:<s3[-accesspoint.]eu-west-1.amazonaws....> but was:<s3[.accesspoint-]eu-west-1.amazonaws....>
	at org.junit.Assert.assertEquals(Assert.java:117)
	at org.apache.hadoop.fs.s3a.TestArnResource.parseAccessPointFromArn(TestArnResource.java:60)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.lang.Thread.run(Thread.java:748)

[ERROR] parseAccessPointFromArn(org.apache.hadoop.fs.s3a.TestArnResource)  Time elapsed: 0.002 s  <<< FAILURE!
org.junit.ComparisonFailure: Endpoint does not match expected:<s3[-accesspoint.]eu-west-1.amazonaws....> but was:<s3[.accesspoint-]eu-west-1.amazonaws....>
	at org.junit.Assert.assertEquals(Assert.java:117)
	at org.apache.hadoop.fs.s3a.TestArnResource.parseAccessPointFromArn(TestArnResource.java:60)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.lang.Thread.run(Thread.java:748)

[ERROR] parseAccessPointFromArn(org.apache.hadoop.fs.s3a.TestArnResource)  Time elapsed: 0.002 s  <<< FAILURE!
org.junit.ComparisonFailure: Endpoint does not match expected:<s3[-accesspoint.]eu-west-1.amazonaws....> but was:<s3[.accesspoint-]eu-west-1.amazonaws....>
	at org.junit.Assert.assertEquals(Assert.java:117)
	at org.apache.hadoop.fs.s3a.TestArnResource.parseAccessPointFromArn(TestArnResource.java:60)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.lang.Thread.run(Thread.java:748)

[INFO] Running org.apache.hadoop.fs.s3native.TestS3xLoginHelper
[INFO] Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.225 s - in org.apache.hadoop.fs.s3native.TestS3xLoginHelper
[INFO] Running org.apache.hadoop.mapreduce.filecache.TestS3AResourceScope
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.218 s - in org.apache.hadoop.mapreduce.filecache.TestS3AResourceScope
[INFO] Tests run: 63, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 66.723 s - in org.apache.hadoop.fs.s3a.commit.staging.TestStagingCommitter
[INFO] 
[INFO] Results:
[INFO] 
[ERROR] Failures: 
[ERROR] org.apache.hadoop.fs.s3a.TestArnResource.parseAccessPointFromArn(org.apache.hadoop.fs.s3a.TestArnResource)
[ERROR]   Run 1: TestArnResource.parseAccessPointFromArn:60->Assert.assertEquals:117 Endpoint does not match expected:<s3[-accesspoint.]eu-west-1.amazonaws....> but was:<s3[.accesspoint-]eu-west-1.amazonaws....>
[ERROR]   Run 2: TestArnResource.parseAccessPointFromArn:60->Assert.assertEquals:117 Endpoint does not match expected:<s3[-accesspoint.]eu-west-1.amazonaws....> but was:<s3[.accesspoint-]eu-west-1.amazonaws....>
[ERROR]   Run 3: TestArnResource.parseAccessPointFromArn:60->Assert.assertEquals:117 Endpoint does not match expected:<s3[-accesspoint.]eu-west-1.amazonaws....> but was:<s3[.accesspoint-]eu-west-1.amazonaws....>
[INFO] 

@steveloughran
Contributor Author

I believe the test is failing because the hostnames returned as endpoints have been updated.

What to do?

  1. Identify the changes and update the constants in the test suite.
  2. Decide this is a bit brittle and disable the failing tests.

I'd like to go with the first option because it does mean we can track these changes and be aware of them.

We could try doing a DNS lookup as the ultimate validation, but that would move it from being a unit test to an integration test.

> dig s3-accesspoint.eu-west-1.amazonaws.com

; <<>> DiG 9.10.6 <<>> s3-accesspoint.eu-west-1.amazonaws.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 52406
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;s3-accesspoint.eu-west-1.amazonaws.com.	IN A

;; AUTHORITY SECTION:
s3-accesspoint.eu-west-1.amazonaws.com.	962 IN SOA ns-648.awsdns-17.net. awsdns-hostmaster.amazon.com. 1 7200 900 1209600 293

;; Query time: 173 msec
;; SERVER: 172.18.64.15#53(172.18.64.15)
;; WHEN: Tue Jan 11 13:08:41 GMT 2022
;; MSG SIZE  rcvd: 148

Side issue: I don't understand why I didn't see this problem myself. I did the integration test runs, and they should run all the unit tests as well. Will investigate.

@steveloughran
Contributor Author

I am not seeing this locally at all, which makes me suspect that there is some network IO going on here.

Except: I just turned the Wi-Fi off, re-ran the tests, and all was good. Which implies no DNS/rDNS lookups are involved.

I've relaxed the tests so they only check part of the FQDN.
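A relaxed check along these lines might look like the sketch below. This is a minimal illustration, not the actual TestArnResource code: the class and method names here are hypothetical, and the only parts asserted are the stable pieces of the FQDN (the "accesspoint" marker, the region, and the amazonaws.com suffix), so both the old and new SDK endpoint formats pass.

```java
// Hypothetical sketch of a relaxed endpoint assertion; not the real
// TestArnResource code. Checks only stable substrings of the FQDN
// rather than the exact hostname, which changes between SDK versions.
public class RelaxedEndpointCheck {

    static boolean looksLikeAccessPointEndpoint(String endpoint, String region) {
        // Both "s3-accesspoint.eu-west-1.amazonaws.com" (old SDK) and
        // "s3.accesspoint-eu-west-1.amazonaws.com" (new SDK) pass here.
        return endpoint.contains("accesspoint")
                && endpoint.contains(region)
                && endpoint.endsWith(".amazonaws.com");
    }

    public static void main(String[] args) {
        String oldForm = "s3-accesspoint.eu-west-1.amazonaws.com";
        String newForm = "s3.accesspoint-eu-west-1.amazonaws.com";
        if (!looksLikeAccessPointEndpoint(oldForm, "eu-west-1")
                || !looksLikeAccessPointEndpoint(newForm, "eu-west-1")) {
            throw new AssertionError("relaxed endpoint check failed");
        }
        System.out.println("ok");
    }
}
```

The trade-off is that the test no longer pins the exact endpoint string, so a future SDK change to the hostname layout would go unnoticed as long as the region and "accesspoint" marker survive.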

Change-Id: I64886976b680b4918216fc7755b0b1c8d13f1242
Change-Id: I2ac1dd994dab3a7fe5266137a691c4d24cd4f11c
Member

@aajisaka aajisaka left a comment


LGTM. +1 pending Jenkins. Thank you @steveloughran

@steveloughran
Contributor Author

thanks; will merge asap

@steveloughran steveloughran merged commit d8ab842 into apache:trunk Jan 18, 2022
asfgit pushed a commit that referenced this pull request Jan 18, 2022
With this update, the versions of key shaded dependencies are

  jackson    2.12.3
  httpclient 4.5.13

This backport patch does not include the TestArn changes needed
for the test to work with this version of the SDK; it is only
to be applied to branches without HADOOP-17198. "Support S3 Access Points".
If that patch is backported later, that test suite MUST be
updated to the latest version.

Contributed by Steve Loughran

Change-Id: I8d2b71781ee8472b16469531f9cd0de32dd3356f
bogthe pushed a commit to bogthe/hadoop that referenced this pull request Feb 2, 2022
steveloughran added a commit to steveloughran/hadoop that referenced this pull request Jun 24, 2022
With this update, the versions of key shaded dependencies are

  jackson    2.12.3
  httpclient 4.5.13

Contributed by Steve Loughran

Change-Id: Id9ed677352d54e8ea71b9729b6a4bfedc6142825
asfgit pushed a commit that referenced this pull request Jun 24, 2022
HarshitGupta11 pushed a commit to HarshitGupta11/hadoop that referenced this pull request Nov 28, 2022
jojochuang pushed a commit to jojochuang/hadoop that referenced this pull request May 23, 2023