
HADOOP-16371: Option to disable GCM for SSL connections when running on Java 8 #970

Closed · wants to merge 1 commit

Conversation


@sahilTakiar (Contributor) commented Jun 14, 2019


Changes:

  • Patch is based on the merged patch from HADOOP-16050
  • Renamed SSLSocketFactoryEx to DelegatingSSLSocketFactory, since the class is not OpenSSL-specific (e.g., it is capable of simply delegating to the JSSE)
  • Added code comments to DelegatingSSLSocketFactory
  • Documented fs.s3a.ssl.channel.mode in performance.md and core-default.xml (see the usage sketch after this list)
  • If a user tries to configure OpenSSL as the mode via fs.s3a.ssl.channel.mode, an UnsupportedOperationException is thrown
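
A minimal usage sketch of the new option; the bucket name is a placeholder, and the Default_JSSE value is taken from the core-default.xml entry quoted later in this review:

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    // Hypothetical example: selecting the GCM-free JSSE mode for an S3A client.
    Configuration conf = new Configuration();
    conf.set("fs.s3a.ssl.channel.mode", "Default_JSSE");
    FileSystem fs = FileSystem.get(URI.create("s3a://my-bucket/"), conf);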

Testing Done:

  • Ran all S3 tests with mvn verify and the S3 scale tests with mvn verify -Dparallel-tests -Dscale -DtestsThreadCount=16 (did not have S3Guard or KMS tests set up)
  • Ran TestDelegatingSSLSocketFactory on Ubuntu and OSX with -Pnative and confirmed the test passes on both systems (on OSX it is skipped, on Ubuntu it actually runs)
  • Ran the ABFS tests against "East US 2"; the only failure was ITestGetNameSpaceEnabled.testNonXNSAccount (known issue)
  • Ran mvn package -Pdist -DskipTests -Dmaven.javadoc.skip=true -DskipShade, un-tarred hadoop-dist/target/hadoop-3.3.0-SNAPSHOT.tar.gz, ran ./bin/hadoop fs -ls s3a://[my-bucket-name]/ successfully, and confirmed that I could upload and read a file on S3 via the CLI without the wildfly jar on the classpath

@hadoop-yetus

🎊 +1 overall

Vote Subsystem Runtime Comment
0 reexec 92 Docker mode activated.
_ Prechecks _
+1 dupname 1 No case conflicting files found.
+1 @author 0 The patch does not contain any @author tags.
+1 test4tests 0 The patch appears to include 4 new or modified test files.
_ trunk Compile Tests _
0 mvndep 30 Maven dependency ordering for branch
+1 mvninstall 1139 trunk passed
+1 compile 991 trunk passed
+1 checkstyle 144 trunk passed
+1 mvnsite 162 trunk passed
+1 shadedclient 1062 branch has no errors when building and testing our client artifacts.
+1 javadoc 124 trunk passed
0 spotbugs 58 Used deprecated FindBugs config; considering switching to SpotBugs.
+1 findbugs 237 trunk passed
_ Patch Compile Tests _
0 mvndep 20 Maven dependency ordering for patch
+1 mvninstall 103 the patch passed
+1 compile 1087 the patch passed
+1 javac 1087 the patch passed
+1 checkstyle 150 the patch passed
+1 mvnsite 168 the patch passed
+1 whitespace 0 The patch has no whitespace issues.
+1 xml 4 The patch has no ill-formed XML file.
+1 shadedclient 714 patch has no errors when building and testing our client artifacts.
+1 javadoc 122 the patch passed
+1 findbugs 260 the patch passed
_ Other Tests _
+1 unit 549 hadoop-common in the patch passed.
+1 unit 298 hadoop-aws in the patch passed.
+1 unit 86 hadoop-azure in the patch passed.
+1 asflicense 50 The patch does not generate ASF License warnings.
Total 7572
Subsystem Report/Notes
Docker Client=18.09.5 Server=18.09.5 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-970/1/artifact/out/Dockerfile
GITHUB PR #970
JIRA Issue HADOOP-16371
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle
uname Linux bc86dcd2f684 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality personality/hadoop.sh
git revision trunk / ae4143a
Default Java 1.8.0_212
Test Results https://builds.apache.org/job/hadoop-multibranch/job/PR-970/1/testReport/
Max. process+thread count 1347 (vs. ulimit of 5500)
modules C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws hadoop-tools/hadoop-azure U: .
Console output https://builds.apache.org/job/hadoop-multibranch/job/PR-970/1/console
versions git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1
Powered by Apache Yetus 0.10.0 http://yetus.apache.org

This message was automatically generated.

@@ -234,7 +274,8 @@ private void configureSocket(SSLSocket ss) throws SocketException {
     // Remove GCM mode based ciphers from the supported list.
     for (int i = 0; i < defaultCiphers.length; i++) {
       if (defaultCiphers[i].contains("_GCM_")) {
-        LOG.debug("Removed Cipher - " + defaultCiphers[i]);
+        LOG.debug("Removed Cipher - " + defaultCiphers[i] + " from list of " +
+            "enabled SSLSocket ciphers");
Contributor:

Use SLF4J {} expansion rather than inline concatenation; there are a lot of commons-logging-era log statements, and changing a line is the time to upgrade it.
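
A minimal sketch of the suggested change, reusing the log line quoted above:

    // Before: commons-logging-era string concatenation.
    LOG.debug("Removed Cipher - " + defaultCiphers[i] + " from list of " +
        "enabled SSLSocket ciphers");

    // After: SLF4J {} expansion; the message is only assembled when
    // debug logging is actually enabled.
    LOG.debug("Removed cipher {} from the list of enabled SSLSocket ciphers",
        defaultCiphers[i]);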

<name>fs.s3a.ssl.channel.mode</name>
<value>Default_JSSE</value>
<description>
If secure connections to S3 are enabled, configures the SSL
Contributor:

I don't like having to remember the case of all these options, especially as every other s3a config option is all lower case. These should all be case-insensitive.
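
A minimal sketch of case-insensitive resolution, assuming the SSLChannelMode enum quoted elsewhere in this review; the helper name is hypothetical:

    // Hypothetical helper: match the configured value against the enum
    // constants without regard to case.
    public static SSLChannelMode resolveChannelMode(String configured) {
      for (SSLChannelMode mode : SSLChannelMode.values()) {
        if (mode.name().equalsIgnoreCase(configured.trim())) {
          return mode;
        }
      }
      throw new IllegalArgumentException("Unknown channel mode: " + configured);
    }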

/**
* Tests non-default values for {@link Constants#SSL_CHANNEL_MODE}.
*/
public class ITestS3ASSL extends AbstractS3ATestBase {
Contributor:

Better: a parameterized JUnit test (see ITestS3AMetadataPersistenceException) where the parameter isn't an enum but the string values we expect to see. I know hard-coding strings is not what I normally like, but doing it here would give us a regression test against the values changing.
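
A minimal sketch of that suggestion; the parameter values are assumptions based on the mode names quoted in this review:

    import java.util.Arrays;
    import java.util.Collection;

    import org.junit.runner.RunWith;
    import org.junit.runners.Parameterized;

    // Hypothetical parameterized test: hard-coded strings, not enum
    // constants, so that renaming an enum value fails the test.
    @RunWith(Parameterized.class)
    public class ITestS3ASSL extends AbstractS3ATestBase {

      private final String channelMode;

      @Parameterized.Parameters(name = "ssl-channel-mode-{0}")
      public static Collection<Object[]> params() {
        return Arrays.asList(new Object[][]{
            {"Default_JSSE"},
            {"Default_JSSE_with_GCM"},
        });
      }

      public ITestS3ASSL(String channelMode) {
        this.channelMode = channelMode;
      }
    }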

@@ -34,6 +34,7 @@
 import com.amazonaws.services.s3.model.AmazonS3Exception;
 import com.amazonaws.services.s3.model.MultiObjectDeleteException;
 import com.amazonaws.services.s3.model.S3ObjectSummary;
+import com.amazonaws.thirdparty.apache.http.conn.ssl.SSLConnectionSocketFactory;
Contributor:

I worry about this, as it then ensures that the s3a code will only load when the shaded aws JAR is on the CP; people won't be able to switch to the unshaded JAR with the unshaded httpclient code.

I'm also trying to move off adding more stuff to the S3AUtils class, as it's become a merge-conflict point with no real structure: every patch adds something to it. See Refactoring S3A for my thoughts there.

As this is adding new network setup, I'm going to propose:

  • putting this into a new class in o.a.h.fs.s3a.impl, something like "NetworkBinding", where we can add more stuff later on
  • doing whatever is needed to set up the AWS networking through reflection; if the shaded class can't be found on the CP, just skip trying to configure it (see the sketch after this list)
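
A minimal sketch of the reflection idea, under the assumption that NetworkBinding lands in o.a.h.fs.s3a.impl as suggested; the method shape and Runnable parameter are hypothetical:

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public final class NetworkBinding {

      private static final Logger LOG =
          LoggerFactory.getLogger(NetworkBinding.class);

      // Shaded class named in the quoted import above.
      private static final String SHADED_SSL_FACTORY =
          "com.amazonaws.thirdparty.apache.http.conn.ssl."
              + "SSLConnectionSocketFactory";

      private NetworkBinding() {
      }

      /** Run the SSL setup only if the shaded httpclient is loadable. */
      public static void bindSSLChannelMode(Runnable sslSetup) {
        try {
          Class.forName(SHADED_SSL_FACTORY);
        } catch (ClassNotFoundException e) {
          // Unshaded SDK on the CP: skip rather than fail.
          LOG.debug("Shaded httpclient not found; skipping SSL channel setup");
          return;
        }
        sslSetup.run();
      }
    }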


## <a name="coding"></a> Tuning SSL Performance

By default, S3A uses HTTPS to communicate with S3. This means that all
Contributor:

This should say "communicate with AWS Services".

@steveloughran (Contributor)

(did not have S3Guard or kms tests setup)

You are going to have to do the S3Guard tests here, because that talks to another AWS endpoint and it's important to know if there are any regressions. I'm not worried about the KMS stuff or the assumed-role code; there's enough testing of AWS Session credentials that talking to AWS STS is covered.

FWIW, we directly talk to the following services in production code:

  • AWS S3
  • AWS Secure Token Service
  • DynamoDB

We need to make sure we are going near all of them.

I've just been looking to see whether there's already a good parameterized test we could (ab)use to add another option to the config set, guaranteeing coverage of the interaction with these services without adding new tests (and new test delays). Nothing immediately springs to mind, though ITestS3AContractSeek seems like a good choice. Adding an extra column for each of the three options to set the SSL binding would verify that the seek code was happy with it; and when run with S3Guard enabled, we'd implicitly be testing that path too.

Sahil: what would you think about replacing ITestS3ASSL with some extra parameterization of the ITestS3AContractSeek test? I know it's nominally wrong to mix things up this way, but it would (a) not make test time any worse than it is and (b) be a more rigorous test: the seek code is skipping around, aborting connections, etc.

break;
case Default_JSSE:
} catch (NoSuchAlgorithmException e) {
LOG.warn("Failed to load OpenSSL. Falling back to the JSSE default.");
Contributor:

I don't want a warning in default mode; log at debug unless OpenSSL was explicitly asked for.
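
A minimal sketch of that behavior; the helper name and the requested-mode parameter are hypothetical:

    // Hypothetical helper: warn only when OpenSSL was explicitly requested,
    // otherwise note the fallback at debug level.
    private void logOpenSSLFallback(SSLChannelMode requested,
        NoSuchAlgorithmException e) {
      if (requested == SSLChannelMode.OpenSSL) {
        LOG.warn("Failed to load OpenSSL. Falling back to the JSSE default.", e);
      } else {
        LOG.debug("OpenSSL unavailable; falling back to the JSSE default.", e);
      }
    }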

channelMode = SSLChannelMode.Default_JSSE_with_GCM;
break;
default:
throw new AssertionError("Unknown channel mode: "
Contributor:

Make it NoSuchAlgorithmException so the current code can catch it.
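
A minimal sketch of the suggested change inside the switch quoted above:

    default:
      // Hypothetical: throw NoSuchAlgorithmException (java.security), which
      // the existing catch block already handles, instead of AssertionError.
      throw new NoSuchAlgorithmException("Unknown channel mode: "
          + channelMode);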

@@ -194,7 +194,7 @@
     <dependency>
       <groupId>org.wildfly.openssl</groupId>
       <artifactId>wildfly-openssl</artifactId>
-      <scope>compile</scope>
+      <scope>runtime</scope>
Contributor:

Unsure about this, as it is an existing dependency; leave as is.

@steveloughran (Contributor)

@sahilTakiar I've left some comments.

It'd be good for @DadanielZ to review too.

@hadoop-yetus

💔 -1 overall

Vote Subsystem Runtime Comment
0 reexec 87 Docker mode activated.
_ Prechecks _
+1 dupname 1 No case conflicting files found.
+1 @author 0 The patch does not contain any @author tags.
+1 test4tests 0 The patch appears to include 4 new or modified test files.
_ trunk Compile Tests _
0 mvndep 26 Maven dependency ordering for branch
+1 mvninstall 1263 trunk passed
+1 compile 1211 trunk passed
+1 checkstyle 155 trunk passed
+1 mvnsite 172 trunk passed
+1 shadedclient 1122 branch has no errors when building and testing our client artifacts.
+1 javadoc 131 trunk passed
0 spotbugs 64 Used deprecated FindBugs config; considering switching to SpotBugs.
+1 findbugs 274 trunk passed
_ Patch Compile Tests _
0 mvndep 27 Maven dependency ordering for patch
+1 mvninstall 126 the patch passed
+1 compile 1140 the patch passed
+1 javac 1140 the patch passed
-0 checkstyle 154 root: The patch generated 1 new + 16 unchanged - 0 fixed = 17 total (was 16)
+1 mvnsite 169 the patch passed
+1 whitespace 0 The patch has no whitespace issues.
+1 xml 3 The patch has no ill-formed XML file.
+1 shadedclient 753 patch has no errors when building and testing our client artifacts.
-1 javadoc 34 hadoop-tools_hadoop-aws generated 4 new + 1 unchanged - 0 fixed = 5 total (was 1)
+1 findbugs 310 the patch passed
_ Other Tests _
+1 unit 615 hadoop-common in the patch passed.
+1 unit 94 hadoop-aws in the patch passed.
+1 unit 83 hadoop-azure in the patch passed.
+1 asflicense 46 The patch does not generate ASF License warnings.
Total 8051
Subsystem Report/Notes
Docker Client=19.03.2 Server=19.03.2 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-970/13/artifact/out/Dockerfile
GITHUB PR #970
JIRA Issue HADOOP-16371
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle
uname Linux 6153829a62f6 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality personality/hadoop.sh
git revision trunk / 524b553
Default Java 1.8.0_212
checkstyle https://builds.apache.org/job/hadoop-multibranch/job/PR-970/13/artifact/out/diff-checkstyle-root.txt
javadoc https://builds.apache.org/job/hadoop-multibranch/job/PR-970/13/artifact/out/diff-javadoc-javadoc-hadoop-tools_hadoop-aws.txt
Test Results https://builds.apache.org/job/hadoop-multibranch/job/PR-970/13/testReport/
Max. process+thread count 1348 (vs. ulimit of 5500)
modules C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws hadoop-tools/hadoop-azure U: .
Console output https://builds.apache.org/job/hadoop-multibranch/job/PR-970/13/console
versions git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1
Powered by Apache Yetus 0.10.0 http://yetus.apache.org

This message was automatically generated.

@apache deleted 10 comments from hadoop-yetus Sep 11, 2019
@steveloughran (Contributor)

Thanks for updating this. Was this just a rebase + push, or have you made other changes?

@sahilTakiar (Contributor Author)

Yes, I rebased this and addressed your comments. I'm having some trouble running all the tests, though. I ran against us-east-1 and all the tests pass except the following: mvn test -Dtest=ITestS3AContractRename,ITestAuthoritativePath,ITestS3GuardTtl. I think it is because I don't have S3Guard access (I'm working on getting access); I'm getting a bunch of org.junit.AssumptionViolatedException: FS needs to have a metadatastore exceptions.

I'm working on getting S3Guard access, but wanted to update the PR in the meantime. Once I get access, I will re-run the tests with mvn verify -Ds3guard -Ddynamo.

Still trying to understand how to test against AWS Secure Token Service as well.

@sahilTakiar (Contributor Author)

Re-ran tests with:

<property>
  <name>test.fs.s3a.sts.enabled</name>
  <value>true</value>
</property>

in my auth-keys.xml file, and all the tests passed (except for ITestAuthoritativePath and ITestS3GuardTtl).

I re-ran ITestS3AContractRename and it works now, so maybe the test was just flaky.

@steveloughran self-assigned this Sep 16, 2019
case OpenSSL:
case Default:
if (!openSSLProviderRegistered) {
OpenSSLProvider.register();
Contributor:

By calling wildfly methods without using reflection, this class can only be loaded or used if wildfly is on the CP. While this is already a requirement of hadoop-azure, I don't want to add it to hadoop-aws. Somehow reflection is going to be needed here.

Contributor Author:

The check in NetworkBinding#bindSSLChannelMode explicitly prevents S3A users from setting fs.s3a.ssl.channel.mode to default or OpenSSL, so there should be no way an S3A user can trigger the Wildfly jar actually being used.

IIUC Java correctly, the JVM should still be able to load this class without Wildfly on the classpath. Java only looks for the Wildfly classes when a Wildfly class is initialized (in this case OpenSSLProvider). The import statements are only used during compilation. ref: https://stackoverflow.com/a/12620773/11511572
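
A minimal illustration of that lazy-linking behavior, with hypothetical classes (compile both, then delete Missing.class before running Holder):

    // Missing.java - stands in for the Wildfly classes.
    class Missing {
      static void touch() {
      }
    }

    // Holder.java - compiled against Missing, but Missing is only resolved
    // when touch() is actually reached.
    public class Holder {
      public static void main(String[] args) {
        // Prints even when Missing.class is absent at runtime.
        System.out.println("Holder loaded fine");
        if (args.length > 0) {
          // Only this call triggers resolution of Missing; without the
          // class on the classpath it throws NoClassDefFoundError here.
          Missing.touch();
        }
      }
    }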

Contributor:

Thanks.

I always had the belief that integers and other simple constants may just get compiled in, but that class loading will kick off as soon as you reference a string in the class. I should do more experiments; maybe even learn Java assembly language. Anyway, as long as we don't try referencing things like strings from the class, we should be okay. And as the wildfly JAR is not currently on the hadoop-aws test CP, we are implicitly verifying this. We will have to be careful once we add more support for it; we need to make sure we've not accidentally made it mandatory.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.ssl.DelegatingSSLSocketFactory;

import org.slf4j.Logger;
Contributor:

These should go up into the same import block as com.amazonaws.

Contributor Author:

Done

{INPUT_FADV_RANDOM},
{INPUT_FADV_NORMAL},
{INPUT_FADV_SEQUENTIAL},
{INPUT_FADV_RANDOM, Default_JSSE},
Contributor:

While it's good to see this coverage, we do now have a test which is taking 4 min, and I'm trying to keep the execution time of a non-scale test run with a runner pool of 12 down to < 20 mins; this isn't going to help.
I propose only having random and normal as the seek policies for the GCM tests, and removing these extra rows for the default JSSE. Yes, it's a simpler matrix, but as the default_jsse option gets implicitly tested everywhere, we aren't losing coverage.
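
A minimal sketch of the reduced matrix; the exact parameter rows are an assumption based on the quoted test data:

    // Hypothetical reduced parameter set: GCM coverage limited to the
    // random and normal seek policies; Default_JSSE rows dropped because
    // that mode is implicitly exercised by every other test.
    {INPUT_FADV_RANDOM},
    {INPUT_FADV_NORMAL},
    {INPUT_FADV_SEQUENTIAL},
    {INPUT_FADV_RANDOM, Default_JSSE_with_GCM},
    {INPUT_FADV_NORMAL, Default_JSSE_with_GCM},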

Contributor Author:

Done

@apache deleted a comment from hadoop-yetus Sep 16, 2019
@steveloughran (Contributor) commented Sep 16, 2019

Patch is coming together nicely; nearly there. I've done CLI tests as well as the -aws suite.

A big fear of mine is that the current patch will, through transitive references, fail if the wildfly JAR isn't on the CP.

But I couldn't actually create that failure condition when I tried on the CLI.

First, I extended my patched cloudstore s3a diagnostics to look for the new class:

class: org.wildfly.openssl.OpenSSLProvider
       Not found on classpath: org.wildfly.openssl.OpenSSLProvider

Tested IO against a store; all good.

And when I switch to an unsupported mode, I get the expected stack trace:

2019-09-16 13:06:11,124 [main] INFO  diag.StoreDiag (DurationInfo.java:<init>(53)) - Starting: Creating filesystem s3a://hwdev-steve-ireland-new/
2019-09-16 13:06:11,683 [main] INFO  diag.StoreDiag (DurationInfo.java:close(100)) - Creating filesystem s3a://hwdev-steve-ireland-new/: duration 0:00:561
java.lang.UnsupportedOperationException: S3A does not support setting fs.s3a.ssl.channel.mode OpenSSL or Default
	at org.apache.hadoop.fs.s3a.impl.NetworkBinding.bindSSLChannelMode(NetworkBinding.java:86)
	at org.apache.hadoop.fs.s3a.S3AUtils.initProtocolSettings(S3AUtils.java:1266)
	at org.apache.hadoop.fs.s3a.S3AUtils.initConnectionSettings(S3AUtils.java:1230)
	at org.apache.hadoop.fs.s3a.S3AUtils.createAwsConf(S3AUtils.java:1211)
	at org.apache.hadoop.fs.s3a.DefaultS3ClientFactory.createS3Client(DefaultS3ClientFactory.java:58)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.bindAWSClient(S3AFileSystem.java:543)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:364)
	at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3370)
	at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:136)
	at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3419)
	at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3387)
	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:502)
	at org.apache.hadoop.fs.Path.getFileSystem(Path.java:365)
	at org.apache.hadoop.fs.store.diag.StoreDiag.executeFileSystemOperations(StoreDiag.java:860)
	at org.apache.hadoop.fs.store.diag.StoreDiag.run(StoreDiag.java:409)
	at org.apache.hadoop.fs.store.diag.StoreDiag.run(StoreDiag.java:353)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
	at org.apache.hadoop.fs.store.diag.StoreDiag.exec(StoreDiag.java:1163)
	at org.apache.hadoop.fs.store.diag.StoreDiag.main(StoreDiag.java:1172)
	at storediag.main(storediag.java:25)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.util.RunJar.run(RunJar.java:323)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:236)
2019-09-16 13:06:11,685 [main] INFO  util.ExitUtil (ExitUtil.java:t

Which is telling me that my fears are misguided?

what do others say?

BTW, @bgaborg has been having problems with STS tests too; try setting a region for the endpoint. I'm starting to suspect the latest SDK needs this now.

@steveloughran (Contributor)

Did a full test run with -Ds3guard -Ddynamodb -Dscale -Dauth; all failures were DDB table delete timeouts and the (known) prune one. All good otherwise.

@sahilTakiar (Contributor Author) commented Sep 16, 2019

Thanks for the feedback and for running all the tests, Steve! I left a comment above about why I think everything will still work without wildfly on the classpath.

Working on addressing the other comments.

@sahilTakiar (Contributor Author)

Addressed comments and re-ran tests; the only additional failure is ITestS3AFileOperationCost, which is failing on trunk for me as well.

@hadoop-yetus

💔 -1 overall

Vote Subsystem Runtime Comment
0 reexec 147 Docker mode activated.
_ Prechecks _
+1 dupname 1 No case conflicting files found.
+1 @author 0 The patch does not contain any @author tags.
+1 test4tests 0 The patch appears to include 4 new or modified test files.
_ trunk Compile Tests _
0 mvndep 100 Maven dependency ordering for branch
+1 mvninstall 1532 trunk passed
+1 compile 1361 trunk passed
+1 checkstyle 167 trunk passed
+1 mvnsite 205 trunk passed
-1 shadedclient 1289 branch has errors when building and testing our client artifacts.
+1 javadoc 149 trunk passed
0 spotbugs 71 Used deprecated FindBugs config; considering switching to SpotBugs.
+1 findbugs 270 trunk passed
_ Patch Compile Tests _
0 mvndep 22 Maven dependency ordering for patch
+1 mvninstall 129 the patch passed
+1 compile 1130 the patch passed
+1 javac 1130 the patch passed
-0 checkstyle 181 root: The patch generated 1 new + 16 unchanged - 0 fixed = 17 total (was 16)
+1 mvnsite 169 the patch passed
+1 whitespace 1 The patch has no whitespace issues.
+1 xml 4 The patch has no ill-formed XML file.
+1 shadedclient 801 patch has no errors when building and testing our client artifacts.
-1 javadoc 33 hadoop-tools_hadoop-aws generated 4 new + 1 unchanged - 0 fixed = 5 total (was 1)
+1 findbugs 266 the patch passed
_ Other Tests _
+1 unit 545 hadoop-common in the patch passed.
+1 unit 87 hadoop-aws in the patch passed.
+1 unit 92 hadoop-azure in the patch passed.
+1 asflicense 48 The patch does not generate ASF License warnings.
Total 8761
Subsystem Report/Notes
Docker Client=18.09.7 Server=18.09.7 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-970/14/artifact/out/Dockerfile
GITHUB PR #970
JIRA Issue HADOOP-16371
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle
uname Linux 3eeeb1eb687e 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality personality/hadoop.sh
git revision trunk / 66bd168
Default Java 1.8.0_222
checkstyle https://builds.apache.org/job/hadoop-multibranch/job/PR-970/14/artifact/out/diff-checkstyle-root.txt
javadoc https://builds.apache.org/job/hadoop-multibranch/job/PR-970/14/artifact/out/diff-javadoc-javadoc-hadoop-tools_hadoop-aws.txt
Test Results https://builds.apache.org/job/hadoop-multibranch/job/PR-970/14/testReport/
Max. process+thread count 1341 (vs. ulimit of 5500)
modules C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws hadoop-tools/hadoop-azure U: .
Console output https://builds.apache.org/job/hadoop-multibranch/job/PR-970/14/console
versions git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1
Powered by Apache Yetus 0.10.0 http://yetus.apache.org

This message was automatically generated.

@steveloughran (Contributor)

the only additional failure is ITestS3AFileOperationCost, which is failing on trunk for me as well.

Is it? That's not good. That test uses the filesystem metrics to make assertions about how many operations actually go to S3. If it's failing, either there is something wrong with the assertions in your environment, or the connector is potentially making too many or too few calls to S3.

What are your test S3Guard settings?

@steveloughran (Contributor)

Thanks for the final revision and the explanation of why classloading wasn't going to be a problem.

+1; committed to trunk. Thanks!
