
Support cn-north-1 and us-gov-west-1 buckets #41

Merged (2 commits, Feb 20, 2017)

Conversation

joelittlejohn (Contributor)

For cn-north-1 and us-gov-west-1 it's not possible to check the bucket location using a default endpoint. The AWS credentials for those regions are not valid for the default S3 endpoint since the users and keys exist in an entirely different AWS partition.

With this change s3-wagon-private will attempt to detect the region using the DefaultAwsRegionProviderChain (AWS_REGION env var, then the current AWS profile, then instance metadata). If the detected region belongs to a partition other than the standard "aws" partition, the AmazonS3 client is left to decide the S3 endpoint. If the detected region belongs to the standard "aws" partition, then the endpoint is detected using the bucket location, as it has been in the past.

Note that, as part of this change, regions not yet present in the aws-maven project are also supported.

Closes s3-wagon-private#39
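The partition logic described above can be sketched in isolation. This is a minimal, hypothetical illustration (not the plugin's actual code): it maps a region name to its AWS partition by prefix and decides whether the old bucket-location lookup is safe, assuming the three real partition names "aws", "aws-cn", and "aws-us-gov".

```java
// Hypothetical sketch of the partition decision, not the plugin's actual code.
public class PartitionSketch {

    // Map a region name to its AWS partition by prefix.
    static String partitionFor(String region) {
        if (region.startsWith("cn-")) {
            return "aws-cn";       // e.g. cn-north-1
        }
        if (region.startsWith("us-gov-")) {
            return "aws-us-gov";   // e.g. us-gov-west-1
        }
        return "aws";              // the standard partition
    }

    // In the standard partition the bucket-location lookup against the default
    // endpoint is safe; elsewhere the credentials are not valid for the default
    // S3 endpoint, so the client should derive the endpoint from the region.
    static boolean useBucketLocationLookup(String region) {
        return "aws".equals(partitionFor(region));
    }

    public static void main(String[] args) {
        System.out.println(partitionFor("cn-north-1"));    // aws-cn
        System.out.println(partitionFor("us-gov-west-1")); // aws-us-gov
        System.out.println(partitionFor("eu-west-1"));     // aws
    }
}
```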
joelittlejohn (Contributor, Author) commented Feb 6, 2017

@sheelc @danielcompton I've tested this against eu-west-1 and cn-north-1 and I can now push artifacts to both.

I've tried to retain exactly the same behaviour as this plugin had before for the regions in the standard partition (aws). So if the region cannot be derived by the provider chain, or if the region derived is one that is within the standard partition, the behaviour should be the same.

You'll see that I've also added an extra clause to use the bucket location directly if org.springframework.build.aws.maven.Region.fromLocationConstraint throws an IllegalArgumentException. I didn't want to remove this class entirely as I wasn't sure if some of the mapping it does is useful (e.g. it supports location strings like "US" and "EU"). If it throws an exception then we simply set the region on the AmazonS3 client and let it worry about choosing an endpoint. This should allow s3-wagon-private to support regions like eu-west-2 that are not present in org.springframework.build.aws.maven.Region.
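The fallback clause described here can be sketched as follows. This is a hypothetical illustration under stated assumptions: the stand-in mapping only covers the legacy "US" and "EU" constraints mentioned above (which historically resolve to us-east-1 and eu-west-1), and the method names are illustrative, not the plugin's actual API.

```java
// Hypothetical illustration of the fallback: try the legacy location-constraint
// mapping first, and use the raw bucket location for regions it does not know.
public class EndpointFallbackSketch {

    // Stand-in for org.springframework.build.aws.maven.Region.fromLocationConstraint,
    // which maps legacy constraints like "US" and "EU" and throws
    // IllegalArgumentException for regions it does not recognise.
    static String fromLocationConstraint(String location) {
        switch (location) {
            case "US": return "us-east-1";
            case "EU": return "eu-west-1";
            default:   throw new IllegalArgumentException("Unknown location: " + location);
        }
    }

    static String regionFor(String bucketLocation) {
        try {
            return fromLocationConstraint(bucketLocation);
        } catch (IllegalArgumentException e) {
            // Newer regions (e.g. eu-west-2): use the bucket location directly
            // and let the AmazonS3 client choose the endpoint.
            return bucketLocation;
        }
    }

    public static void main(String[] args) {
        System.out.println(regionFor("US"));        // us-east-1
        System.out.println(regionFor("eu-west-2")); // eu-west-2
    }
}
```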

Comments and feedback welcome of course! Please shout if you'd like anything to change.

danielcompton (Member) left a comment:

This all looks pretty good. The only thing I'd like to see if possible is some tests around the behaviour. I don't know how easy/possible that is given the stateful nature of the provider chains, so don't worry if it's too hard.

sheelc (Member) commented Feb 7, 2017

This is awesome, thanks @joelittlejohn! Hopefully we can eventually remove the fallback behavior from org.springframework.build.aws.maven.Region.fromLocationConstraint (and even our own detection through the AWS SDK), since it probably makes sense for most people to go through the default region provider chain. Most of the same mechanisms used to provide the access key/secret key also work for the region, so it's not that much more burdensome on users.

The only concern I have is on the general catch of AmazonClientException since that class is so generic -- might lead to a lot of confusion. From the documentation: "This class is primarily for errors that occur when unable to get a response from a service, or when the client is unable to parse the response from a service." I'm not sure what the lesser of evils is here:

  • Continue the general catch of AmazonClientException and hope other unrelated exceptions don't get lost here
  • Try to do some vague contains match on the error message and make sure it is specifically the one about not being able to load region (but then it'll likely stop working once someone changes that string)
  • Use the classes that make up the DefaultAwsRegionProviderChain directly, since those don't throw when they can't find a region. If none of the (currently) three providers in the default chain yields a region, we know a region was not provided that way (but if the default region provider chain ever adds another provider class, we'll be out of date)

I'm leaning towards the third option since the region provider chain doesn't seem to add more providers very often. What do you all think? Am I being paranoid over catching the AmazonClientException?
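The third option above can be sketched generically. This is a hypothetical, SDK-free illustration: each provider is modelled as a plain Supplier that returns null when it cannot find a region (the real SDK provider classes are stand-ins here), and the chain simply takes the first non-null answer instead of throwing.

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Supplier;

// Hypothetical sketch of iterating region providers directly, treating
// "no region found" as null rather than as an AmazonClientException.
public class RegionChainSketch {

    // Return the first region any provider yields, or null if none does.
    static String firstRegion(List<Supplier<String>> providers) {
        for (Supplier<String> provider : providers) {
            String region = provider.get(); // null when this provider finds nothing
            if (region != null) {
                return region;
            }
        }
        return null; // no region; fall back to bucket-location detection
    }

    public static void main(String[] args) {
        // Stand-ins for the three providers the default chain currently uses.
        List<Supplier<String>> providers = Arrays.asList(
            () -> System.getenv("AWS_REGION"), // env var provider
            () -> null,                        // profile provider (none configured here)
            () -> null                         // instance-metadata provider
        );
        System.out.println(firstRegion(providers));
    }
}
```

The advantage over catching AmazonClientException is that an unrelated failure (say, a network error in the metadata provider) would surface as its own exception instead of being silently swallowed by the fallback path.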

joelittlejohn (Contributor, Author) commented Feb 7, 2017

@sheelc Very good point about the exception handling. My overriding concern here was ensuring that no current user of s3-wagon-private would start receiving exceptions when they had a working configuration previously. Another factor to throw into the mix is: the current implementation of AwsRegionProviderChain will only ever throw AmazonClientException in the case that we want to catch here. Obviously this is specific to the current implementation but in practice it does mean that the issue you mention is only theoretical. If this changes when we upgrade the AWS SDK then we can do something better with whatever new class of error is introduced. I'm happy to be driven by you guys but I feel like re-implementing the chain is a little too far to go.

@danielcompton I'll take a look at adding some tests. I didn't want to go as far as deciding the testing strategy for this project in this PR 😄 If I can find a way to do it (without obfuscating this code too much with indirection) then I'd like to test the fundamentals of how this change falls-back through the different options.

danielcompton (Member):

Yeah totally :)

joelittlejohn (Contributor, Author):

@danielcompton I've attempted to write some tests for this but they're somewhat mocktastic and after putting in the third 'protected' hook to allow me to replace parts of the wagon I'm not sure these tests are useful. All the AWS objects (the chains, the s3 client itself) are created inside the wagon. I can externalise some of these parts and mock everything but I think the tests may be more of an annoyance than a help.

If you're interested in some tests with a lot of mocking then I'll continue. Otherwise I think some kind of end-to-end smoke tests might be more appropriate.

danielcompton (Member):

Probably not worth it then at the moment. Smoke tests would be great, but I think we'd need an AWS China account to test with?

joelittlejohn (Contributor, Author):

Yes, absolutely. Some smoke tests for the core features would be good. Changes to the parts involving non-standard AWS partitions will probably always need someone with access to those partitions to run a test.

Even tests using some of the local mock/fake S3 libraries would be a little redundant here IMO, since the whole point of the rules here is that they should work with the specific access requirements required by the real S3 in different AWS partitions.

sheelc (Member) commented Feb 8, 2017

@joelittlejohn re: the exception handling, sounds good for now. We'll just keep it in mind in case there are any issues in the future.

danielcompton (Member):

I'm happy to merge this and release a beta. Sound good?

joelittlejohn (Contributor, Author):

@danielcompton ready to go I think.

danielcompton merged commit aeb3346 into s3-wagon-private:master on Feb 20, 2017
danielcompton (Member):

Awesome, I've pushed an alpha here: https://clojars.org/s3-wagon-private/versions/1.3.1-alpha1
