Support cn-north-1 and us-gov-west-1 buckets #41
Conversation
For cn-north-1 and us-gov-west-1 it's not possible to check the bucket location using a default endpoint. The AWS credentials for those regions are not valid for the default S3 endpoint, since the users and keys exist in an entirely different AWS partition. With this change, s3-wagon-private will attempt to detect the region using the `DefaultAwsRegionProviderChain` (`AWS_REGION` env var, then current AWS profile, then instance metadata). If a region is detected and it belongs to a different AWS partition than the standard "aws" partition, then the plugin lets the AmazonS3 client decide the S3 endpoint. If the detected region belongs to the standard "aws" partition, then the endpoint is determined using the bucket location, as it has been in the past. Note, as part of this change, regions not yet present in the aws-maven project are also supported. Closes s3-wagon-private#39
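The endpoint-selection rule described above can be sketched as follows. This is an illustrative, self-contained sketch, not the actual s3-wagon-private code: the class and helper names (`EndpointSelection`, `partitionFor`, `useClientDefaultEndpoint`) are hypothetical, though the region-name prefixes for the `aws-cn` and `aws-us-gov` partitions follow real AWS conventions.

```java
// Hypothetical sketch of the endpoint-selection logic in this PR.
// Helper names are illustrative; only the partition prefixes are real
// AWS conventions (cn-* -> aws-cn, us-gov-* -> aws-us-gov).
public class EndpointSelection {

    // Regions outside the standard "aws" partition use distinct name prefixes.
    static String partitionFor(String region) {
        if (region.startsWith("cn-")) return "aws-cn";
        if (region.startsWith("us-gov-")) return "aws-us-gov";
        return "aws";
    }

    // If the detected region is outside the standard partition, let the
    // S3 client derive the endpoint; otherwise fall back to the old
    // bucket-location lookup (a null region also takes the old path).
    static boolean useClientDefaultEndpoint(String detectedRegion) {
        return detectedRegion != null && !"aws".equals(partitionFor(detectedRegion));
    }

    public static void main(String[] args) {
        System.out.println(useClientDefaultEndpoint("cn-north-1"));    // true
        System.out.println(useClientDefaultEndpoint("us-gov-west-1")); // true
        System.out.println(useClientDefaultEndpoint("eu-west-1"));     // false
    }
}
```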
@sheelc @danielcompton I've tested this against eu-west-1 and cn-north-1 and I can now push artifacts to both. I've tried to retain exactly the same behaviour as this plugin had before for the regions in the standard partition. You'll see that I've also added an extra clause to use the bucket location directly if […]. Comments and feedback welcome of course! Please shout if you'd like anything to change.
This all looks pretty good. The only thing I'd like to see if possible is some tests around the behaviour. I don't know how easy/possible that is given the stateful nature of the provider chains, so don't worry if it's too hard.
This is awesome, thanks @joelittlejohn! Hopefully eventually we can remove the fallback behavior from […]. The only concern I have is on the general catch of […].
I'm leaning towards the third option, since the region provider chain doesn't seem to add more providers very often. What do you all think? Am I being paranoid over catching the […]?
@sheelc Very good point about the exception handling. My overriding concern here was ensuring that no current user of s3-wagon-private would start receiving exceptions when they had a working configuration previously. Another factor to throw into the mix: the current implementation of `AwsRegionProviderChain` will only ever throw `AmazonClientException` in the case that we want to catch here. Obviously this is specific to the current implementation, but in practice it does mean that the issue you mention is only theoretical. If this changes when we upgrade the AWS SDK, then we can do something better with whatever new class of error is introduced. I'm happy to be driven by you guys, but I feel like re-implementing the chain is a little too far to go. @danielcompton I'll take a look at adding some tests. I didn't want to go as far as deciding the testing strategy for this project in this PR 😄 If I can find a way to do it (without obfuscating this code too much with indirection) then I'd like to test the fundamentals of how this change falls back through the different options.
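The fallback being debated above can be illustrated with a small sketch. This is not the plugin's actual code: `RegionFallback`, `RegionProvider`, and the stand-in `ClientException` are hypothetical, standing in for the AWS SDK's `AwsRegionProviderChain` and `AmazonClientException` to show how a broad catch degrades to the old bucket-location behaviour.

```java
// Illustrative sketch (not the actual s3-wagon-private code) of the
// broad catch discussed above: a failure to resolve a region falls
// back to the previous bucket-location behaviour instead of failing.
public class RegionFallback {

    // Stand-in for AmazonClientException, which is what the current
    // DefaultAwsRegionProviderChain throws when no region can be found.
    static class ClientException extends RuntimeException {
        ClientException(String msg) { super(msg); }
    }

    // Stand-in for the SDK's region provider chain.
    interface RegionProvider { String getRegion(); }

    // Returns the detected region, or null to signal "use the old
    // bucket-location lookup" when the chain cannot resolve a region.
    static String detectRegionOrNull(RegionProvider chain) {
        try {
            return chain.getRegion();
        } catch (ClientException e) {
            // Swallowing the exception preserves the pre-change behaviour
            // for users whose configurations worked before this PR.
            return null;
        }
    }

    public static void main(String[] args) {
        RegionProvider empty = () -> { throw new ClientException("no region"); };
        RegionProvider fixed = () -> "cn-north-1";
        System.out.println(detectRegionOrNull(empty)); // null
        System.out.println(detectRegionOrNull(fixed)); // cn-north-1
    }
}
```

The trade-off discussed in the thread is that a broad catch could also hide an unrelated error from a future provider added to the chain; the comments above settle on accepting that risk for now.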
Yeah totally :)
@danielcompton I've attempted to write some tests for this but they're somewhat mocktastic, and after putting in the third 'protected' hook to allow me to replace parts of the wagon, I'm not sure these tests are useful. All the AWS objects (the chains, the S3 client itself) are created inside the wagon. I can externalise some of these parts and mock everything, but I think the tests may be more of an annoyance than a help. If you're interested in some tests with a lot of mocking then I'll continue. Otherwise I think some kind of end-to-end smoke tests might be more appropriate.
Probably not worth it then at the moment. Smoke tests would be great, but I think we'd need an AWS China account to test with?
Yes, absolutely. Some smoke tests for the core features would be good. Changes to the parts involving non-standard AWS partitions will probably always need someone with access to those partitions to run a test. Even tests using some of the local mock/fake S3 libraries would be a little redundant here IMO, since the whole point of the rules here is that they should work with the specific access requirements of the real S3 in different AWS partitions.
@joelittlejohn re: the exception handling, sounds good for now. We'll just keep it in mind in case there are any issues in the future.
I'm happy to merge this and release a beta. Sound good?
@danielcompton ready to go I think.
Awesome, I've pushed an alpha here: https://clojars.org/s3-wagon-private/versions/1.3.1-alpha1