Issue #45: S3InboundFileSynchronizer failing when processing through … #46
Conversation
…essing through S3 directories - modified approach for splitPathToBucketAndKey to account for the fact that there will be more than one forward slash in the path when subdirectories are used in an S3 bucket. Set up split operation to use the limit parameter to force the bucket identifier into the first element and the remainder of the text (the key) into the second element.
@riversidetraveler Please sign the Contributor License Agreement!
…essing through S3 directories - modified unit test to include subdirectories.
@riversidetraveler Thank you for signing the Contributor License Agreement!
```diff
@@ -191,7 +191,7 @@ public S3InboundFileSynchronizer s3InboundFileSynchronizer() {
 	synchronizer.setPreserveTimestamp(true);
 	synchronizer.setRemoteDirectory(S3_BUCKET);
 	synchronizer.setFilter(new S3RegexPatternFileListFilter(".*\\.test$"));
-	Expression expression = PARSER.parseExpression("#this.toUpperCase() + '.a'");
+	Expression expression = PARSER.parseExpression("(#this.contains('/') ? #this.substring(#this.lastIndexOf('/') + 1) : #this).toUpperCase() + '.a'");
```
Thank you, @riversidetraveler, for the contribution!
Looks like we have one more issue here.
I've raised it as https://jira.spring.io/browse/INT-4086 because there is nothing to do on the SI-AWS side.
Hi Artem,
My pleasure. Do you think the change will make it into the 1.0.1 release of spring-integration-aws? I'm considering rolling my own build and deploying it for the time being to stay on my project schedule, but I'd like to get back onboard with the official Spring release of spring-integration-aws as soon as possible.
Well, I think we can come up with the 1.0.1 release over the next couple of weeks. And I'm sure we will have a release for SI core 4.3.x (or even 4.2.x) at the same time.
So you will be able to upgrade smoothly and have all the functionality aboard.
…essing through S3 directories - fixed additional usages of the split operation without the match limit parameter. DRY'd up some code. Cleaned up unit tests.
LGTM.
Author's list updated. Thanks!
```java
assertThat(directoryArray.length).isEqualTo(1);

// get the files we downloaded
File[] fileArray = directoryArray[0].listFiles();
```
I added

```java
File subDirectory = directoryArray[0];
assertThat(subDirectory.getName()).isEqualTo("subdir");
```

to be sure that we restore the remote hierarchy properly.
Also added your name to this class.
Merging...
Merged as 72821bf. Thank you for the contribution. Note: consider submitting Pull Requests from top-level branches: https://github.com/spring-projects/spring-integration/blob/master/CONTRIBUTING.adoc.
This pull request addresses an issue I encountered when trying to use the S3InboundFileSynchronizer on a bucket that was organized with subdirectories. The code in the splitPathToBucketAndKey method does not anticipate more than one forward slash (/); however, for buckets with subdirectories, the key will contain one or more slashes.
I modified splitPathToBucketAndKey to account for this: the String.split operation now uses the limit parameter to force the bucket identifier into the first element and the remainder of the text (the key) into the second element.
The unit tests continue to pass, and I've run a successful test of the code against a live S3 bucket organized with subdirectories.
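A minimal sketch of the split-with-limit idea described above; the class name, method signature, and sample path are illustrative stand-ins, not the library's actual code.

```java
public final class BucketKeySplitDemo {

    // Illustrative stand-in for the fixed splitting logic: limit = 2 keeps
    // everything after the first '/' together as the key, so subdirectory
    // separators inside the key are preserved.
    static String[] splitPathToBucketAndKey(String path) {
        return path.split("/", 2);
    }

    public static void main(String[] args) {
        String[] parts = splitPathToBucketAndKey("my-bucket/subdir/nested/file.test");
        System.out.println(parts[0]); // my-bucket
        System.out.println(parts[1]); // subdir/nested/file.test

        // Without the limit, split("/") would yield four elements
        // [my-bucket, subdir, nested, file.test], and the key's
        // subdirectory structure would be lost.
    }
}
```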