S3: Bucket management #1608
Conversation
Thank you for this PR! It would be more natural to expose this functionality as Futures/CompletionStages.
@ennru Could you explain why it would be more natural to use Futures/CompletionStages rather than Sources?
For asynchronous work that has at most a single result, futures express that better. As you see in your tests, all you'd normally do is run the source with a
I see the point, but while writing it I was also looking at this method:
I believe that As for the
We can keep the `Source` versions and add `Future` versions as an alternative.
Please add these great new features to the docs as well. (For that you'll need some Java snippets and can try out how it looks from Java.)
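A minimal sketch of the shape this suggests, with both variants side by side. The method names follow the naming settled later in this thread, and the `Source` body is a placeholder, not the actual implementation:

```scala
import akka.{Done, NotUsed}
import akka.stream.Materializer
import akka.stream.scaladsl.{Sink, Source}
import scala.concurrent.Future

// Stream-based variant: stays lazy and composable with other stages
def makeBucketSource(bucketName: String): Source[Done, NotUsed] =
  Source.single(Done) // placeholder for the actual S3 request

// Future-based convenience variant: eagerly runs the single-element source
def makeBucket(bucketName: String)(implicit mat: Materializer): Future[Done] =
  makeBucketSource(bucketName).runWith(Sink.head)
```

The `Future` variant expresses "at most a single result" directly, while the `Source` variant remains available for composition.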
```scala
response match {
  case HttpResponse(status, _, entity, _) if status.isSuccess() =>
    Source.fromFuture {
```
This wrapping of `Future`s into `Source`s should be removed; a `mapAsync` is much leaner than `flatMapConcat`.
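The suggested simplification, sketched with a hypothetical `handle` function standing in for the response processing (names and the result type are assumptions, not the PR's actual code):

```scala
import akka.NotUsed
import akka.http.scaladsl.model.HttpResponse
import akka.stream.scaladsl.Source
import scala.concurrent.Future

def handle(response: HttpResponse): Future[String] = ??? // hypothetical processing step

def responses: Source[HttpResponse, NotUsed] = ??? // hypothetical upstream

// Before: wrap each Future in a Source, then flatten the nesting away
val nested: Source[String, NotUsed] =
  responses.flatMapConcat(resp => Source.fromFuture(handle(resp)))

// After: mapAsync runs the Future directly, no Source wrapping needed
val leaner: Source[String, NotUsed] =
  responses.mapAsync(parallelism = 1)(handle)
```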
```diff
@@ -324,7 +324,7 @@ object S3 {
    * @param key the s3 object key
    * @param contentType an optional [[akka.http.javadsl.model.ContentType ContentType]]
    * @param s3Headers any headers you want to add
-   * @return a [[akka.stream.javadsl.Sink Sink]] that accepts [[akka.util.ByteString ByteString]]'s and materializes to a [[ava.util.concurrent.CompletionStage CompletionStage]] of [[MultipartUploadResult]]
+   * @return a [[akka.stream.javadsl.Sink Sink]] that accepts [[akka.util.ByteString ByteString]]'s and materializes to a [[java.util.concurrent.CompletionStage CompletionStage]] of [[MultipartUploadResult]]
```
Thank you!
```scala
 */
sealed trait BucketAccess

case object AccessDenied extends BucketAccess
```
For Java-friendliness:
- change `BucketAccess` from `trait` to `class`
- put the objects in `object BucketAccess`
- add values in the `BucketAccess` object (`val accessDenied: BucketAccess = AccessDenied`)
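The three points above could come together like this; `AccessDenied` is from the diff, while the other value names are illustrative assumptions:

```scala
sealed abstract class BucketAccess

object BucketAccess {
  case object AccessDenied extends BucketAccess
  case object AccessGranted extends BucketAccess // assumed additional state
  case object NotExists extends BucketAccess // assumed additional state

  /** Java API: plain vals spare Java callers the `$`-mangled object accessors */
  val accessDenied: BucketAccess = AccessDenied
  val accessGranted: BucketAccess = AccessGranted
  val notExists: BucketAccess = NotExists
}
```

An `abstract class` with vals in the companion gives Java code `BucketAccess.accessDenied` instead of `AccessDenied$.MODULE$`.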
```scala
val request: Future[Option[Done]] = S3
  .makeBucket(bucketName)
  .runWith(Sink.headOption)
```
Is there anything the `Option` expresses? Doesn't `Sink.ignore` say everything?
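In other words: on success the stream emits exactly one `Done`, so the `Option` carries no extra information. A sketch of the suggested form, reusing the `S3.makeBucket` Source from the snippet above:

```scala
import akka.Done
import akka.stream.Materializer
import akka.stream.alpakka.s3.scaladsl.S3
import akka.stream.scaladsl.Sink
import scala.concurrent.Future

// Sink.headOption yields Future[Option[Done]], but the Option is always
// Some(Done) when the stream succeeds. Sink.ignore yields Future[Done]
// directly and still fails the Future if the stream fails.
def request(bucketName: String)(implicit mat: Materializer): Future[Done] =
  S3.makeBucket(bucketName).runWith(Sink.ignore)
```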
Looks good now! Most importantly, `exists` is spelt wrong in a few places.
Co-Authored-By: MarcinAman <marcin.aman@gmail.com>
Please note that there is a tiny conflict with master.
Yes, I know. Currently I am working on finding more typos and this match statement.
@ennru I think I am quite happy with the current state of this code. Could you have a "final" look?
```scala
 * @param bucketName bucket name
 * @return [[scala.concurrent.Future Future]] with type [[Done]] as API doesn't return any additional information
 */
def makeBucket(bucketName: String)(implicit mat: Materializer, attr: Attributes = Attributes()): Future[Done] =
```
I'm a bit surprised the overload works for picking the correct method based on the return type. And I wonder if it would be better to come up with different names to properly distinguish them.
I mean, it is kind of both good and bad: it is consistent and clear, but it only works when you specify the type. Do you think it would be better if the `Source` version were `makeBucket` and the future versions `makeBucketF`?
I think I'd prefer to append `Source`. `def makeBucketSource(bucketName: String): Source[Done, NotUsed]` looks OK, doesn't it?
Makes sense 👍
I will submit an updated version in a few hours.
LGTM.
For more information about attributes see: @scaladoc[S3Attributes](akka.stream.alpakka.s3.S3Attributes$) and @scaladoc[Attributes](akka.stream.Attributes)

### Make bucket
In order to create a bucket in S3 you need to specify its unique name. This value has to conform to the [requirements](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-s3-bucket-naming-requirements.html).
The link seems to be the wrong one (CloudTrail); I guess this is the right one? -> https://docs.aws.amazon.com/AmazonS3/latest/dev/BucketRestrictions.html#bucketnamingrules
Yes, thank you for noticing. My bad for not checking it before merging. I will submit another PR tomorrow referencing that issue.
Pull Request Checklist
Fixes
Fixes #1546
Purpose
What does this PR do?
Regarding the issue that I posted, I've created some bucket management functionality. It allows the end user to create and delete buckets, and to check whether a bucket with a given name exists.
As for now the docs created by AWS doesn't specify a request to list all buckets so that functionality was not implemented.
Note
I'd be glad if someone could check whether the naming in the enum is understandable to others. Also, is there a better solution for reusing the same request with different mapping functions (I am talking about explicitly passing this implicit in S3Stream to the processing functions)?
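Taken together, the three operations could be used like this. Method names follow the naming settled in the review above; `checkIfBucketExists` and the `BucketAccess` result values are assumptions about the final API:

```scala
import akka.Done
import akka.stream.Materializer
import akka.stream.alpakka.s3.BucketAccess
import akka.stream.alpakka.s3.scaladsl.S3
import scala.concurrent.{ExecutionContext, Future}

// Create a bucket, check its access state, then delete it again.
// Each Future-based call runs its underlying single-element source eagerly.
def roundTrip(bucket: String)(implicit mat: Materializer,
                              ec: ExecutionContext): Future[BucketAccess] =
  for {
    _      <- S3.makeBucket(bucket)          // create
    access <- S3.checkIfBucketExists(bucket) // e.g. AccessGranted
    _      <- S3.deleteBucket(bucket)        // delete
  } yield access
```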