S3 Output Enhancements: Track S3 feature requests here #2700
@elrob 's requests: #1004 (comment)
My response: #1004 (comment)
For this one, I'm considering adding another special format string in the S3 key. That's not a perfect solution though... The PutObject API is called under two circumstances:
In both cases I want to force some sort of UUID interpolation to ensure the key is unique. I suppose one thing I could do is split the S3 key on … Another option would just be to include the … Thoughts?
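A sketch of how such a format string might be used in the output configuration (the `$UUID` variable and paths here are illustrative of the proposal, not a committed design):

```ini
[OUTPUT]
    Name           s3
    Match          *
    bucket         my-log-bucket
    region         us-east-1
    # $TAG and the strftime specifiers already interpolate the tag and
    # timestamp; $UUID would add a random suffix so retried PutObject
    # calls never collide on the same key
    s3_key_format  /logs/$TAG/%Y/%m/%d/%H-%M-%S-$UUID.json
```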
@shailegu requests pre-signed URLs: #1004 (comment) I am very doubtful on the use case though; I think pre-signed URLs are one-time use only, so they don't really fit a project like Fluent Bit that is meant to be continually uploading data.
Supporting Parquet as an output format was requested as well: #1004 (comment)
@PettitWesley Thank you. I think adding the UUID part before the last …
Hi @PettitWesley, I came across an issue when configuring the S3 output to use an Object Lock-enabled S3 bucket. Would it be possible to include the …? From the AWS Object Lock doc:
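For context, AWS requires that any PutObject call to an Object Lock-enabled bucket include a `Content-MD5` header: the base64-encoded MD5 digest of the request body. A minimal sketch of computing it (the chunk contents are made up):

```python
import base64
import hashlib

def content_md5(body: bytes) -> str:
    # Base64-encoded MD5 digest of the object body, as expected
    # by the Content-MD5 header on PutObject
    return base64.b64encode(hashlib.md5(body).digest()).decode("ascii")

chunk = b'{"log": "example line"}\n'
header_value = content_md5(chunk)
```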
The logging provided by Fluent Bit supports the documentation:
Thanks!
@PettitWesley Can we expect gzip compression support for the S3 output plugin to be added anytime soon? It's the only impediment to our team migrating to Fluent Bit with S3.
@PettitWesley Per our discussion on Slack - it's pretty important that the S3 plugin be able to set the ACL on the files it uploads to S3. Without that, you cannot do cross-account writing safely, even with the https://docs.aws.amazon.com/AmazonS3/latest/user-guide/add-object-ownership.html feature. At a minimum, there should be a canned ACL defaulting to the "bucket-owner-full-control" policy. Better would be for us to be able to configure the ACL applied to the files. I think this should be a pretty simple change overall.
General note - I cannot make any definite promises on timeline, but we are watching this issue, and my team and I will be making our way through these requests over the next few weeks and months.
@diranged Hi, I am from Wesley's team and am working on the issue you mentioned above, supporting ACLs in S3. Do you think a canned ACL is good enough: https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html? That is, you would request the canned ACL you need and we would apply it to your uploads. Or is a canned ACL insufficient, and you want to grant permissions to specific users or AWS accounts?
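A sketch of what a canned-ACL option could look like in the output configuration (the `canned_acl` key name is illustrative of the proposal under discussion, not a confirmed option):

```ini
[OUTPUT]
    Name        s3
    Match       *
    bucket      cross-account-bucket
    region      us-east-1
    # grants the bucket-owning account full control over each
    # uploaded object, enabling safe cross-account writes
    canned_acl  bucket-owner-full-control
```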
Hi @PettitWesley, thanks so much for your work on the S3 plugin. Just wondering if the compression is still being worked on? 😄
@hawkesnc it has been merged but not released, IIRC. CC @zhonghui12
Some discussion on …
Documentation: we could not find how to configure IAM; which permissions are used? Compression: +1, it reduces overall costs.
Gzip compression is available in S3, and you can …
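Conceptually, the compression option gzips each buffered chunk before upload, so S3 stores a smaller object with the same content. A minimal sketch in Python (the chunk contents are made up):

```python
import gzip

def compress_chunk(chunk: bytes) -> bytes:
    # gzip the buffered log chunk; the compressed bytes become the
    # S3 object body, cutting storage and transfer costs
    return gzip.compress(chunk)

chunk = b'{"log": "example line"}\n' * 1000
body = compress_chunk(chunk)
```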
I'm expecting …
Hi @zhonghui12, the required IAM permissions aren't predictable, and the S3 plugin page is missing this information. I looked for the … Edit: …
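A minimal IAM policy sketch for the S3 output (the bucket name is a placeholder; depending on how multipart uploads are handled, actions such as `s3:AbortMultipartUpload` may also be needed, so verify against your setup):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject"],
      "Resource": "arn:aws:s3:::my-log-bucket/*"
    }
  ]
}
```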
Hi @fvasco, thanks for the suggestions. We've submitted the PRs and they are ready to be merged. The documentation will be updated soon. |
@fvasco Use v1.6.10 or the code in the 1.6 branch for compression support.
Hi @PettitWesley 👋 I wrote up this feature request … TL;DR: it would be great if we could configure the AWS S3 credentials via the output configuration 😄
I have explained my issue in #2962. The last chunk of the log (which does not meet the minimum size of a multipart upload part) should be part of the multipart upload before the log router exits. This will save us from stitching the logs together later to maintain chronological order. Here we don't expect the container to restart, hence the request.
I plan to query the logs via AWS Athena. One issue I ran into with compression is that Athena does not read the logs as gzip format unless the extension is …
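Athena detects gzip by the `.gz` object suffix, so once the key format is fully configurable, the workaround would be to end the key in `.gz`. An illustrative sketch (option names follow the plugin's `s3_key_format` style; verify against the released docs):

```ini
[OUTPUT]
    Name           s3
    Match          *
    bucket         my-log-bucket
    region         us-east-1
    compression    gzip
    # ending the key in .gz lets Athena recognize the objects
    # as gzip-compressed
    s3_key_format  /athena/$TAG/%Y/%m/%d/%H-%M-%S.gz
```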
@tchen I believe we already put up a PR for those docs. CC @zhonghui12
Interesting. We are planning on fixing this in the next few months. I think I made a comment earlier in this issue on the plan. You'll have the option of configuring where the randomness gets added to the file name, which will let you set any extension which is needed. |
We will track the request from @bksteiny (#2700 (comment)) and brunosimsenhor (#3035) for AWS Object Lock here. |
This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days. |
Those stale-bot comments are annoying 😐 What are folks looking for us to improve in the S3 output?
I am not sure if I am just missing it, but I can't find anywhere to jam in an access key/secret access key for the plugin to use. It seems this does the more sophisticated ARN -> STS -> temporary access key/secret access key lookups for machines running as AWS workloads, but there is no way to bypass that with a provided access key/secret access key. Background: I have a developing use case where I would like to store logs of a device I control that is temporarily on someone else's network. My log server isn't accessible, and I am prohibited at the network level from opening a VPN session or IPsec tunnel. However, I can reach AWS S3, so I was thinking of using Fluent Bit to upload the logs to S3, and then have my local log server pull those logs down (likely with Fluentd, since Fluent Bit doesn't support S3 input). However, I can't figure out how to tell Fluent Bit to use an access key/secret access key to upload the logs.
@PettitWesley I have another request: … 🙏
@elrob I believe that exists; S3 supports …
@justchris1 Fluent Bit supports all standard AWS credential sources, including environment variables and a local credentials file; see https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html
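A sketch of the static-credential source the default chain reads (the values are placeholders; exporting `AWS_ACCESS_KEY_ID` / `AWS_SECRET_ACCESS_KEY` as environment variables works the same way):

```ini
; ~/.aws/credentials, picked up by the default AWS credential chain
[default]
aws_access_key_id     = AKIAEXAMPLEKEYID
aws_secret_access_key = exampleSecretAccessKey
```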
Thanks @PettitWesley |
@PettitWesley Here's a feature request for S3 output: the ability to configure the content formatting of uploaded S3 objects. The uploaded objects are always newline-separated JSON files; it would be great to allow a new key called … I am not familiar with the internal workings of this plugin, but if this feature holds weight with the community, I can look into it and possibly raise a PR.
This issue is stale because it has been open 90 days with no activity. Remove stale label or comment or this will be closed in 5 days. Maintainers can add the …
This issue was closed because it has been stalled for 5 days with no activity. |
We're not actively planning any more S3 enhancements right now, but we're keeping this open for new requests.
Request: to send different logs to different buckets, instead of:
…
it would be great if I could just do:
…
This seems like a fairly obvious use case, so if I'm mistaken and this is already possible, or if there is some technical reason why this will NEVER be possible, please feel free to let me know!
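One way to illustrate the request (the dynamic `bucket` value in the second block is hypothetical; today `bucket` only takes a literal name):

```ini
# Today: one [OUTPUT] section per destination bucket
[OUTPUT]
    Name    s3
    Match   app-a.*
    bucket  app-a-logs

[OUTPUT]
    Name    s3
    Match   app-b.*
    bucket  app-b-logs

# Requested: a single section whose bucket is derived from the
# record or tag (syntax illustrative only)
[OUTPUT]
    Name    s3
    Match   *
    bucket  $TAG-logs
```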
@pranavmarla this would be possible to implement. I think we will prioritize record accessor support for the cloudwatch_logs plugin first, though.
Sure, thanks @PettitWesley |
+1 here; I would like the bucket value to accept a record accessor, as @pranavmarla said above, or to accept tags.
S3 support was released in 1.6; however, there are a bunch of outstanding requests for improvements in the original ticket: #1004
Please comment with new S3 feature requests here.