[aws_s3 sink] output can't get files over 256Mb #22866
Comments
@cduverne So on the one hand, you may be able to trim down or compress your events with a remap transform before they hit the sink (a sketch is below); on the other hand, we need to look forward to progress on #10281 and its sub-tasks.
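For illustration, a minimal sketch of what "trim down or compress" could look like; the component names, fields, bucket, and region are made up for this example, not taken from the issue:

```yaml
transforms:
  trim_events:
    type: remap
    inputs: ["kafka_in"]          # hypothetical Kafka source name
    source: |
      # Drop attributes that are not needed downstream so each event is smaller.
      del(.raw_payload)
      del(.debug)

sinks:
  s3_out:
    type: aws_s3
    inputs: ["trim_events"]
    bucket: "my-example-bucket"   # illustrative
    region: "eu-west-1"           # illustrative
    compression: gzip             # compress the objects written to S3
    encoding:
      codec: json
```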
Hi @scMarkus! Thanks a lot for answering! Let's break down your comment so I'm sure I fully get it. (Quite new to Vector, sorry.)
Regarding your comment about using "some remap transform", here is what we implemented:
I'm a bit lost though. We should get 20M events a day, but we only get 11M.
Hi @cduverne, your transform seems to only add attributes to your event. If possible, you may parse your incoming event and extract only the data you need (sketched below). If all the info from the event is needed, then it cannot be helped. I do not spot anything that would be dropping data, as far as my understanding goes. If you are concerned, I recommend having a look at `vector top`; it shows live counters for all the components in your topology.
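Again just a hedged sketch of the "extract only what you need" idea; `kafka_in` and the field names are assumptions, not from this issue:

```yaml
transforms:
  extract_needed:
    type: remap
    inputs: ["kafka_in"]          # hypothetical Kafka source name
    source: |
      # Parse the raw Kafka message and keep only the fields that are needed;
      # everything else in the original payload is discarded.
      parsed = parse_json!(string!(.message))
      . = {
        "timestamp": parsed.timestamp,
        "service": parsed.service,
        "message": parsed.msg
      }
```

As for the counters, `vector top` connects to a running instance (it needs the Vector API enabled) and shows live received/sent counts per component, which should make it clear where the 20M-vs-11M gap appears.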
A note for the community
Problem
Hello,
We're setting up a new Vector sink to AWS S3, reading data from Kafka.
Unfortunately, even though there are GBs of data read from Kafka, the output files are always capped at approximately 256 MB.
We've set up a disk buffer with a large max buffer size, without success (a sketch of the batch vs. buffer settings involved is below).
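For context, here is a minimal sketch of the settings I understand to be in play (the names and numbers are illustrative, not our actual configuration). As far as I can tell, the `buffer.*` keys only control how much data Vector may queue for the sink, while the `batch.*` keys are what bound the size of each object written to S3; so if the ~256 MB cap comes from batching, `batch.max_bytes` / `batch.timeout_secs` would be the relevant knobs rather than `buffer.max_size`.

```yaml
sinks:
  s3_out:
    type: aws_s3
    inputs: ["kafka_in"]            # hypothetical upstream component
    bucket: "my-example-bucket"     # illustrative
    region: "eu-west-1"             # illustrative
    batch:
      max_bytes: 536870912          # aim for ~512 MB per object
      timeout_secs: 300             # or flush after 5 minutes, whichever comes first
    buffer:
      type: disk
      max_size: 10737418240         # ~10 GiB of on-disk queueing for the sink
      when_full: block
```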
Any help would be much appreciated.
Configuration
Version
0.46.0
Debug Output
Example Data
No response
Additional Context
No response
References
#22839