@duxthefux, sorry that you ran into this. Is there anything in the CloudWatch logs that indicates exactly why it is failing?
As a quick fix, have you tried increasing the MemorySize and Timeout configuration of your Lambda function? (What values do you currently have them set to?)
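For reference, something like the following should bump both values via the AWS CLI (the function name here is just a placeholder; adjust the values to whatever your account limits allow):

```sh
aws lambda update-function-configuration \
  --function-name <your-function-name> \
  --timeout 300 \
  --memory-size 1536
```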
Regarding changes to the code: the current version downloads the entire S3 object before processing and uploading the results to S3. The processing-and-upload side already uses a bufferStream, so I think the code could be modified so that the s3.getObject() call becomes the head of the stream.
I think you could just remove the download function from the async pipeline and replace bufferStream at line 309 with: `s3.getObject({ Bucket: bucket, Key: key }).createReadStream()`
This would only help if the Lambda timeout or memory size is actually the reason it is failing.
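In case it helps, here is a rough (untested) sketch of what I mean. The handler shape and `processAndUpload()` are just placeholders for what the function already does; the only real point is that the S3 read stream becomes the head of the pipeline instead of a fully downloaded buffer:

```js
// Sketch only: stream the S3 object instead of buffering it in memory.
// processAndUpload() stands in for the existing processing/upload stream stage.
const AWS = require('aws-sdk');
const { pipeline } = require('stream');

const s3 = new AWS.S3();

exports.handler = (event, context, callback) => {
  const record = event.Records[0].s3;
  const bucket = record.bucket.name;
  const key = decodeURIComponent(record.object.key.replace(/\+/g, ' '));

  // The S3 read stream is now the head of the pipeline.
  const source = s3.getObject({ Bucket: bucket, Key: key }).createReadStream();

  pipeline(source, processAndUpload(), (err) => {
    if (err) return callback(err);
    callback(null, 'done');
  });
};
```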
Hey, I don't think the problem is caused by MemorySize or Timeout, because Timeout is already set to 4m50s and MemorySize to the maximum.
Moreover, I get clean logs from CloudWatch: http://pastebin.com/jFyNrJ43
(It's only parsing 5000 events because I added a limit; I thought the problem was caused by Loggly's 5 MB bulk limit.) In the very beginning it worked a few times, but not any longer... it's pretty weird. :-/
Hey,
I've been trying to get everything working for a couple of hours, and I just figured out it's failing because of the log size (around 4 MB).
If I test it with a smaller log file, it pushes to Loggly without any problems.
Do you know of a workaround? Maybe something that avoids sending the whole log at once?
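Something roughly like this is what I have in mind, just a sketch and untested: split the parsed lines into batches that stay under the 5 MB bulk limit and POST each batch separately (the token, tag, and batch margin below are placeholders, and I'm assuming Loggly's bulk endpoint accepts newline-separated events):

```js
// Sketch: batch log lines under ~5 MB and send each batch to Loggly separately.
const https = require('https');

const MAX_BATCH_BYTES = 5 * 1024 * 1024 - 1024; // stay safely under 5 MB

function splitIntoBatches(lines) {
  const batches = [];
  let current = [];
  let size = 0;
  for (const line of lines) {
    const lineBytes = Buffer.byteLength(line, 'utf8') + 1; // +1 for newline
    if (size + lineBytes > MAX_BATCH_BYTES && current.length > 0) {
      batches.push(current.join('\n'));
      current = [];
      size = 0;
    }
    current.push(line);
    size += lineBytes;
  }
  if (current.length > 0) batches.push(current.join('\n'));
  return batches;
}

function postBatch(batch, callback) {
  // LOGGLY_TOKEN and the tag are placeholders for the function's configuration.
  const req = https.request({
    hostname: 'logs-01.loggly.com',
    path: `/bulk/${process.env.LOGGLY_TOKEN}/tag/s3-logs/`,
    method: 'POST',
    headers: { 'Content-Type': 'text/plain' },
  }, (res) => {
    res.resume();
    res.on('end', () => callback(null, res.statusCode));
  });
  req.on('error', callback);
  req.end(batch);
}

// Usage idea: splitIntoBatches(lines).forEach(batch => postBatch(batch, done));
```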
Thanks!