Lambda Timeouts #573
Comments
Thanks for reaching out. Sorry for the late reply (I was on vacation :)). I've heard about the issue but haven't been able to reproduce it. I'll run some tests over the weekend. It's really weird that it occurs in two independent Lambdas at the same time.
Thanks, let me know if you need any additional details.
Any news on this? Do you call any external APIs that may take longer?
So far no solution. Increasing the Lambda's memory seems to have helped, but we've also had a decrease in users around the same time, so it's hard to know for sure which caused the decrease in errors. The only external APIs we used were for analytics, and turning them off had no noticeable effect. After looking at the X-Ray traces (which mostly error out at AWS::Lambda), we're looking into database stuff again (though I'm not overly hopeful).
I have been having these issues as well. The error rate is low, but it's quite noticeable given that I'm handling millions of calls a month. I tried removing Dashbot, but that doesn't seem to have been the fix. There aren't really any external calls I'm making besides DynamoDB through the framework.
Hey @gshenar, could you take a look at your DynamoDB monitoring? We've had issues in the past where read/write capacity was reached during peak times.
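In case it helps anyone checking the same thing, here is a minimal sketch (assuming the AWS SDK for JavaScript v3; the region and the `my-skill-users` table name are placeholders) that pulls the DynamoDB read-throttle metric from CloudWatch:

```typescript
// Sketch: check whether a DynamoDB table was throttled recently.
// Assumes @aws-sdk/client-cloudwatch is installed and AWS credentials are
// configured; "my-skill-users" is a placeholder table name.
import {
  CloudWatchClient,
  GetMetricStatisticsCommand,
} from "@aws-sdk/client-cloudwatch";

const cloudwatch = new CloudWatchClient({ region: "us-east-1" });

async function readThrottleEvents(tableName: string): Promise<void> {
  const now = new Date();
  const result = await cloudwatch.send(
    new GetMetricStatisticsCommand({
      Namespace: "AWS/DynamoDB",
      MetricName: "ReadThrottleEvents",
      Dimensions: [{ Name: "TableName", Value: tableName }],
      StartTime: new Date(now.getTime() - 24 * 60 * 60 * 1000), // last 24h
      EndTime: now,
      Period: 3600, // one datapoint per hour
      Statistics: ["Sum"],
    })
  );
  for (const point of result.Datapoints ?? []) {
    console.log(point.Timestamp, point.Sum);
  }
}

readThrottleEvents("my-skill-users").catch(console.error);
```

A non-zero `Sum` (or the same for the write-side counterpart, `WriteThrottleEvents`) during the error spikes would point at capacity rather than at the framework.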
I had a similar problem with jovo-db-mysql (reproducible). I'm not sure which part of the config triggered the issue, but it looked like the connection pooling kept the Lambda function from terminating.
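For context on why pooling can have this effect: by default, Lambda waits for the Node.js event loop to empty before completing an invocation, so an open MySQL pool socket can hold the function open until it hits the timeout. Below is a minimal sketch of a Jovo v3-style Lambda entry point with the usual workaround; whether this matches the exact fix referenced in the next comment is an assumption, and the `./app` import is a placeholder:

```typescript
// Sketch: Lambda entry point that completes as soon as the response callback
// fires, instead of waiting for the event loop (e.g. open MySQL pool sockets)
// to drain. The "./app" import is a placeholder for your Jovo app module.
import { Lambda } from "jovo-framework";
import { app } from "./app";

export const handler = async (event: any, context: any, callback: any) => {
  // Without this flag, Lambda waits for the event loop to empty before
  // freezing, so a live connection pool can hold the invocation open
  // until it times out.
  context.callbackWaitsForEmptyEventLoop = false;
  await app.handle(new Lambda(event, context, callback));
};
```

The alternative is to explicitly close the pool (e.g. `pool.end()` in mysql2) after each response, which keeps the event loop genuinely empty at the cost of reopening connections per invocation.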
Thanks @klapper42, the solution you provided solved my issue. My use case was a basic one: I wanted my hello-world Jovo project to run in AWS Lambda, and for session data I chose MySQL instead of FileDb. When deployed to Lambda, it timed out.
Closing this issue due to inactivity.
I'm submitting a...
Expected Behavior
Requests should finish in ~1s without erroring or timing out.
Current Behavior
Somewhat randomly (though often in groups), error spikes happen, accompanied by timeouts.
The average error rate seems to be ~2-3% per day according to the Alexa Developer Console session metrics (although it has gone well above that for a few days; a higher Lambda timeout seems to help a bit).
What I've Tried
I narrowed the external services used in my skill down to: external assets, DynamoDB, and analytics.
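(For anyone isolating which of those calls is slow, here is a generic timing sketch; the commented-out calls are placeholders for whatever the skill actually invokes:)

```typescript
// Sketch: wrap each external call with a timer so CloudWatch logs show
// which dependency is slow. The called functions below are placeholders.
async function timed<T>(label: string, fn: () => Promise<T>): Promise<T> {
  const start = Date.now();
  try {
    return await fn();
  } finally {
    console.log(`${label} took ${Date.now() - start}ms`);
  }
}

// Usage (placeholder calls):
// await timed("dynamodb.saveUser", () => saveUser(userId));
// await timed("analytics.trackEvent", () => trackEvent(event));
```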
Based on the logs, it does seem to save (`Saved user: [id]`), analytics return, a response is sent (`Response JSON` from jovo-framework, `/Response` from the framework), and the request ends with `-- middleware 'platform.output' done`. However, at this point it eventually times out (after whatever my Lambda timeout is set to, currently 8s). Our working guess is that despite a response being sent, the Lambda never closes, which eventually produces an `END RequestId: [id]` and an error despite everything going alright. However, nothing other than the Lambda seems to be an issue when tested (everything seems to return, DB metrics look fine, and MP3 files are only used on the device side).
A small side note: we have two Jovo skills, and interestingly the large error/duration spikes happen at almost exactly the same times and in almost exactly the same amounts (looking at CloudWatch metrics). Another skill the client has also uses Jovo, and while it does have timeouts, it doesn't mirror the results the way my two skills do. Note: my two skills use separate databases and Lambdas, but they do use the same server for assets (though nothing is fetched server side) and the same analytics platform (though again, it always returns well before the timeout); nothing else should be shared.
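One way to test that guess is to log what is still keeping the event loop alive right before the handler returns. A rough sketch follows; note that `process._getActiveHandles()` is an undocumented Node.js internal (useful only as a debugging hint), and the handler body is a placeholder:

```typescript
// Sketch: log open handles just before returning, to see what is keeping
// the event loop (and thus the Lambda) alive after the response was sent.
export const handler = async (event: any, context: any, callback: any) => {
  try {
    // ... your existing Jovo app.handle(...) call goes here (placeholder) ...
  } finally {
    console.log(`remaining time: ${context.getRemainingTimeInMillis()}ms`);
    // Undocumented Node internal; fine for debugging, not production logic.
    const handles = (process as any)._getActiveHandles();
    console.log(
      `open handles: ${handles.map((h: any) => h.constructor.name).join(", ")}`
    );
  }
};
```

Lingering `Socket` or `Pool`-like handles in that output would support the theory that a connection is holding the invocation open.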
Logs
Logs look something like this (with DEBUG level on):
Conclusion
I'm not sure if I'm missing something obvious in the logs or if it's some internal Jovo behavior I haven't accounted for, but after working with Amazon for a bit to narrow down the issue, we decided it would be best to reach out to you. Any help would of course be appreciated.
Your Environment