Document testing to reduce lambda cold start time #693
Conversation
A few other things you could try, from simple to more difficult:
Please be careful with the idea of rewriting in another language.
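One of the suggestions picked up in the reply below is the JVM tiered compilation setting. A minimal sketch of one common way to apply it to a JVM lambda, assuming the AWS SDK v2 for Java from Scala and a hypothetical function name (note that `updateFunctionConfiguration` replaces the function's whole environment variable map, so any existing variables would need to be merged in):

```scala
import scala.jdk.CollectionConverters._
import software.amazon.awssdk.services.lambda.LambdaClient
import software.amazon.awssdk.services.lambda.model.{Environment, UpdateFunctionConfigurationRequest}

object SetTieredCompilation extends App {
  val lambda = LambdaClient.create()
  // Lambda passes JAVA_TOOL_OPTIONS to the JVM at startup; stopping tiered
  // compilation at level 1 (C1 only) trades peak throughput for a faster start.
  lambda.updateFunctionConfiguration(
    UpdateFunctionConfigurationRequest.builder()
      .functionName("mobile-notifications-sender") // hypothetical function name
      .environment(
        Environment.builder()
          .variables(Map("JAVA_TOOL_OPTIONS" -> "-XX:+TieredCompilation -XX:TieredStopAtLevel=1").asJava)
          .build()
      )
      .build()
  )
}
```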
Thanks for these suggestions @mchv :) I'll take another look at the tiered compilation setting and at how to reduce the dependencies. I'd also tested using arm64 as the runtime architecture but didn't notice an impact; I'd missed this out of my test results (sorry!). Great insight about the JVM vs other languages. At the moment I don't think we have enough data/evidence to suggest that migrating the sender lambdas to another language would solve our problem. Our next steps include understanding more about how the sender lambdas are operating (including a more detailed breakdown of how time is being spent/lost) and trying to increase the concurrency of our lambdas (e.g. by defining provisioned concurrency, as sketched below). If you have any other suggestions or thoughts, we'd love to hear them!
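For the provisioned concurrency experiment mentioned above, a minimal sketch of the configuration call, again assuming the AWS SDK v2 with hypothetical function/alias names and an illustrative capacity:

```scala
import software.amazon.awssdk.services.lambda.LambdaClient
import software.amazon.awssdk.services.lambda.model.PutProvisionedConcurrencyConfigRequest

object ConfigureProvisionedConcurrency extends App {
  val lambda = LambdaClient.create()
  // Provisioned concurrency keeps a fixed number of execution environments
  // initialised ahead of time, so invocations served by them skip the cold start.
  lambda.putProvisionedConcurrencyConfig(
    PutProvisionedConcurrencyConfigRequest.builder()
      .functionName("mobile-notifications-sender") // hypothetical function name
      .qualifier("live")                           // must target an alias or published version
      .provisionedConcurrentExecutions(5)          // illustrative capacity
      .build()
  )
}
```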
@DavidLawes You are welcome! Two other small suggestions:
I did try sbt-proguard and something similar once, but it only works if you can accurately define the "roots" of your code. Where things rely on reflection to find classes (usually loggers, and I think the AWS client (v1?) did), you have to define those as roots too.

Mario G did a load of work on making lambdas fast. He built a simple one with minimal libraries (upickle/ujson, the built-in Java http client, the built-in logger) and it was super fast; I can dig out some info if it's useful. He (and Adam F separately, and possibly also Regis) also experimented with native compilation (GraalVM) and had some success, but again with some issues around reflection in the AWS library. Again, I can try to dig more info out if it's useful.

If it is the cold start time that is the issue, I would consider whether it would be economical either to run the lambda in a dry-run mode on a schedule to keep it warm (sketched below), or to pay for a burstable EC2 instance that runs all the time, which would only cost tens of dollars a month (plus the cost to maintain it) and might be more efficient in terms of dev time?
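A minimal sketch of the dry-run keep-warm idea, assuming a scheduled rule invokes the lambda with a hypothetical `{"dryRun": true}` payload:

```scala
import com.amazonaws.services.lambda.runtime.{Context, RequestHandler}

// Hypothetical wrapper: an EventBridge/CloudWatch schedule invokes the lambda
// every few minutes with the payload {"dryRun": true}; real invocations carry
// no such flag and fall through to the normal sending path.
class SenderHandler extends RequestHandler[java.util.Map[String, AnyRef], Void] {
  override def handleRequest(event: java.util.Map[String, AnyRef], context: Context): Void = {
    if (event != null && java.lang.Boolean.TRUE.equals(event.get("dryRun"))) {
      // Warm-up ping: the execution environment stays hot, no notifications are sent.
      context.getLogger.log("dry-run invocation, keeping the lambda warm")
    } else {
      // ... normal notification-sending logic ...
    }
    null
  }
}
```

For the warm-up to pay off, any expensive clients should be initialised outside the handler (in the constructor or a field), so the dry-run invocation keeps them alive too rather than only warming the JVM itself.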
Thanks @johnduffell, this is super helpful!
Thanks @johnduffell - we've been chatting about your ideas :) We think the dry run to keep the lambdas warm is a great idea, and we're definitely going to test it out!
Glad it's useful - let me know if you want any further thoughts, or want me to review anything based on my experience!
Another quick thing to try regarding performance generally (this will not directly help the cold start problem) is the AWS CodeGuru profiler. You need to set it up for the lambda, and then the code integration is trivial.
Ah, this is super interesting, thank you for sharing! I will add this to our list of initiatives for improving lambda performance. We did some investigation into the timings of our lambdas and we also raised a support case with AWS. We believe we may have a problem with concurrent executions of our lambdas (this is backed up by CloudWatch metrics showing how long SQS messages wait on the queue to be processed). Our next set of tests includes:
As an aside, I believe our lambda code could be improved. We've seen some lambdas starting to time out after 1m30s when trying to send notifications to Firebase. At the moment we make one API request per device, but Firebase now supports batch requests, which could improve our performance/efficiency (see the sketch below). Thought I'd note our current thoughts/plans in case it sparks any other ideas :)
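The batching mentioned here is exposed by recent versions of the firebase-admin Java SDK as `MulticastMessage`, which carries up to 500 device tokens per request. A minimal sketch, assuming FirebaseApp has already been initialised and using hypothetical token and message values:

```scala
import scala.jdk.CollectionConverters._
import com.google.firebase.messaging.{FirebaseMessaging, MulticastMessage, Notification}

object BatchedSender {
  // tokens: hypothetical device registration tokens for one notification
  def sendBatched(tokens: List[String]): Unit =
    tokens.grouped(500).foreach { batch => // FCM accepts up to 500 tokens per multicast message
      val message = MulticastMessage.builder()
        .addAllTokens(batch.asJava)
        .setNotification(Notification.builder().setTitle("Breaking news").setBody("...").build())
        .build()
      // One HTTP round trip covers the whole batch, instead of one request per device.
      val response = FirebaseMessaging.getInstance().sendMulticast(message)
      println(s"${response.getSuccessCount}/${batch.size} messages accepted by FCM")
    }
}
```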
What does this change?
Document the testing we've done to try and reduce the lambda cold start time (no code changes)