Update provided runtime to al2 #115
Assets to test with provided.al2: Edit: They seem to work in my initial tests on Lambda itself. Note/Aside: I wonder if we should publish the previous binaries in deno-docker (I know that at least one user was downloading these for centos7)...
@hayd Give me an hour - I'll try to update.
😬 😬 😬
The registry went down just from updating to 1.5.2 on al1, so I can't actually test right now. Sorry. denoland/deno_registry2#185
@lucacasonato do you think that is a deno-lambda bug or just /x ? It looks like #113 ??
Looks like a deno bug. We shouldn't be trying to create the TS compiler cache if we are running a pure JS file (bundle).
As mentioned in the other issue, it's actually created in
Updated layer:
This does create better error messages, but the behavior is a bit strange. I added a
The response took 9.4 seconds total.
@kyeotic The
should be removed in the latest commit. I think when you do this
you get somewhat strange behavior from Lambda if it errors on init (and IME it may run multiple times)... the fact it's outputting this /opt/bin/deno line multiple times suggests bootstrap is invoked multiple times. I guess that message should say "unable to import 'handler' from api.ts" (in the cases where it compiles but has a runtime error)... and it should also be `&>` rather than `2>` to hide any additional output. We should add a file like that as a test case :) Edit: Although note that this warn/error behavior isn't well tested - I thought there was an open issue...
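The `2>` vs `&>` point above can be sketched with a tiny shell example. The `noisy_import_check` function and its messages are hypothetical stand-ins for the real bootstrap's import check, just to show which stream each redirect hides:

```shell
# Hypothetical stand-in for the bootstrap's import check: it writes the
# useful error to stderr but also leaks extra text on stdout.
noisy_import_check() {
  echo "side-channel output"                          # stdout noise
  echo "unable to import 'handler' from api.ts" >&2   # the real error
}

# `2>` redirects only stderr, so the stdout noise still leaks into the logs:
leaked=$(noisy_import_check 2>/dev/null)

# `&>` in bash (equivalently `> /dev/null 2>&1` in POSIX sh) hides both:
hidden=$(noisy_import_check > /dev/null 2>&1)

echo "leaked=[$leaked]"   # prints: leaked=[side-channel output]
echo "hidden=[$hidden]"   # prints: hidden=[]
```

So with `2>` alone, anything the check writes to stdout still ends up in the function's output/logs, which is exactly the extra noise being discussed.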
For me, the following:
somehow kicks Lambda into using an earlier version* of my function code, and it runs that instead after the error! (If I comment out the throw then the expected code runs.) I feel like this must be some esoteric edge case of Lambda itself rather than deno-lambda. Anyway, here is another update :) *maybe the first version? But certainly not the previous version. 🤷
Eurgh, it seems like the /runtime/init/error POSTing behavior has changed... Unless I am missing something, it used to be (and still is on al1) that POSTing to /runtime/init/error would lead to ... On al2 it doesn't... and somehow it continues afterwards to successfully run another version of the function code. 🤮 It seems implausible that curl would be doing something different, so surely it's a Lambda bug.
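For reference, the init-error report being discussed is just an HTTP POST to the in-sandbox runtime API (the endpoint path is from the AWS custom-runtime docs). A minimal sketch, where the payload contents and the localhost fallback are illustrative assumptions, not the actual bootstrap:

```shell
# Where the runtime API lives inside the sandbox; the localhost fallback
# is only so this sketch runs outside Lambda.
API="${AWS_LAMBDA_RUNTIME_API:-localhost:9001}"

# Hypothetical error payload -- the real bootstrap would fill these in
# from the actual init failure.
ERROR_PAYLOAD='{"errorMessage":"failed to import handler","errorType":"Runtime.InitError"}'

# On al1 this POST ends the init phase; the behavior described above
# suggests al2 instead carries on and re-runs init.
curl -sf -X POST "http://${API}/2018-06-01/runtime/init/error" \
  -d "$ERROR_PAYLOAD" || true

echo "init error reported"
```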
Okay, maybe it is less bad than I thought. Though still quite wacky... My suspicion, though this makes no sense, is that on al2 IF your handler is the default hello.handler, since you have another hello.ts in the layer, then it'll try using that one on the second attempt?!! It does seem to time out/spend a long time thinking in that case too. When I renamed it to foo.ts I couldn't reproduce (it worked immediately as on al1). Also with a minimal use case (without any layers but with bootstrap POSTing to /init/error) I was unable to reproduce - it works as expected. I wonder if adding --no-check would be wise (and would speed up the failures). 🤷 Edit: And now I can't replicate it at all!!
I don't like the idea of shipping the hello modules in the layer, especially if they can be used in edge case scenarios. I don't want "hello world" showing up in an API response.
I guess we'll release this with 1.6.1 (assuming 1.6.0 is the first release to fix denoland/deno#8584)
Apparently deno eval now creates a gen dir... this fails given Lambda's read-only home directory.
Run with --no-check by default (perhaps this should be configurable).
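A sketch of how the bootstrap could address both points, assuming `DENO_DIR` (Deno's env var for relocating its cache) and a `--no-check` flag stitched into the run command. The `DENO_LAMBDA_CHECK` variable and the echoed command are hypothetical illustrations, not the real bootstrap:

```shell
# /tmp is the only writable path in a Lambda sandbox, so point Deno's
# cache (including the gen dir) there instead of the read-only $HOME.
export DENO_DIR=/tmp/deno_dir
mkdir -p "$DENO_DIR"

# Default to --no-check, but let a (hypothetical) env var opt back in
# to type-checking by setting DENO_LAMBDA_CHECK=1.
if [ "${DENO_LAMBDA_CHECK:-0}" = "1" ]; then
  NO_CHECK=""
else
  NO_CHECK="--no-check"
fi

# Illustrative command only -- the real bootstrap invokes its bundled
# entrypoint with more flags.
echo "deno run $NO_CHECK --allow-net bootstrap.js"
```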
1.6.0 is released with al1. I've updated this branch to 1.6.0 as well and here is the zip: The tests are failing due to a TS error in the serverless-example (a hack with
I tested out the layer and it works, but the performance has gotten noticeably worse. The runtime overhead is up to ~400ms, even on a warm Lambda.
@kyeotic What are/were you seeing on al1? I wonder what it could be. Could building the deno binary from scratch make any difference (I doubt it)?
I was seeing around ~250ms overhead on a warm Lambda, which is still pretty bad. For comparison, previous testing has shown a bare Node.js Lambda can cold-start in 150ms and has <10ms overhead when warm. I've never done any other perf testing on layers, so it may be Lambda layer overhead.
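One way to quantify the overhead being compared here is to pull the total Duration out of CloudWatch REPORT lines and subtract the time measured inside the handler. A rough sketch over saved log output; the sample lines and numbers below are made up for illustration:

```shell
# Made-up REPORT lines standing in for real CloudWatch log output.
cat > /tmp/report.log <<'EOF'
REPORT RequestId: aaa Duration: 402.33 ms Billed Duration: 403 ms
REPORT RequestId: bbb Duration: 251.10 ms Billed Duration: 252 ms
EOF

# Print the total Duration for each invoke (skipping "Billed Duration");
# runtime overhead ~= this value minus the handler's own elapsed time.
awk '/^REPORT/ {
  for (i = 1; i <= NF; i++)
    if ($i == "Duration:") { print $(i+1); break }
}' /tmp/report.log
```

This prints `402.33` and `251.10` for the two sample lines, which makes it easy to eyeball warm-invoke overhead across the al1 and al2 layers.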
This feels a bit hacky...
I am going to bump this to 1.6.1 and ship in the next couple of days. I don't have any insight into why things might be slower, and it's unfortunate, but we are getting very close to al1 EOL.
closes #114, probably WIP (need to test on actual Lambda)
Must discuss how we push this change out, as it's kinda breaking (if anyone is using other binaries in al1).
cc @lucacasonato @kyeotic.