Deprecation in favor of Lambda Web Adapter #143
To be honest, I appreciated the simplicity of just implementing a simple API Gateway handler and creating a ZIP file. Sometimes this approach is actually easier to implement. At a minimum, it would be nice to have Go examples with Gin support in that repository.
Alternatively, extend the maintainers list and leave the maintenance process to the community. That would partially solve the problem.
The fact that the suggested library is implemented as a Lambda extension is a no-go for us; it goes directly against the simplicity philosophy Go packages like this one offer: easy to integrate, no commands or Lambda-specific setup, just a mapping between Lambda events and http.Handler-aware Go code.
You don't have to use it as an extension. You could package the executable as part of your function's ZIP and run it that way. My thinking was that using the external executable would keep your web code clean of Lambda-specific parts.
Thanks for the feedback, @uded. I've sent this over to the lambda-web-adapter maintainers.
To be honest, I do not see this as a problem. If anything, it is the opposite. We build our app with two separate builds, and this API simplifies debugging when we add additional (separate) logic to the Lambda proxy's main function: for example, different logging, error reporting, etc. We have quite a long list of those, and it helps a lot by actually separating the Lambda implementation from the base implementation. Suddenly we would have to write a lot of if-then blocks to initialize those things based on where the code is executing, or still rely on build tags (more complex in the adapter scenario) and completely rewrite them. So migration is possible, and yet the architecture leveraging this proxy is, in my opinion, much simpler, cleaner, and easier to use. Maybe that was not the initial intention for this project, but with proper implementation, it makes life way, way easier. For now, my initial analysis of migrating to the adapter fails to check any of the boxes on my list of benefits of using this proxy.
@sapessi bring on the Go example!
I took a long look into that adapter. I guess I get how it works, but I do not fully agree with the design. Honestly, I think this proxy is a better solution for Go, but that is only my personal opinion. The adapter doesn't bring any value to the code; it just moves responsibility from one package type to another. I completely miss the point of why we should migrate. I can understand PHP developers: that adapter may well change the world for them. But for Go, this is a second layer of decoupling that makes life... well, not much more complicated, just less visible.
Personally, while it's sometimes nice to have things running in a Docker container, Lambda is slower that way: you have a 10-second delay upon deployment, and cold starts add another 4-5 seconds. Having to use Docker when it's not needed adds to the costs as well and forces over-engineering, especially when Go is a natively supported runtime. Maybe we can get some performance numbers on top of a Go example as well, @sapessi?
OMG, I completely neglected to check performance. @acidjazz, you seem (after some initial testing) to be 100% correct; it adds up more than I expected. I haven't done extensive performance testing, but the results do not favor the adapter. There is a loss of time on deployment and on cold start (1.1 seconds vs. 9 seconds on average, sic!), and, although I am uncertain, possibly also on handling requests (maybe 5-10%). @sapessi, to be honest, unless the web adapter can prove us wrong and show there is no performance difference, I would vote never to shut down this project! I use Gin on Lambda for multiple projects, and we have zero problems with it. Except for minor fixes, it works just as I would expect. And the fixes are minor; one of them someone already merged, and it works fine.
I'm not a fan of all the cruft being added to AWS to make things more complicated than they need to be. Personally, I intentionally want a minimalist approach available, simply to have something documented in code that works.
Here are Gin examples with Lambda Web Adapter: https://github.com/awslabs/aws-lambda-web-adapter/tree/main/examples/gin-zip Lambda Web Adapter can be used with ZIP packages and Docker images. The Gin app does not contain Lambda-specific packages. The performance impact is minimal; it adds about 1ms of latency.
Lambda Web Adapter also makes it easy to run existing web apps on Lambda. Here is a go-httpbin server running on Lambda:
@bnusunny Forcing us to use Docker results in:
A compiled Go binary as a single file in a Lambda alone is much more efficient and less complex.
Lambda Web Adapter supports both ZIP packages and Docker images. The first example is for ZIP packages: https://github.com/awslabs/aws-lambda-web-adapter/tree/main/examples/gin-zip
Does the ZIP package example require SAM? I notice it also adds the adapter as a layer on the function, which I've found also slows down responses and extends cold starts.
The example uses SAM to automate the build and deployment. You could use the AWS CLI, Terraform, or CDK to do the same. The adapter is a Lambda extension and is delivered as a layer. I don't see layers having a large impact on response time or cold starts. Do you have data to show that?
I don't have data on hand, but I've tested using layers and definitely noticed a difference in cold starts, responses, and deployment times. (I'm the author of https://fume.app.) It's very hard to convince a community to accept added complexity and a loss of efficiency on top of change.
Depending on what's included in a layer, it could run code during cold start or add runtime processing. The adapter converts Lambda invokes to HTTP requests, and vice versa for the response. It typically adds 1ms of latency.
So, to summarize, this is how it is supposed to work:
No SAM, no added magic or code. Just deploy the binary with the HTTP server, and you're good to go? I intentionally left performance out of it. Am I correct, or is there some added complexity that I missed?
Yes, you are correct. One thing to mention: at cold start, the adapter performs readiness checks (health checks) against the HTTP server. Once the check passes, it forwards requests to the HTTP server. By default, the adapter sends an HTTP GET request to http://127.0.0.1:8080/ and takes a 2xx response code as passing. The port and path can be customized. If you don't need HTTP checks, you can also configure the adapter to use TCP to check whether the HTTP server is listening on a port.
Other concerns I have:
So using the Go adapter, this is our flow: And switching to this generic adapter instead makes our flow: Are there any advantages at all here?
The adapter is developed in Rust. :) Source code is here: https://github.com/awslabs/aws-lambda-web-adapter/tree/main/src The advantage is allowing developers to create an HTTP server with any Go REST framework or other web frameworks, plus portability and easy local development.
I think this current proxy, which allows Go developers to use Lambda while sticking with a pure-Go stack, is still a better solution. I hope this adapter stays up to date. I will continue to explore this layer, though, get some benchmark data, and stay open to its potential.
@bnusunny all examples in the SAM template point to port 8000. Anyway, I gave it a quick try, and it doesn't seem to work as efficiently as this proxy. First, the layer is not in the AWS-provided list, and I needed to add it by hand, which is bad. Furthermore, I have no clue which version I should use, since there is no documentation in the console. Your examples list version 7; the latest is version 9. Wrong again, creating unnecessary confusion. Also, how will this layer get upgraded in the future? Automatically? Where is the changelog? How should I know whether there is a point in upgrading? After getting everything configured, my server starts nicely on port 8000 or 8080, and this is what I am getting:
The problem is I have no clue, indication, log, or other hint on where to start fixing it. Since your layer is external, and I have no logging from it (or do I?), I see no chance of fixing it right away. This proxy had and still has one huge advantage: I can log everything that goes in and out from the level of my app. As I mentioned before, this layer is a black magic box for me, and that is not good! Update: Right now, after checking, I think it's hitting the readiness check. Plus, why is it stating that the IP the request originated from is 127.0.0.1? I feel that (and don't get me wrong) someone at AWS envied GCP for how Cloud Run is deployed. They have precisely the same model of deploying a stand-alone binary with an HTTP server built in, as a Docker package. But, in comparison, it works there without a hiccup. As mentioned above, I have no idea how to start fixing this!
As mentioned before, the requests in the log are readiness checks. By default, the adapter sends an HTTP GET to http://127.0.0.1:8080/ and expects an HTTP 2xx response to pass the readiness check. From the logs, the returned status code is 307. Do you have a readiness check path that returns 2xx? You can configure the readiness check path with an environment variable. I get your concern. We may adopt a more relaxed default for readiness checks.
@bnusunny I wouldn't have guessed that. Like, ever. OK, got it working. Now, how in the name of whatever is your God can I get the real IP of the client making the request? Because 127.0.0.1 is not the one I am looking for. Also, I do not see any headers with the client IP that Gin can use.
I think this adapter will be quite useful when Lambda receives a native Rust layer.
I think the language itself makes no difference. The problem is the nature of the beast: it's a black box. We can't peek inside once it's working; there is no tracing or logging. If it works, good for you. But even the AWS console UI does not help convince us. Not to mention a proper maintenance strategy, etc. So now I am supposed to follow some project's changelog for changes? No automatic upgrades?
You can enable logging by setting an environment variable. Integrating with the Lambda console is a good idea. We will explore this possibility with the service team.
@bnusunny Another problem: with the adapter, how can I access the Lambda RequestId? So far, I have been unsuccessful, which is a problem for some of our projects...
@uded Sorry for the late response. The adapter does not forward request context for now. Besides RequestId, do you use other context values?
Not many, and that (for my team and me) is yet another problem. We used API Gateway context information to enrich the interaction with our app. Now we have even less compared to using pure nginx. Let's compare; this is the V2 proxy event:
In comparison, this is what I am getting from the adapter (sorry, different format, but all data from the incoming request):
The difference is... significant. I am not sure, but can we not get the stage variables either? Sorry, I haven't verified that yet; I just got a message from one of my teammates. We might be unable to read the API Gateway stage and all the other variables we used before. So...
The problem is, I have no idea right now. I would have to check our whole codebase. But the point here is that since we are about to migrate, we might lose the ability to use those variables. If you want to replace this library, the new solution should match the current one, meaning all of the variables and context information should still be available. I can only speak for myself, but since we had those in the API Gateway event, I am confident that here and there developers used them to make our software more flexible. Stage variables were the first thing that came to mind besides tracing request IDs in logs, but I haven't had the time to verify whether we use them commonly or just in specific scenarios. My feeling is that we most certainly do. At a minimum, all the headers, variables, and additional information should be passed via this gateway. Otherwise, this might not be usable for many projects relying on specific information from different AWS layers/products, specifically if one wants to use the various features of API Gateway to their full potential. Also, in your basic examples, please add a trusted proxy for 127.0.0.1 with some explanation. Otherwise, many developers will miss how to read the remote IP properly, as they might not know that the adapter works as a proxy.
@uded Lambda Web Adapter v0.6.0 is released. In this version,
@bnusunny Thanks. Again, another problem I listed above: how should I upgrade my Lambda configuration? Unfortunately, there is no version information in the console. It doesn't help me find which version is available and doesn't allow automatic or even user-driven upgrades. Also, there is no notification or other information on which version is current, etc. Any chance to fix that quickly? It would be nice to have upgrade assist and...
The layer version for v0.6.0 is 10. You can see it in the project README. The console integration will take some time. I'm thinking about providing a public Systems Manager Parameter Store parameter that always points to the latest version. This can be easily integrated with CloudFormation, SAM, CDK, or Terraform.
Maybe open an issue about this detail on the repo here? And then go ahead and close this issue, since we are still going to support/use this proxy :)
detailed terraform & go migration from
Any solution that requires the use of Node.js as a dependency doesn't work for us. That link talks about forcing people to use Node.js, and that is too large a dependency to enforce; it doesn't work for shops that only want to run Go. And yes, all of the variables and context information should still be available. If I can't do a go get and have my build just work, or I can't run tests or CI because the solution is too tightly coupled, it does not work well enough to deprecate this. You may not like maintaining this, but your customer base critically depends on it.
- zero Node.js solutions are "required" in the link
- zero statements in the link are "forcing" people to use Node.js
- a monorepo of more than a dozen services can have more valuable priorities than helping shops remain "Go only"

If you've found this thread while shopping for a solution that helps integrate with managed services, and you wish to avoid esoteric function signatures and dependencies, please consider the https://github.com/awslabs/aws-lambda-web-adapter project. Thank you for your contribution here @sapessi, but please archive this project to free yourself up for more important work. Users can import and maintain their own forks.
@mxfactorial while we can all see how proud you are of your migration, please do not attempt to speak for the rest of the Go community, which would rather favor a simple solution like this proxy, in the same language and easily debuggable, over a rather over-engineered attempt like the example you've given.
As I was and still am an advocate of not deprecating this library: after a full round of testing and migrating one service to production, I have to say it's not the end of the world. There is a performance penalty of about 1-2ms most of the time, although one should be aware that this is not a solid value; it will probably change during the day based on the load of the container, etc. We use ZIP deployment with a Gin-based app, and it looks good enough. My reservations about versioning, automatic (I wish!) upgrades of layers, etc., remain unchanged. Logging and debugging from and of the layer: forget about it. Local development is as easy or easier, so no worries.
The deprecation of this library would make our current serverless deployments impossible. There is no formal replacement for it anywhere that does not use SAM. WE DON'T WANT TO USE "sam", and forcing people to do so is, in some cases, the same as effectively kicking them off AWS and forcing them to use a different host, because at that point it's the same amount of work to either fix it for AWS or to switch hosting providers; on top of that, the devs are angry by then, so they are motivated to look for other options.

We have multiple technical and business reasons not to want to use it. On top of that, it also slows down builds. We value every second of our deployment pipeline, and this is just added cost and complexity that is simply not needed. AWS keeps trying to force people to use the SAM tool, and we hate it. It adds so much bloat and so many problems, and it actively hurts our ability to use the platform by making things less flexible.

We already have an entire build pipeline built out that allows us to simply check in code and then see it deployed after it passes all of our gates. It just uses the AWS CLI like nature intended, because this library lets us do that. Why should we have to do all this extra work to effectively rebuild everything from scratch, to use a tool we don't want, one that just adds unneeded dependencies and problems that actively create risk to our project/business continuity? Now it's extra work and cost that the company/engineering has to think about and plan around. I'm 100% against adding these unnecessary dependencies and breaking builds simply to force people to use SAM. We need to maintain non-SAM options.
Lambda Web Adapter is an extension. It does not require SAM for deployment. It can be deployed with the AWS CLI, AWS CloudFormation, AWS SAM, CDK, Serverless Framework, Terraform, and other IaC tools. @duaneking What tool do you use to deploy Lambda functions? I'll see if I can help.
I'm worried that you're intentionally misunderstanding me, and that worries me. To be clear, I'm using the AWS CLI v2 for all my deployments, because everything is scripted in Bash. Anything that forces us to change that is a strict no. Our expectation is that we simply import Go modules to access Lambda as needed, and then import Gin and use that. Anything that forces SAM on us is a hard no, and our leadership is on record saying that if Amazon forces us to use the SAM model, we will go to Azure. The Lambda/serverless model is simply so much better without SAM, its broken toolset, and its insane dependency graph.
You can use the AWS CLI to attach layers to a Lambda function. You are free to choose the tool suite that suits you best.

```bash
aws lambda update-function-configuration --function-name my-function \
  --layers arn:aws:lambda:us-east-1:123456789012:layer:my-layer:3 \
           arn:aws:lambda:us-east-1:753240598075:layer:LambdaAdapterLayerX86:11
```

More details in the AWS CLI docs.
We do not want to use layers or modify our deployment.
@bnusunny just to clarify in this thread:
That's a great summary of the issue @acidjazz ➕ |
I understand why AWS might want to increase our costs and make us do more work by taking away a working solution they feel is too cheap for the customer; I'm just not OK with it. If AWS forces us to use layers in Lambda, increases our costs, or in any way adds even a millisecond to my request time (which also increases my costs, since I pay per millisecond on Lambda), then they are increasing my costs. Arbitrarily. Unfairly. Yes, because AWS is a metered service, and this change would force people to use more of those metered services; I highly suspect the engineers working on this don't understand the liability they are creating for the company they work for. I read this as an extreme attempt to increase customer costs and deployment complexity just to pad Amazon's bottom line, when, in fact, using this system would be harder for people, especially people who don't want to use Kubernetes at all because it violates their requirements. I would like to suggest to any Amazon managers or leadership reading this that this is not customer-focused. I would really like it if somebody with the authority to do so at Amazon took a hard look at this team and made sure they were following the 16 leadership principles, because that doesn't seem to be the case here. Also, if everybody at Amazon could look up the definition of "moral hazard" and understand it, that would be great.
The issue was phrased as a question, and we were simply exploring all options. Thanks for all the feedback. We hear you on the fact that you like the approach of this library, and we'll keep it active. I'm closing this issue.
Thank you. This would have hurt a lot of people. |
@sapessi THANK YOU !!!!!! |
With the runtime APIs and Docker image support, Lambda has become a more extensible service. Leveraging these new features, AWS service teams released the Lambda Web Adapter library: https://github.com/awslabs/aws-lambda-web-adapter
It's a more scalable, simpler approach that also makes the code more portable, since it does not require any additional dependencies or code changes. I would like to deprecate this library in favor of Lambda Web Adapter. Any objections or other ideas?