Custom resources are great but binaries and dependencies are necessary for full potential #3
Comments
You're right that this is essentially for adding dependency-less code that mostly just calls AWS SDK stuff. You could add those files to S3, pass the S3 bucket and key to your custom code, and have it copy them over (e.g. to /tmp) before running?
(Alternatively, this plugin could support a package instead of inline code; I'm open to that too! The use cases I had were super-lightweight things for which inline code was fine.)
@dougmoscrop I'm willing to create a PR for package support, but could you perhaps point me to some documentation to jumpstart me? I'm not really interested in passing an S3 bucket, because then you need to set up something like Terraform to automate creating that bucket and uploading the package files to it. I'd prefer to do it fully automated in CloudFormation, including the deployment of the custom resource and its dependencies.
@tommedema I commented on your question after searching for a related question. I think a less convoluted solution would be using a nested stack (or a tree of deployments), but honestly a lot depends on how dogmatic about "serverless" you want to be. To answer your more immediate question, docs on lambda-functions-via-cloudformation: Specifically, @dougmoscrop is filling the
I think it would probably be simpler to take Trent's answer and synchronize keys using SSM parameters rather than environment variables (which would let you more naturally separate stages and services without worrying about de-syncing), but heckling doesn't help anyone, and I think adding this is a worthwhile use of time by itself.
@Apathyman Thanks a lot for the input, very much appreciated. However, I feel like we are talking past each other. If you look at my previous message, I mentioned how supplying an S3 zip file would not make sense in this setup unless I used something like Terraform to first create that S3 bucket and then upload the package there. What do you think about that approach? Of course, I wouldn't want to upload to that bucket manually, and there is also the matter of ensuring that the bucket receives a unique name and that it is cleanly removed again after a sls remove.

I'm aware of the current implementation and of the Lambda docs; the challenge is in changing it from inline code to a package while still having the entire setup version controlled. Trent's answer does not follow infrastructure-as-code principles, which is what this question is all about.

Note that I don't have a tight timeline here; I'm simply trying to get this right once so that I won't have to go through this trouble again in the future. That is, it's worth it for me to spend extra time doing this properly.

One more idea: I could create a plugin that takes care of setting up a helper S3 bucket with zip files for the lambdas (including destruction on sls remove, etc.). However, I would then need to be able to use
Serverless creates a deployment bucket for you; I use it for other things too. I would build the binary for Amazon Linux, npm-package it as part of your plugin, have your plugin copy the file on deploy to a known key in the sls bucket (you can determine that programmatically), and then use this lib to execute it; you can Ref ServerlessDeploymentBucket and so on. No need to change this library.
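A sketch of the "known key you can determine programmatically" part; the serverless/&lt;service&gt;/&lt;stage&gt;/... layout below is my own assumption, not a Serverless Framework convention:

```javascript
// Derive a deterministic key in the deployment bucket for a plugin artifact.
// The key layout here is an illustrative assumption.
function artifactKey(service, stage, filename) {
  return `serverless/${service}/${stage}/custom-resources/${filename}`;
}

// A plugin's deploy hook could then upload with the provider API, roughly:
//   this.serverless.getProvider('aws').request('S3', 'putObject', {
//     Bucket: deploymentBucketName,
//     Key: artifactKey(service, stage, 'certbot'),
//     Body: fileContents,
//   });
```

Because the key is a pure function of service, stage, and filename, the Lambda side can reconstruct it without any out-of-band coordination.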
Another option is writing a serverless function that does it and invoking it from your custom resource. That just moves the zip/package and copy steps to sls instead of your plugin, and it will show up in sls commands.
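That invoke-another-function variant could look roughly like this; the function name is hypothetical:

```javascript
// Sketch: a custom resource delegating work to an already-deployed function.
// Builds the parameters for Lambda.invoke (pure, so it is shown standalone).
function invokeParams(functionName, event) {
  return {
    FunctionName: functionName,          // e.g. 'my-service-dev-doWork' (hypothetical)
    InvocationType: 'RequestResponse',   // wait for the result synchronously
    Payload: JSON.stringify(event),
  };
}

// In the custom resource handler:
//   const lambda = new (require('aws-sdk').Lambda)();
//   const res = await lambda.invoke(invokeParams('my-service-dev-doWork', event)).promise();
```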
@dougmoscrop using the serverless deployment bucket sounds great. How can I tell serverless to package the zip containing the lambda code and binary, and then programmatically receive the resulting S3 bucket key? Also, where would you

I'm optimistic about your suggestion. Let's hope it will work.
Using the ServerlessDeploymentBucket can go two ways:
Both cases involve your plugin doing the operation on S3, and you determine the S3 key yourself. During the deploy hook, you'd get the serverless bucket name programmatically; during the package hook, when you use this library, you can simply Ref the ServerlessDeploymentBucket as part of the parameters.

A third option, similar to the second, is to have Serverless package and deploy your function itself. This would be done with a hook during the compile phase, to 'inject' a Lambda function; you would then have to modify this library to skip the makeFunction() part and instead point the custom resource at an existing function.
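For the package-hook variant, the Ref could look like this in the compiled template; the resource and property names here are illustrative, not from this plugin:

```yaml
# Hedged example: hand the deployment bucket to a custom resource via Ref.
Resources:
  RequestCertificate:
    Type: Custom::RequestCertificate
    Properties:
      ServiceToken: !GetAtt RequestCertificateFunction.Arn
      ArtifactBucket: !Ref ServerlessDeploymentBucket
      ArtifactKey: serverless/my-service/dev/custom-resources/certbot
```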
@dougmoscrop good news and bad news. The good news is that it is surprisingly easy to create custom resources with ordinary serverless functions. E.g. serverless.yml:
custom.js:
A new (fresh) deployment causes CloudFormation to make a request to the specified lambda function, which returns the data object. Works great, and I can also see successful output in CloudWatch. The problem is that if I were to change the lambda function, e.g. to change the

Do you think I should post an issue on the serverless project instead with the above sample?
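For context on why such a function must "return the data object" in a specific way: a Lambda-backed custom resource reports completion by PUT-ing a JSON document to the pre-signed event.ResponseURL. The field names below are the documented CloudFormation contract; the helper itself is just a sketch:

```javascript
// Build the response body a custom resource must send back to CloudFormation.
// If this document is never delivered, the stack hangs until timeout.
function buildCfnResponse(event, status, data) {
  return JSON.stringify({
    Status: status,                      // 'SUCCESS' or 'FAILED'
    PhysicalResourceId: event.PhysicalResourceId || event.LogicalResourceId,
    StackId: event.StackId,
    RequestId: event.RequestId,
    LogicalResourceId: event.LogicalResourceId,
    Data: data,                          // values readable via Fn::GetAtt
  });
}

// Delivery is a plain https PUT to event.ResponseURL with an empty
// Content-Type header and Content-Length set to the body's length.
```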
I've created an issue here serverless/serverless#4483 |
Hey, I'm going to close this as part of housekeeping, as I think it's outside the scope of this project, but I am of course open to PRs that might help ease the pain in terms of boilerplate around uploading things to S3 and so on.
I'm trying to answer my own question on Stack Overflow.

Basically, I'm trying to request SSL certificates (if necessary) following infrastructure-as-code principles, i.e. by defining them in CloudFormation. So far I have managed to use your plugin to get to a point where it almost works.
In serverless.yml I added
where I created a plugin at .serverless_plugins/serverless-request-certificate/index.js:

And finally the custom resource code at .serverless_plugins/serverless-request-certificate/custom-resource.js:

(Note that there are some gotchas here: you cannot use comments and must always use semicolons, or the code cannot be parsed by CloudFormation, but that is not the point of this issue.)
The problem is that while the custom resource currently just returns static strings, I want to add async logic here. Part of that logic is to request a certificate using a binary: certbot. The process looks much like this (in shell):

with authenticate-cert.sh:

Of course, this would then be written in the lambda and the DNS records would be returned dynamically.
The question is: how would I run the certbot binary from the custom-resource.js JavaScript? I noticed that you are defining the entire script's code directly in the CloudFormation template. I guess this would make adding dependencies and binaries hard. Would there be an alternative approach to achieve my objective?