Custom resources are great but binaries and dependencies are necessary for full potential #3

Closed
tommedema opened this issue Nov 17, 2017 · 12 comments

@tommedema commented Nov 17, 2017

I'm trying to answer my own question on Stack Overflow.

Basically, I'm trying to request SSL certificates (when necessary) following infrastructure-as-code principles, i.e. by defining them in CloudFormation. So far I've managed to use your plugin to get to a point where it almost works.

In serverless.yml I added:

plugins:
# request SSL certificates through code if defined in custom config
  - serverless-request-certificate

...

custom:
  ssl:
    prod:
      dnsTxtRoot:
        Fn::GetAtt: [CustomCertificateRequestResource, dnsTxtRoot]
      dnsTxtWww: 
        Fn::GetAtt: [CustomCertificateRequestResource, dnsTxtWww]
      certArn:
        Fn::GetAtt: [CustomCertificateRequestResource, certArn]

...

resources:
  Resources:
    # cloudfront with https redirect
    WebCloudfrontDist:
      Type: AWS::CloudFront::Distribution
      Properties:
        ...
        ViewerCertificate: # TODO: only specify if custom SSL is enabled for this domain
          AcmCertificateArn: ${{self:custom.ssl.${{self:provider.stage}}.certArn}}
          SslSupportMethod: sni-only
  
    ...
   
    WebHostedZoneRecordSetGroup:
      Type: AWS::Route53::RecordSetGroup
      Condition: ShouldSetupDomain
      Properties:
        HostedZoneId:
          Ref: WebHostedZone
        RecordSets:
        ...
        - Type: TXT
          Name: _acme-challenge.${{self:custom.domains.${{self:provider.stage}}, ''}}
          TTL: '86400'
          ResourceRecords:
            - ${{self:custom.ssl.${{self:provider.stage}}.dnsTxtRoot}}
        - Type: TXT
          Name: _acme-challenge.www.${{self:custom.domains.${{self:provider.stage}}, ''}}
          TTL: '86400'
          ResourceRecords:
            - ${{self:custom.ssl.${{self:provider.stage}}.dnsTxtWww}}

Where I created a plugin at .serverless_plugins/serverless-request-certificate/index.js:

const addCustomResource = require('add-custom-resource')
const path = require('path')

class ServerlessRequestCertificate {
  constructor(serverless, options) {
    this.serverless = serverless
    this.options = options
    this.hooks = {
      'before:package:createDeploymentArtifacts': () => this.updateCfTemplate()
    }
  }

  updateCfTemplate() {
    const template = this.serverless.service.provider.compiledCloudFormationTemplate

    addCustomResource(template, {
      name: 'CertificateRequest',
      sourceCodePath: path.join(__dirname, 'custom-resource.js')
    })
  }
}

module.exports = ServerlessRequestCertificate

And finally the custom resource code .serverless_plugins/serverless-request-certificate/custom-resource.js:

const response = require('cfn-response');
 
module.exports.handler = function(event, context) {
  return response.send(event, context, response.SUCCESS, {
    dnsTxtRoot: '"LnaKMkgqlIkwuv8mWx2xh8RUg-7PkKvqb_wqwVnC4q0"',
    dnsTxtWww: '"c43VS-VqPQEE3JhbvnGOg6cU8kUPXdKg4WVBRPCXXcA"',
    certArn: 'arn:aws:acm:us-east-1:151798775195:certificate/d9126a9f-4cc9-4615-b859-fcc50d84c66a'
  });
};

(Note that there are some gotchas here: you cannot use comments and you must always use semicolons, or CloudFormation cannot parse the inline code. But that is not the point of this issue.)

The problem is that while the custom resource currently just returns static strings, I want to add async logic here. Part of that logic is requesting a certificate using a binary: certbot. The process looks roughly like this (in shell):

#!/bin/sh

SCRIPT_DIR=$(dirname "$0")
DOMAIN="tommedema.tk"

# request the certificate
echo "request-cert: requesting certificate with DNS challenge for
  domains $DOMAIN and www.$DOMAIN; note that you agree to the certbot
  terms of service by continuing"
# TODO: --force-renew may be redundant
certbot certonly --manual -d "$DOMAIN" -d "www.$DOMAIN" \
  --agree-tos \
  --email "webmaster@$DOMAIN" \
  --preferred-challenges dns \
  --config-dir "$SCRIPT_DIR/cert" \
  --work-dir "$SCRIPT_DIR/cert" \
  --logs-dir "$SCRIPT_DIR/cert" \
  --manual-auth-hook "$SCRIPT_DIR/authenticate-cert.sh" \
  --manual-public-ip-logging-ok \
  --force-renew \
  -n

With authenticate-cert.sh:

#!/bin/sh

echo "you are validating $CERTBOT_DOMAIN"
echo "create a hosted zone record set of type TXT,
  name _acme-challenge.$CERTBOT_DOMAIN and value $CERTBOT_VALIDATION"

Of course, this logic would then live in the Lambda, and the DNS records would be returned dynamically.

The question is: how would I run the certbot binary from custom-resource.js? I noticed that you define the entire script's code directly in the CloudFormation template. I guess this makes adding dependencies and binaries hard. Would there be an alternative approach to achieve my objective?
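
For reference, actually executing the binary from Node would look something like the sketch below (paths and arguments are hypothetical; the open question is how to get the binary bundled next to the code in the first place):

// Minimal sketch, assuming a certbot binary already sits next to this file
// (which is exactly what is missing today). Paths and args are illustrative.
const { execFile } = require('child_process');
const path = require('path');

function requestCert(domain, callback) {
  // Lambda only allows writes under /tmp, so all certbot dirs point there
  const certDir = '/tmp/cert';
  execFile(path.join(__dirname, 'certbot'), [
    'certonly', '--manual',
    '-d', domain, '-d', 'www.' + domain,
    '--agree-tos',
    '--preferred-challenges', 'dns',
    '--config-dir', certDir,
    '--work-dir', certDir,
    '--logs-dir', certDir,
    '-n'
  ], (err, stdout, stderr) => callback(err, stdout));
}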

@dougmoscrop (Owner)

You're right that this is essentially for adding dependency-less code that mostly just calls AWS SDK stuff.

You could add those files to S3, pass the S3 bucket and key to your custom code, and have it copy them over before running them?
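
Something like this in your custom resource handler, roughly (a sketch; the bucket and key would come in as resource properties):

const AWS = require('aws-sdk');
const fs = require('fs');
const { execFile } = require('child_process');

const s3 = new AWS.S3();

// rough sketch: pull the binary down from S3, make it executable, run it
function fetchAndRun(bucket, key, args, callback) {
  const target = '/tmp/certbot';
  s3.getObject({ Bucket: bucket, Key: key }, (err, data) => {
    if (err) return callback(err);
    fs.writeFileSync(target, data.Body);
    fs.chmodSync(target, 0o755); // S3 doesn't preserve the execute bit
    execFile(target, args, callback);
  });
}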

@dougmoscrop (Owner)

(An alternative is having this plugin support a package instead of inline code - I'm open to that too! The use cases I had were just super-lightweight things that inline was fine for.)
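
Something shaped like this, maybe (purely hypothetical, no such options exist yet):

addCustomResource(template, {
  name: 'CertificateRequest',
  // hypothetical: point Code at S3 instead of inlining a ZipFile
  s3Bucket: { Ref: 'ServerlessDeploymentBucket' },
  s3Key: 'custom-resources/certificate-request.zip'
})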

@tommedema (Author)

@dougmoscrop I'm willing to create a PR for package support, but could you perhaps point me towards some documentation on that to jump-start me?

I'm not really interested in passing an S3 bucket, because then you need to set up something like Terraform to automate creating that bucket and uploading the package files to it. I'd prefer to do it fully automated in CloudFormation, including the deployment of the custom resource and its dependencies.

@Apathyman commented Nov 18, 2017

@tommedema I commented on your question after searching for a related question. I think a less convoluted solution would be using a nested stack (or a tree of deployments), but honestly a lot depends on how dogmatic about "serverless" you want to be.

To answer your more immediate question, docs on Lambda functions via CloudFormation:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-lambda-function.html

Specifically, @dougmoscrop is filling the Code field with a ZipFile made up of the raw code. You could alter that to accept an actual zip file of your own making.

I think it would probably be simpler to take Trent's answer and synchronize keys using SSM params rather than environment vars (which would let you separate stages and services more naturally without worrying about de-syncing), but heckling doesn't help anyone, and I think adding this is a worthwhile use of time by itself.
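
For illustration, reading such a key from SSM at runtime is only a few lines (the parameter name below is made up):

const AWS = require('aws-sdk');
const ssm = new AWS.SSM();

// hypothetical stage-scoped parameter name
ssm.getParameter({ Name: '/myservice/prod/certArn', WithDecryption: true }, (err, data) => {
  if (err) throw err;
  console.log(data.Parameter.Value);
});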

@tommedema (Author) commented Nov 18, 2017

@Apathyman Thanks a lot for the input, very much appreciated. However, I feel like we are talking past each other. As I mentioned in my previous message, supplying an S3 zip file would not make sense in this setup unless I used something like Terraform to first create that S3 bucket and then upload the package there. What do you think about that approach? Of course, I wouldn't want to upload to that bucket manually, and there is also the matter of ensuring that the bucket receives a unique name and that it is cleanly removed again after an sls remove.

I'm aware of the current implementation and of the Lambda docs; the challenge is in how to change it from inline code to a package while still having the entire setup version controlled.

Trent's answer does not follow infrastructure-as-code principles, which is what this question is all about. Note that I don't have a tight timeline here; I'm simply trying to get this right once so that I won't have to go through this trouble again in the future. I.e. it's worth it for me to spend extra time on doing this properly.

One more idea -- I could create a plugin that takes care of setting up a helper S3 bucket with zip files for the lambdas (including destruction on sls remove, etc.). However, I would then need to be able to use Ref: or Fn::GetAtt: within my serverless.yml CloudFormation template to reference the S3 bucket created by the plugin. As far as I am aware, that is not possible. If it is, I'll go ahead and start writing this plugin, as that should allow for the packaging of lambdas as a next step. :)

@dougmoscrop (Owner)

Serverless creates a deployment bucket for you. I use it for other things too.

I would build the binary for Amazon Linux, npm-package it as part of your plugin, have your plugin copy the file over on deploy to a known key in the sls bucket (you can determine that programmatically), and then use this lib to execute it. You can Ref: ServerlessDeploymentBucket and so on. No need to change this library.
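
Roughly like this in the plugin (a sketch; the provider helper names may vary a bit between framework versions):

const fs = require('fs')
const path = require('path')

class UploadCertbotPlugin {
  constructor(serverless) {
    this.provider = serverless.getProvider('aws')
    this.hooks = {
      // upload the binary before the stack deploy runs
      'before:deploy:deploy': () => this.upload()
    }
  }

  upload() {
    return this.provider.getServerlessDeploymentBucketName().then(bucket =>
      this.provider.request('S3', 'putObject', {
        Bucket: bucket,
        Key: 'binaries/certbot', // a key you pick; pass the same key to the resource
        Body: fs.createReadStream(path.join(__dirname, 'certbot'))
      })
    )
  }
}

module.exports = UploadCertbotPlugin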

@dougmoscrop (Owner)

Another option is writing a regular serverless function that does it and invoking it from your custom resource. That just moves the zip, package, and copy steps to sls instead of your plugin. And it will show up in sls commands.
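
i.e. the custom resource ends up being a thin shim (the function name below is made up):

const AWS = require('aws-sdk');
const lambda = new AWS.Lambda();

// hypothetical name of the function as deployed by sls
lambda.invoke({
  FunctionName: 'my-service-prod-requestCertificate',
  Payload: JSON.stringify({ domain: 'example.com' })
}, (err, data) => {
  // forward the result (or the failure) to cfn-response from here
});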

@tommedema (Author)

@dougmoscrop Using the serverless deployment bucket sounds great. How can I tell serverless to package the zip containing the Lambda code and binary, and then programmatically receive the resulting S3 bucket key?

Also, where would you Ref: ServerlessDeploymentBucket? You mentioned that there is no need to change this library, but then how would you have this library use an S3 bucket as the Lambda code, rather than inlining the code of a specified file as it does now? At the least, I would have to change this library to allow referencing the serverless deployment S3 bucket and zip file key, right?

I'm optimistic about your suggestion. Let's hope it will work.

@dougmoscrop (Owner) commented Nov 18, 2017

Using the ServerlessDeploymentBucket can go two ways:

  • the first way: your plugin copies the binary file to the S3 bucket, probably using something like fs.createReadStream(path.join(__dirname, 'mybinary')); your custom resource code remains inline, and you simply pass the S3Bucket and S3Key in as parameters to your custom resource. You copy the binary over first, then run it in your function.
  • the second way: you package everything up into a zip yourself, upload it, and modify this plugin to allow specifying non-inline code, as mentioned in the thread.

Both cases involve your plugin doing the operation on S3, and you determine the S3 key yourself. During the deploy hook, you'd get the serverless bucket name programmatically. During the package hook, when you use this library, you can simply Ref the ServerlessDeploymentBucket as part of the parameters; a sketch of what that first way might look like follows below.
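
For example (resourceProperties is a hypothetical option name here; check what the library actually supports before relying on it):

addCustomResource(template, {
  name: 'CertificateRequest',
  sourceCodePath: path.join(__dirname, 'custom-resource.js'),
  // hypothetical: extra properties forwarded to the Custom:: resource,
  // resolved by CloudFormation at deploy time
  resourceProperties: {
    BinaryBucket: { Ref: 'ServerlessDeploymentBucket' },
    BinaryKey: 'binaries/certbot'
  }
})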

A third option, similar to the second, is simply to have Serverless package/deploy your function. This would be done with a hook during the compile phase to 'inject' a Lambda function, and then you would have to modify this library to skip the makeFunction() part and instead just point the custom resource at an existing function.

@tommedema (Author)

@dougmoscrop Good news, and bad news:

The good news is that it's surprisingly easy to create custom resources with ordinary serverless functions. E.g.:

serverless.yml

service: loggroup

provider:
  name: aws
  runtime: nodejs6.10
  memorySize: 512
  timeout: 10
  stage: ${opt:stage, 'dev'}
  region: ${opt:region, 'eu-west-2'}
  
custom:
  ssl:
    prod:
      dnsTxtRoot:
        Fn::GetAtt: [MyCustomResource, dnsTxtRoot]
      dnsTxtWww:
        Fn::GetAtt: [MyCustomResource, dnsTxtWww]
      certArn:
        Fn::GetAtt: [MyCustomResource, certArn]
      
functions:
  customResource:
    handler: custom.handler
  
resources:
  Resources:
          
    MyCustomResource:
      Type: Custom::MyCustomResource
      Properties:
        ServiceToken:
          Fn::GetAtt: [CustomResourceLambdaFunction, Arn]

custom.js:

const response = require('cfn-response');
 
exports.handler = function(event, context) {
  return response.send(event, context, response.SUCCESS, {
    dnsTxtRoot: '"LnaKMkgqlIkwuv8mWx2xh8RUg-7PkKvqb_wqwVnC4q0"',
    dnsTxtWww: '"c43VS-VqPQEE3JhbvnGOg6cU8kUPXdKg4WVBRPCXXcA"',
    certArn: 'arn:aws:acm:us-east-1:151798775195:certificate/d9126a9f-4cc9-4615-b859-fcc50d84c66a'
  });
};

A new (fresh) deployment causes CloudFormation to make a request to the specified Lambda function, which returns the data object. Works great. I can also see successful output in CloudWatch.

The problem is that if I change the Lambda function, e.g. to change the dnsTxtRoot string, the function is successfully updated (verified this in the AWS console), but for some reason CloudFormation does not send a new request to the updated function. There are no logs in CloudWatch. I.e. the new value returned by the Lambda is never received in a subsequent deployment, and the stack still operates as if the original value were being returned.

Do you think I should post an issue on the serverless project instead with the above sample?
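
(For what it's worth, CloudFormation only re-invokes a custom resource when the resource's Properties change, so one workaround would be to include a property that changes on every deploy, e.g.:

    MyCustomResource:
      Type: Custom::MyCustomResource
      Properties:
        ServiceToken:
          Fn::GetAtt: [CustomResourceLambdaFunction, Arn]
        # any property change makes CloudFormation call the function again;
        # a hypothetical token you bump (or a timestamp) per deploy:
        ForceUpdateToken: ${opt:force-token, 'bump-me-1'}

The ForceUpdateToken name is made up; any property works.)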

@tommedema (Author)

I've created an issue here: serverless/serverless#4483

@dougmoscrop (Owner)

Hey, I'm going to close this as part of housekeeping, as I think doing this is outside the scope of this project, but I am of course open to PRs that might help ease the pain around boilerplate for uploading things to S3 and so on.
