
Run arbitrary pre- and post-tasks #13

Closed
jaymell opened this issue Apr 19, 2017 · 8 comments

@jaymell

jaymell commented Apr 19, 2017

Hi. This isn't really an issue with the existing application, but I wanted to bring up a possible use case. I currently use Ansible and a custom set of scripts to do largely the same thing that your excellent app appears to be doing here, with an emphasis on being able to automate the entirety of an application deployment, including pieces that must be done outside of Cloudformation.

This means we need to make subshell calls and/or run arbitrary scripts against the AWS API. There are a variety of specific use cases, but for the most part they fall under two categories:

  1. When we encounter functionality that is not yet, or not fully, implemented in Cloudformation.

In such cases, we generally fall back to the aws cli or a script using the SDK.

  2. When we have to perform steps in between a multi-stack deployment.

For example, I may build a stack that generates a KMS key and need to encrypt/inject something into user data in a subsequent stack. For such cases, our automation provides easy hooks to be able to run pre- and post-tasks.

One could definitely argue that both of these sorts of use cases are better solved with custom Cloudformation resources. For better or worse, we've thus far been reluctant to go down that path. I'm curious whether anyone has any thoughts as to whether allowing more-or-less arbitrary pre- and post-stack tasks would be a worthy addition to this app.

Cheers,

@daidokoro
Owner

daidokoro commented Apr 21, 2017

Hey Jaymell,

Thanks for the feedback. Really appreciate it ^_^

I often run into these very same use cases. So far there have been fewer and fewer cases that I haven't been able to solve using Qaz and some tactical Cloudformation-fu.

For example, for your KMS case: in the past I've created the KMS key in an initial stack and Exported its id, then had the subsequent stack import the value and inject it directly into my instance user-data to perform the necessary encryption/decryption. For things like KMS keys, or any values that will be useful to future stacks that may not be part of the same deployment, it's good to Export them, so those values can be imported by any future stack deployed in that region without special scripts to re-fetch them.
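
Roughly, the pattern looks like this in raw Cloudformation (the resource and export names here are just placeholders):

Outputs:
  DeployKeyId:
    Description: Id of the KMS key created by this stack
    Value: !Ref DeployKey    # DeployKey being an AWS::KMS::Key defined in the same stack
    Export:
      Name: deploy-key-id

The subsequent stack then pulls the value straight into user-data:

Resources:
  Instance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-xxxxxxxx    # placeholder
      UserData:
        Fn::Base64: !Sub
          - |
            #!/bin/bash
            KEY_ID=${KeyId}
            # encrypt/decrypt via aws kms using $KEY_ID
          - KeyId: !ImportValue deploy-key-id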

That being said, there are definitely situations where one needs a good old-fashioned script or API call between stack deployments. For Qaz this needs to be implemented while adhering to the following:

  1. Keep everything as Cloud-native as possible
  2. Maintain minimal abstraction from the underlying AWS platform
  3. Run-from-anywhere; that is, the app should be able to run by calling its config remotely. Deployments should have no explicit local dependencies.

Given the above, my intention was to handle this by implementing AWS Lambda hooks that Qaz can trigger before/after a stack deployment. All the logic needed for a deployment can live in a single function that performs various actions based on the event JSON passed in, or be split across multiple simple functions.

In config, this would look something like this:

stacks:
  autoscaling:
    deploy_hook:
      pre: 
        - lambda_name: '{some:json}'
      post:
        - lambda_name: '{some:json}'
    
    delete_hook:
      pre: 
        - lambda_name: '{some:json}'
      post:
        - lambda_name: '{some:json}'
      

Given the above, you'd be able to trigger as many events as needed before/after deploy/delete stack operations (I may even add one for updates).

The functionality is still being mapped out in my head, but I'm open to suggestions for how this should/could work.

--

I am quite keen on Custom Resources as a solution as well, since they're the most Cloud-native way of achieving this and they allow you to Export the value of any special operation to the global Cloudformation space, making it accessible to all future stack deployments. And since it's raw Cloudformation and AWS services, you're not locked into using a particular tool or limited by said tool.

--

Another alternative currently available in Qaz is the template Deploy-Time/Gen-Time function invoke. This allows you to invoke a lambda function during the process of generating or deploying a template. For example:

{{ invoke "some_function" `{"some":"json"}` }}

The above will invoke the function and write the response to your template before deploying. In this way, you're able to dynamically trigger actions in AWS via lambda and Export the outputs via Cloudformation.

For example:

Outputs:
  functionResponse:
    Description: Lambda Function Response
    Value: {{ invoke "some_function" `{"some":"json"}` }}
    Export:
      Name: some-export-name

This doesn't help much with post-deployment ops though. :-/

--

Let me know your thoughts. :-)

Apologies for the long-winded response ^_^

@jaymell
Author

jaymell commented Apr 21, 2017

Thanks for the detailed response. This sounds like a good approach, and I think it could work well.

The only thing that gives me pause about relying heavily on Lambda functions is that deploying them adds more complexity to the overall deployment process. I think the "Cloud native" approach is a good one overall, but from a pragmatic perspective, I get impatient with the additional work of packaging and deploying Lambda functions and find the subshell call easier. Do you have any thoughts about keeping the Lambda deployments themselves relatively painless?

There is also one thing that generally has to be done prior to deploying/running Lambda or Cloudformation: creating an s3 bucket to hold your CF templates and Lambda code. Because the apps I deploy span multiple AWS accounts, I've found it easiest to use app-environment-specific infrastructure buckets to hold these artifacts. Furthermore, given some of the annoyances of dealing with s3 buckets within Cloudformation, I often choose to just use Ansible to create all my applications' s3 buckets, to avoid having to worry about deleting and re-creating a stack and thereby losing any data within s3 buckets that were part of that stack.... Anyway, all of this is just a long-winded way of saying that it would be helpful if the app could idempotently create at least that first turtle -- I mean, s3 bucket :).
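
For what it's worth, a DeletionPolicy at least covers the data-loss half of that worry -- the bucket and its contents survive a stack delete, though the retained bucket then lives on outside the stack. A sketch, with a placeholder bucket name:

Resources:
  ArtifactBucket:
    Type: AWS::S3::Bucket
    DeletionPolicy: Retain    # bucket and contents are kept when the stack is deleted
    Properties:
      BucketName: myapp-dev-artifacts    # placeholder

It doesn't solve the bootstrapping problem of that very first bucket, of course.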

Point taken on using cross-stack references. I put together most of my automation scripts and CF template snippets before cross-stack references were available, so I haven't used them as much as I probably should. I also sort of dislike the 'global' nature of the exports, but that's a relatively minor gripe.

Anyway, I've just changed jobs and no longer have access to my old automation tools, which is the perfect opportunity to start fresh. I've been contemplating whether to build my own fairly minimal Python framework to do so, but I think you've already done a better job than I would be able to do myself. Besides, I'm more enamored with Golang these days :) -- meaning, I'd love to contribute to the project if it looks like qaz will be a good fit for the needs of my current employer.

@daidokoro
Owner

Gave it some thought today and you're completely right! Given the use case above, I imagine it would get extremely tedious deploying lambda functions to different accounts for each build. Further, managing those functions can also add some extra admin work to your day.

All-in-all, I liked the Lambda idea, but given your input I believe it won't scale well. The hook logic needs to be pushed from wherever Qaz sits, giving a more centralised management solution.

So I'm wide open to a feature-set that supports local scripts as hooks.

I think it would be great to have values from the config file passed into the scripts; that way, scripts can act as dynamically generated templates that vary based on config values.
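
Purely as a sketch of what that could look like in config -- none of these keys exist yet, they're hypothetical:

stacks:
  autoscaling:
    deploy_hook:
      pre:
        # hypothetical: a local script, with config values templated in
        - script: ./scripts/pre_deploy.sh {{ .project }} {{ .region }}
      post:
        - script: ./scripts/post_deploy.sh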

--

I'm always happy for feedback and pull requests. As you've just proven, I don't know everything :-) So I love any contribution, great or small.

You've given me some stuff to re-think, but let me know if you have any ideas on how to approach this, or fork and implement it and I can test it out.

Thanks again Jaymell :-)

@jaymell
Author

jaymell commented Apr 22, 2017

Thanks for the feedback. I'm still undecided as to whether the Lambda deployment overhead is a worthy trade-off for a deployment that has minimal local dependencies. To the extent that the Lambda functions are reusable across multiple applications and not just one-off solutions to a specific application's deployment, it's probably worth taking the Lambda-based solution you initially proposed.

I'm hoping to spend much of the next couple of weeks figuring that out and seeing if qaz will fit our needs. Also, this is not necessarily related, but I wanted to at least mention some other use cases I'm thinking about trying to accomplish:

  • Ability to assume roles instead of relying on profiles for deploying some or all stacks (say I need to put a DNS record in Route 53 in another AWS account) or running the discussed tasks
  • Easy multi-region deployments (ideally, I could pass region name on command-line rather than having to hard-code it in the config files)

If I feel like I have any good solutions to these problems, I will fork and try to get some code in place for them. Thanks again for your feedback. Cheers!

@daidokoro
Owner

I'll definitely give some thought to both solutions.

Role switching is already supported by specifying the Roles in your AWS Config. For example here's my AWS CLI Config file:

~/.aws/config

[default]
aws_access_key_id = oxoxoxoxoxoxoxoxoxoxox
aws_secret_access_key = oxoxoxoxoxoxoxoxoxoxoxoxoxoxoxo
region = eu-west-1

[profile billing]
role_arn = arn:aws:iam::9999999999:role/myrolename
source_profile = default
region = eu-west-1

I'm then able to specify the billing profile in my config and Qaz will perform the role switch in the background. So the setup will work with Keys and Roles. Qaz handles this in the same way aws-cli does.

--

As for the region issue, I'm currently working on version 0.50-beta, which partially addresses this. The region keyword is going to be superseded by the AWS CLI Config. This means that the region you specify in your CLI config for each profile will be used when deploying a stack.

For example, the region defined for billing in my CLI Config above is eu-west-1. If I call this profile in Qaz Config, without specifying region in Qaz, it will use the region defined for the profile. Effectively, you won't need to specify a region in Qaz config at all.
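
So config could be as simple as this -- a sketch, assuming the profile key behaves as described:

stacks:
  billing_stack:
    profile: billing    # role switch and region (eu-west-1) both come from the CLI config above
    source: templates/billing.yml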

--

I'll also give some thought to passing values via the CLI; I'd planned to do this for things like stack parameters and maybe CF values.

Let me know if you have any questions on the Role switching and region stuff.

Thanks :-)

@jaymell
Author

jaymell commented Apr 24, 2017

Thanks for the reply. The region config definitely makes sense, and for the most part I think it should work fine. There are two potential issues I see with relying on the AWS profile config:

  1. Blue/green deployments -- though I've never actually gotten a full application to do this (we can all dream!), I've always intended to be able to do this by deploying the same stack across different regions. I think a runtime option to specify region would be easiest, rather than hard-coding it in the credentials file.
  2. On centralized build servers, I generally use instance profiles to define permissions. Since this eliminates the need for API keys, ideally I could just specify the name of a role to assume rather than separately creating an AWS credentials file on the server.

--James

@daidokoro
Owner

Inclined to agree with passing the region in as a flag for multi-region Blue/Green deployments. The original use case for me with the multi-account/region deployments was simultaneous or cross-account deployments and dependencies. I needed to be able to provision multiple separate accounts with the same stacks at the same time and have another account read stack outputs from all of them.

Given the above, it wouldn't be hard to add a region flag that overrides the config values. The hard part is doing it for individual stacks: if we go with a region flag, it'll have to be global, but based on the use case you've outlined, I think that should be ok.

The central build stuff was from an external build server or workstation point of view. You're definitely right: if running from EC2, it wouldn't make sense to define a profile. That being said, I'm pro having a role flag.

We'd then use the instance profile to assume the role.

The commands then could look something like this:

qaz deploy some_stack --role="some/arn" --region=eu-west-1
qaz deploy --all --role="some/arn" --region=eu-central-1

Note: when dealing with roles, it has to be the full ARN, which won't be fun to type out, especially if dynamically generated. It may still be worth having the option to store it in config as well.
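
If we do add the config option, it might look something like this -- again just a sketch, the key name is hypothetical:

stacks:
  some_stack:
    role: arn:aws:iam::999999999999:role/some-deploy-role    # hypothetical key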

Let me know if that's the direction you were thinking.

@daidokoro
Owner

After giving this one some thought, I believe qaz has sufficient functionality via Lambda to run arbitrary tasks.

  • Using Lambda as a template source also allows other tasks to be run before returning the template:

stacks:
  my_stack:
    source: lambda:{"some":"event"}@myfunction

  • Lambda can be called at both Gen-Time & Deploy-Time within generation operations, allowing actions to be run and the responses written directly to the template:

{{ $resp := invoke "somefunction" `{"some":"event"}` }}

My response is {{ $resp.response }}

I'm open to a PR that enhances the arbitrary tasks model but will actively avoid calling local scripts.

Closing this one for now.

Thanks
