How to deploy test/staging environments? #814
Best I can gather you'd do this just like you'd do normal CloudFormation.
Template snippet:
Alternatively, use a separate account for dev/staging. Then you won't have to name resources like this; you can assume everything in the entire AWS account is part of the same environment. You'd just need to specify a different profile from your ~/.aws/config file.
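The template snippet referenced above can be sketched roughly like this. This is a hedged illustration, not the original poster's template: the parameter name `StageName`, the function name, and the runtime are all assumptions.

```yaml
# Hypothetical SAM template fragment; StageName and all resource names are illustrative
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Parameters:
  StageName:
    Type: String
    Default: dev
    AllowedValues: [dev, staging, prod]

Resources:
  MyFunction:
    Type: AWS::Serverless::Function
    Properties:
      # Embed the stage in the name so multiple deployments can coexist
      FunctionName: !Sub my-service-${StageName}-handler
      Handler: app.handler
      Runtime: python3.12
      CodeUri: src/
```

Passing a different `StageName` per deployment then yields distinct, non-clashing function names within the same account.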
@et304383 That method works great with un-named resources, but won't named ones clash? And you definitely want to name DynamoDB tables, because un-named ones are mistakenly deleted way too easily.
Then you name them with the environment variable, just like StageName.
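As a hedged sketch of that suggestion, a named DynamoDB table could embed the same parameter (assuming a template parameter named `StageName`; table and key names are illustrative):

```yaml
# Hypothetical fragment; assumes a StageName parameter exists in the template
  OrdersTable:
    Type: AWS::Serverless::SimpleTable
    Properties:
      # Stage-suffixed name avoids clashes between environments in one account
      TableName: !Sub orders-${StageName}
      PrimaryKey:
        Name: id
        Type: String
```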
I've just ventured into serverless. I don't really want to pay a fixed amount for a fixed specification (number of API requests) when AWS offers per-request pricing and already has dashboards for API Gateway and Lambda.
This is also how the Serverless Framework does it. It would have been perfect if this were a feature supported by SAM in version 1. I think it makes sense that people would be looking for this, right? Something like
Yes to the OP and to the first response from @et304383 . SAM does support deploying to multiple environments. Many architects from AWS will tell you to deploy to separate AWS Accounts for each environment, but for most businesses that is not scalable, as your organization may not have the governance controls in place to manage dozens of different AWS accounts. Best Practice is to create a naming convention in all of your SAM and CloudFormation resources, so that you can deploy the stack multiple times into a single account. I have found that we can do this with the following CFN parameters:
With these three parameters (environment, branch name, and service name) you can push all of your non-prod deployments into a single AWS account (keeping Prod as a separate account is important for security measures). This naming convention must then go into every named resource in your template, including the API, Lambda Functions, IAM Roles, DynamoDB Tables, and anything else that has an explicit name. For example:
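The example referenced above can be sketched as follows. This is a hedged illustration only: the parameter names (`EnvType`, `BranchName`, `ServiceName`) match the description in the comment, but the defaults and resource names are assumptions.

```yaml
# Hypothetical parameters matching the naming convention described above
Parameters:
  EnvType:
    Type: String
    Default: dev
  BranchName:
    Type: String
    Default: main
  ServiceName:
    Type: String
    Default: orders

Resources:
  ApiFunction:
    Type: AWS::Serverless::Function
    Properties:
      # Every explicit name carries the full convention so stacks can coexist
      FunctionName: !Sub ${ServiceName}-${EnvType}-${BranchName}-api
      Handler: app.handler
      Runtime: python3.12
      CodeUri: src/
  OrdersTable:
    Type: AWS::Serverless::SimpleTable
    Properties:
      TableName: !Sub ${ServiceName}-${EnvType}-${BranchName}-orders
      PrimaryKey:
        Name: id
        Type: String
```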
The deploy command, enabling feature branch deployments as well, then looks like this:
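A hedged sketch of such a deploy command, assuming the three parameters are named `EnvType`, `BranchName`, and `ServiceName` (all values here are illustrative, not taken from the thread):

```shell
# Illustrative values; parameter names are assumptions
ENV=dev
BRANCH=feature-login
SERVICE=orders

# One stack per environment+branch combination
STACK_NAME="${SERVICE}-${ENV}-${BRANCH}"

sam deploy \
  --stack-name "$STACK_NAME" \
  --parameter-overrides "EnvType=${ENV} BranchName=${BRANCH} ServiceName=${SERVICE}" \
  --capabilities CAPABILITY_IAM
```

Running the same command with `ENV=staging` or a different `BRANCH` creates a separate stack rather than updating the existing one.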
The neat advantage of SAM tooling is that it allows you to build once (package), archive the artifact, and then deploy that same artifact to as many environments as needed. As mentioned above, the Serverless Framework has a stage parameter when executing the package/deploy commands. The downside is that the Serverless Framework does not support build-artifact promotion: you must rebuild the serverless.com assets each time you deploy to a new environment.
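The build-once / deploy-many flow described above can be sketched like this. The bucket, stack, and parameter names are illustrative assumptions:

```shell
# Build and package once; packaged.yaml references the uploaded artifact
sam build
sam package --s3-bucket my-artifact-bucket --output-template-file packaged.yaml

# Promote the SAME artifact to each environment;
# only the stack name and parameter overrides change.
sam deploy --template-file packaged.yaml --stack-name orders-dev \
  --parameter-overrides EnvType=dev --capabilities CAPABILITY_IAM
sam deploy --template-file packaged.yaml --stack-name orders-prod \
  --parameter-overrides EnvType=prod --capabilities CAPABILITY_IAM
```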
@michaelj-smith thanks for the detailed explanation! It would be great if someone very experienced in this could create a tutorial about it, or even publish it on AWS's site or YouTube. I wonder why AWS did not support a stage-separation feature; when you create a new SAM project, the default environment is Prod.
@michaelj-smith This isn't true. This has been supported for several years now.
@bytekast it is true; the Serverless Framework does not separate its configuration from packaging, and therefore this will fail:

serverless package --package my-artifacts --stage dev
serverless deploy --package my-artifacts --stage dev
# now try to promote to another "stage": this will fail because `dev` is baked into the package created above
serverless deploy --package my-artifacts --stage prod

See one of many issue threads here.
SAM CLI now supports this.
Hmm... When I use "sam deploy" or "cloudformation deploy" with parameter overrides to deploy my Lambda function to a stage, the previously created stage gets deleted :( For example, if I first deploy to the dev stage and then pass a different stage name (prod) as a parameter, the dev stage gets replaced with the new name, prod... How can I keep both stages and deploy the function to a selected stage?
@simplyi same behaviour on my side. All resources get deleted for the first env and re-created for the new env. I found this guide: https://medium.com/@jun711.g/deploy-aws-api-to-multiple-stages-when-aws-sam-replaces-previous-stages-2f8fd7c49e45 but TBH I don't get it. When I create a stage manually and try to deploy to it via SAM CLI, I get a "stage already exists" error.
@hakunatomata2 @simplyi you have to provide a different stack name per environment to keep both of your Lambda functions at the same time.
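A hedged sketch of that fix, assuming the template takes a `StageName` parameter (stack and parameter names are illustrative): deploying the same template under two different stack names makes CloudFormation treat them as independent stacks, so neither replaces the other.

```shell
# Two independent stacks from one template; names are illustrative
sam deploy --stack-name my-service-dev  --parameter-overrides StageName=dev
sam deploy --stack-name my-service-prod --parameter-overrides StageName=prod
```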
Description:
I would like to deploy multiple environments (production, staging, etc.).
As I've read in aws/serverless-application-model#191, API Gateway stages are not usable for that. Fair enough. What should I use?
From what I can tell I need to deploy with a different stack name? For example:
Is that the correct way to do this with SAM?
But then the deployment will fail, because I can't deploy resources with the same name twice.
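One hedged way around that name clash (a sketch, not from this thread's elided example) is to derive every explicit name from the stack name itself via the `AWS::StackName` pseudo parameter, so each stack's resources are unique automatically:

```yaml
# Hypothetical fragment: names derived from the stack name never clash across stacks
Resources:
  OrdersTable:
    Type: AWS::Serverless::SimpleTable
    Properties:
      TableName: !Sub ${AWS::StackName}-orders
      PrimaryKey:
        Name: id
        Type: String
```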