How to handle in-between state during deployment? #12

Open
bboure opened this Issue Apr 4, 2019 · 3 comments

bboure commented Apr 4, 2019

I have a question about how AppSync APIs get deployed and updated under the hood, especially when deploying with CloudFormation/Serverless.

An AppSync API usually depends on several resources that work as a whole: the schema definition, data sources, mapping templates, Lambda functions, DynamoDB tables, etc.
Updating all of these resources can take some time (a few seconds to minutes).
Could this lead to in-between state inconsistencies?
e.g. the Lambda function has been updated, but the mapping template has not yet, so an incoming request hits a mismatched pair and fails.

I guess a solution would be to use blue-green deployments (it's probably good practice anyway), but I was wondering whether this is something AppSync/CloudFormation maybe handles. What would you recommend?

Thanks!

mim-Armand commented Apr 4, 2019

I don't think updating or recreating your DynamoDB table with each deployment is a good permanent solution anyway. Especially in a production environment, you'd want your AppSync APIs to depend on the table but not modify it with each deployment. If you set it up that way, the deployment fails whenever your table would need modifications to support the new schema (so you can change it explicitly and retry).
For the rest, I use the Serverless Framework to stage and/or version my deployments, so I can "switch" to the new version easily each time I release to prod, since the DNS update is the last step of my prod deployment. I also have multiple stacks (microservices) in one repository: my DynamoDB table is one of them, my Cognito user pool is another, and so on. All my infra is defined in my code base, and if something needs to be updated it can be done explicitly.
Another useful trick is to use CloudFormation outputs, since those prevent you from updating a resource that is being used (imported) by other stacks, which is a very good safeguard for the situations you mentioned.
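As a sketch of that safeguard (all resource and export names here are illustrative, not from this thread): one stack exports a value via a CloudFormation output, another stack imports it, and while the import exists CloudFormation refuses to change or delete the exported resource in a breaking way.

```yaml
# Stack A (the stateful "table" service) -- serverless.yml fragment.
# Exports the table name so other stacks can import it.
resources:
  Resources:
    UsersTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: users-${self:provider.stage}
        BillingMode: PAY_PER_REQUEST
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
  Outputs:
    UsersTableName:
      Value: !Ref UsersTable
      Export:
        Name: users-table-name-${self:provider.stage}

# Stack B (e.g. the AppSync service) then references it wherever a table
# name is needed, for instance:
#   tableName: !ImportValue users-table-name-${self:provider.stage}
# While this import is in use, CloudFormation blocks updates to Stack A
# that would remove or change the export.
```

The cross-stack import is what turns the output into a guard rail: an accidental table replacement in Stack A fails at deploy time instead of silently breaking the API stack.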

bboure commented Apr 4, 2019

Thanks, @mim-Armand. Of course, I have all my infrastructure separated into microservices as well; in particular, I separate stateful from stateless resources.

I am not ready to go to production yet, and when I do I intend to use blue-green deployments anyway, probably with something like CloudFront to do the final "switch", as you said.

My question was more out of curiosity about how it works under the hood, and to see what the recommendations are for handling it.

BTW, could you share how you do your deployments with Serverless? Do you just deploy a new "versioned stage"?
Something like sls deploy --stage prod-1.2.3, maybe?

mim-Armand commented Apr 4, 2019

@bboure I use two AWS sub-accounts, one for dev and one for prod. Development, testing, and staging all happen in the dev account, which is the default stage in my serverless.yml (using the default profiles and stages, we simply run sls deploy for dev). Releases to prod are done using aliased version promotions: we name our releases (as stupidly and/or thoughtfully as possible!), and once a release deploys successfully and a few e2e tests have run, we switch the DNS to the new endpoint.
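The promotion flow described above might look roughly like the following dry-run sketch. Everything here is an assumption for illustration (release name, endpoint domain, test command, hosted zone ID are all hypothetical); the script only prints the commands it would run.

```shell
#!/bin/sh
# Dry-run sketch of a "named release" promotion. Nothing here is executed
# against AWS; each step is echoed so the flow can be read top to bottom.
set -eu

RELEASE="prod-1-2-3"   # illustrative release name; stage names avoid dots

# 1. Deploy the release as its own isolated stage.
#    The currently live stage keeps serving traffic untouched.
echo "sls deploy --stage $RELEASE --aws-profile prod"

# 2. Run e2e tests against the freshly deployed endpoint, not the live one.
echo "npm run e2e -- --endpoint https://$RELEASE.api.example.com"

# 3. Only after the tests pass, flip DNS (e.g. a Route 53 record) to the new
#    endpoint. The DNS update is the single, last "switch" step.
echo "aws route53 change-resource-record-sets --hosted-zone-id Z123EXAMPLE ..."
```

Because the old stage stays deployed until DNS moves, there is no window where clients can hit a half-updated mix of Lambda and mapping template, which is exactly the in-between state the original question worries about.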
