ClusterContainsContainerInstancesException #38
Comments
That is really odd... the error comes from the AWS API. We're working with 0.6.15 here, so either the stack code relies on a bug prior to 0.6.16 or the latest Terraform has introduced an issue (the latter seems more likely to me). Can we confirm that there was no ECS cluster in your AWS account when you ran the first time?
Tried 0.6.15 - same symptoms. There might have been an ECS cluster from a previous Stack experiment, but it would have had a different name. Are the modules created in such a way that I can only have one Stack per AWS account?
Things should work if you give a different name to your stacks and use different subnets, etc. Are all the stacks you use named differently?
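For reference, a minimal sketch of what two coexisting stacks in one account might look like. The module variables shown here (`name`, `environment`, `key_name`, `cidr`) and their values are assumptions for illustration, not confirmed from this thread:

```hcl
# Hypothetical sketch: two independently named stacks in the same AWS account,
# each with its own non-overlapping CIDR so their subnets don't collide.
module "stack_one" {
  source      = "github.com/segmentio/stack"
  name        = "stack-one"
  environment = "staging"
  key_name    = "my-keypair"       # placeholder key pair name
  cidr        = "10.30.0.0/16"
}

module "stack_two" {
  source      = "github.com/segmentio/stack"
  name        = "stack-two"
  environment = "staging"
  key_name    = "my-keypair"
  cidr        = "10.31.0.0/16"     # different address space for the second stack
}
```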
Yes, all named differently. Only two stacks in my test account; the second stack was just to see if it would happen again. Right now I am not able to reproduce because both stacks give me a cycle error (#37). (Do you happen to know of a util to wipe out all provisioned resources of an AWS account?)
Charity Majors recommends tagging all resources with a shared tag. That way you can be more surgical when destroying resources out of band.
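A hedged example of what that tagging could look like; the tag keys, values, and resource details below are illustrative only (written in the Terraform 0.6-era block syntax used elsewhere in this thread):

```hcl
# Hypothetical sketch: stamp every taggable resource with a shared tag so
# stray resources can be found and destroyed out of band (e.g. from the
# AWS console or a tag-based cleanup script).
resource "aws_instance" "example" {
  ami           = "ami-123456"   # placeholder AMI
  instance_type = "t2.micro"

  tags {
    Terraform = "true"
    Stack     = "stack-one"
  }
}
```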
I'm more than a bit excited about Stack. This is not nit-picking, just sharing info because I've experienced quite a few issues. Using Terraform 0.6.16.
On a fresh and brand new stack I have the following plan:
Unfortunately, apply does not make it all the way through:
But if I plan and apply again, then Terraform says there are no changes to be made.