Amplify push failure - CloudFormation stack rollback failed #2458
Comments
Hey 👋, thanks for raising this! I'm going to transfer this over to our API repository for better assistance 🙂
I'm afraid I haven't had a huge amount of help with my other issues on the API repo, @ykethan. If you have any ideas I'd love to know! I have opened an AWS support case for this particular issue too, but it seems like something with Amplify caused it to fail and get into a bad state. Hopefully support can help me recover it.
Hey @pr0g, I apologize for the inconvenience you've experienced. Have you been contacted by a member of our support team regarding this issue?
Hi @AnilMaktala, thanks for getting back to me. That's okay, I'm speaking to someone from support who's contacted the CloudFormation team to help restore it. One of the nested stacks is reporting to its parent that it's in UPDATE_COMPLETE, but internally it's in UPDATE_ROLLBACK_FAILED, so the rollback can't be continued from the root stack. Apparently that's a symptom of drift, but I don't know how that could have happened as I was using the Amplify CLI to perform all operations. I'll report back with an update hopefully when it's sorted. Thanks!
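For reference, the mismatch described above can be surfaced by comparing the status the parent stack reports for a nested-stack resource (CloudFormation's `DescribeStackResources`) against the nested stack's own status (`DescribeStacks`). A minimal sketch, assuming boto3-style response dictionaries; the stack ARN and statuses below are illustrative, not taken from this issue:

```python
# Sketch: detect a parent/child CloudFormation status mismatch.
# The dict shapes mirror boto3's describe_stack_resources /
# describe_stacks responses; names and statuses are made up.

def find_status_mismatches(parent_resources, nested_statuses):
    """Return (physical_id, parent_view, actual) tuples where the status
    the parent reports for a nested stack disagrees with the nested
    stack's own status."""
    mismatches = []
    for res in parent_resources:
        if res["ResourceType"] != "AWS::CloudFormation::Stack":
            continue
        stack_id = res["PhysicalResourceId"]
        actual = nested_statuses.get(stack_id)
        if actual is not None and actual != res["ResourceStatus"]:
            mismatches.append((stack_id, res["ResourceStatus"], actual))
    return mismatches

# Illustrative data resembling the situation in this issue: the parent
# sees UPDATE_COMPLETE, but the nested stack is UPDATE_ROLLBACK_FAILED.
parent_resources = [
    {"ResourceType": "AWS::CloudFormation::Stack",
     "PhysicalResourceId": "arn:aws:cloudformation:...:stack/apiclient/abc",
     "ResourceStatus": "UPDATE_COMPLETE"},
]
nested_statuses = {
    "arn:aws:cloudformation:...:stack/apiclient/abc": "UPDATE_ROLLBACK_FAILED",
}

print(find_status_mismatches(parent_resources, nested_statuses))
```

In a real script the two inputs would come from `describe_stack_resources` on the parent and `describe_stacks` on each nested stack; the comparison logic itself is the same.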
From the description, it is most likely that the failure of the deployment is caused by this:

As you mention, the error comes from:

As a workaround for the update rollback failure, I notice there are already steps mentioned by another customer (see #2157 (comment)) about adding dummy resolvers for those with errors (in your case the ones in the connection stacks), which should help you resolve the rollback issue. Once you roll back successfully, I suggest only keeping the
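For context, the "dummy resolver" workaround referenced from #2157 amounts to temporarily replacing the failing resolvers in the nested stack's template with minimal placeholder resources so the rollback can complete. A purely hypothetical sketch of what such a placeholder could look like as a CloudFormation resource (the logical ID, `AppSyncApiId` parameter, type/field names, and the assumption that a `NONE`-type data source named `NONE` exists are all invented for illustration, not taken from this project):

```json
{
  "DummyGetResolver": {
    "Type": "AWS::AppSync::Resolver",
    "Properties": {
      "ApiId": { "Ref": "AppSyncApiId" },
      "TypeName": "Query",
      "FieldName": "getExample",
      "DataSourceName": "NONE",
      "RequestMappingTemplate": "{ \"version\": \"2017-02-28\", \"payload\": {} }",
      "ResponseMappingTemplate": "$util.toJson({})"
    }
  }
}
```

The idea is that a trivially deployable resolver lets CloudFormation finish the rollback; the real resolvers are restored afterwards.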
Hi @AaronZyLee, thanks for your reply. Yes, in hindsight I should have been a good scientist and only changed one thing at a time (lesson learned again 🙈). The reason I updated these flags is that I'd been meaning to do it after @ykethan suggested it in this post; I realize I probably should have done this afterwards though (less haste, more speed). I did see the post you mentioned, but unfortunately I don't think it will work for me because the root stack doesn't think it's in an `UPDATE_ROLLBACK_FAILED` state.

Might it be possible to delete the

Thanks for the feedback and it's good to know for the future, but ideally now I just need a way of recovering things and getting back to a good state.
Hey @pr0g, are you still experiencing this issue?
Hi @AnilMaktala, thanks for following up. I was able to talk to AWS support and got my CloudFormation stack back to UPDATE_ROLLBACK_COMPLETE; unfortunately, when I try to do an Amplify push, things are still failing. I've been talking with the AWS Amplify support team and have managed to narrow things down a bit. I'm going to try to sync back to earlier in our Git history, when this problem occurred, and do an `amplify push`. I'm going to try to get to this later this week and will leave an update if that works. Thanks!
How did you install the Amplify CLI?
npm
If applicable, what version of Node.js are you using?
16.20.2
Amplify CLI Version
12.10.3
What operating system are you using?
macOS
Did you make any manual changes to the cloud resources managed by Amplify? Please describe the changes made.
Made one change to an AppSync resolver after seeing this post (#2157), which seemed related to the issue I was seeing. I only made this change after the stack was in a bad state (not able to roll back).
Describe the bug
We recently upgraded from Amplify Transformer V1 to V2. A bug was detected a few days after this upgrade, and to verify the V1 -> V2 Transformer change was the cause, I synced back to the commit before the upgrade, did an `amplify push`, and confirmed things worked as expected.

After doing some digging, I discovered the issue was down to a change in behavior from Amplify Transformer V1 to V2 where the `owner` field is not automatically populated when making GraphQL requests. I found this post #37, which suggested updating `amplify/cli.json` to include `"populateOwnerFieldForStaticGroupAuth": true`. I made this change, along with a few other updates to `amplify.json` (see cli.diff.txt, just remove the `.txt` extension to preview), synced back to our change after the upgrade to Transformer V2, and did an `amplify push`.

After running this push, the CloudFormation stack for the application failed. It failed in the `api<app>client` stack. From the top level, its status is reported as `UPDATE_COMPLETE`, but when you click the Physical ID link, it shows as `UPDATE_ROLLBACK_FAILED`. The reason is `The following resource(s) failed to update: [ConnectionStack].`. Looking at the `ConnectionStack`, I can see it is also in `UPDATE_ROLLBACK_FAILED`. The reason given is that three resolvers failed to deploy (these are tables in our GraphQL schema):

If I attempt to continue the rollback, I go from one of the nested stacks to the root stack, but as it is not reported as being in `UPDATE_ROLLBACK_FAILED`, I can't roll back at all. If I try to run `amplify push` again I see:

At this stage I am not sure how to recover the stack. Is there something I can do to fix the resolvers outside Amplify? Any guidance/support/advice would be hugely appreciated.
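For reference, the feature flag mentioned above lives under `features.graphqltransformer` in `amplify/cli.json`. A minimal fragment (other flags omitted for brevity) might look like:

```json
{
  "features": {
    "graphqltransformer": {
      "populateOwnerFieldForStaticGroupAuth": true
    }
  }
}
```

Changing feature flags alters how the transformer generates resolvers, which is why flipping several at once makes it harder to attribute a failed push to any single change.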
Expected behavior
The CloudFormation stack can be rolled back and `amplify push` works as expected.

Reproduction steps
Not sure exactly if this will work, but this is roughly what I did:
1. (with `"populateOwnerFieldForStaticGroupAuth": true` set - see attachment for the state of `amplify/cli.json`)
2. `amplify push`
3. `amplify push` (restore earlier state)
4. `amplify/cli.json` with changes shown in diff
5. `amplify push`
Project Identifier
Attempting to run `amplify diagnose --send-report` shows:

Log output
See earlier description
Additional information
No response