aws-s3-deployment - intermittent cloudfront "Waiter InvalidationCompleted failed" error #15891

Open
naseemkullah opened this issue Aug 4, 2021 · 62 comments
Labels: @aws-cdk/aws-cloudfront (Related to Amazon CloudFront), @aws-cdk/aws-s3-deployment, bug (This issue is a bug.), p1

@naseemkullah
Contributor

naseemkullah commented Aug 4, 2021

def cloudfront_invalidate(distribution_id, distribution_paths):
    invalidation_resp = cloudfront.create_invalidation(
        DistributionId=distribution_id,
        InvalidationBatch={
            'Paths': {
                'Quantity': len(distribution_paths),
                'Items': distribution_paths
            },
            'CallerReference': str(uuid4()),
        })
    # by default, will wait up to 10 minutes
    cloudfront.get_waiter('invalidation_completed').wait(
        DistributionId=distribution_id,
        Id=invalidation_resp['Invalidation']['Id'])

I've come across a deployment where CloudFront was invalidated but the Lambda timed out with cfn_error: Waiter InvalidationCompleted failed: Max attempts exceeded. I suspect a race condition, and that reversing the order of cloudfront.create_invalidation() and cloudfront.get_waiter() would fix it.

edit: proposed fix of reversing create_invalidation() and get_waiter() is invalid, see #15891 (comment)

@otaviomacedo
Contributor

Hi, @naseemkullah

Thanks for reporting this and suggesting a solution.

I presume your hypothesis is that, in some cases, the invalidation happens very fast and the waiter gets created after the invalidation has completed, causing it to wait until the timeout is reached. Is that fair?

Also, how easily can you reproduce this issue? Race conditions are usually tricky to test. I would like to get some assurance that the swap will actually fix the issue.

@otaviomacedo otaviomacedo removed the @aws-cdk/aws-cloudfront Related to Amazon CloudFront label Aug 13, 2021
@naseemkullah
Contributor Author

Hi @otaviomacedo,

I presume your hypothesis is that, in some cases, the invalidation happens very fast and the waiter gets created after the invalidation has completed, causing it to wait until the timeout is reached. Is that fair?

Yep, that's right.

Also, how easily can you reproduce this issue? Race conditions are usually tricky to test. I would like to get some assurance that the swap will actually fix the issue.

Not easily 😞; in fact, it is an intermittent issue that I've observed at the end of our CI/CD pipeline (during deployment) every now and then (rough estimate: 1 in 50). I'm afraid I cannot provide more assurance than the reasoning above. If you don't see any potential issues arising from reversing the order that I may not have thought of, I'll be happy to submit this potential fix. Cheers!

@otaviomacedo
Contributor

I think the risk involved in this change is quite low. Please submit the PR and I'll be happy to review it.

@naseemkullah
Contributor Author

naseemkullah commented Aug 13, 2021

After reading up on the waiter, it appears that it uses a polling mechanism; furthermore, the ID of the invalidation request needs to be passed into it, so all seems well on that front.
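For reference, the waiter is essentially equivalent to a manual poll of get_invalidation. A minimal sketch of that loop (the 20-second delay and 30 attempts are assumed from the 10-minute default mentioned in the handler above, and wait_for_invalidation is just an illustrative name):

import time
import boto3

cloudfront = boto3.client('cloudfront')

def wait_for_invalidation(distribution_id, invalidation_id, delay=20, max_attempts=30):
    # Poll CloudFront until the invalidation reports 'Completed',
    # roughly what the 'invalidation_completed' waiter does internally.
    for _ in range(max_attempts):
        resp = cloudfront.get_invalidation(
            DistributionId=distribution_id,
            Id=invalidation_id)
        if resp['Invalidation']['Status'] == 'Completed':
            return
        time.sleep(delay)
    raise RuntimeError('invalidation did not complete within the polling window')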

Not sure why I see these timeouts occasionally 👻 .... but my hypothesis no longer holds, closing. Thanks!

edit: re-opened since this is still an issue

@github-actions

⚠️COMMENT VISIBILITY WARNING⚠️

Comments on closed issues are hard for our team to see.
If you need more assistance, please either tag a team member or open a new issue that references this one.
If you wish to keep having a conversation with other community members under this issue feel free to do so.

@peterwoodworth
Contributor

Reopening because additional customers have been impacted by this issue. @naseemkullah are you still running into this issue?

From another customer experiencing the issue: Message returned: Waiter InvalidationCompleted failed: Max attempts exceeded

This issue is intermittent, and when we redeploy it works.
Our pipelines are automated and we deploy 3-5 times every day in production.
When our stack fails due to this error, CloudFront is unable to roll back, which creates high-severity issues in prod, and there is downtime until we rerun the pipeline. This error happens during the invalidation step, but somehow CloudFront is not able to get the files from the S3 origin when this error occurs. We have enabled versioning on the S3 bucket so that CloudFront can serve the older version in case of rollback, but it's still unable to fetch files until we redeploy.

customer's code:

new s3deploy.BucketDeployment(this, 'DeployWithInvalidation', {
  sources: [s3deploy.Source.asset(`../packages/dist`)],
  destinationBucket: bucket,
  distribution,
  distributionPaths: [`/*`],
  retainOnDelete: false,
  prune: false,
});

This deploys the files to the S3 bucket and creates a CloudFront invalidation, which is when the stack fails on the waiter error.

@naseemkullah
Contributor Author

@peterwoodworth yes occasionally! I was a little quick to close it once my proposed solution fell through, thanks for reopening.

@github-actions github-actions bot added the @aws-cdk/aws-cloudfront Related to Amazon CloudFront label Aug 31, 2021
@peterwoodworth peterwoodworth removed the @aws-cdk/aws-cloudfront Related to Amazon CloudFront label Aug 31, 2021
@otaviomacedo otaviomacedo added the bug This issue is a bug. label Sep 3, 2021
@otaviomacedo
Contributor

In this case, the most plausible hypothesis is that CloudFront is actually taking longer than 10 min to invalidate the files in some cases. We can try to reduce the chance of this happening by increasing the waiting time, but Lambda has a maximum timeout of 15 min. Beyond that, it's not clear to me what else we can do. In any case, contributions are welcome!
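For example, the handler's waiter call accepts a WaiterConfig override, so the polling window could be stretched closer to the Lambda cap. A rough sketch reusing the names from the handler above (the Delay/MaxAttempts values are illustrative, not what the handler currently uses):

# Stretch the wait from the default ~10 minutes to ~14 minutes,
# staying under Lambda's 15-minute limit.
cloudfront.get_waiter('invalidation_completed').wait(
    DistributionId=distribution_id,
    Id=invalidation_resp['Invalidation']['Id'],
    WaiterConfig={
        'Delay': 30,        # seconds between polls
        'MaxAttempts': 28,  # 30s * 28 = 840s = 14 minutes
    })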

@otaviomacedo otaviomacedo removed their assignment Sep 3, 2021
@naseemkullah
Contributor Author

In this case, the most plausible hypothesis is that CloudFront is actually taking longer than 10 min to invalidate the files in some cases. We can try to reduce the chance of this happening by increasing the waiting time, but Lambda has a maximum timeout of 15 min. Beyond that, it's not clear to me what else we can do. In any case, contributions are welcome!

It has happened twice in recent days; next time it occurs I will try to confirm this. IIRC, the first time this happened I checked and saw that the invalidation event had occurred almost immediately, yet the waiter did not see it (that's why I thought it might be a race condition). Will confirm though!

@quixoticmonk

Noticed the same with a client I support over the last few weeks; it makes us rethink using the BucketDeployment construct overall. I will check any new occurrences and confirm the actual behavior of CloudFront in the background.

@quixoticmonk

In my case, the invalidation kicked off twice, and both invalidations were in progress for a long time and eventually timed out.
[screenshot: two invalidations in progress, 2021-09-08 9:15 AM]

@naseemkullah
Contributor Author

Confirming that in my case the invalidation occurs when it should, but the waiter just never gets the memo and fails the deployment after 10 minutes.

@sblackstone
Contributor

I can also confirm this issue occurs with some regularity for me too...

I have a script that deploys the same stack to 29 different accounts; with a deploy I just did, I had 3 of 29 fail with Waiter InvalidationCompleted failed.

@github-actions github-actions bot added the @aws-cdk/aws-cloudfront Related to Amazon CloudFront label Sep 16, 2021
@emmapatterson

My team are also seeing this error regularly!

@Negan1911

Started to see this problem when using S3 bucket deployments with CDK.

@jkbailey

We started seeing this on 4/19/23; it's still happening today (4/20).

@calebwilson706

This is happening to us frequently now also.

@miekassu

miekassu commented May 3, 2023

This is happening more frequently now

@costleya
Contributor

costleya commented Jun 6, 2023

Indeed, the cache invalidates on the CloudFront side almost instantly. But the deploy fails and rolls back (the rollback also takes effect on CloudFront immediately, and then the rollback itself fails).

@nkeysinstil

Seeing this now also

@leantorres73

leantorres73 commented Jun 6, 2023

Same here...

@xli2227

xli2227 commented Jun 9, 2023

Encountered the same issue; some action log timestamps:

| 2023-06-09 08:15:11 UTC-0700 | AgenticConsoleawsgammauseast1consolestackbucketdeploymentCustomResource9C0F1745 | UPDATE_FAILED | Received response status [FAILED] from custom resource. Message returned: Waiter InvalidationCompleted failed: Max attempts exceeded (RequestId: 3b01a325-6c24-45f0-8f6c-86638f2e282b) |
| 2023-06-09 08:04:38 UTC-0700 | AgenticConsoleawsgammauseast1consolestackbucketdeploymentCustomResource9C0F1745 | UPDATE_IN_PROGRESS | - |

It took 10 minutes to fail the CDK stack, and the invalidation was created 1 minute after the failure:

  | IEKSZWOI5U3Q6GNNNQMQLJ11WH | Completed | June 9, 2023 at 3:16:20 PM UTC

@JonWallsten

JonWallsten commented Oct 23, 2023

I've just seen this for the first time today.
But in my case the invalidation is actually not complete:
[screenshot: invalidation still in progress]
It's been going on for 19 minutes now.
I have a single origin: an S3 bucket with three files in it.
It just failed the deploy for the third time in a row.
[screenshot]

@hugomallet

It seems there's currently a problem with AWS CloudFront; I get the same timeout errors.

mdbudnick added a commit to mdbudnick/personal-website that referenced this issue Oct 24, 2023
@nbeag

nbeag commented Nov 13, 2023

We are also encountering this intermittently in one of our CDK stacks and have noticed it happening more frequently in the last few weeks. When it occurs, the stack initiates a rollback; sometimes this fails (and requires manual intervention) and sometimes the rollback succeeds. Any update/workaround would be appreciated.

@abury

abury commented Nov 29, 2023

Started seeing this regularly today as well
Edit: Seeing this almost every day around the same time? I'm not even sure we can use CloudFront going forward if we can't reliably deploy.

@mattiLeBlanc

mattiLeBlanc commented Dec 5, 2023

I am getting the same error all of a sudden in our Staging deployment via Bitbucket:

UPDATE_FAILED (likely root cause) | Received response status [FAILED] from custom resource. Message returned: Waiter InvalidationCompleted failed: Max attempts exceeded (RequestId: dcd7fbdb-d6b7-441f-96f1-08026063b052)

This is a CloudFront deployment. I tried to redeploy a deployment from 4 days ago that was fine, and that also fails.
It happens at:
[screenshot]
Our Dev and Prod deployments are working fine (different accounts).

This is totally unacceptable, because I think I need to delete my stack, which luckily I can do because of our microservice approach, but again, totally unacceptable.

@MrDark

MrDark commented Dec 5, 2023

After not encountering this problem for a while, we're now also having this issue again. Luckily, it happened in our dev account, but I'm hesitant about deploying it to production.

@alechewitt

alechewitt commented Dec 11, 2023

We are also experiencing this issue. The Lambda successfully uploads all the files to S3; however, it does not complete and results in a timeout error. The other strange thing: for the latest Lambda invocations that have timed out, I don't see any cache invalidation in the CloudFront distribution.

These are the Lambda logs:

[INFO]	2023-12-11T19:31:34.655Z	181e927e-a970-43bc-a974-d88e6761c4cc	| aws s3 sync /tmp/tmp9zlftjbn/contents s3://notebooks/
Completed 8.6 KiB/~9.5 KiB (70.6 KiB/s) with ~3 file(s) remaining (calculating...)
upload: ../../tmp/tmp9zlftjbn/contents/error/403.html to s3://notebooks/error/403.html
Completed 8.6 KiB/~9.5 KiB (70.6 KiB/s) with ~2 file(s) remaining (calculating...)
Completed 9.1 KiB/~9.5 KiB (14.2 KiB/s) with ~2 file(s) remaining (calculating...)
upload: ../../tmp/tmp9zlftjbn/contents/index.html to s3://notebooks/index.html
Completed 9.1 KiB/~9.5 KiB (14.2 KiB/s) with ~1 file(s) remaining (calculating...)
Completed 9.5 KiB/~9.5 KiB (5.2 KiB/s) with ~1 file(s) remaining (calculating...) 
upload: ../../tmp/tmp9zlftjbn/contents/error/404.html to s3://notebooks/error/404.html
Completed 9.5 KiB/~9.5 KiB (5.2 KiB/s) with ~0 file(s) remaining (calculating...)
2023-12-11T19:45:36.537Z 181e927e-a970-43bc-a974-d88e6761c4cc Task timed out after 900.17 seconds

END RequestId: 181e927e-a970-43bc-a974-d88e6761c4cc
REPORT RequestId: 181e927e-a970-43bc-a974-d88e6761c4cc	Duration: 900171.11 ms	Billed Duration: 900000 ms
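In case it helps anyone debugging the same thing, the distribution's invalidations can also be listed directly with boto3 to double-check whether one was ever created. A small sketch (the distribution ID is a placeholder):

import boto3

cloudfront = boto3.client('cloudfront')

# 'EXXXXXXXXXXXXX' is a placeholder distribution ID.
resp = cloudfront.list_invalidations(DistributionId='EXXXXXXXXXXXXX')
for item in resp['InvalidationList'].get('Items', []):
    print(item['Id'], item['Status'], item['CreateTime'])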

@richard-collette-precisely

Just hit this. CDK deployment. RequestId: 552880ea-f37b-4b8b-8cc8-3772e52e4cd3

@abury

abury commented Feb 15, 2024

Still happening in 2024...
Not sure why I'm using CloudFront at this point...

@alexandr2110pro

Same here. What can you guys propose to prevent such issues in production pipelines?

@edwardofclt

edwardofclt commented Feb 27, 2024 via email

@alexandr2110pro

alexandr2110pro commented Feb 28, 2024

Add retry logic.

Hey man, what do you mean? Where? The CloudFormation deployment service fails with the state "UPDATE_ROLLBACK_FAILED". All we can do is wait and then do "Continue update rollback" in the UI. (I guess there must be an API command for that.)
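(For what it's worth, there is an API equivalent of that console action; a minimal boto3 sketch, with a placeholder stack name:)

import boto3

cloudformation = boto3.client('cloudformation')

# Same effect as "Continue update rollback" in the console.
# 'my-website-stack' is a placeholder stack name.
cloudformation.continue_update_rollback(StackName='my-website-stack')

The AWS CLI equivalent is aws cloudformation continue-update-rollback.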

Why doesn't AWS add the retry? We are using the standard CDK lib; its core building blocks must just work, right?

We enjoy having the cloud and application code in the same language in the same monorepo (TypeScript + CDK + Nx in our case), but such problems make us think about migrating to Terraform.

@edwardofclt

edwardofclt commented Feb 28, 2024 via email

@jkbailey

jkbailey commented Feb 28, 2024

We no longer experience this issue after increasing the memory limit of the bucket deployment.

new BucketDeployment(this, 'website-deployment', {
  ...config,
  memoryLimit: 2048
})

The default memory limit is 128 (docs).

@LosD

LosD commented Feb 28, 2024

We no longer experience this issue after increasing the memory limit of the bucket deployment.

new BucketDeployment(this, 'website-deployment', {
  ...config,
  memoryLimit: 2048
})

The default memory limit is 128 (docs).

I'm pretty sure that's coincidental. First of all, it is VERY random; it can easily be 30-40 deployments between occurrences of the issue, then suddenly it happens multiple times within a few days. Second, the issue seems to be the CloudFront API itself timing out, or taking so long that the BucketDeployment times out.

The only pattern I've seen is that it seems to happen more often if we deploy at the end of the day (CET).

@sblackstone
Contributor

We no longer experience this issue after increasing the memory limit of the bucket deployment.

new BucketDeployment(this, 'website-deployment', {
  ...config,
  memoryLimit: 2048
})

The default memory limit is 128 (docs).

I'm pretty sure that's coincidental. First of all, it is VERY random; it can easily be 30-40 deployments between occurrences of the issue, then suddenly it happens multiple times within a few days. Second, the issue seems to be the CloudFront API itself timing out, or taking so long that the BucketDeployment times out.

The only pattern I've seen is that it seems to happen more often if we deploy at the end of the day (CET).

Somewhere in the last two years the devs said this was an issue internal to CloudFront and that they were working with that team on it. That was a long time ago.

Abandon all hope ye who enter here.

@pardeepdhingra

Still facing this issue in June 2024.
