
Error in Cloudwatch Logs for Lambda #12

Open
chathamdl opened this issue Jun 24, 2021 · 5 comments
Labels: bug (Something isn't working), CloudWatch

@chathamdl

Pete, it looks like I am all set, but it also looks like two resources are trying to use the same CloudWatch log group.

I tore everything down and reran to get this error from a clean run:
[screenshot]

But previously I got this, indicating that two objects want to use the same CloudWatch log group:
[screenshot]

I'm not sure what those resources are doing yet, but I figured I would point it out. I assume it doesn't hurt anything; it just causes an error during `terraform apply`.

@petewilcock
Contributor

This is an AWS synchronisation issue that occurs when you delete a named resource and immediately try to recreate it. AWS has some internal consistency to reach, so any subsequent attempt made very quickly is likely to hit this error. The advice is to wait a few minutes and try again.

@alex036

alex036 commented Jul 12, 2021

I'm getting this exact same error. I'm not sure how waiting a few minutes would fix this; you're trying to create two resources with the exact same name:

name = "/aws/lambda/us-east-1.${var.site_name}_redirect_index_html"

&
name = "/aws/lambda/us-east-1.${var.site_name}_redirect_index_html"

In order to re-apply, I renamed the `name` in resource "aws_cloudwatch_log_group" "object_redirect_ue1_local" to "/aws/lambda/us-east-1.${var.site_name}_local_redirect_index_html".
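A minimal sketch of that workaround, assuming the resource name quoted in this comment and a `var.site_name` variable; everything else here is illustrative, not the module's actual code:

```hcl
# Hypothetical sketch of the rename workaround: give the locally-managed
# log group a distinct name so it no longer collides with the identically
# named group created for the Lambda@Edge function.
resource "aws_cloudwatch_log_group" "object_redirect_ue1_local" {
  name = "/aws/lambda/us-east-1.${var.site_name}_local_redirect_index_html"
}
```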

@petewilcock petewilcock reopened this Jul 12, 2021
@petewilcock
Contributor

@alex036 I think I see the cause now. Is the region you're deploying into also us-east-1?

There is a particular behaviour of CloudFront Lambda@Edge functions where a certain log group is created in the local region, but additionally in us-east-1 regardless of your deployment region.

I think the actual issue here is that I'm not accommodating the case where us-east-1 is also the deployment region for this set-up, in which case the two definitions collide.

The actual fix here is to make the definition at #15 conditional on the deployment region not being us-east-1. I'll add this for action.
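A conditional definition along those lines could look like the following sketch; `var.aws_region` and the resource label are assumptions for illustration, not the module's actual identifiers:

```hcl
# Hypothetical sketch: skip the extra us-east-1 log group definition when
# us-east-1 is already the deployment region, so the two definitions
# cannot collide on the same name.
resource "aws_cloudwatch_log_group" "object_redirect_ue1" {
  count = var.aws_region == "us-east-1" ? 0 : 1

  name = "/aws/lambda/us-east-1.${var.site_name}_redirect_index_html"
}
```

With `count` in place, any other references to this resource would need an index, e.g. `aws_cloudwatch_log_group.object_redirect_ue1[0].arn`.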

@petewilcock petewilcock added bug Something isn't working CloudWatch labels Jul 12, 2021
@petewilcock petewilcock self-assigned this Jul 12, 2021
@petewilcock petewilcock added this to the 0.2.0 milestone Jul 12, 2021
@alex036

alex036 commented Jul 12, 2021

Yes, also deploying this to us-east-1. Thanks for taking a look at this.

@nvnivs
Collaborator

nvnivs commented May 4, 2022

I believe this issue will be fixed by commit 35b37b1 when release 0.2.0 is ready.

The log groups used by Lambda@Edge are replaced by a single one in UE1, which resolves the conflict.

When I get a chance, I'll try to stand up a stack in UE1 to confirm.
