Error: rpc error: code = ResourceExhausted desc = grpc: received message larger than max due to elasticmapreduce/ListSteps #14976

Closed
Thiago-Dantas opened this issue Sep 2, 2020 · 12 comments · Fixed by #20871
Labels
bug Addresses a defect in current functionality. service/emr Issues and PRs that pertain to the emr service.

Comments


Thiago-Dantas commented Sep 2, 2020

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform CLI and Terraform AWS Provider Version

Terraform v0.12.28

  • provider.aws v3.1.0
  • provider.external v1.2.0
  • provider.local v1.4.0
  • provider.null v2.1.2
  • provider.random v2.3.0
  • provider.template v2.1.2

Affected Resource(s)

  • aws_emr_cluster

Debug Output

full log output (26MB) at https://drive.google.com/file/d/1ShpP3JmtHjg4nShKH2BJMjBJPamq8u94

Expected Behavior

Plan generated successfully

Actual Behavior

Error: rpc error: code = ResourceExhausted desc = grpc: received message larger than max (5209913 vs. 4194304)

Steps to Reproduce

Start many steps on the EMR cluster (ours currently has about 23,000) and try to generate a Terraform plan.
In the debug log we noticed a large number of calls to elasticmapreduce/ListSteps (some of which had to be retried due to AWS throttling).
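
A hedged reproduction sketch, in case it helps (this is not our real configuration; the IAM role, instance profile, and subnet references are placeholders you would have to supply). The idea is that every step ends up in the resource's state, so the provider's refresh response grows with the step count until it crosses the 4194304-byte (4 MiB) default gRPC message limit shown in the error above.

  resource "aws_emr_cluster" "repro" {
    name          = "liststeps-repro"
    release_label = "emr-5.30.1"
    applications  = ["Spark"]
    service_role  = aws_iam_role.emr_service.arn # placeholder

    ec2_attributes {
      instance_profile = aws_iam_instance_profile.emr_profile.arn # placeholder
      subnet_id        = aws_subnet.main.id                       # placeholder
    }

    master_instance_group {
      instance_type = "m5.xlarge"
    }

    # Every step is recorded in state; a thousand no-op steps should be
    # enough to inflate the refresh payload past the limit.
    dynamic "step" {
      for_each = range(1000)
      content {
        name              = "noop-${step.value}"
        action_on_failure = "CONTINUE"
        hadoop_jar_step {
          jar  = "command-runner.jar"
          args = ["true"]
        }
      }
    }
  }

After the apply completes, a second terraform plan triggers the refresh where the ResourceExhausted error shows up for us.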

Important Factoids

Our EMR cluster is somewhat long-lived and we start a lot of jobs on it during its lifespan

In our configuration we ignore changes to step, hoping to avoid this kind of problem, but maybe the plugin doesn't know that it can skip querying steps before computing configuration differences?

  lifecycle {
    ignore_changes = [kerberos_attributes, step, configurations_json]
  }
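
My hunch on why that doesn't help: ignore_changes only suppresses plan-time diffs for the listed attributes. Judging by the debug log, the provider still calls elasticmapreduce/ListSteps during refresh to populate step in state, and it seems to be that refreshed state, sent back to Terraform core over gRPC, that exceeds the 4 MiB limit. If that reading is right, terraform plan -refresh=false would sidestep the error, at the cost of planning against stale state. For reference, the lifecycle block above sits inside the cluster resource like this:

  resource "aws_emr_cluster" "cluster" {
    # ... cluster arguments elided ...

    lifecycle {
      # Suppresses diffs only; does not stop the refresh-time ListSteps calls
      ignore_changes = [kerberos_attributes, step, configurations_json]
    }
  }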


@ghost ghost added the service/emr Issues and PRs that pertain to the emr service. label Sep 2, 2020
@github-actions github-actions bot added the needs-triage Waiting for first response or review from a maintainer. label Sep 2, 2020
@Thiago-Dantas
Author

Previously I thought it took a very large number of steps to trigger this, but even 1,000 steps is enough to break our Terraform runs
Full log with 1000 steps https://drive.google.com/file/d/1bucFMYl75XUMWs62IxoFdkbnVo9hdvu-

@Thiago-Dantas
Author

We are no longer seeing this issue on AWS provider version 3.6


rednuht commented Nov 26, 2020

@Thiago-Dantas, interesting! I will need to try that right away. Which version of Terraform are you using?
Btw, your Google Drive link is no longer valid. Perhaps a gist would be better for sharing?

@Thiago-Dantas
Author

I was getting some weird Russian spam on that, so I deleted it, lol, sorry
We are currently using 0.12.28 and this is no longer an issue for us

I can confirm that even with ignore_changes, Terraform still lists the steps when generating plans, but we are no longer failing to apply changes


rednuht commented Nov 26, 2020

Just checked and we are still seeing it:
Error: rpc error: code = ResourceExhausted desc = grpc: received message larger than max (5223257 vs. 4194304)


rednuht commented Nov 27, 2020

@Thiago-Dantas I still see this problem; can you reopen this issue please?
Using AWS provider version 3.18.0 and Terraform 0.13.4.

@Thiago-Dantas Thiago-Dantas reopened this Nov 27, 2020

nivreddy14 commented Jan 6, 2021

I see the problem when deleting an EMR cluster on Terraform v0.12.16 with provider (hashicorp/aws) 3.18.0. Could someone please suggest a fix?

module.emr_cluster.aws_emr_cluster.emr_cluster: Refreshing state... [id=j-*********]

Error: rpc error: code = ResourceExhausted desc = grpc: received message larger than max (4246485 vs. 4194304)

Never faced this problem on Terraform 0.11

@Thiago-Dantas
Author

I'm getting this again on plugin versions 3.44.0 and 3.45.0

Contributor

dsc133 commented Aug 5, 2021

I am also seeing this. I left a comment on #9888, where I suggested a possible solution that may alleviate the issue on long-standing EMR clusters.

@alexsanderp

I have the same problem on Terraform 0.15.5!

@justinretzolk justinretzolk added bug Addresses a defect in current functionality. and removed needs-triage Waiting for first response or review from a maintainer. labels Sep 13, 2021
@github-actions github-actions bot added this to the v4.12.0 milestone Apr 28, 2022
@github-actions

This functionality has been released in v4.12.0 of the Terraform AWS Provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading.

For further feature requests or bug reports with this functionality, please create a new GitHub issue following the template. Thank you!

@github-actions

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators May 29, 2022