
"terraform apply" uses 3 GB memory and 50% of my PC for uploading 18 small lambdas #9364

Closed
Dreijnde opened this issue Oct 14, 2016 · 7 comments · Fixed by #9667
Closed

"terraform apply" uses 3 GB memory and 50% of my PC for uploading 18 small lambdas #9364

Dreijnde opened this issue Oct 14, 2016 · 7 comments · Fixed by #9667

Comments

@Dreijnde

"terraform apply" uses 3 GB memory and 50% of my PC for uploading 18 small lambdas.
See screenshot. [Attached image: Terraform process memory usage]

Is this a memory leak?

I am running 64-bit Windows.

@radeksimko (Member)

Hi @Dreijnde
would you mind sharing your terraform configs (minus any secrets) and some more details about your OS (Windows version etc.)?

Any further details about your environment would help us reproduce and eventually fix this.

@radeksimko added the bug, waiting-response, and core labels on Oct 14, 2016
@kwilczynski (Contributor)

@Dreijnde hi there! I am sorry that you are having issues.

Are you uploading these lambdas as ZIP (or any archive) files?

@arminc

arminc commented Oct 15, 2016

I have run the same project as @Dreijnde on OSX and got the same problem. It seems to have something to do with the Lambdas: even when all the infra and lambdas are already up, as soon as I change only the lambdas, terraform behaves this way. The lambdas are 'jar' files; I looked at their sizes and they are 10 MB or less, so even if all 18 lambdas were held in memory at once that would be at most about 180 MB, not 3 GB.

I believe a terraform file containing only the 18 lambdas should reproduce this problem; I will see if I can test that statement.

@arminc

arminc commented Oct 15, 2016

I tested my theory and it is indeed the lambdas. Here is an example terraform file that you can use to reproduce the problem. I included only 3 lambdas as an example; just add more of the same to increase the number.

There are actually two problems:

The first is that when creating 18 lambdas from scratch, the timeout is too low, so you cannot create them all at once. I had to keep adding 2 or 3 lambdas to the terraform file at a time to finally end up with 18 lambdas.

The second is that when changing all 18 lambdas (by changing test.jar), the issue described in this thread occurs.
all.txt

I am doing this on latest OSX with Terraform v0.7.4.

@jbardin added the provider/aws label and removed the waiting-response label on Oct 18, 2016
@mitchellh removed the core label on Oct 25, 2016
@mitchellh (Contributor)

I took a look at the AWS SDK, and it currently requires that lambda function contents be sent as []byte, which means the file has to be loaded into memory in its entirety. This is definitely what is causing your memory to balloon. See here: http://docs.aws.amazon.com/sdk-for-go/api/service/lambda/#UpdateFunctionCodeInput

I then looked at the AWS API itself, and even the Lambda API expects a JSON object with base64-encoded file contents. So it isn't possible to stream to this endpoint at all... well, not without some really special sauce.
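
To make the constraint concrete, here is a minimal sketch of what any caller of the Go SDK has to do before the request can even be built (the function name, file path, and session setup are illustrative, not the provider's actual code):

```go
package main

import (
	"io/ioutil"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/lambda"
)

func main() {
	// No streaming interface exists: UpdateFunctionCodeInput.ZipFile is a
	// []byte, so the whole archive must be resident in memory before the
	// request is sent. With 18 lambdas updating in parallel, that means
	// many payloads (plus base64/JSON encoding overhead) held at once.
	contents, err := ioutil.ReadFile("lambda_1/test.jar") // hypothetical path
	if err != nil {
		log.Fatal(err)
	}

	svc := lambda.New(session.Must(session.NewSession()))
	_, err = svc.UpdateFunctionCode(&lambda.UpdateFunctionCodeInput{
		FunctionName: aws.String("lambda_1"), // hypothetical function name
		ZipFile:      contents,
	})
	if err != nil {
		log.Fatal(err)
	}
}
```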

I think what we need to do here in the AWS resource is one of those sad global semaphores to only upload 1 lambda with a zip file at a time. I don't think we need to get fancy with resource tracking or anything: just serialize the lambda code updates (when a zip file is present).
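
Here is a runnable sketch of the serialization pattern being proposed: a package-level buffered channel of capacity 1 acting as the semaphore. The helper name and the print statement standing in for the read-file-and-UpdateFunctionCode call are hypothetical; the real fix landed in #9667.

```go
package main

import (
	"fmt"
	"sync"
)

// Global semaphore: a buffered channel of capacity 1 means at most one
// lambda upload (and therefore at most one zip payload in memory) at a time.
var lambdaUploadSem = make(chan struct{}, 1)

// uploadLambda is a hypothetical helper; the print is a stand-in for
// reading the zip file and calling UpdateFunctionCode.
func uploadLambda(name string) {
	lambdaUploadSem <- struct{}{}        // acquire: blocks while another upload runs
	defer func() { <-lambdaUploadSem }() // release when this upload finishes
	fmt.Println("uploading", name)
}

func main() {
	// Terraform walks the resource graph in parallel, so all 18 updates may
	// start at once; the semaphore still forces the uploads to run one by one.
	var wg sync.WaitGroup
	for i := 1; i <= 18; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			uploadLambda(fmt.Sprintf("lambda_%d", n))
		}(i)
	}
	wg.Wait()
}
```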

@catsby (Member)

catsby commented Oct 27, 2016

Hello @Dreijnde and @arminc – sorry to see the trouble here. I have a question: could you please verify whether limiting the parallelism alleviates the memory issue here?

If you could try running the modifications with the -parallelism flag and confirm the memory improvements for me, that would help:

$ terraform apply -parallelism=3 <path to configuration files>
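
(Terraform's default parallelism is 10, so -parallelism=3 caps the number of concurrent resource operations, and with them the number of zip payloads held in memory at once, at three.)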

I'm fairly certain this will help, and it can serve as a workaround for now until I introduce the semaphore.

Thanks!

@ghost

ghost commented Apr 20, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost locked and limited conversation to collaborators on Apr 20, 2020