
Memory leak using Docker container #12

Open · nodesocket opened this issue Mar 23, 2021 · 10 comments
Labels: bug (Something isn't working), help wanted (Extra attention is needed)

Comments


nodesocket commented Mar 23, 2021

Can't 💯 confirm, but it looks like there may be a memory leak. I am hosting my own instance of send on AWS Lightsail using their container service (essentially ECS). Memory usage is continuously increasing linearly. I'm running the latest version of send, v3.4.5, via the Docker container.

[Screenshots: AWS Lightsail memory usage graphs, 2021-03-23]

timvisee (Owner):

Thanks for the report! Yeah, that's definitely increasing.

Do your images show the total memory usage of the full host, of all running containers (I'm assuming you're running more than just send), or just the send container specifically?

I can confirm that I see some growth in memory usage on my public instance, though it seems to stop and even out after a while.

[Screenshot: memory usage of the full system running these containers over a 2-week period, going from 32.2% to 33.4%, with a restart for an update in the middle (28.8% to 30.6%)]


nodesocket commented Mar 24, 2021

The send Lightsail container service is only running send and redis:6.2.1-buster. I suppose it could be redis that is increasing memory usage, but honestly I'm not using send much (maybe a total of 10 uploads), so I'm not sure why redis memory usage would be steadily increasing.
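One way to narrow that down would be to compare Redis's own memory accounting against the Lightsail container metric. A minimal sketch, assuming the ioredis client and that Redis is reachable via a REDIS_HOST variable (both assumptions, not something from this setup):

```js
// Sketch only: read Redis's own memory stats, to see whether the growth is
// inside Redis or in the send (Node) process. Assumes the ioredis package
// and a REDIS_HOST env var (both assumptions).
const Redis = require('ioredis');

async function main() {
  const redis = new Redis({ host: process.env.REDIS_HOST || '127.0.0.1' });
  const info = await redis.info('memory');
  // INFO memory returns lines such as "used_memory_human:1.02M"
  const used = info
    .split('\r\n')
    .filter(line => line.startsWith('used_memory'));
  console.log(used.join('\n'));
  redis.disconnect();
}

main().catch(console.error);
```

If used_memory stays flat while the container metric climbs, the growth is in the Node process (or elsewhere in the container), not in Redis.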

I could try switching to redis:6.2.1-alpine instead of using Debian Buster.

Curious, where are you hosting your public instance?


nodesocket commented Mar 24, 2021

@timvisee Perhaps related: if I don't set the env var FILE_DIR, the default is to write to the filesystem, right? What directory does it use by default? It's not possibly writing files into redis, is it?

timvisee added the help wanted (Extra attention is needed) and bug (Something isn't working) labels on May 5, 2021

nodesocket commented Mar 9, 2022

Indeed, this is still happening. It looks like the container crashes, which unfortunately causes Redis to also crash, expiring all outstanding links 😢 😠. I'm running both send and redis in the same AWS Lightsail container task; I can send over that configuration if it helps.

The only metrics I really have are:

[Screenshots: AWS Lightsail memory and CPU usage graphs, 2022-03-09]

nodesocket (Author):

@timvisee Here is the AWS Lightsail container configuration, if it helps. Would pulling Redis out of the same container service as send and running it in a dedicated container service help at all? I would be absolutely shocked if the memory leak were in redis:6.2.5-alpine3.14.

[Screenshots: AWS Lightsail container configuration, 2022-03-09]


timvisee commented Mar 14, 2022

> Would pulling Redis out of the same container service as send and running it in a dedicated container service help at all?

I don't think so. Either way, they're separate containers. A service is a virtual context to help 'link' things together.

I did monitor the send.vis.ee instance for a while again. I don't see any of this weirdness.

I wonder, are you running the container in production mode? If not, it won't use Redis and will store entries internally, which might cause such an issue.
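To illustrate what that difference means, a rough sketch only (not Send's actual code): in a setup like the one below, non-production mode keeps all entries in the Node process, so memory grows with the heap and everything is lost on restart, which would look like both a leak and expired links.

```js
// Illustrative sketch only, not Send's actual code: the kind of switch being
// described, where non-production mode skips Redis and keeps entries in the
// Node process itself.
const Redis = require('ioredis'); // assumption: an ioredis-style client

function createMetadataStore() {
  if (process.env.NODE_ENV === 'production') {
    // production: metadata lives in Redis, outside the Node heap
    return new Redis({ host: process.env.REDIS_HOST || '127.0.0.1' });
  }
  // development fallback: everything stays in the Node process's memory
  const entries = new Map();
  return {
    async set(key, value) { entries.set(key, value); },
    async get(key) { return entries.get(key); },
    async del(key) { entries.delete(key); }
  };
}

module.exports = { createMetadataStore };
```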

> Perhaps related: if I don't set the env var FILE_DIR, the default is to write to the filesystem, right? What directory does it use by default? It's not possibly writing files into redis, is it?

Files are stored in a random temporary directory by default. See:

send/server/config.js

Lines 173 to 177 in 742b5de

file_dir: {
  format: 'String',
  default: `${tmpdir()}${path.sep}send-${randomBytes(4).toString('hex')}`,
  env: 'FILE_DIR'
},
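For reference, a minimal sketch of what that default resolves to when FILE_DIR is unset (same expression as above): a fresh send-<hex> directory under the OS temp dir, i.e. uploads land on the container's local filesystem, not in Redis.

```js
// Evaluate the same default expression the config uses when FILE_DIR is unset.
const { tmpdir } = require('os');
const path = require('path');
const { randomBytes } = require('crypto');

const fileDir =
  process.env.FILE_DIR ||
  `${tmpdir()}${path.sep}send-${randomBytes(4).toString('hex')}`;

console.log(fileDir); // e.g. /tmp/send-1a2b3c4d inside the container
```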


nodesocket commented Mar 14, 2022

> I wonder, are you running the container in production mode? If not, it won't use Redis and will store entries internally, which might cause such an issue.

I am setting the env var NODE_ENV to production; that should do it, right? I do wonder if something specific to Amazon Lightsail containers is at fault, though I'm not seeing it.

If you think it makes sense, I can try running send locally and leave it up for a few days and see if I can replicate the memory leak.

timvisee (Owner):

> I am setting the env var NODE_ENV to production; that should do it, right? I do wonder if something specific to Amazon Lightsail containers is at fault, though I'm not seeing it.

Yes, that's right. I wonder whether Lightsail would affect it at all; I mean, I assume it's just a Docker container, right?

> If you think it makes sense, I can try running send locally and leave it up for a few days and see if I can replicate the memory leak.

That would be awesome. You might need to send some traffic to it though, in a similar pattern to your hosted instance.
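A minimal traffic-driver sketch for such a local test; the BASE_URL default and port 1443 below are assumptions, so adjust them (and the interval) to roughly match the hosted instance's traffic pattern:

```js
// Rough traffic driver for a local test instance. BASE_URL and port 1443 are
// assumptions; tune the interval to mimic the hosted instance's traffic.
const http = require('http');

const BASE_URL = process.env.BASE_URL || 'http://localhost:1443';

setInterval(() => {
  http
    .get(BASE_URL, res => {
      res.resume(); // drain the body so the socket is released
      console.log(new Date().toISOString(), res.statusCode);
    })
    .on('error', err => console.error(err.message));
}, 60 * 1000); // one request per minute
```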

nodesocket (Author):

@timvisee Just for fun, I tried switching Redis to the redis:6.2.6-bullseye image tag instead of Alpine. Unfortunately, same behavior. This is a graph of memory usage for the last day in AWS Lightsail. Testing send locally is gonna be a bit of work for me, but I will get around to it.

[Screenshot: AWS Lightsail memory usage graph, last day, 2022-03-15]


nodesocket commented Mar 28, 2022

This has to be something either in the code or a problem with hosting the container on AWS Lightsail. It looks like memory usage grows for about 7 days and then the process restarts. Upon restart, though, all the outstanding links expire, which is also a red flag to me.

Memory

[Screenshot: AWS Lightsail memory usage graph, 2022-03-28]

CPU

[Screenshot: AWS Lightsail CPU usage graph, 2022-03-28]
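One more data point that could help separate the Node process from Redis and from the container-level metric Lightsail reports: logging process.memoryUsage() from inside the send container over time, e.g. with a sketch like this:

```js
// Diagnostic sketch: periodically log the send (Node) process's own memory,
// to separate Node heap growth from Redis growth and from the container-level
// metric Lightsail reports.
setInterval(() => {
  const { rss, heapUsed, external } = process.memoryUsage();
  const mb = n => `${(n / 1024 / 1024).toFixed(1)} MB`;
  console.log(
    new Date().toISOString(),
    'rss', mb(rss),
    'heapUsed', mb(heapUsed),
    'external', mb(external)
  );
}, 5 * 60 * 1000); // every 5 minutes
```

If rss climbs in step with the Lightsail graph, the growth is in the send process itself; if not, it is coming from Redis or elsewhere in the task.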
