Description
Expected behaviour
Memory usage does not slowly increase over time
Actual behaviour
Sidekiq and Puma processes both appear to leak memory
Steps to reproduce the problem
- Run Mastodon using Ruby with jemalloc
- Run `free -m` (on a Linux machine) to see available memory (a logging sketch follows this list)
- Notice that the `used` column slowly increases over time
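To make the creep easy to see over hours rather than watching a terminal, here is a minimal logging sketch; the interval and log path are arbitrary choices, not anything Mastodon-specific:

```sh
# Append a timestamped snapshot of free -m every 5 minutes.
while true; do
  date >> /tmp/memlog.txt
  free -m >> /tmp/memlog.txt
  sleep 300
done
```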
Apologies for the lack of details on this one. I'm not sure you can actually reproduce it on a dev server; you might need a live Mastodon instance with actual people using it.
Just to put some numbers on this, I host two small instances on the same machine. When I restart all Mastodon processes, the used memory is around 1.8GB. But within a few hours, this will slowly start to creep up, eventually reaching 4GB (which is the limit on this particular machine). This means that a sudden need to run ffmpeg, a tootctl script, or a Postgres backup script can bring the server down, since there's no memory left.
Using `top`, the Sidekiq processes appear to consume the most memory, followed by the Puma processes. Both grow in memory usage over time, but Sidekiq grows faster (a quick way to rank them is sketched below).
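To rank individual processes by resident memory (RSS, reported in KB), something like this works on any Linux box with procps:

```sh
# Top 10 processes by resident set size; the Sidekiq and Puma
# workers surface near the top on the affected machine.
ps -eo pid,rss,comm --sort=-rss | head -n 10
```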
The workaround I've adopted is to have this in the systemd config:
```ini
Restart=always
RuntimeMaxSec=21600
```
I.e. restart the process every 6 hours. (I'm thinking of lowering it to 3 hours though, since the memory creeps up that quickly.)
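For anyone wanting the full picture, here is a minimal sketch of the same workaround as a systemd drop-in; the unit name follows the naming Mastodon's setup docs use, but adjust it for your own install:

```ini
# /etc/systemd/system/mastodon-sidekiq.service.d/override.conf
[Service]
Restart=always
# Force a clean restart every 6 hours (21600 seconds).
RuntimeMaxSec=21600
```

Run `sudo systemctl daemon-reload` and restart the service for the override to take effect.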
Perhaps another solution would be something like sidekiq-worker-killer, assuming the source of the leak can't be found.
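For reference, a minimal sketch of how sidekiq-worker-killer is typically wired up per its README; the initializer path and the 1 GB threshold here are illustrative assumptions, not tested values:

```ruby
# config/initializers/sidekiq_worker_killer.rb (example path)
require 'sidekiq/worker_killer'

Sidekiq.configure_server do |config|
  config.server_middleware do |chain|
    # Gracefully shut down the Sidekiq process once its RSS
    # exceeds max_rss (in megabytes); the supervisor (systemd)
    # then restarts it with a fresh heap.
    chain.add Sidekiq::WorkerKiller, max_rss: 1024
  end
end
```

Note that with `Restart=always` in the systemd unit, the killed process comes back automatically, so this caps the leak rather than fixing it.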
Specifications
Mastodon version: 3.3.0