
Out of memory problems with Zulip 5.6 and 6.0 #23757

Closed
HellMar opened this issue Dec 6, 2022 · 6 comments
HellMar commented Dec 6, 2022

Hello all.

Around November 16, I had to increase the RAM of the VM because the OOM killer was regularly killing services and Zulip was only partially usable. Unfortunately, I can no longer say exactly when the RAM became scarce, because the monitoring has not been running that long.
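A quick way to confirm it really is the OOM killer, assuming a systemd host (a sketch, not from the original monitoring setup):

```sh
# Kernel messages name the OOM killer explicitly when it fires.
journalctl -k | grep -i "out of memory"
# Without systemd, the kernel ring buffer works too:
dmesg | grep -iE "oom|out of memory"
```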

Then, on November 23, Zulip was upgraded from version 5.6 to 6.0, and memory usage jumped sharply again.

Does anyone have any ideas what we can do about this?

[Screenshot: memory-usage graphs from the VM monitoring]

@timabbott
Sponsor Member

Hi! Based on those graphs and knowing what was in the 6.0 release, there's no significant change in Zulip's behavior. (It's possible that there's a tiny increase in memory footprint due to upgrading third-party dependencies).

I think what you're seeing is that because you increased the memory available on the host before the upgrade, Zulip switched its default mode for how to run queue processors to a configuration that is more efficient but uses more memory:

https://zulip.readthedocs.io/en/latest/production/deployment.html#queue-workers-multiprocess

Zulip is designed to make use of the memory available on the system -- so for example, memcached will be allocated 1/8 of the system's available memory. The configuration will be updated whenever you run zulip-puppet-apply, which is included in the upgrade process.

If you want to go back to using less memory, you can set that deployment option manually, resize your system, and run zulip-puppet-apply so your configuration changes take effect.
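For example, a minimal sketch of forcing the lower-memory mode (the option name follows the linked documentation; the deployment path assumes a standard install -- verify both against your version):

```sh
# In /etc/zulip/zulip.conf, per the linked docs:
#
#   [application_server]
#   queue_workers_multiprocess = false
#
# Then re-run puppet so the configuration change takes effect:
/home/zulip/deployments/current/scripts/zulip-puppet-apply
```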

@zulipbot
Member

zulipbot commented Dec 6, 2022

Hello @zulip/server-production members, this issue was labeled with the "area: production" label, so you may want to check it out!

@devZer0

devZer0 commented Dec 28, 2022

> Does anyone have any ideas what we can do about this?

@HellMar, if you suspect memory leaks or increased memory usage, I would set up a monitoring script to watch process memory growth over time, something like:

```sh
# Every 60 seconds, log a timestamp plus the 25 processes with the largest RSS.
# Columns: rss (KiB), vsz, pid, elapsed time, command name, full command line.
while true; do
    date
    ps -eo rss,vsz,pid,etime,comm,args | sort -rn | head -n 25
    sleep 60
done > logfile
```
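Once the log has some history, the RSS samples for a single process can be pulled out with awk, e.g. (a sketch; PID 1234 is a placeholder):

```sh
# Columns from the ps call above: rss vsz pid etime comm args, so pid is $3.
# The numeric guard on $1 skips the interleaved date lines.
awk '$3 == "1234" && $1 ~ /^[0-9]+$/ {print $1}' logfile
```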

@andreas-bulling

andreas-bulling commented Mar 26, 2023

I am encountering a similar problem. I am running the latest git version, and at some point in the last couple of weeks the Zulip server suddenly started becoming unavailable frequently (up to twice a day). Investigating the log files (I hadn't touched the configuration or anything else beforehand), I noticed that nginx was killed by the OOM killer, and elsewhere I saw that the connection to rabbitmq was lost. A reboot fixes the problem, but I can't reboot once or even multiple times a day. The machine has 4 GB of RAM.

What is the proposed solution?

@alexmv
Collaborator

alexmv commented Mar 27, 2023

There was a recent regression in the memory footprint of nginx in 1c76036; it was fixed in 262b193.
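If you deploy from git, a quick way to check whether your checkout already contains the fix (a sketch; the standard deployment path is assumed):

```sh
cd /home/zulip/deployments/current
# Exit status 0 means commit 262b193 is an ancestor of the deployed HEAD.
git merge-base --is-ancestor 262b193 HEAD \
    && echo "fix 262b193 present" \
    || echo "fix 262b193 missing"
```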

@alexmv
Collaborator

alexmv commented Apr 19, 2023

I'm going to merge this into #23174, which is also a memory consumption report with recent versions. Please follow along there for further updates.

@alexmv closed this as not planned (duplicate) on Apr 19, 2023