too many child processes from eimp
#4100
Comments
Those are used by mod_avatar, I guess. Is, by any chance, the number of cores equal to the number of eimp processes? Did you set special mod_avatar conversion rules?
We are using a basic setup without any special configuration or modules, except LDAP. We are just using XMPP. And I tried a plain setup on a fresh Debian 12 container with the same result. We are not using mod_avatar.
Please put a cleaned-up ejabberd.yml on https://gist.github.com and attach the link here.
I put up the yml from the fresh install. The file is not modified: https://gist.github.com/frashman123/eedda75008aa7218db275d2a8c654a49
I believe eimp starts one process per core. How many cores does this machine have? (You can check if you run the ejabberd shell with …)
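The exact call in that truncated parenthesis is lost. As a minimal sketch, assuming the standard erlang:system_info/1 introspection API, this is what one could run in an ejabberd shell (e.g. via `ejabberdctl debug`) to see what the VM detected:

```erlang
%% How many logical processors the Erlang VM detected at startup:
erlang:system_info(logical_processors_available).

%% How many schedulers are actually online (per this thread, the eimp
%% worker count tends to follow what the VM detects, not any LXC limit):
erlang:system_info(schedulers_online).
```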
ejabberd runs in a dedicated LXC with 1 core. The host machine runs on 2x 16-core/32-thread AMD EPYC 7282 CPUs (64 threads in total). I just migrated to another host machine with only 12 cores / 24 threads, and it seems you are right: now I am down to 24 processes. This is clearly a bug. Any chance for a workaround? Can I manually set the maximum count of child processes?
...that exposes the real number of cores to the VM? |
Well, yes. As far as I know this is normal. |
So an app might start 24 threads, since it is told that 24 cores exist, but it only has one core actually available? That does not sound OK to me...
We are running a Proxmox cluster on 3 nodes. LXC containers are not KVM; they are more like a Docker container, but with a base OS like Debian. They share the kernel of the host and devices like PCI cards or storage, if configured. So yes, the container reports the host's core count. The question is why eimp spawns child processes depending on how many cores are available.
So we don't have an option to set that manually, and I also don't see a way to tell the Erlang system to use a different value than what it detects from the OS. You could probably try using lxcfs to make the system report values that take the limits set in LXC into consideration.
Hmm, okay, that's fine. I will enable lxcfs for our internal services. But I think you should consider changing this behavior, because in my opinion it makes no sense to create as many image-manipulator processes as there are cores available. Or at least give the option to set the maximum number of child processes. Thank you for your help.
It seems to me that I am suffering from the same problem: way too many eimp child processes.
If I start the server with mod_avatar and mod_http_upload disabled, the eimp child processes still appear. What does eimp actually do?
During Erlang VM startup, it starts all the Erlang applications mentioned in this file, including eimp: ejabberd/src/ejabberd.app.src.script, lines 38 to 41 at commit 426e33d.
If you have no plan to use this library at all (you already disabled mod_avatar and mod_http_upload), then you can try removing the library name from that file.
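A rough illustration of that edit, assuming a conventional OTP application resource layout (the real ejabberd/src/ejabberd.app.src.script is a generated script and its exact contents differ, so treat the names below as placeholders):

```erlang
%% Illustrative .app.src-style resource; NOT the literal contents of
%% ejabberd.app.src.script. Dropping eimp from the applications list keeps
%% the VM from booting the eimp application and its per-core port workers.
{application, ejabberd,
 [{description, "ejabberd"},
  {applications,
   [kernel,
    stdlib,
    sasl,
    ssl
    %% eimp   %% removed: mod_avatar / mod_http_upload are disabled anyway
   ]}]}.
```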
In the end, I simply removed the "executable" flag from the binary. It's still a bug in Erlang/ejabberd... The solution above, setting the correct lxcfs settings, had already been done; I had forgotten about it at that point. It's normal for LXC to see some host parameters (like the CPU count), since it uses the host kernel.
Environment
Bug description
A few days ago we upgraded our Jabber LXC instance from Debian 11 to 12, and since then we have noticed a high number of child processes from eimp. Under Debian 11 there were about ~70 child processes out of a total of 100; under Debian 12 there are 255 child processes. Is this the way it is supposed to be, and can it be stopped? As a reference, I set up a new LXC with Debian 12 and installed ejabberd without any configuration, and I have the same issue.
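As a minimal sketch for counting those workers from inside the VM rather than via systemd, assuming the eimp workers are ordinary Erlang ports whose external command path contains "eimp" (run it in an ejabberd shell):

```erlang
%% Count open ports whose external program name mentions "eimp".
%% erlang:port_info/2 may return undefined for a closing port, hence the case.
length([P || P <- erlang:ports(),
             case erlang:port_info(P, name) of
                 {name, Name} -> string:find(Name, "eimp") =/= nomatch;
                 _ -> false
             end]).
```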
Here is the systemd status output from Debian 11. The Debian 12 output looks similar, but way longer...