
Fix memory leak in cmd parser #404

Merged: 2 commits into naemon:master on Nov 29, 2022
Conversation

@sni (Contributor) commented Nov 25, 2022

Fixing two issues here. If you'd like separate PRs, just tell me :-)

The first one fixes a memory leak in the cmd parser which makes the worker processes grow, which
in turn leads to more CPU usage during all the forking (along with the wasted memory).

The second one came to my attention while hunting the leak: those two allocations were by far the biggest ones on my test system, and both simply depend on the maximum number of open files.
I added a max value high enough to hopefully be future proof without needlessly wasting memory.
I don't think any naemon worker will ever sanely use more than 100k open file handles. I considered adding a log entry or another warning, but at that point there is no access to the log and stderr/stdout is already closed.
With those patches, memory consumption seems stable.
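For illustration, a minimal sketch of the kind of leak pattern and fix involved; the struct and function names here are hypothetical, not naemon's actual cmd parser API. The parser allocates per-command state on every call, and the fix is to free that state once the command has been dispatched.

#include <stdlib.h>
#include <string.h>

/* Hypothetical parser state; not naemon's actual types. */
struct parsed_cmd {
    char *raw;    /* heap copy of the command line */
    char **argv;  /* argument vector pointing into raw */
};

struct parsed_cmd *cmd_parse(const char *line)
{
    struct parsed_cmd *cmd = calloc(1, sizeof(*cmd));
    if (!cmd)
        return NULL;
    cmd->raw = strdup(line);
    /* ... tokenize cmd->raw into cmd->argv ... */
    return cmd;
}

/* The fix: release everything the parser allocated once the command has
 * been dispatched, instead of leaking it on every invocation and letting
 * long-lived worker processes grow. */
void cmd_destroy(struct parsed_cmd *cmd)
{
    if (!cmd)
        return;
    free(cmd->argv);
    free(cmd->raw);
    free(cmd);
}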

lib/iobroker.c (outdated)
/* add sane max limit, if ulimit is set to unlimited or a very high value we
* don't want to waste memory for nothing */
if (iobs->max_fds > 100000)
iobs->max_fds = 100000;
Contributor:

Could we make this configurable just in case someone wants a different value?

Contributor Author (@sni):

I thought about it as well. It is usually limited by ulimit -n, so if you want a smaller value, set a sane ulimit.
Not sure if a config option is required; maybe a compile-time constant?

Contributor:

Yeah, I guess it is mainly larger values that one would need to change it for.

There shouldn't be much need to increase it above this limit, I suppose, but a configuration option somewhere (compile time or runtime) seems nicer than a "magic" constant.
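As a rough sketch of how that could look: a compile-time default that can be overridden at build time rather than a bare literal in the code. The IOBROKER_MAX_USABLE_FDS macro and the iobroker_clamp_max_fds function name are hypothetical, not necessarily what the PR ended up using.

/* Hypothetical build-time override; pass e.g. -DIOBROKER_MAX_USABLE_FDS=200000
 * via CFLAGS to raise the cap without touching the source. */
#ifndef IOBROKER_MAX_USABLE_FDS
#define IOBROKER_MAX_USABLE_FDS 100000
#endif

static int iobroker_clamp_max_fds(int max_fds)
{
    /* Keep whatever ulimit -n gave us, but never allocate for more
     * descriptors than the compile-time cap. */
    if (max_fds > IOBROKER_MAX_USABLE_FDS)
        return IOBROKER_MAX_USABLE_FDS;
    return max_fds;
}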

lib/runcmd.c (resolved)
Usually ulimit -n has sane values somewhere between 1000 and 10000. Some
container managers like docker set higher values, so naemon might end up
with an ulimit in the millions, which just wastes memory.
If a single worker hits the open file limit, simply start more workers.
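For context, a minimal sketch of how such an fd budget is typically derived from ulimit -n (RLIMIT_NOFILE) and then capped. This illustrates the general approach under stated assumptions rather than copying naemon's iobroker code; the function name usable_max_fds and the fallback value are made up.

#include <sys/resource.h>

static long usable_max_fds(void)
{
    struct rlimit rl;
    long max_fds = 1024;  /* conservative fallback if the limit is unknown */

    /* RLIMIT_NOFILE is what "ulimit -n" reports. */
    if (getrlimit(RLIMIT_NOFILE, &rl) == 0 && rl.rlim_cur != RLIM_INFINITY)
        max_fds = (long)rl.rlim_cur;

    /* Cap it so a container-inflated limit in the millions does not turn
     * into equally large per-worker allocations. */
    if (max_fds > 100000)
        max_fds = 100000;

    return max_fds;
}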
@sni force-pushed the fix_memory_leak_in_cmd_parser branch from f41db3d to a80eccf on November 28, 2022, 17:08
@sni (Contributor Author) commented Nov 28, 2022

I removed the magic... :-)

@jacobbaungard (Contributor) left a comment

Great, thanks!

@sni merged commit a807cb0 into naemon:master on Nov 29, 2022
@sni deleted the fix_memory_leak_in_cmd_parser branch on February 9, 2024