
Ninja and RAM/Memory usage #2187

Open
rubyFeedback opened this issue Aug 27, 2022 · 4 comments

Comments

@rubyFeedback

Hey folks.

I have been using meson + ninja for about 3 years now, largely because other devs
increasingly switched to it. Meson and ninja are an awesome combination. Hopefully
we can abandon the GNU configure legacy one day. But anyway, this is a side
issue.

I am very happy with the speed of meson + ninja, which is great. There is, however,
one concern that I have had in the past, and it is in regards to the memory /
RAM usage.

This may be my old computer here from 2017 failing slowly, or something. I have
16GB RAM and it is a fairly fast computer. I compile everything from source.

But in the last, say, 12 months or so I am noticing memory/RAM issues.

This was the case in the past too, but it also deadlocks and freezes my
computer. Again, this may be hardware related, but the thing is, it only
freezes in certain projects, most of which use ninja.

For instance, an hour ago I tried to compile webkitgtk using these
instructions:

https://www.linuxfromscratch.org/blfs/view/svn/x/webkitgtk.html

This one is using ninja.

A few hours before that I had a similar issue with compiling the
most recent node from source.

https://www.linuxfromscratch.org/blfs/view/svn/general/nodejs.html

This one does not use ninja, though, so it may well be that
something is wrong with my computer here. I am, however,
not really writing this issue from the point of view of my computer
alone, because the secondary problem I have is that this
almost consistently keeps happening whenever I try to
compile a larger program. It also happened to me with LLVM
+ cmake recently, and perhaps 2 years ago, on the same
machine, I did not have that issue. GCC versions changed
too, binutils as well, so perhaps my hardware is ok but the
software is not so much ok.

Anyway.

So what does this have to do with ninja?

Well - my suspicion is that this all has to do, one way or
another, with RAM, or perhaps the CPU is
semi-flawed or something.

Would it be possible to add more "fine-tuned" control
to ninja itself, directly? For instance, for the bigger projects
I don't mind if ninja compiles without maximum speed. I
understand your rationale for that as the default - after
all a ninja is fast and ninja should be fast. But a ninja
should not die in the middle of the mission and get
intercepted! No noob ninjas please. Translation: no
failures when RAM or other issues arise.

Perhaps there are some conditionals or config values
I could shift around, e.g. that I could set before
trying to compile, say, webkitgtk, so that ninja is very
careful and not too aggressive. For smaller programs
I never run into any of these issues, by the way. So I am not
sure what is up (or perhaps my CPU gets overloaded;
I have to clean this machine up soon anyway).

Could more control be added over RAM usage
and how greedy ninja is? I understand this is not a
trivial issue to work with, so please feel free to close it,
but it would be nice to get some feedback about this. If
I am the only one with any such issues then that's ok, but
perhaps others have also encountered problems (and,
as said, it is hard to say what is at fault - it could be meson
too, after all. But at the end of the day, even if these are
separate projects or separate problems, they may indirectly
affect ninja too, if people on lower-cost machines cannot
easily compile stuff).
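(Aside: two knobs already exist in Ninja today for exactly this kind of "careful, not too aggressive" build: `-j` caps the number of parallel jobs, and `-l` stops new jobs from being started while the system load average is above a threshold. The numbers below are only illustrative; neither flag tracks memory directly, but both rein in how many compilers/linkers run at once.)

```sh
# Cap Ninja at 4 parallel jobs instead of the default (CPU threads + 2),
# and hold off starting new jobs while the 1-minute load average is above 8.
ninja -j 4 -l 8
```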

@jhasse
Collaborator

jhasse commented Sep 4, 2022

In the last couple of years, linkers have required more RAM due to LTO. Also, the number of available threads grew (SMT, Ryzen, ...) while RAM didn't grow as much. This has made Ninja's assumption (number of threads + 2 as the default number of jobs) optimistic for many builds.

What to do? In the end we could add more monitoring to Ninja to check if the memory usage exceeds a threshold and start to freeze jobs.
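(For the "freeze jobs" part: on POSIX systems this maps onto SIGSTOP/SIGCONT. A minimal sketch of what such a monitor could do per job - the function names are purely illustrative, nothing Ninja provides today. Note a stopped job keeps the memory it already holds; freezing only prevents it from allocating more while it sits idle.)

```sh
#!/bin/sh
# Freeze a build job by PID when memory is tight, thaw it once pressure
# eases. A stopped process keeps its state but consumes no CPU and
# allocates no further memory.
freeze_job() { kill -STOP "$1"; }
thaw_job()   { kill -CONT "$1"; }
```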

But the correct solution would be to fix this at the kernel level. Tell the kernel "hey here are a bunch of processes, they don't need to run in parallel if there isn't enough memory".
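(Until something like that exists, a cgroup cap comes close on Linux: running the whole build under one memory limit makes the kernel treat the jobs as a group under pressure, instead of the OOM killer picking off arbitrary processes. A sketch using systemd-run - the property names are real systemd resource-control properties, the limits are illustrative for a 16 GB machine:)

```sh
# Run the whole build in a transient scope capped at 12 GB;
# MemoryHigh throttles the group before MemoryMax would hard-kill it.
systemd-run --scope -p MemoryHigh=10G -p MemoryMax=12G ninja
```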

@bmcpt

bmcpt commented Sep 16, 2022

@jhasse An easy solution is to just not spawn new tasks if there is less than, say, 2 GB of RAM free. This would avoid the absolute worst-case scenario. Build tasks are one of the few things that use a lot of RAM; avoiding a system crash is partly their responsibility as well.
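(On Linux, that "free RAM" check could read `MemAvailable` from `/proc/meminfo`, which estimates memory available for new work without swapping. A minimal sketch of the gate - the function name and the 2 GB threshold are illustrative, not anything Ninja implements:)

```sh
#!/bin/sh
# Gate: succeed (exit 0) only when available memory is at or above the
# threshold, i.e. when it is considered safe to spawn another job.
#   $1 = MemAvailable in kB, $2 = threshold in kB
can_spawn_job() {
    [ "$1" -ge "$2" ]
}

threshold_kb=$((2 * 1024 * 1024))  # 2 GB, as in the comment above
# MemAvailable is Linux-specific; fall back to 0 where /proc is absent.
avail_kb=$(awk '/^MemAvailable:/ {print $2}' /proc/meminfo 2>/dev/null)
avail_kb=${avail_kb:-0}

if can_spawn_job "$avail_kb" "$threshold_kb"; then
    echo "ok: ${avail_kb} kB available, spawn the next job"
else
    echo "low memory: ${avail_kb} kB available, hold new jobs"
fi
```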

@chmorgan

Just started using ninja here. I kept coming back to my terminal window being gone; confirmed it's the OOM killer ending the process. I love the idea of maxing out compute resources to reduce build time, but ninja should really do something smart about spawning new processes once system memory has been exhausted. It took four tries to complete a rebuild of a relatively complex C++ project (KiCad) in a 16 GB qemu instance on an 8-core i9 x86 MacBook. I bet lots of other people are hitting the same issue and giving up on ninja without looking into what's up.

@LeChatP

LeChatP commented Mar 17, 2023

Same for me: I am trying to compile llvm with ninja. Even in a high-performance VM with 10 threads, 8 GB RAM + 36 GB swap (NVMe PCIe 4), it still gets killed by the OOM killer... Ninja just leaks memory.

Labels: none yet
Projects: none yet
Development: no branches or pull requests
5 participants