
# Node++ memory models

| macro | max connections | max sessions | NPP_OUT_BUFSIZE (KiB) | Node++ Hello World footprint |
|---|---:|---:|---:|---:|
| NPP_MEM_TINY | 10 | 5 | 64 | 6 MiB |
| NPP_MEM_SMALL | 20 | 10 | 128 | 19 MiB |
| NPP_MEM_MEDIUM (default) | 200 | 100 | 256 | 70 MiB |
| NPP_MEM_LARGE | 1,000 | 500 | 256 | 286 MiB |
| NPP_MEM_XLARGE | 5,000 | 2,500 | 256 | 1.33 GiB |
| NPP_MEM_XXLARGE | 10,000 | 5,000 | 256 | 2.65 GiB |
| NPP_MEM_XXXLARGE | 20,000 | 10,000 | 256 | 5.29 GiB |
| NPP_MEM_XXXXLARGE | 50,000 | 25,000 | 256 | 13.20 GiB |
| NPP_MEM_XXXXXLARGE | 100,000 | 50,000 | 256 | 25 GiB |
| NPP_MEM_XXXXXXLARGE | 200,000 | 100,000 | 256 | 52 GiB |

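The memory model is chosen at compile time. Below is a minimal sketch, assuming the macro is defined in the application's npp_app.h alongside the other NPP_* compile-time switches; only one NPP_MEM_* macro should be defined at a time.

```c
/* npp_app.h -- sketch only; assumes the memory model is selected by a
   compile-time switch in the application header before building */

/* Pick one memory model; if none is defined, NPP_MEM_MEDIUM is the default. */
#define NPP_MEM_SMALL    /* up to 20 connections, 10 sessions, 128 KiB output buffer */
```

The table above gives the resulting limits and the approximate footprint of the Hello World application for each model.
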
When all connections or sessions are taken, the next request receives HTTP status 503 (Service Unavailable).

Memory requirements heavily depend on your application profile, particularly on how much data you store in each user session. Current memory usage is printed at the beginning and at the end of each log file, like this:

```
Memory: 13 216 kB (12.91 MiB / 0.01 GiB)
```
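
As an illustration of how per-session data scales (the struct name and fields below are hypothetical, invented for this example and not part of the Node++ API), consider an application that keeps a few kilobytes of state per user:

```c
/* Hypothetical per-user state -- names invented for illustration only */
typedef struct {
    char shopping_cart_json[16 * 1024];   /* 16 KiB of serialized state */
    int  preferences[64];
} my_session_data_t;

/* With NPP_MEM_MEDIUM (100 sessions) this alone adds roughly
   100 * sizeof(my_session_data_t) ~ 100 * 16.3 KiB ~ 1.6 MiB
   on top of the footprints listed in the table above. */
```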

When the output buffer needs to be reallocated, it is not shrunk back afterwards. Hence, if the rendered output happens to exceed NPP_OUT_BUFSIZE, the memory HWM (high-water mark) may keep growing. You can prevent this by defining NPP_OUT_CHECK or NPP_OUT_FAST.
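
A sketch of enabling one of these switches, assuming they are defined in npp_app.h in the same way as the memory model macro:

```c
/* npp_app.h -- sketch; per the note above, defining either switch prevents
   the output buffer (and thus the memory HWM) from growing past NPP_OUT_BUFSIZE */
#define NPP_OUT_CHECK    /* or: #define NPP_OUT_FAST */
```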

NPP_MEM_XXLARGE and above require a 64-bit build (the default with GCC on Linux).

On Windows, or when NPP_FD_MON_SELECT is used, the maximum number of connections is reduced to FD_SETSIZE-2.

On Linux you may notice much lower memory usage than declared by NPP_MEM_* until you actually try to use it (for example, when more connections need to be served simultaneously). This is a Linux feature called overcommit. Be aware that if you expect, say, up to 100,000 sessions at some point, you need at least 52 GiB of RAM available just for the npp_app process. Otherwise the system's OOM killer will start killing processes it considers guilty of memory overconsumption.
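
A back-of-envelope check of that figure (the breakdown is an assumption on my part: it supposes the per-connection output buffers dominate the footprint):

```c
#include <stdio.h>

/* NPP_MEM_XXXXXXLARGE from the table: 200,000 connections, 100,000 sessions,
   256 KiB output buffer per connection */
int main(void)
{
    const double connections     = 200000.0;
    const double out_bufsize_kib = 256.0;    /* NPP_OUT_BUFSIZE */

    double out_buffers_gib = connections * out_bufsize_kib / (1024.0 * 1024.0);

    /* Prints ~48.8 GiB; sessions and other per-connection structures
       plausibly account for the rest of the 52 GiB listed in the table. */
    printf("Output buffers alone: %.1f GiB\n", out_buffers_gib);
    return 0;
}
```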
