

A wrapper for nfdump that uses GNU parallel to take advantage of all of your host's CPUs.

This is developed for use with Nagios Network Analyzer (NNA) but should be easily expanded to work transparently with any application that invokes nfdump.

I call this Version 1.0.1. I have changed some initial default settings and timings and seen some performance improvements.

This wrapper may make a smaller installation of NNA run slower. I have a large site and expect to eventually have about 600 devices sending flow data. Until all of the flows are coming to NNA I will not know if this actually helps. Also, this wrapper is just one way I am making NNA run faster; I have made PHP and JavaScript changes too. With that in mind, I find that NNA is working well so far.

Currently the wrapper works for all the provided data displays except Top Talkers, which is on my list for later. In some cases the Chord Diagrams do not display properly and state "No Data". I have only seen that on Custom Queries that I made myself; in those cases it is possible that the requested output made no sense. All the canned displays seem to work with the wrapper.

The wrapper works by taking the initial command line invoked for nfdump and breaking it into smaller tasks. These tasks run in parallel with each other, and their output is aggregated to provide the originally requested data.

The syntax of certain nfdump parameters lends itself to being broken into smaller pieces. Look at the man page entries for the -t and -M parameters, and also check out the -w parameter. Using this information I create multiple nfdump command lines that can be run in parallel, with the binary results saved in intermediate files. Then I run the original command against the intermediate files. The initial multi-line nfdump commands are run using GNU parallel; the final pass runs against the intermediate binary results. The initial data could be many hundreds of gigabytes, but the intermediate files will be much smaller and as a result the final pass runs very fast. I hope...
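The splitting step can be sketched roughly as follows. This is my own illustration, not the wrapper's actual code: the paths, time values, and intermediate file names are made up, and the real wrapper derives them from the original command line.

```shell
# make_slices START_EPOCH SLICE_SECONDS COUNT
# Emit one nfdump sub-command per time slice. Each sub-command covers its
# own -t window and writes binary output (-w) to its own intermediate file.
make_slices() {
    start=$1; len=$2; count=$3
    i=0
    while [ "$i" -lt "$count" ]; do
        s=$((start + i * len))
        e=$((s + len - 1))
        t1=$(date -u -d "@$s" '+%Y/%m/%d.%H:%M:%S')
        t2=$(date -u -d "@$e" '+%Y/%m/%d.%H:%M:%S')
        echo "nfdump -M /var/flows -t $t1-$t2 -w /tmp/nfd.part.$i"
        i=$((i + 1))
    done
}

# Split one hour (2019-06-15 00:00 UTC) into four 15-minute sub-tasks.
# The real wrapper feeds lines like these to GNU parallel, e.g.:
#   make_slices 1560556800 900 4 | parallel -j "$(nproc)"
# and then runs the original command's flags against /tmp/nfd.part.* files.
make_slices 1560556800 900 4
```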

Prerequisite Perl Modules:

Prerequisite Software:
GNU Parallel
Follow their simple install instructions. Nothing special.

nfdump version 1.6.15 or newer is required.
From: for version 1.6.15
Clone the source, change to the source directory, and run:
autoreconf -if
./configure --enable-nsel
make install
Then stop NNA and httpd, and restart them. Make sure the nfcapd processes get stopped and are restarted with the new version.
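One way to confirm the upgraded binary is actually the one in use is a simple version comparison. The `version_ge` helper and the parsing of `nfdump -V` output below are my own sketch, not part of the wrapper:

```shell
# version_ge A B -- succeed when dotted version A is >= B.
# Relies on GNU sort -V for version-aware ordering.
version_ge() {
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n 1)" = "$2" ]
}

if version_ge "1.6.16" "1.6.15"; then
    echo "version ok"
else
    echo "upgrade nfdump"
fi

# Against a live install you might extract the version first (the exact
# output format of 'nfdump -V' can differ by release, so check yours):
#   installed=$(nfdump -V | grep -Eo '[0-9]+\.[0-9.]+' | head -n 1)
#   version_ge "$installed" 1.6.15 || echo "upgrade nfdump"
```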

Wrapper Installation:
To transparently invoke the wrapper, rename the current nfdump file, and replace it with the wrapper script. I did this as follows. Feel free to improvise. I did.

Make locations for the perl script.
mkdir /usr/local/nfdumpWrapper

Copy the wrapper to the new location.
cp /usr/local/nfdumpWrapper

Make sure the wrapper is executable.
chmod 755 /usr/local/nfdumpWrapper/

Now, hide the original nfdump by renaming it and prefixing a dot to the name making it a hidden file.
mv /usr/local/bin/nfdump /usr/local/bin/.nfdump

Put the wrapper in place of nfdump by linking to the original location.
ln -s /usr/local/nfdumpWrapper/ /usr/local/bin/nfdump

The top of the script has some items that you may want to edit, such as the default directory locations. I am using /tmp as the root location for the three directories the code uses. It will create the subdirectories it wants if they do not already exist. I suggest letting the GUI application run the wrapper the first time so the directories are created with the correct ownerships. If you run the program manually the first time, your mileage may vary.

Possible problems:
I encountered a situation where the httpd process runs in a protected space. In that space there is a private tmp directory that httpd treats as the root /tmp location. I did not figure this out initially, so when I did find a solution I took the easy way out. The system I run this on is Red Hat 7. The httpd process creates a protected directory in /tmp using a long convoluted name. Maybe you have seen it and wondered what it was for. I ignored it until it prevented me from doing what I needed.

In the /tmp directory you will see protected directories that look like this:
drwx------ 3 root root 16 Jun 15 04:58 systemd-private-03d2c892c043485883d4e9e39bcd699a-httpd.service-V47y1Q

In that directory with httpd in the name is another tmp directory that is the protected space. I found that it was difficult to deal with files hidden there and decided to run httpd in unprotected space. This was done as follows:
cp /usr/lib/systemd/system/httpd.service /etc/systemd/system/httpd.service
vi /etc/systemd/system/httpd.service
Change the line PrivateTmp=true to PrivateTmp=false.
Tell the OS that you changed a startup file.
systemctl daemon-reload

Stop and start httpd
systemctl stop httpd.service
systemctl start httpd.service

Stop and start NNA
systemctl stop nagiosna
systemctl start nagiosna

At this time, the httpd locked down directory in /tmp should be gone.

The Perl script will now use /tmp to make intermediate files and queue files. It should clean up after itself; look for items in /tmp whose names start with nfd.
This shared /tmp is a requirement for the wrapper to work properly.
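A quick way to check whether anything was left behind is a one-line find. The function name and the directory argument here are my own, for illustration:

```shell
# check_leftovers DIR -- list any wrapper artifacts (names starting with
# "nfd") sitting directly in DIR; prints nothing when cleanup worked.
check_leftovers() {
    find "$1" -maxdepth 1 -name 'nfd*' 2>/dev/null
}

check_leftovers /tmp
```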

To Do:
Make an install script.

