Set up the IPFIXcol

If you don't have any experience with the IPFIXcol, please read at least the [collecting flow records](Collecting flow records) section of this wiki. If you've already worked with the IPFIXcol, read it anyway.

Run the installation script

On an arbitrary node run:

./install.sh ipfixcol

Verify the installation

OCF resource agent and its metadata file should be present:

$ ls -l /usr/lib/ocf/resource.d/cesnet/
total 16
-rw-r--r-- 1 root root  3704 Jul 28 13:16 ipfixcol.metadata
-rwxr-xr-x 1 root root 12046 Jul 28 13:16 ipfixcol.sh

Automatically generated IPFIXcol configuration files should be present:

$ ls -l /data/conf/ipfixcol/
total 3
-rw-r--r-- 1 root root 1568 Jul 28 13:16 startup-proxy.xml
-rw-r--r-- 1 root root  979 Jul 28 13:16 startup-subcollector.xml

Configuration

In the previous step we created a shared GlusterFS volume conf; now we are going to use it as storage for the IPFIXcol startup configuration files. The shared volume brings a big benefit: the configuration files are always consistent across all the nodes, and we can create, modify, or remove them from any node.
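
If you want to verify that the shared volume is really in place before editing the files, a quick look at the mount table is enough (a sketch, assuming the /data/conf mount point from the example configuration; a GlusterFS client mount shows up with the fuse.glusterfs file system type):

$ mount | grep /data/conf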

Both the proxy and the subcollector configuration files have been generated during installation according to install.conf, but you should verify that all the options are correct. Feel free to modify them as you want or need!

Proxy

The proxy startup configuration file is stored at the path ipfixcol/startup-proxy.xml relative to your conf volume mount point (/data/conf/ipfixcol/startup-proxy.xml according to the example configuration).

In the example below, you can see the collectingProcess element as it was generated by the installation script. It uses the UDP-CPG input plugin, which is an ordinary UDP input plugin with support for synchronization over CPG (closed process group). The proxy will listen on port 4739 on all network interfaces.

<collectingProcess>
  <name>UDP-CPG collector</name>
  <udp-cpgCollector>
    <name>Listening port 4739</name>
    <localPort>4739</localPort>
    <templateLifeTime>1800</templateLifeTime>
    <optionsTemplateLifeTime>1800</optionsTemplateLifeTime>
    <CPGName>ipfixcol</CPGName>
  </udp-cpgCollector>
  <exportingProcess>Forward UDP</exportingProcess>
</collectingProcess>
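
Once IPFIXcol is running with this configuration, you can check that the proxy is actually listening on the UDP port (a sketch, assuming the default port 4739 from the generated file):

$ ss -lnu | grep 4739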

The proxy exportingProcess element generated by the installation script uses the forwarding output plugin with round-robin data distribution. Each node defined in a <destination> element will be a target of this distribution, so make sure all the subcollectors are present. The forwarding plugin uses the TCP transport protocol and the default port is set to 4741.

<exportingProcess>
  <name>Forward UDP</name>
  <destination>
    <name>Forward flows to collectors</name>
    <fileWriter>
      <fileFormat>forwarding</fileFormat>
      <distribution>RoundRobin</distribution>
      <defaultPort>4741</defaultPort>
      <destination>
        <ip>sub1.example.org</ip>
      </destination>
      <destination>
        <ip>sub2.example.org</ip>
      </destination>
      <destination>
        <ip>sub3.example.org</ip>
      </destination>
    </fileWriter>
  </destination>
  <singleManager>yes</singleManager>
</exportingProcess>
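
Before relying on the round-robin distribution, it may be worth checking from the proxy node that every subcollector's TCP port is reachable (a sketch using the host names from the generated example; the subcollectors must already be listening):

$ for h in sub1.example.org sub2.example.org sub3.example.org; do nc -zv $h 4741; done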

Subcollector

The subcollector startup configuration file is stored at the path ipfixcol/startup-subcollector.xml relative to your conf volume mount point (/data/conf/ipfixcol/startup-subcollector.xml according to the example configuration).

The collectingProcess element on the subcollector has to match the proxy's exportingProcess. Therefore it uses TCP and listens on port 4741.

<collectingProcess>
  <name>TCP collector</name>
  <tcpCollector>
    <name>Listening port 4741</name>
    <localPort>4741</localPort>
  </tcpCollector>
  <exportingProcess>Store flows</exportingProcess>
</collectingProcess>
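
Again, once the subcollector is running you can verify that it is listening (a sketch, assuming TCP port 4741 as above):

$ ss -lnt | grep 4741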

The exportingProcess of the subcollector defines all the aspects of the flow files: format, path, file prefix and suffix, and many more. The autogenerated configuration uses the lnfstore output plugin and the storage path is set to /data/flow/%h/, where /data/flow/ is our flow volume mount point and %h is a special character sequence which is substituted with the hostname of the node. The %h sequence makes it possible to use a shared configuration file for all the subcollectors: each node writes files to "its own" subdirectory on the shared GlusterFS flow volume.

<exportingProcess>
  <name>Store flows</name>
  <destination>
    <name>Storage</name>
    <fileWriter>
      <fileFormat>lnfstore</fileFormat>
      <profiles>no</profiles>
      <storagePath>/data/flow/%h/</storagePath>
      <prefix>nfcapd.</prefix>
      <suffixMask>%Y%m%d%H%M%S</suffixMask>
      <identificatorField>securitycloud</identificatorField>
      <compress>yes</compress>
      <dumpInterval>
        <timeWindow>300</timeWindow>
        <align>yes</align>
      </dumpInterval>
    </fileWriter>
  </destination>
  <singleManager>yes</singleManager>
</exportingProcess>
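
With this configuration, a new flow file is created every 300 seconds (five minutes), aligned to five-minute boundaries thanks to <align>, and named from the prefix and the suffix mask, e.g. nfcapd.20160816120000. To see what a node has written so far, list its subdirectory (a sketch, assuming %h expands to the same string that hostname(1) prints):

$ ls /data/flow/$(hostname)/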