
Merge tag 'v1.7.0' into current

* tag 'v1.7.0': (244 commits)
  * Updated docs.
  * Updated docs.
  * fix, BGP daemon: when dumping BGP data at regular time intervals, dump_close message contained wrongly formatted timestamp. Thanks to Yuri Lachin for reporting the issue.
  * fix, AMQP, Kafka, print plugins: execute external trigger (ie. print_trigger_exec) only if not purging due to a safe action (ie. cache space is finished).
  * Updated docs.
  * fix, pidfile: pidfile created by plugin processes was not removed. Thanks to Yuri Lachin for reporting the issue.
  * Bumped build number
  UPGRADE changes per PR #160
  Update docs for sql_num_hosts now using INET6_ATON()
  * Updated docs.
  * Updated docs.
  * fix, sfacctd: compiling issue on --disable-ipv6 flag.
  * MySQL plugin: if --enable-ipv6 and sql_num_hosts set to true, use INET6_ATON for both v4 and v6 addresses. Fix to PR #151. Thanks to Guy Lowe ( @gunkaaa ) for his support.
  * sql_aggressive_classification: discontinuing feature.
  * Updated docs. Related to Issue #159
  * Updated docs.
  * tee_receiver: discontinuing feature. tee_receivers to be used instead.
  * sfacctd: ported nfacctd_time_new feature to the sFlow-based daemon.
  * nfacctd: introducing pcap_savefile support (8)
  * pmacctd: ported nfacctd_time_new feature to the libpcap-based daemon.
  * fix, nfacctd: added flowset count check to existing length checks for NetFlow v9 datagrams. Thanks to Steffen Plotner for reporting the issue.
c-po committed Sep 1, 2018
2 parents 313a010 + c9d6b21 commit 22b9be62d8e1eba71db8391fe86ad95181645887
Showing with 8,433 additions and 4,411 deletions.
  1. +12 −7 .travis.yml
  2. +3 −1 AUTHORS
  3. +297 −319 CONFIG-KEYS
  4. +161 −3 ChangeLog
  5. +47 −16 FAQS
  6. +338 −199 QUICKSTART
  7. +1 −1
  8. +5 −5 TOOLS
  9. +48 −1 UPGRADE
  10. +4 −0 bin/configure-help-replace.txt
  11. +118 −18
  12. +9 −10 docs/INTERNALS
  13. +9 −9 docs/SIGNALS
  14. +9 −8 docs/TRIGGER_VARS
  15. +43 −12 examples/amqp/
  16. +183 −0 examples/kafka/
  17. +25 −16 examples/kafka/
  18. +2 −2 examples/networks.lst.example
  19. +33 −29 examples/
  20. +6 −5 examples/primitives.lst.example
  21. +3 −0 examples/tee_receivers.lst.example
  22. +0 −76 m4/ac_check_typedef.m4
  23. +84 −0 m4/ax_lib_mysql.m4
  24. +11 −0 sql/README.IPv6
  25. +44 −22 sql/README.mysql
  26. +12 −5 sql/README.pgsql
  27. +17 −5 sql/README.sqlite3
  28. +28 −0 sql/README.tunnel
  29. +34 −2 src/
  30. +37 −2 src/acct.c
  31. +43 −4 src/addr.c
  32. +5 −2 src/addr.h
  33. +47 −180 src/amqp_common.c
  34. +4 −3 src/amqp_common.h
  35. +38 −26 src/amqp_plugin.c
  36. +1 −18 src/amqp_plugin.h
  37. +16 −10 src/bgp/bgp.c
  38. +1 −1 src/bgp/bgp.h
  39. +4 −4 src/bgp/bgp_ecommunity.c
  40. +1 −1 src/bgp/bgp_logdump.c
  41. +6 −6 src/bgp/bgp_lookup.c
  42. +50 −25 src/bgp/bgp_msg.c
  43. +1 −1 src/bgp/bgp_packet.h
  44. +103 −23 src/bgp/bgp_util.c
  45. +7 −4 src/bgp/bgp_util.h
  46. +1 −1 src/bmp/bmp_util.c
  47. +19 −22 src/cfg.h
  48. +226 −353 src/cfg_handlers.c
  49. +19 −22 src/cfg_handlers.h
  50. +31 −5 src/classifier.c
  51. +1 −0 src/classifier.h
  52. +16 −16 src/conntrack.c
  53. +65 −84 src/imt_plugin.c
  54. +5 −3 src/imt_plugin.h
  55. +11 −11 src/ip_flow.c
  56. +3 −3 src/ip_flow.h
  57. +19 −11 src/ip_frag.c
  58. +3 −3 src/isis/isis.c
  59. +40 −87 src/kafka_common.c
  60. +3 −3 src/kafka_common.h
  61. +35 −31 src/kafka_plugin.c
  62. +1 −18 src/kafka_plugin.h
  63. +3 −109 src/ll.c
  64. +5 −1 src/log.h
  65. +107 −81 src/mongodb_plugin.c
  66. +3 −13 src/mongodb_plugin.h
  67. +40 −32 src/mysql_plugin.c
  68. +2 −1 src/mysql_plugin.h
  69. +7 −0 src/ndpi/
  70. +499 −0 src/ndpi/ndpi.c
  71. +148 −0 src/ndpi/ndpi.h
  72. +98 −0 src/ndpi/ndpi_util.c
  73. +30 −0 src/ndpi/ndpi_util.h
  74. +42 −11 src/net_aggr.c
  75. +31 −31 src/network.h
  76. +277 −201 src/nfacctd.c
  77. +18 −8 src/nfacctd.h
  78. +45 −3 src/nfprobe_plugin/netflow9.c
  79. +34 −66 src/nfprobe_plugin/nfprobe_plugin.c
  80. +3 −0 src/nfprobe_plugin/nfprobe_plugin.h
  81. +37 −37 src/nfv8_handlers.c
  82. +671 −36 src/nfv9_template.c
  83. +98 −28 src/nl.c
  84. +2 −2 src/once.h
  85. +41 −33 src/pgsql_plugin.c
  86. +2 −1 src/pgsql_plugin.h
  87. +252 −229 src/pkt_handlers.c
  88. +18 −5 src/pkt_handlers.h
  89. +78 −47 src/plugin_cmn_avro.c
  90. +8 −5 src/plugin_cmn_avro.h
  91. +79 −8 src/plugin_cmn_json.c
  92. +7 −0 src/plugin_cmn_json.h
  93. +63 −14 src/plugin_common.c
  94. +10 −3 src/plugin_common.h
  95. +39 −356 src/plugin_hooks.c
  96. +10 −50 src/plugin_hooks.h
  97. +1 −1 src/pmacct-build.h
  98. +36 −35 src/pmacct-data.h
  99. +35 −18 src/pmacct-defines.h
  100. +251 −139 src/pmacct.c
  101. +11 −1 src/pmacct.h
  102. +128 −47 src/pmacctd.c
  103. +21 −7 src/pmbgpd.c
  104. +21 −7 src/pmbmpd.c
  105. +620 −0 src/pmsearch.c
  106. +88 −0 src/pmsearch.h
  107. +22 −8 src/pmtelemetryd.c
  108. +9 −1 src/pretag-data.h
  109. +38 −36 src/pretag.h
  110. +116 −31 src/pretag_handlers.c
  111. +4 −0 src/pretag_handlers.h
  112. +176 −140 src/print_plugin.c
  113. +2 −6 src/print_plugin.h
  114. +46 −5 src/server.c
  115. +217 −159 src/sfacctd.c
  116. +15 −0 src/sfacctd.h
  117. +52 −2 src/sflow.c
  118. +2 −1 src/sflow.h
  119. +10 −0 src/sfprobe_plugin/sflow.h
  120. +14 −0 src/sfprobe_plugin/sflow_receiver.c
  121. +42 −72 src/sfprobe_plugin/sfprobe_plugin.c
  122. +212 −155 src/sql_common.c
  123. +16 −3 src/sql_common.h
  124. +2 −1 src/sql_common_m.c
  125. +135 −26 src/sql_handlers.c
  126. +40 −32 src/sqlite3_plugin.c
  127. +2 −1 src/sqlite3_plugin.h
  128. +46 −97 src/tee_plugin/tee_plugin.c
  129. +3 −2 src/tee_plugin/tee_plugin.h
  130. +2 −1 src/tee_plugin/tee_recvs-data.h
  131. +23 −1 src/tee_plugin/tee_recvs.c
  132. +2 −1 src/tee_plugin/tee_recvs.h
  133. +47 −14 src/uacctd.c
  134. +209 −186 src/util.c
  135. +8 −10 src/util.h
  136. +372 −0 src/zmq_common.c
  137. +82 −0 src/zmq_common.h
@@ -1,20 +1,26 @@
language: C
compiler: gcc
dist: trusty
sudo: required

- sudo apt-get install pkg-config libtool autoconf automake cmake bash
- git clone
- cd jansson && autoreconf -i && ./configure && make && sudo make install && cd ..
- git clone
- cd librdkafka && ./configure && make && sudo make install && cd ..
- git clone
- cd rabbitmq-c && mkdir build && cd build && cmake .. && sudo cmake --build . --target install && cd .. && cd ..
- wget
- tar fxz libmaxminddb-1.1.2.tar.gz
- cd libmaxminddb-1.1.2 && ./configure && make && sudo make install && cd ..
- git clone --recursive
- cd libmaxminddb && ./bootstrap && ./configure && make && sudo make install && cd ..
- git clone -b 2.0-stable
- cd nDPI && ./ && ./configure && make && sudo make install && cd ..
- git clone
- cd libzmq && ./ && ./configure && make && sudo make install && cd ..

- ./
- ./configure --enable-mysql --enable-pgsql --enable-sqlite3 --enable-kafka --enable-geoipv2 --enable-jansson --enable-rabbitmq --enable-nflog --enable-ndpi --enable-zmq
- make
- sudo make install
- make distcheck
@@ -26,9 +32,8 @@ addons:
- libpq-dev
- libsqlite3-dev
- libmysqlclient-dev
- libjansson-dev
- libnetfilter-log-dev
- mysql-client-5.6
- libtool
- autoconf
- automake
@@ -24,9 +24,9 @@ Thanks to the following people for their strong support over time:
Robert Blechinger
Stefano Birmani
Pier Carlo Chiodi
Arnaud De-Bermingham
Francois Deppierraz
Marcello Di Leonardo
Pierre Francois
Rich Gade
Aaron Glenn
@@ -46,7 +46,9 @@ Thanks to the following people for their strong support over time:
Gabriel Snook
Rene Stoutjesdijk
Thomas Telkamp
Matthieu Texier
Stig Thormodsrud
Luca Tosolini
Brent Van Dussen
Markus Weber
Chris Wilson

164 ChangeLog
@@ -2,9 +2,167 @@ pmacct [IP traffic accounting : BGP : BMP : IGP : Streaming Telemetry]
pmacct is Copyright (C) 2003-2017 by Paolo Lucente

The keys used are:
!: fixed/modified feature, -: deleted feature, +: new feature

1.7.0 -- XX-10-2017
+ ZeroMQ integration: by defining plugin_pipe_zmq to 'true', ZeroMQ is
used for queueing between the Core Process and plugins. This is in
alternative to the home-grown circular queue implementation (ie.
plugin_pipe_size). plugin_pipe_zmq_profile can be set to one value
of { micro, small, medium, large, xlarge } and allows to select
among a few standard buffering profiles without having to fiddle
with plugin_buffer_size. How to compile, install and operate ZeroMQ
is documented in the "Internal buffering and queueing" section of
the QUICKSTART document.
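  As an illustrative sketch only (the plugin name 'foo' is made up; see
  QUICKSTART for the authoritative syntax), enabling ZeroMQ queueing with
  one of the standard buffering profiles could look like:

  ```
  ! sketch: queue data from the Core Process to the 'foo' plugin via ZeroMQ
  plugins: print[foo]
  plugin_pipe_zmq[foo]: true
  plugin_pipe_zmq_profile[foo]: micro
  ```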
+ nDPI integration: enables packet classification, replacing existing
L7-layer project integration, and is available for pmacctd and
uacctd. The feature, once nDPI is compiled in, is simply enabled by
specifying 'class' as part of the aggregation method. How to compile
install and operate nDPI is documented in the "Quickstart guide to
packet classification" section of the QUICKSTART document.
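  Assuming nDPI was compiled in, a minimal pmacctd configuration enabling
  classification could look like the following sketch (interface and plugin
  choice are illustrative):

  ```
  ! pmacctd sketch: classify traffic with nDPI via the 'class' primitive
  interface: eth0
  plugins: print
  aggregate: class, src_host, dst_host
  ```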
+ nfacctd: introduced nfacctd_templates_file so that NetFlow v9/IPFIX
templates can be cached to disk to limit the amount of lost packets
due to unknown templates when nfacctd (re)starts. The implementation
is courtesy by Codethink Ltd.
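  A hedged example of how such template caching might be configured (the
  file path is hypothetical):

  ```
  ! nfacctd sketch: persist NetFlow v9/IPFIX templates across restarts
  nfacctd_templates_file: /var/lib/pmacct/templates.cache
  ```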
+ nfacctd: introduced support for PEN on IPFIX option templates. This
is in addition to already supported PEN for data templates. Thanks
to Gilad Zamoshinski ( @zamog ) for his support.
+ sfacctd: introduced new aggregation primitives (tunnel_src_host,
tunnel_dst_host, tunnel_proto, tunnel_tos) to support inner L3
layers. Thanks to Kaname Nishizuka ( @__kaname__ ) for his support.
+ nfacctd, sfacctd: pcap_savefile and pcap_savefile_wait were ported
from pmacctd. They allow to process NetFlow/IPFIX and sFlow data
from previously captured packets; these also ease some debugging by
not having to resort anymore to tcpreplay for most cases.
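  As an illustrative sketch (the trace path is hypothetical), replaying a
  previously captured trace into nfacctd could be configured as:

  ```
  ! nfacctd sketch: read NetFlow/IPFIX from a pcap trace instead of the wire
  pcap_savefile: /tmp/netflow-capture.pcap
  pcap_savefile_wait: true
  ```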
+ pmacctd, sfacctd: nfacctd_time_new feature has been ported so, when
historical accounting is enabled, to allow to choose among capture
time and time of receipt at the collector for time-binning.
+ nfacctd: added support for NetFlow v9/IPFIX field types #130/#131,
respectively the IPv4/IPv6 address of the element exporter.
+ nfacctd: introduced nfacctd_disable_opt_scope_check: mainly a work
  around to implementations not encoding NetFlow v9/IPFIX option scope
correctly, this knob allows to disable option scope checking. Thanks
to Gilad Zamoshinski ( @zamog ) for his support.
+ pre_tag_map: added 'source_id' key for tagging on NetFlow v9/IPFIX
source_id field. Added also 'fwdstatus' for tagging on NetFlow v9/
IPFIX information element #89: this implementation is courtesy by
Emil Palm ( @mrevilme ).
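  An illustrative pre_tag_map fragment using the new keys (addresses and
  values are made up; see the examples directory for the authoritative
  format):

  ```
  ! tag exporter 192.0.2.1 by its NetFlow v9/IPFIX source_id
  set_tag=100 ip=192.0.2.1 source_id=2010
  ! tag on forwarding status (IPFIX IE #89); the value is illustrative
  set_tag=200 ip=192.0.2.1 fwdstatus=64
  ```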
+ tee plugin: tagging is now possible on NetFlow v5-v8 engine_type/
engine_id, NetFlow v9/IPFIX source_id and sFlow AgentId.
+ tee plugin: added support for 'src_port' in tee_receivers map. When
in non-transparent replication mode, use the specified UDP port to
send data to receiver(s). This is in addition to tee_source_ip,
which allows to set a configured IP address as source.
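  A hedged sketch of a tee_receivers map entry using 'src_port' (addresses
  and ports are made up; see examples/tee_receivers.lst.example for the
  authoritative format):

  ```
  ! forward to 192.0.2.10:2100, sourcing from UDP port 2100 when replicating
  ! in non-transparent mode
  id=1 ip=192.0.2.10:2100 src_port=2100
  ```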
+ networks_no_mask_if_zero: a new knob so that IP prefixes with zero
mask - that is, unknown ones or those hitting a default route - are
not masked. The feature applies to *_net aggregation primitives and
makes sure individual IP addresses belonging to unknown IP prefixes
are not zeroed out.
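  A minimal sketch of how the knob could be combined with *_net aggregation
  (the networks file path is hypothetical):

  ```
  ! keep full addresses for prefixes with zero mask (ie. default route hits)
  networks_file: /path/to/networks.lst
  networks_no_mask_if_zero: true
  aggregate: src_net, dst_net
  ```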
+ networks_file: hooked up networks_file_no_lpm feature to peer and
origin ASNs and (BGP) next-hop fields.
+ pmacctd: added support for calling pcap_set_protocol() if supported
by libpcap. Patch is courtesy by Lennert Buytenhek ( @buytenh ).
+ pmbgpd, pmbmpd, pmtelemetryd: added a few CL options to ease output
of BGP, BMP and Streaming Telemetry data, for example: -o supplies
a b[gm]p_daemon_msglog_file, -O supplies a b[gm]p_dump_file and -i
supplies b[gm]p_dump_refresh_time.
+ kafka plugin: in the examples section, added a Kafka consumer script
using the performing confluent-kafka-python module.
! fix, BGP daemon: segfault with add-path enabled peers as per issue
#128. Patch is courtesy by Markus Weber ( @FvDxxx ).
! fix, print plugin: do not update link to latest file if cause of
  purging is a safe action (ie. cache space is finished). Thanks to
  Camilo Cardona ( @jccardonar ) for reporting the issue. Also, for
  the same reason, do not execute triggers (ie. print_trigger_exec).
! fix, nfacctd: improved IP protocol check in NF_evaluate_flow_type().
A missing length check was causing, under certain conditions, some
flows to be marked as IPv6. Many thanks to Yann Belin for his
support resolving the issue.
! fix, print and SQL plugins: optimized the cases when the dynamic
filename/table has to be re-evaluated. This results in purge speed
gains when the dynamic part is time-related and nfacctd_time_new is
set to true.
! fix, bgp_daemon_md5_file: if the server socket is AF_INET and the
compared peer address in MD5 file is AF_INET6 (v4-mapped v6), pass
it through ipv4_mapped_to_ipv4(). Also if the server socket is
  AF_INET6 and the compared peer address in MD5 file is AF_INET, pass
it through ipv4_to_ipv4_mapped(). Thanks to Paul Mabey for reporting
the issue.
! fix, nfacctd: improved length checks in resolve_vlen_template() to
prevent SEGVs. Thanks to Josh Suhr and Levi Mason for their support.
! fix, nfacctd: flow stitching, improved flow end time checks. Thanks
to Fabio Bindi ( @FabioLiv ) for his support resolving the issue.
! fix, amqp_common.c: amqp_persistent_msg now declares the RabbitMQ
exchange as durable in addition to marking messages as persistent;
this is related to issue #148.
! fix, nfacctd: added flowset count check to existing length checks
for NetFlow v9/IPFIX datagrams. This is to avoid logs flooding in
case of padding. Thanks to Steffen Plotner for reporting the issue.
! fix, BGP daemon: when dumping BGP data at regular time intervals,
dump_close message contained wrongly formatted timestamp. Thanks to
Yuri Lachin for reporting the issue.
! fix, MySQL plugin: if --enable-ipv6 and sql_num_hosts set to true,
use INET6_ATON for both v4 and v6 addresses. Thanks to Guy Lowe
( @gunkaaa ) for reporting the issue and his support resolving it.
! fix, 'flows' primitive: it has been wired to sFlow so to count Flow
Samples received. This is to support Q21 in FAQS document.
! fix, BGP daemon: Extended Communities value was printed with %d
(signed) format string instead of %u (unsigned), causing issue on
large values.
! fix, aggregate_primitives: improved support of 'u_int' semantics for
8 bytes integers. This is in addition to already supported 1, 2 and
4 bytes integers.
! fix, pidfile: pidfile created by plugin processes was not removed.
Thanks to Yuri Lachin for reporting the issue.
! fix, print plugin: checking non-null file descriptor before setvbuf
in order to prevent SEGV. Similar checks were added to prevent nulls
be input to libavro calls when Apache Avro output is selected.
! fix, SQL plugins: MPLS aggregation primitives were not correctly
activated in case sql_optimize_clauses was set to false.
! fix, building system: reviewed minimum requirement for libraries,
removed unused m4 macros, split features in plugins (ie. MySQL) and
supports (ie. JSON).
! fix, sql_history: it now correctly honors periods expressed in 's'.
! fix, BGP daemon: rewritten bgp_peer_print() to be thread safe.
! fix, pretag.h: addressed compiler warning on 32-bit architectures,
integer constant is too large for "long" type. Thanks to Stephen
Clark ( @sclark46 ) for reporting the issue.
- MongoDB plugin: it is being discontinued since the old Mongo API is
not supported anymore and there has never been enough push from the
community to transition to the new/current API (which would require
a rewrite of most of the plugin). In this phase-1 the existing
MongoDB plugin is still available using 'plugins: mongodb_legacy'
in the configuration.
- Packet classification basing on the L7-filter project is being
discontinued (ie. 'classifiers' directive). This is being replaced
by an implementation basing on the nDPI project. As part of this
also the sql_aggressive_classification knob has been discontinued.
- tee_receiver was part of the original implementation of the tee
plugin, allowing to forward to a single target and hence requiring
multiple plugins instantiated, one per target. Since 0.14.3 this
directive was effectively outdated by tee_receivers.
- tmp_net_own_field: the knob has been discontinued and was allowing
to revert to backward compatible behaviour of IP prefixes (ie.
src_net) being written in the same field as IP addresses (ie.
- tmp_comms_same_field: the knob has been discontinued and was
allowing to revert to backward compatible behaviour of BGP
  communities (standard and extended) being written all in the same
- plugin_pipe_amqp and plugin_pipe_kafka features were meant as an
alternative to the homegrown queue solution for internal messaging,
ie. passing data from the Core Process to Plugins, and are being
discontinued. They are being replaced by a new implementation,
plugin_pipe_zmq, basing on ZeroMQ.
- plugin_pipe_backlog was allowing to keep an artificial backlog of
  data in the Core Process so as to maximise bypassing of poll()
  syscalls in plugins. If home-grown queueing is found limiting,
  instead of falling back to such strategies, ZeroMQ queueing should
  be used.
- pmacctd: deprecated support for legacy link layers: FDDI, Token Ring
and HDLC.

1.6.2 -- 21-04-2017
+ BGP, BMP daemons: introduced support for BGP Large Communities IETF
draft (draft-ietf-idr-large-community). Large Communities are stored
in a variable-length field. Thanks to Job Snijders ( @job ) for his
@@ -95,8 +95,8 @@ A: CPU cycles are proportional to the amount of traffic (packets, flows, samples
Internal buffering can also help and, contrary to the previous techniques, applies
to all daemons. Buffering is enabled with the plugin_buffer_size directive; buffers
can then be queued and distributed with a choice of an home-grown circular queue
implementation (plugin_pipe_size) or a ZeroMQ queue (plugin_pipe_zmq). Check both
CONFIG-KEYS and QUICKSTART for more information.

Q8: I want to account both inbound and outbound traffic of my network, with an host
@@ -164,11 +164,18 @@ A: The portion of the packet accounted starts from the IPv4/IPv6 header (inclusi
directly within pmacct via the 'adjb' action (sql_preprocess).

Q11: What is the historical accounting feature and how to get it configured?
A: pmacct allows to optionally define arbitrary time-bins (ie. 5 mins, 1 hour, etc.)
and assign collected data to it basing on a timestamp. This is in brief called
historical accounting and is enabled via *history* directives (ie. print_history,
print_history_roundoff, sql_history, etc.). The time-bin to which data is allocated
is stored in the 'stamp_inserted' field (if supported by the plugin in use, ie.
all except 'print', where to avoid redundancy this is encoded as part of the file
name, and 'memory'). Flow data is by default assigned to a time-bin basing on its
start time or - not applying that or missing such info - the timestamp of the whole
datagram or - not applying that or missing such info - the time of arrival at the
collector. Where multiple choices are supported, ie. NetFlow/IPFIX, the directive
nfacctd_time_new allows to explicitly select the time source.
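A hedged sketch tying these directives together for a SQL plugin, using
5-minute time-bins rounded to the minute (the values are illustrative):

```
! sketch: 5-minute time-bins; nfacctd_time_new selects the time of
! receipt at the collector as the time source for binning
sql_history: 5m
sql_history_roundoff: m
nfacctd_time_new: true
```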

Q12: Counters via CLI are good for (email, web) reporting but not enough. What are the
@@ -246,9 +253,9 @@ A: pmacct tarball gets with so called 'default' tables (IP and BGP); they are bu

Q16: What is the best way to kill a running instance of pmacct avoiding data loss ?
A: Two ways. a) Simply kill a specific plugin that you don't need anymore: you will
have to identify it and use the 'kill -INT <process number>' command; b) kill the
whole pmacct instance: you can either use the 'killall -INT <daemon name>' command
or identify the Core Process and use the 'kill -INT <process number>' command. All
of these, will do the job for you: will stop receiving new data from the network,
clear the memory buffers, notify the running plugins to take the exit lane (which
in turn will clear cached data as required).
@@ -296,12 +303,13 @@ A: Few hints are summed below in order to improve SQL database performances. The
in case of unsecured shutdowns (remember power failure is a variable ...).

Q18: Does having the local timezone configured on servers, routers, etc. - which can
very well include DST (Daylight Saving Time) shifts, impact accounting?
A: It is a good rule to run the infrastructure and the backend part of the accounting
system as UTC; for example, accuracy can be negatively impacted if sampled flows
are cached on a router while the DST shift takes place; plus, pmacct uses the system
clock to calculate time-bins and scanner deadlines among the others. In short,
the use of local timezones is not recommended.

Q19: I'm using the 'tee' plugin with transparent mode set to true and keep receiving
@@ -328,6 +336,29 @@ A: Binding to a "::" address (ie. no [sn]facctd_ip specified) should allow to rec
default has been '0'. It appears over time some distributions have changed the
default to be '1'. If you experience this issue on Linux, please check your kernel

Q21: How can I count how much telemetry data (ie. NetFlow, sFlow, IPFIX, Streaming
Telemetry) I'm receiving on my collector?
A: If the interface where telemetry data is received is dedicated to the task then any
ifconfig, netstat or dstat tool or SNMP measurement would do in order to verify the
amount of telemetry packets and bytes (from which packets per second and bytes per
second can be easily inferred). If, instead, the interface is shared then pmacctd,
the libpcap-based daemon, can help to isolate and account for the telemetry traffic;
suppose telemetry data is pointed to UDP port 2100 of the IP address configured on
eth0: pmacctd can be started as "pmacctd -i eth0 -P print -c none port 2100" to
account for the grand total of telemetry packets and bytes; if a breakdown per
telemetry exporting node is wanted, the following command-line can be used: "pmacctd
-i eth0 -P print -c src_host port 2100"; this example is suitable for manual reading
as it will print data every 60 secs on the screen and can, of course, be complicated
slightly to make it suitable for automation. A related question that often arises
is: how many flows per second am I receiving? This can be similarly addressed by
using "nfacctd -P print -c flows" for NetFlow/IPFIX and "sfacctd -P print -c flows"
for sFlow. Here 'flows' is the amount of flow records (NetFlow/IPFIX) or flow samples
(sFlow) processed in the period of time, and is the measure of interest. Changing
the aggregation argument to "-c peer_src_ip,flows" gives the amount of flows per
telemetry exporter (ie. router).

/* EOF */
