
IPFIX (Netflow v10) support #10

Closed
wants to merge 9 commits into from

Conversation

bodgit
Contributor

@bodgit bodgit commented May 13, 2015

I've been meaning to do this for a while, but every time I got waylaid, the layout of the Logstash source changed and I had to try and rebase my changes. Now that the codec is split into its own repository, I should be able to get this done :-)

IPFIX is very similar to Netflow v9 in design; however, there are some subtle differences:

  1. Netflow and IPFIX often differ in how they indicate structure sizes within the packets: where one uses a field or record count, the other tends to use an actual byte length, so separate BinData records are required.
  2. Field types can optionally be defined under a standard IANA Private Enterprise Number (PEN) rather than having every vendor fight over the same flat namespace. Currently I have implemented the implicit PEN 0 for the standard fields as defined by IANA, along with the "reverse" PEN 29305 defined in RFC 5103.
  3. IPFIX defines three complex data types: BasicList, SubTemplateList & SubTemplateMultiList. What makes these more complicated is that they can theoretically be nested and contain one another.
  4. IPFIX defines variable-length fields in templates, usually used with the complex types above.
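To illustrate point 4, the variable-length encoding from RFC 7011 can be sketched in plain Ruby. This is a hypothetical standalone decoder for illustration only, not the BinData implementation in this PR: the first byte carries the length, and the sentinel value 255 means the real length follows in the next two bytes.

```ruby
# Decode one RFC 7011 variable-length field from a binary (ASCII-8BIT) string.
# Returns the field's value and the offset just past it.
def read_variable_length_field(bytes, offset)
  len = bytes[offset].ord
  if len == 255
    # Extended form: 16-bit big-endian length in the following two bytes.
    len = bytes[offset + 1, 2].unpack1('n')
    offset += 3
  else
    offset += 1
  end
  value = bytes[offset, len]
  [value, offset + len]
end
```

Repeatedly calling this while walking a data record is what makes variable-length templates awkward to express in a declarative BinData record, since each field's position depends on the lengths read so far.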

This PR is not yet complete, so please don't merge yet, but I thought I would open it here so people could see its progress and/or comment.

@samdoran

I'm anxious to use this because I want to get Netflow from our ASAs into Elasticsearch. Any idea when this will be ready to be merged? Or is there an easy way to try it out in the meantime? I'm not sure if I need to actually build a new gem or if I can just clone this into the existing plugin directory inside the Logstash install.

@bodgit
Contributor Author

bodgit commented Jul 15, 2015

I think it's good enough to go; I've tested it against both softflowd and OpenBSD's pflow(4) device.

However, if you're really after Netflow (and I'm not 100% sure whether ASAs support IPFIX) then this PR won't be much use; the good news is the plugin already supports Netflow v9, although there are probably missing field definitions for some of the more esoteric stuff that I think an ASA can export.

The README details how to run this plugin within a Logstash install.

@minou

minou commented Jul 21, 2015

Hi,

I'm trying to use this plugin with an IPFIX probe.
I get an error when I start Logstash with the IPFIX plugin.

SystemStackError: stack level too deep
           each at org/jruby/RubyArray.java:1613
        do_read at /root/logstash-1.5.2/vendor/bundle/jruby/1.9/gems/bindata-2.1.0/lib/bindata/struct.rb:131
           read at /root/logstash-1.5.2/vendor/bundle/jruby/1.9/gems/bindata-2.1.0/lib/bindata/base.rb:146
           read at /root/logstash-1.5.2/vendor/bundle/jruby/1.9/gems/bindata-2.1.0/lib/bindata/base.rb:23
         decode at /root/logstash-1.5.2/vendor/local_gems/271b8b8e/logstash-codec-netflow-0.1.6/lib/logstash/codecs/netflow.rb:130
           call at org/jruby/RubyProc.java:271
  trace_reading at /root/logstash-1.5.2/vendor/bundle/jruby/1.9/gems/bindata-2.1.0/lib/bindata/trace.rb:31
         decode at /root/logstash-1.5.2/vendor/local_gems/271b8b8e/logstash-codec-netflow-0.1.6/lib/logstash/codecs/netflow.rb:129
    inputworker at /root/logstash-1.5.2/vendor/bundle/jruby/1.9/gems/logstash-input-udp-1.0.0/lib/logstash/inputs/udp.rb:95
   udp_listener at /root/logstash-1.5.2/vendor/bundle/jruby/1.9/gems/logstash-input-udp-1.0.0/lib/logstash/inputs/udp.rb:74
UDP listener died {:exception=>#<SocketError: recvfrom: name or service not known>, :backtrace=>["/root/logstash-1.5.2/vendor/bundle/jruby/1.9/gems/logstash-input-udp-1.0.0/lib/logstash/inputs/udp.rb:79:in `udp_listener'", "/root/logstash-1.5.2/vendor/bundle/jruby/1.9/gems/logstash-input-udp-1.0.0/lib/logstash/inputs/udp.rb:49:in `run'", "/root/logstash-1.5.2/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.2.2-java/lib/logstash/pipeline.rb:176:in `inputworker'", "/root/logstash-1.5.2/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.2.2-java/lib/logstash/pipeline.rb:170:in `start_input'"], :level=>:warn}

My config is:

input {
        udp {
                port => 4739
                codec => netflow {
                        versions => [10]
                        target => ipfix
                }
                type => ipfix
        }
}

output {
        stdout { codec => rubydebug }
}

Do you have any ideas?
It seems to be the same issue as elastic/logstash#3124.

@samdoran

I finally managed to get this installed properly (sorry it took me so long). I think the ASA is presenting a rather difficult situation: it is sending Netflow v9 data, but it also contains IPFIX fields. When I set the version to 10, Logstash ignores the data:

{:timestamp=>"2015-07-28T13:42:37.061000-0400", :message=>"Ignoring Netflow version v9", :level=>:warn}

When I set the version to [9, 10], I get an error stating there are missing field definitions:

{:timestamp=>"2015-07-28T14:05:12.670000-0400", :message=>"No matching template for flow id 263", :level=>:warn}
{:timestamp=>"2015-07-28T14:05:12.670000-0400", :message=>"No matching template for flow id 265", :level=>:warn}
{:timestamp=>"2015-07-28T14:05:12.670000-0400", :message=>"No matching template for flow id 260", :level=>:warn}
{:timestamp=>"2015-07-28T14:05:12.670000-0400", :message=>"No matching template for flow id 256", :level=>:warn}

These definitions do exist in the ipfix.yaml template, but they don't get used because the ASA is sending v9 data (if I'm understanding the logic in the plugin correctly, which I may not be since I'm not a Ruby expert).

The most confusing part is even when I create a custom netflow_definitions file and add the following, logstash still complains about no matching field id:

256:
- :uint16
- :ether_type
260:
- :uint32
- :max_export_seconds
263:
- :uint8
- :message_scope
265:
- :string
- :min_flow_start_seconds

I realize this probably has nothing to do with your additions to the plugin, but any help you can offer is greatly appreciated since you seem to understand how this plugin works more than I do.

@jorritfolmer
Contributor

Like @minou, I've been getting a consistent `SystemStackError: stack level too deep` within a couple of minutes of starting Logstash.

I've been testing with the following Netflow probes, both exporting Netflow v10.

netflow probe   options
nprobe          -n logstash:2055 -i eth0 -V 10
ipt_NETFLOW     options ipt_NETFLOW destination=logstash:2055 protocol=10

Running Logstash-1.5.3 from RPM with a git clone from your ipfix branch with this config:

input {
    udp {
      port => 2055
      codec => netflow {
        versions => [5,9,10]
      }
    }
}

output {
  file {
    codec => "json"
    path => "/var/log/netflow/netflow.json"
  }
}

Errors:

SystemStackError: stack level too deep
           call at org/jruby/RubyProc.java:271
  trace_reading at /opt/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.1.0/lib/bindata/trace.rb:31
         decode at /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-codec-netflow-1.0.0/lib/logstash/codecs/netflow.rb:129
    inputworker at /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-udp-1.0.0/lib/logstash/inputs/udp.rb:95
   udp_listener at /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-udp-1.0.0/lib/logstash/inputs/udp.rb:74
UDP listener died {:exception=>#<SocketError: recvfrom: name or service not known>, :backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-udp-1.0.0/lib/logstash/inputs/udp.rb:79:in `udp_listener'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-udp-1.0.0/lib/logstash/inputs/udp.rb:49:in `run'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.3-java/lib/logstash/pipeline.rb:177:in `inputworker'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.3-java/lib/logstash/pipeline.rb:171:in `start_input'"], :level=>:warn}

It also dumps .inspect style stuff on stdout, seemingly for every flowset, but I don't see where it is coming from:

obj.version => 10
obj.pdu_length => 876
obj.unix_sec => 1439904912
obj.flow_seq_num => 99
obj.observation_domain_id => 35
obj.records[0].flowset_id => 257
obj.records[0].flowset_length => 860
obj.records[0].flowset_data-selection- => 257
obj.records[0].flowset_data => "\x00\x00\x00\x9D\x00\x00\x00\x...
...
obj[13].octetDeltaCount => 74
obj[13].packetDeltaCount => 1
obj[13].protocolIdentifier => 17
obj[13].ipClassOfService => 0
obj[13].tcpControlBits => 0
obj[13].sourceTransportPort => 27706
obj[13].sourceIPv4Address-internal-.storage => 2644325398
obj[13].sourceIPv4Address => "157.157.yy.xx"
obj[13].sourceIPv4PrefixLength => 0
obj[13].ingressInterface => 0
obj[13].destinationTransportPort => 53
obj[13].destinationIPv4Address-internal-.storage => 2887722242
obj[13].destinationIPv4Address => "172.31.37.2"
obj[13].destinationIPv4PrefixLength => 0
obj[13].egressInterface => 0
obj[13].ipNextHopIPv4Address-internal-.storage => 0
obj[13].ipNextHopIPv4Address => "0.0.0.0"
obj[13].bgpSourceAsNumber => 6677
obj[13].bgpDestinationAsNumber => 0
obj[13].flowStartMilliseconds => 1439904901076
obj[13].flowEndMilliseconds => 1439904901076
...

@bodgit
Contributor Author

bodgit commented Aug 18, 2015

The tracing is from the BinData gem; just remove the BinData::trace_reading do ... end wrapper in decode(). There doesn't seem to be an obvious way to conditionally turn that on or off, but it's probably useful whilst debugging & testing various new collectors.

@bodgit
Contributor Author

bodgit commented Aug 18, 2015

I'm not sure how to go about debugging the stack level issue; I've only seen it a couple of times, but then on a subsequent run it doesn't occur at all, so I'm at a loss as to what to do there.

@bodgit
Contributor Author

bodgit commented Aug 18, 2015

@samdoran The ASA is only sending Netflow v9. The error you're getting is not from missing field IDs, but because the ASA hasn't sent a template packet to Logstash. This is often configurable on the device, either to send templates every N packets or every X minutes/seconds.
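As a concrete (hypothetical) example of such a setting, an ASA's NetFlow template refresh interval looks roughly like the following; check the Cisco documentation for your exact platform and version:

```
! Resend NetFlow templates every 1 minute, so a collector that starts after
! the exporter (e.g. a freshly restarted Logstash) can decode data records
! without waiting for the next scheduled template:
flow-export template timeout-rate 1
```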

If you still get unknown fields, then these are not necessarily the same as the equivalent IPFIX fields in PEN 0. For the ASA I would refer to the Cisco documentation; a quick search turned up this:

http://www.cisco.com/c/en/us/td/docs/security/asa/special/netflow/guide/asa_netflow.html#pgfId-1331296

Some of these fields are "standard" for Netflow v9, some are specific to the ASA.

@jorritfolmer
Contributor

Ah, it runs better with the BinData::trace_reading commented out.
No more SystemStackError so far, with two probes (nprobe and ipt_NETFLOW) emitting v10 netflows to a single Logstash.

@bodgit
Contributor Author

bodgit commented Aug 18, 2015

Maybe it is literally just having tracing enabled. I'll push a change to remove it.

@jorritfolmer
Contributor

I've been happily running @bodgit's ipfix branch for almost 2 months now with 5 netflow exporters (softflowd and ipt_netflow, both exporting IPFIX).
Is there something that keeps this PR from being merged?

@benlavender

Are you still using this for IPFIX @jorritfolmer? I’m accepting v10 inputs using this codec and getting a similar error.

@jorritfolmer
Contributor

Yes, I'm using the latest IPFIX branch from @bodgit, including the "Remove bindata tracing" commit.

@benlavender

@jorritfolmer I have it set up and am able to take Netflow v9 etc., but once I try to send any IPFIX data I get the below:

{:timestamp=>"2015-10-28T11:22:07.045000+0000", :message=>"Unsupported enterprise", :enterprise=>6876, :level=>:warn}

Can't see any info regarding this anywhere (scratches head)

@jorritfolmer
Contributor

Ah, cool! It appears you have a VMware dvSwitch exporting IPFIX?
It seems to be exporting Enterprise-specific Information Elements that are not (yet) supported.
Can you open a new issue to support VMware-specific IPFIX elements so we can look at this separately?

@benlavender

Sure @jorritfolmer will do. Thanks.

@elasticsearch-release

Jenkins standing by to test this. If you aren't a maintainer, you can ignore this comment. Someone with commit access, please review this and clear it for Jenkins to run; then say 'jenkins, test it'.

@tpnoonan

Am anxious to test this. Has it been committed?

@FlorianHeigl

Mergeeeeee pleeeeeeease!

@RobertLukan

Quick question: are there any plans to support Enterprise-specific Information Element transformations in the future? In our organization we would like to transform Sonicwall IPFIX (PEN 8741) flows to another proprietary "standard" from Fluke Networks.

@jordansissel
Contributor

Given reports of success of this branch, I'm OK merging it.

@jordansissel
Contributor

This PR fails to patch into master for me:

error: spec/codecs/netflow_spec.rb: patch does not apply
Patch failed at 0001 Add basic IPFIX rspec tests

@jordansissel
Contributor

% curl -Ls https://github.com/logstash-plugins/logstash-codec-netflow/pull/10.patch | git am
Applying: Make room for IPFIX support
Applying: Add basic IPFIX support
Applying: Add IPFIX Template and Option flowset records
Applying: Add basic IPFIX packet parsing
Applying: Add basic IPFIX rspec tests
error: patch failed: spec/codecs/netflow_spec.rb:205
error: spec/codecs/netflow_spec.rb: patch does not apply
Patch failed at 0005 Add basic IPFIX rspec tests
The copy of the patch that failed is found in:
   /home/jls/projects/logstash-codec-netflow/.git/rebase-apply/patch

@bodgit
Contributor Author

bodgit commented Jan 6, 2016

Sorry, I didn't see these email notifications, I'll try and rebase again shortly.

@nobletrout

I've learned an awful lot about git today. To get this patch to work before an update:
git checkout b6dd6da
then:
curl -Ls https://github.com/logstash-plugins/logstash-codec-netflow/pull/10.patch | git am

@jordansissel
Contributor

@nobletrout just for clarification, you're using the patch successfully with IPFIX?

@RobertLukan

I have been using this patch for two weeks now and it works fine. I have also changed the ipfix.yaml file in order to accommodate the Sonicwall IPFIX format.

@nobletrout

Need support for variable length fields. Looks like the choice of using BinData sends you down a rabbit hole of supporting all kinds of other things, and my ruby-fu isn't there. That said, it did work to ingest IPFIX from nprobe.


@RobertLukan

I don't have an issue with the flows from Sonicwall; the data has fixed lengths. It would certainly be good to have variable-length field support, though.
Would it be possible to modify the patch so it works with the latest release of Logstash?

Rename some things to make them Netflow-specific.
- Add a basic BinData PDU record that just treats all non-header data as a
  flat string
- Generate an equivalent YAML file for IPFIX field definitions. IANA provides
  a CSV with the field definitions, so I created a simple script to parse that
  and then generate the reverse field definitions as per RFC 5103
Currently the code cowardly refuses to deal with variable-length encoded
fields or any of the complex data types, skipping any template containing
them. Tested with `softflowd`.
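The CSV-to-YAML generation step mentioned above could be sketched like this. This is a hypothetical reconstruction: the IANA CSV column names and the plugin's `{ id => [type, name] }` YAML layout are assumed from the field-definition examples earlier in this thread.

```ruby
require 'csv'

# Map IANA abstract data types to the BinData-style symbols used in the
# field-definition YAML (deliberately incomplete; unmapped types are skipped).
TYPE_MAP = {
  'unsigned8'  => :uint8,
  'unsigned16' => :uint16,
  'unsigned32' => :uint32,
  'unsigned64' => :uint64,
  'string'     => :string,
}.freeze

# Parse the IANA "IPFIX Information Elements" CSV into { id => [type, name] },
# ready to be dumped with YAML.dump.
def iana_csv_to_definitions(csv_text)
  defs = {}
  CSV.parse(csv_text, headers: true).each do |row|
    type = TYPE_MAP[row['Abstract Data Type']]
    next unless type && row['ElementID'].to_s =~ /\A\d+\z/
    defs[Integer(row['ElementID'])] = [type, row['Name'].to_sym]
  end
  defs
end
```

The RFC 5103 "reverse" definitions could then be generated from the same hash by re-emitting each entry under PEN 29305 with a `reverse`-prefixed name.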

Currently no field is "fixed up", such as timestamps; IPFIX, unlike Netflow,
does not include the uptime of the exporting device in each packet. However,
testing with `softflowd` shows an initial option record is sent with the
systemInitTimeMilliseconds field, which is what is needed for subsequent
calculations, so maybe that should be cached per-device while packets are
being received with some frequency. A device may, however, also include this
field as part of its normal templates.
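The per-device caching idea described above could look roughly like this. All names here are illustrative, not the plugin's actual API:

```ruby
require 'time'

# Cache systemInitTimeMilliseconds per exporter so that relative flow times
# (milliseconds since exporter boot) can be converted to absolute timestamps.
class UptimeCache
  def initialize
    @boot_ms = {} # exporter identifier => systemInitTimeMilliseconds
  end

  # Called when an option record carrying systemInitTimeMilliseconds arrives.
  def record_boot_time(exporter, boot_ms)
    @boot_ms[exporter] = boot_ms
  end

  # Convert a millisecond offset since exporter boot into an ISO 8601 string.
  def absolute(exporter, offset_ms)
    boot = @boot_ms.fetch(exporter)
    Time.at((boot + offset_ms) / 1000.0).utc.iso8601
  end
end
```

The cache would need an eviction policy (or simply be refreshed whenever a new option record arrives), since an exporter reboot invalidates the stored boot time.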
Just copy the Netflow v9 tests and generate a small IPFIX flowset.
Testing with OpenBSD's `pflow(4)` device a) worked and b) highlighted that it
exports using the flow{Start,End}Milliseconds fields which contain an absolute
timestamp. Fix this up to be a regular ISO 8601 timestamp and also add support
for the equivalent seconds, microseconds, and nanoseconds fields.
Possibly causes stack errors having it enabled?
Based on similar changes to Netflow v9 code.
@bodgit
Contributor Author

bodgit commented Feb 12, 2016

That should now be rebased against master

@milomanden

I really appreciate the work that has been put into this, but I don't think it's ready to be merged.

I'm exporting IPFIX from NetScalers, and the "cowardly" variable field length handling means my data templates aren't imported. That means all the interesting data is thrown away. :(

If I can contribute to this plugin in any way, let me know.

@bodgit
Contributor Author

bodgit commented Mar 9, 2016

It's not necessarily the variable-length fields themselves that are the problem, but they're commonly used with the IPFIX complex data types (basicList, subTemplateList & subTemplateMultiList), and these are difficult to implement: they can potentially be nested within each other, which adds an element of recursion to parsing the packets that isn't present with Netflow <= 9.

I'm not even sure how you'd visualise or index a basicList of subTemplateMultiLists in Elasticsearch, for example.
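For reference, the wire layout of the simplest of the three, basicList (RFC 6313), could be decoded along these lines when the enterprise bit is clear and nothing is nested. This is a hypothetical sketch, not part of this PR:

```ruby
# Decode a basicList of fixed-length elements from a binary string.
# Layout per RFC 6313: semantic (1 octet), field ID (2 octets, enterprise bit
# assumed clear), element length (2 octets), then back-to-back values.
# Enterprise numbers, variable-length elements, and nested lists are omitted.
def decode_basic_list(bytes)
  semantic = bytes[0].ord
  field_id = bytes[1, 2].unpack1('n')
  elem_len = bytes[3, 2].unpack1('n')
  values = []
  offset = 5
  while offset + elem_len <= bytes.bytesize
    values << bytes[offset, elem_len]
    offset += elem_len
  end
  { semantic: semantic, field_id: field_id, values: values }
end
```

The recursion problem shows up as soon as the element type is itself a subTemplateList or subTemplateMultiList: the flat loop above would have to call back into the template-aware decoder for each value.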

I initially tested with yaf, and that also has a tendency to emit some data templates containing these complex data types, but there are also a lot of tools that still only emit "simple" data templates equivalent to what you'd get with Netflow v9, and these AFAIK work, so I'd say there's value in merging it as-is.

Can you possibly include a visualisation/export of your NetScaler data templates captured with something like Wireshark, so we can see just how complicated they are?

@milomanden

Thank you for taking the time to reply.

I've already debugged it with Wireshark. I can see the templates coming in: http://i.imgur.com/tCpCA2y.png

When the templates come in, I see the warning you put in netflow.rb, i.e. "Cowardly refusing to deal with variable length encoded field".

Some of the non-variable-length data templates are working, and I do get data into my ELK stack: http://i.imgur.com/WetIJtd.png

I'm exporting IPFIX from a NetScaler. The NetScaler exports with a PEN of 5951, which I've added to ipfix.yaml. But since the templates aren't accepted, I get no data. :(

Let me know if you need pcaps or anything else.

@zbare

zbare commented Apr 16, 2016

Hello,

I am trying to get IPFIX flows into ELK for monitoring and ran across this plugin. Since IPFIX is not yet supported in Logstash's netflow codec, I figured I would give this a try (for development, of course). I'm somewhat new to working with Logstash, so would anyone be able to give me some tips on the best way to install this plugin?

In case it matters, I am trying to bring in IPFIX flows from a Juniper MX.

Thanks for any help.

@jorritfolmer
Contributor

jorritfolmer commented May 5, 2016

I went ahead and rebased this PR in the bodgit-ipfix branch, with conflicts resolved.
Also:

  • Bumped version to 2.1.0
  • Updated rspec to use new event API

Objections to merging bodgit-ipfix branch into master?

@bodgit
Contributor Author

bodgit commented May 5, 2016

👍 it would be good to get this finally closed.

@jorritfolmer
Contributor

Merged!

@ClaudioRifo

@RobertLukan, can you share your changes to ipfix.yaml for Sonicwall with me?
Can anyone explain how to build an ipfix.yaml?
