
SQLNA Application Structure


SQL Network Analyzer - Application Structure

This page describes how SQL Network Analyzer is organized from a code and data perspective.

General Organization

Program.cs .... this is the main routine
|-CommandLineParser.cs .... This is used to parse command-line arguments and enforce the rules
|-The diagnostic log file is opened here
|-Parser.cs .... called to read the files into memory and parse from the link layer through TCP or UDP standard protocols
| |-NetMonReader.cs, SimplePCAPReader.cs, PcapNGReader.cs, ReaderBase.cs, ETLFileReader.cs are called to read frames from the files
| |-called to reverse backward conversations .... where source and destination attributes are swapped
| |-called to mark retransmitted packets
| |-called to mark continuation packets
|-TDSParser.cs .... called to identify TDS packets and parse different message types
| |-called to find "other" conversations that also talk to an identified SQL Server
| |-called to find other SQL Server instances
|-SSRPParser.cs .... called to identify SQL Browser traffic
|-NameResolutionParser.cs .... called to identify DNS traffic
|-KerberosParser.cs .... called to identify Kerberos traffic
|-The log file (Report) is opened here
|-OutputText.cs .... called to write the report sections and the CSV file
| |-TextGraph.cs called to draw histograms
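
To make the calling sequence concrete, the sketch below shows how Program.cs might drive these components. The method, property, and type names (Parse, ParseFiles, Process, WriteReport, CommandLineArgs, etc.) are illustrative assumptions, not the actual SQLNA signatures.

    // Hypothetical outline of the flow above; identifiers are illustrative, not the real SQLNA API.
    static void Main(string[] args)
    {
        CommandLineArgs cmd = CommandLineParser.Parse(args);   // parse arguments and enforce the rules
        OpenDiagnosticLog(cmd);                                // the diagnostic log file

        NetworkTrace trace = new NetworkTrace();
        Parser.ParseFiles(cmd.FileSpec, trace);                // link layer through TCP/UDP
        TDSParser.Process(trace);                              // TDS messages, related conversations, other SQL instances
        SSRPParser.Process(trace);                             // SQL Browser (SSRP) traffic
        NameResolutionParser.Process(trace);                   // DNS traffic
        KerberosParser.Process(trace);                         // Kerberos traffic

        OpenReportLog(cmd);                                    // the report file
        OutputText.WriteReport(trace, cmd);                    // report sections, CSV, histograms via TextGraph
    }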

Parser.cs

In the main parsing routines in Parser.cs, the following actions are performed.

  1. If the filespec is a relative path, identify the absolute path.
  2. If the filespec contains wildcards, generate a list of files.
  3. For each file:
    3.1. Determine the type of file. For *.ETL files, assume ETL; for all others, open the file and look at the magic number.
    3.2. Load the appropriate reader and examine the timestamp of the first frame in the file.
  4. Sort the list of files to process based on the timestamps.

Note: We don't use the file system timestamp because it can be changed by various activities, and file extensions aren't always reliable.
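
As an illustration of steps 3 and 4, the sketch below detects the file type and then orders the files by the timestamp of their first frame. The helper names are assumptions; only the well-known pcap and pcapng magic numbers are shown, and the NetMon signature check is omitted.

    using System;
    using System.IO;

    // Minimal sketch, assuming hypothetical helper and type names.
    static string DetectFileType(string path)
    {
        if (Path.GetExtension(path).Equals(".etl", StringComparison.OrdinalIgnoreCase))
            return "ETL";                                                    // *.ETL is assumed from the extension alone

        using (BinaryReader r = new BinaryReader(File.OpenRead(path)))
        {
            uint magic = r.ReadUInt32();
            if (magic == 0xA1B2C3D4 || magic == 0xD4C3B2A1) return "PCAP";   // classic libpcap, either byte order
            if (magic == 0x0A0D0D0A) return "PCAPNG";                        // pcapng Section Header Block
            // ... NetMon and any other signatures would be checked here ...
            return "UNKNOWN";
        }
    }

    // Files are then ordered by the timestamp of their first frame, not by file time, e.g.:
    // files.Sort((a, b) => a.FirstFrameTimestamp.CompareTo(b.FirstFrameTimestamp));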

  5. For each file in sorted order:
    5.1. Determine the type of reader required.
    5.2. For each frame:
      5.2.1. Determine the Link Layer. Four are currently supported: Ethernet, Wifi, Linux Cooked Capture, and NetEvent (e.g. .ETL saved as .CAP).
      5.2.2. Depending on the Link Layer, we can parse as follows:
     Protocol stacking
    
     .ETL NDIS Net Event                 -> Ethernet/Wifi
     Link Type: NDIS Net Event (.CAP)    -> Ethernet/Wifi
     Link Type: Ethernet                 -> IPV4/IPV6/VNETTag
     Link Type: Wifi/LLC/SNAP            -> IPV4/IPV6/VNETTag
     Link Type: Linux Cooked Capture     -> IPV4/IPV6/VNETTag
    
     GRE                                 -> ERSPAN/IPV4/IPV6
     ERSPAN                              -> Ethernet
     IPV4/IPV6                           -> TCP/UDP/GRE (Generic Routing Encapsulation)
     VNETTag                             -> 802.1Q
     802.1Q                              -> IPV4/IPV6
    
     UDP                                 -> SSRP
     TCP                                 -> TDS/Kerberosv5/DNS/SMP
     SMP                                 -> TDS

Other Link Layer types are ignored, as are other protocols such as ICMP.
TDS is only parsed over TCP; TDS over Named Pipes is rare and is not handled by this application.
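
The next-layer dispatch out of IPV4/IPV6 can be pictured as a switch on the IP protocol number, as in the sketch below. The IANA protocol numbers are real; the parser method and type names are illustrative assumptions.

    // Illustrative sketch only; ParseTCP/ParseUDP/ParseGRE and FrameData are hypothetical names.
    void ParseIPPayload(NetworkTrace trace, FrameData frame, byte protocol, byte[] payload)
    {
        switch (protocol)
        {
            case 6:  ParseTCP(trace, frame, payload); break;   // TCP -> TDS / Kerberos v5 / DNS / SMP (SMP -> TDS)
            case 17: ParseUDP(trace, frame, payload); break;   // UDP -> SSRP (SQL Browser)
            case 47: ParseGRE(trace, frame, payload); break;   // GRE -> ERSPAN -> Ethernet
            default: break;                                    // other protocols, such as ICMP, are ignored
        }
    }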

ETLFileReader.cs

Unlike the other readers, ETLFileReader is not based on ReaderBase.cs. This is because it does not read frames directly from the file (a pull model); instead, it calls a Windows API to open the ETL file and receives CALLBACK events (a push model). The callbacks arrive on a background thread, which writes asynchronously, with locking, into a buffer that the foreground thread reads from, turning the push model back into a pull model.
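
The push-to-pull conversion can be pictured as a small producer/consumer buffer like the sketch below; the class and member names are illustrative, and the real ETLFileReader implementation may differ.

    using System.Collections.Generic;
    using System.Threading;

    // Minimal sketch of turning ETW callback events (push) into a readable stream (pull).
    class FrameBuffer
    {
        private readonly Queue<byte[]> queue = new Queue<byte[]>();
        private readonly object sync = new object();
        private bool completed = false;

        // Called by the ETW callback on the background thread (push side).
        public void Add(byte[] frame)
        {
            lock (sync) { queue.Enqueue(frame); Monitor.Pulse(sync); }
        }

        // Called when the Windows API signals the end of the ETL file.
        public void Complete()
        {
            lock (sync) { completed = true; Monitor.PulseAll(sync); }
        }

        // Called by the foreground parsing thread (pull side); blocks until a frame
        // arrives or the file has been fully read, then returns null at end of data.
        public byte[] Take()
        {
            lock (sync)
            {
                while (queue.Count == 0 && !completed) Monitor.Wait(sync);
                return queue.Count > 0 ? queue.Dequeue() : null;
            }
        }
    }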

The ETLFileReader also tries to combine NetEvent fragments into a coherent frame and surfaces an Ethernet or Wifi Link Layer directly. By contrast, the NetEvent parser implemented in Parser.cs does not try to combine fragments; it discards them instead.

Because the NETSH TRACE command does not generate chained .ETL files, each .ETL file should be processed separately, and wildcards should not be used in the filespec.

Data Classes

The main data class, NetworkTrace, is defined in NetworkTrace.cs.

NetworkTrace contains ...
... an ArrayList of files
... an ArrayList of conversations (either TCP or UDP). Each conversation has an ArrayList of frames.
... an ArrayList of frames (each frame links to a file and a conversation)
... an ArrayList of sqlServers (each TDS conversation may link to one)
... an ArrayList of SSRPRequests
... an ArrayList of DNSResponses
... an ArrayList of KerbResponses
... a system to quickly locate a conversation. This is a 64K-element array of ArrayLists of conversations. The element number is determined by XOR-ing the source and destination port numbers. Without this feature, large trace files would take many minutes to process, instead of just a handful of seconds.
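
A minimal sketch of that lookup, with illustrative names: XOR of the two 16-bit port numbers always falls in the range 0..65535 and is order-independent, so both directions of a conversation hash to the same bucket.

    using System.Collections;

    ArrayList[] buckets = new ArrayList[65536];                 // the 64K-element array of ArrayLists

    // Illustrative only; the real lookup also compares IP addresses before declaring a match.
    Conversation FindOrAddConversation(ushort srcPort, ushort dstPort)
    {
        int i = srcPort ^ dstPort;                              // same bucket for both directions
        if (buckets[i] == null) buckets[i] = new ArrayList();
        foreach (Conversation c in buckets[i])
            if (c.Matches(srcPort, dstPort)) return c;          // hypothetical match helper
        Conversation added = new Conversation() { sourcePort = srcPort, destPort = dstPort };
        buckets[i].Add(added);
        return added;
    }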

Each conversation has a series of flags that make post-processing easier, such as isIPV6, isSQL, isUDP, and resetcount, as well as flags for the different TDS message types, which make it easy to determine how far a login progressed. The message types are used to "guess" whether a conversation is really a TDS/SQL conversation.
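
For example, a conversation might carry flags and counters along the lines of the sketch below; the member names here are illustrative assumptions and may not match NetworkTrace.cs exactly.

    // Illustrative sketch only; actual flag and counter names may differ.
    class Conversation
    {
        public bool isIPV6, isUDP, isSQL;
        public int resetCount;

        // per-message-type counters used to "guess" whether the conversation is TDS
        public int preLoginCount, preLoginResponseCount, loginCount, loginAckCount;

        public bool LooksLikeTDS()
        {
            // a PreLogin exchange or a Login/LoginAck pair strongly suggests TDS,
            // even when the conversation is not on port 1433
            return (preLoginCount > 0 && preLoginResponseCount > 0)
                || (loginCount > 0 && loginAckCount > 0);
        }
    }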

Each data class may contain some methods for commonly requested information or for diagnostic formatting, should you want to dump conversations out to a file.

There are some TDS* classes that are largely unused; they exist to prototype better TDS parsing in a future build.
