
# Interface Packet Processing Service (IPPS)

##### Version 0.1, Author Tom Sumardi

##### TODO

  • References
  • REST API
  • Communication handshakes

Table of Contents

  1. Introduction
  2. Conventions/Requirements
  3. Abbreviations
  4. Summary
  5. Components
  6. [Component descriptions](#component-descriptions)
  7. [Communication handshakes](#communication-handshakes)

## Introduction

This document is intended to provide a detailed architectural description of IPPS.

## Conventions/Requirements

  • Performance bound
  • IPPS operation will span both kernel and user space
  • Bidirectional communication between MS and IPPS will be through AMQP
  • Unidirectional communication from IPPS to PPP will be through ZeroMQ (see the sketch after this list)
  • PPP is a service that plugs into the IPPS component
  • Outgoing packets will go through a different interface than incoming packets
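
As a rough sketch of the unidirectional IPPS -> PPP channel, the following uses libzmq's PUSH/PULL pattern; the endpoint address and payload are illustrative assumptions, not part of this spec:

    /* Hypothetical sketch: IPPS side of the one-way IPPS -> PPP channel,
     * using plain libzmq PUSH/PULL. The endpoint is an assumption. */
    #include <zmq.h>

    int main(void) {
        void *ctx  = zmq_ctx_new();
        void *push = zmq_socket(ctx, ZMQ_PUSH);      /* one-way: IPPS -> PPP */
        zmq_connect(push, "ipc:///tmp/ipps-to-ppp"); /* endpoint: assumed */

        const char pkt[] = "raw-packet-bytes";       /* placeholder payload */
        zmq_send(push, pkt, sizeof(pkt), 0);         /* no reply is ever read */

        zmq_close(push);
        zmq_ctx_destroy(ctx);
        return 0;
    }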

## Abbreviations

TBD

## Summary

                              Figure 1 (packet flow)

IPPS will be launched as a daemon that pulls packets from the SKB using pfring, based on interface name, within kernel space. First-stage filtering is done at the packet layer by specifying libpcap filter syntax at the kernel level. After the packets are pulled, they go through a series of filters in user space. A load balancer distributes the packets per flow based on a 4-tuple (src/dst MAC address, src/dst IP address). Interesting packets are passed from one pipeline stage to the next, from IPPS to PPP; uninteresting packets are ignored. Future work may allow user space to tell the kernel to ignore a flow.
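
To make the packet flow concrete, below is a minimal sketch of the capture path using the PF_RING user-space API; the device name, snaplen, and filter string are illustrative assumptions:

    /* Minimal sketch of the IPPS capture path via the PF_RING API.
     * Device, snaplen, and filter string are assumptions. */
    #include <pfring.h>
    #include <stdio.h>

    int main(void) {
        pfring *ring = pfring_open("eth0", 1536 /* snaplen */, PF_RING_PROMISC);
        if (ring == NULL) { perror("pfring_open"); return 1; }

        /* First-stage filtering: libpcap syntax pushed down to the kernel. */
        char filter[] = "ip";
        pfring_set_bpf_filter(ring, filter);
        pfring_enable_ring(ring);

        for (;;) {
            u_char *buf = NULL;
            struct pfring_pkthdr hdr;
            /* wait_for_packet=1 blocks until a packet is available */
            if (pfring_recv(ring, &buf, 0, &hdr, 1) > 0) {
                /* hand off to the user-space filter pipeline / PPP here */
                printf("captured %u bytes\n", hdr.caplen);
            }
        }
    }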

## Components

                              Figure 2 (component stack)

  • Background configuration/registration thread
  • Pfring configuration
  • Thread management and load balancing
  • Packet L2-L3 processing and filtering (optional)
  • Logging

## Component descriptions

### 1. Background configuration/registration thread

A background thread responsible for configuring and registering the IPPS daemon, using AMQP as the communication channel to the MS. Responsibilities:

  • The only thread up and running when the daemon first starts
  • Pfring configuration
  • Spawning multiple worker threads based on the given configuration (see the sketch after this list)
  • Configuring L2/L3 filters
  • Going idle once configuration and registration have been successfully applied
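
A minimal sketch of the worker-spawn step, assuming a pthread-based daemon; worker_loop and n_workers are hypothetical names standing in for the real configuration-driven values:

    /* Hypothetical sketch: the background thread spawning workers once the
     * MS-supplied configuration arrives. worker_loop/n_workers are assumed. */
    #include <pthread.h>
    #include <stdlib.h>

    static void *worker_loop(void *arg) {
        long id = (long)arg;
        /* each worker would attach to its own pfring stream flow here */
        (void)id;
        return NULL;
    }

    static int spawn_workers(int n_workers /* from MS configuration */) {
        pthread_t *tids = calloc(n_workers, sizeof(*tids));
        if (tids == NULL) return -1;
        for (long i = 0; i < n_workers; i++)
            pthread_create(&tids[i], NULL, worker_loop, (void *)i);
        /* background thread then goes idle, keeping only the AMQP channel */
        for (int i = 0; i < n_workers; i++)
            pthread_join(tids[i], NULL);
        free(tids);
        return 0;
    }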

#### State machine:

    down -> up -> registering -> registered -> loading -> loaded
                      |                           |
                      +---------> error <--------+

Below is the bi-directional communication between MS and IPPS:

#### IPPS -> MS registration:

  • PPP uuid
  • status (state-machine): registering, registered, error
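
For illustration, a sketch of publishing this registration message over AMQP using the rabbitmq-c client; the broker address, credentials, exchange, routing key, and JSON shape are assumptions:

    /* Hypothetical: publish the IPPS registration message to the MS over AMQP.
     * Broker address, credentials, exchange, and routing key are assumptions. */
    #include <stdio.h>
    #include <amqp.h>
    #include <amqp_tcp_socket.h>

    int register_with_ms(const char *uuid) {
        amqp_connection_state_t conn = amqp_new_connection();
        amqp_socket_t *sock = amqp_tcp_socket_new(conn);
        if (sock == NULL || amqp_socket_open(sock, "localhost", 5672) != 0)
            return -1;

        amqp_login(conn, "/", 0, 131072, 0, AMQP_SASL_METHOD_PLAIN,
                   "guest", "guest");
        amqp_channel_open(conn, 1);

        char body[256];
        snprintf(body, sizeof(body),
                 "{\"uuid\":\"%s\",\"status\":\"registering\"}", uuid);

        amqp_basic_publish(conn, 1,
                           amqp_cstring_bytes("ipps"),        /* exchange: assumed */
                           amqp_cstring_bytes("ms.register"), /* routing key: assumed */
                           0, 0, NULL, amqp_cstring_bytes(body));

        amqp_connection_close(conn, AMQP_REPLY_SUCCESS);
        amqp_destroy_connection(conn);
        return 0;
    }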

#### MS -> IPPS registration:

  • status (state-machine): registering, registered, error

#### MS -> IPPS configuration:

  • pfring
    • Monitoring network interface(s)
    • Libpcap ruleset
    • Packet hash flow
    • ??
  • L2-L3 rulesets

#### IPPS -> MS configuration:

  • status (state-machine): loading, loaded, error

### 2. Pfring configuration

Performs kernel-level configuration such as mmap, interfaces, pcap filtering, ring buffers, stream flows, etc.

### 3. Thread management and load balancing

These threads are responsible for pulling packets from all interfaces, as fast as possible, out of the ring buffers. They use the pfring library to pull packets from the kernel SKB and perform the necessary memory mapping of the resource into user space. The pfring library is also responsible for network interface segregation, ring buffer creation, kernel-user space memory mapping, etc. Ring buffers are mapped to a specific interface in a one-to-many relationship, with each ring buffer consuming packets per flow keyed on a 4-tuple (src/dst IP address and src/dst MAC address).

In the figure 1 example, n thread instances pull packets from the interface named "eth0", with each thread assigned its own stream flow. In the case of network bonding, the bond will be broken down into its separate member interfaces. A per-flow fan-out sketch follows below.
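
A sketch of the per-flow fan-out, assuming PF_RING kernel clustering; note that the stock cluster_per_flow_4_tuple type hashes on IP addresses and ports, whereas the text above describes a MAC+IP 4-tuple, which would need a custom hash, and the cluster id here is an assumption:

    /* Hypothetical sketch: each worker joins the same PF_RING cluster so the
     * kernel fans packets out per flow. Cluster id 99 is an assumption. */
    #include <pfring.h>

    pfring *open_worker_ring(const char *dev) {
        pfring *ring = pfring_open(dev, 1536, PF_RING_PROMISC);
        if (ring == NULL) return NULL;

        /* All workers share cluster 99; the kernel hashes each flow to
         * exactly one ring, giving per-flow load balancing. */
        pfring_set_cluster(ring, 99, cluster_per_flow_4_tuple);
        pfring_enable_ring(ring);
        return ring;
    }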

### 4. Packet L2-L3 processing and filtering (optional)

This component is optional, since the operation can be done efficiently through pfring, with minimal sacrifice to usability, by performing it at the kernel level. Packet L2-L3 processing and filtering functionality (context) will exist within each load balancer (worker) thread. The idea is to pipeline incoming packets through a series of packet processing components and filters (a sketch follows the list below). As illustrated in figure 1, packets are dissected and inspected from layer 2 up to layer 3. The filters decide whether packets will be:

  • dropped, or
  • moved to the Packet Payload Processing (PPP) component
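
A minimal sketch of such a filter pipeline; the verdict type, the concrete filters, and the forward_to_ppp hook are all hypothetical placeholders:

    /* Hypothetical sketch of the per-worker L2-L3 filter pipeline: each filter
     * returns a verdict; the first DROP wins, otherwise the packet goes to PPP. */
    #include <stddef.h>
    #include <stdio.h>

    typedef enum { VERDICT_PASS, VERDICT_DROP } verdict_t;
    typedef verdict_t (*filter_fn)(const unsigned char *pkt, size_t len);

    /* Placeholder L2 check: packet must at least hold an Ethernet header. */
    static verdict_t l2_filter(const unsigned char *pkt, size_t len) {
        (void)pkt;
        return len >= 14 ? VERDICT_PASS : VERDICT_DROP;
    }

    /* Placeholder L3 check: pass IPv4 only (EtherType 0x0800). */
    static verdict_t l3_filter(const unsigned char *pkt, size_t len) {
        return (len > 13 && pkt[12] == 0x08 && pkt[13] == 0x00)
                   ? VERDICT_PASS : VERDICT_DROP;
    }

    /* Stub for the ZeroMQ hand-off to PPP described in the conventions. */
    static void forward_to_ppp(const unsigned char *pkt, size_t len) {
        (void)pkt;
        printf("forwarding %zu bytes to PPP\n", len);
    }

    static const filter_fn pipeline[] = { l2_filter, l3_filter };

    void process_packet(const unsigned char *pkt, size_t len) {
        for (size_t i = 0; i < sizeof(pipeline) / sizeof(pipeline[0]); i++)
            if (pipeline[i](pkt, len) == VERDICT_DROP)
                return;                  /* uninteresting: dropped */
        forward_to_ppp(pkt, len);        /* interesting: handed to PPP */
    }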

                              Figure 3 (state machine)

The explanation above is summarized in the figure 3 state-machine diagram.

### 5. Logging

TBD

## Communication handshakes

Ladder diagram: TODO

  • Daemon to Management Server (MS)

      -> POST http://localhost:9494/api/v1/1/endpoint/register 
      '{"arguments": 
         {
          "component": "ipps", "guid":"392846198346912"
         } 
      }'
      <-- OK {"sts":"success", "message":"success"}, {"sts": "error", "message":"resource unavailable"}
    
      -> POST http://localhost:9494/api/v1/1/endpoint/configure
      '{"arguments": 
         {
          "component": "ipps", "guid":"392846198346912", "config":"{    }"
         },
         "config":"{    }"
      }'
      <-- OK {"sts":"success", "message":"success"}, {"sts": "error", "message":"resource unavailable"}
    
  • Management Server (MS) to Daemon

      -> POST http://localhost:9494/api/v1/1/endpoint/alive
      '{"arguments": 
         {
          "component": "ms", "guid":"56e56546646454"
         } 
      }'
      <-- OK {"sts":"success"}, {"sts": "error", "message":"resource unavailable"}
    