
Interface Packet Processing Service (IPPS)

Version: 0.1
Author: Tom Sumardi
TODO: References, REST API, communication handshakes, split this doc into IPPS and PPP

Table of Contents
  • Introduction
  • Abbreviations
  • Conventions/Requirements
  • Summary
  • Components
  • Architecture
  • Component descriptions
  • Communication handshakes

Introduction

This document is intended to give a detailed architecture description of the Interface Packet Processing Service (IPPS).

Abbreviations

TBD

Conventions/Requirements

  • Performance bound
  • IPPS operation spans both kernel space and user space
  • Bidirectional communication interface from MS to IPPS will be through AMQP
  • Unidirectional communication interface from IPPS to PPP will be through ZeroMQ
  • PPP is a service that plugs into the IPPS component
  • Outgoing packets will go through a different interface than incoming packets

Summary

Figure 1 (packet flow)

IPPS is launched as a daemon that pulls packets from the kernel SKB using pfring, selected by interface name. The first level of filtering is done at the packet layer by applying a libpcap filter expression at the kernel level. After the packets are pulled, they pass through a series of filters in user space. A load balancer is also added to distribute the packets per flow based on a 4-tuple (src/dst MAC address, src/dst IP address). Interesting packets are passed from one pipeline stage to the next, from IPPS to PPP; uninteresting packets are ignored. Future work might allow user space to tell the kernel to ignore the flow.
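A minimal sketch of the capture path described above, assuming the userspace PF_RING library (pfring.h); the interface name and filter expression are illustrative, not part of the IPPS source.

```c
#include <pfring.h>
#include <stdio.h>

int main(void) {
    /* Open the ring on a named interface: 1500-byte snaplen, promiscuous mode. */
    pfring *ring = pfring_open("eth0", 1500, PF_RING_PROMISC);
    if (ring == NULL) {
        perror("pfring_open");
        return 1;
    }

    /* First-pass filter pushed down to the kernel in libpcap syntax. */
    char filter[] = "tcp or udp";
    if (pfring_set_bpf_filter(ring, filter) != 0)
        fprintf(stderr, "failed to set BPF filter\n");

    pfring_enable_ring(ring);

    struct pfring_pkthdr hdr;
    u_char *pkt = NULL;
    /* Pull packets from the kernel ring; each one then enters the
       user-space filter pipeline. */
    while (pfring_recv(ring, &pkt, 0, &hdr, 1) > 0) {
        /* dissect L2-L3 headers, filter, load-balance per flow ... */
    }

    pfring_close(ring);
    return 0;
}
```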

Components

Figure 2 (component stack)

  • Background configuration/registration thread
  • Pfring configuration
  • Thread management and load balancing
  • Packet L2-L3 processing and filtering
  • Logging

Architecture

Component descriptions:
  1. Background configuration/registration thread

A background thread responsible for configuring and registering the IPPS daemon; it uses AMQP as the communication channel to the MS. Responsibilities:
  • The only thread that is up and running when the daemon first starts
  • Pfring configuration
  • Spawning multiple worker threads based on the given configuration
  • Configures L2/L3 filters
  • Idles once configuration and registration have been successfully applied

State machine:

  down -> up -> registering -> registered -> loading -> loaded
                     |                          |
                     +--------> error <---------+
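A minimal sketch of how these states might be represented in code; the type and constant names are illustrative, not taken from the IPPS source.

```c
/* Lifecycle states reported in the status field of MS messages
   (names are illustrative). */
typedef enum {
    IPPS_STATE_DOWN,
    IPPS_STATE_UP,
    IPPS_STATE_REGISTERING,
    IPPS_STATE_REGISTERED,
    IPPS_STATE_LOADING,
    IPPS_STATE_LOADED,
    IPPS_STATE_ERROR
} ipps_state_t;

/* String forms matching the status values used in the handshakes below. */
static const char *ipps_state_name[] = {
    "down", "up", "registering", "registered", "loading", "loaded", "error"
};
```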

Below is the bi-directional communication between MS and IPPS:

IPPS -> MS registration:
  • PPP uuid
  • status (state machine): registering, registered, error

MS -> IPPS registration:
  • status (state machine): registering, registered, error

MS -> IPPS configuration:
  • pfring
    o Monitoring network interface(s)
    o Libpcap ruleset
    o Packet hash flow
    o ??
  • L2-L3 rulesets
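A sketch of how the configuration payload above could be held in the daemon; the struct, field names, and sizes are assumptions for illustration, not the actual wire format.

```c
#include <net/if.h>   /* IFNAMSIZ */

/* Illustrative in-memory form of the MS -> IPPS configuration message. */
struct ipps_pfring_config {
    char interfaces[8][IFNAMSIZ];  /* monitored network interface(s) */
    int  num_interfaces;
    char pcap_ruleset[1024];       /* libpcap filter expression */
    char hash_flow[64];            /* packet hash flow selection */
};

struct ipps_config {
    struct ipps_pfring_config pfring;
    char l2_l3_rulesets[4096];     /* L2-L3 rulesets, format TBD */
};
```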

IPPS -> MS configuration:
  • status (state machine): loading, loaded, error

  2. Pfring configuration

Performs kernel-level configuration such as mmap, interface selection, pcap filtering, ring buffers, stream flows, etc.

  3. Thread management and load balancing

These threads are responsible for pulling packets from all interfaces as fast as possible from the ring buffers. They use the pfring library to pull packets from the kernel SKB and perform any necessary memory mapping of the resource into user space. The pfring library is also responsible for segregating network interfaces, ring buffer creation, kernel-user space memory mapping, etc. The ring buffers are mapped to a specific interface in a one-to-many relationship, with each ring buffer consuming packets per flow based on the 4-tuple (src/dst IP address and src/dst MAC address).
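A minimal sketch of per-flow load balancing across worker threads, assuming a simple hash over the 4-tuple described above; the struct layout and hash choice are illustrative, not the actual IPPS implementation.

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative flow key: the 4-tuple used for per-flow load balancing. */
struct flow_key {
    uint32_t src_ip;
    uint32_t dst_ip;
    uint8_t  src_mac[6];
    uint8_t  dst_mac[6];
};

/* Hash the 4-tuple so every packet of a flow lands on the same worker/ring. */
static unsigned pick_worker(const struct flow_key *key, unsigned num_workers)
{
    const uint8_t *p = (const uint8_t *)key;
    uint32_t h = 2166136261u;               /* FNV-1a over the key bytes */
    for (size_t i = 0; i < sizeof(*key); i++) {
        h ^= p[i];
        h *= 16777619u;
    }
    return h % num_workers;
}
```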

In the Figure 1 example, n thread instances pull packets from the interface named "eth0"; n threads may process the packets, with each thread assigned its own stream flow. In the case of network bonding, the bonded interface is broken down into its separate member interfaces.

  4. Packet L2-L3 processing and filtering

Packet L2-L3 processing and filtering functionality (context) exists within each load balancer (worker) thread. The idea is to pipeline incoming packets into a series of packet processing components and filters. As illustrated in Figure 1, packets are dissected and inspected from layer 2 up to layer 3. The filters decide whether a packet will be:
  o dropped, or
  o moved to the Packet Payload Processing (PPP) component (see the sketch below)
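A minimal sketch of handing an interesting packet from IPPS to PPP over the unidirectional ZeroMQ interface named earlier; the endpoint and framing are assumptions for illustration.

```c
#include <zmq.h>

int main(void) {
    void *ctx = zmq_ctx_new();
    /* PUSH socket: unidirectional IPPS -> PPP; endpoint is illustrative. */
    void *to_ppp = zmq_socket(ctx, ZMQ_PUSH);
    zmq_connect(to_ppp, "ipc:///tmp/ipps-to-ppp");

    /* Stand-in for an "interesting" captured packet. */
    const unsigned char pkt[] = { 0xde, 0xad, 0xbe, 0xef };
    /* PPP receives the raw bytes on a matching PULL socket. */
    zmq_send(to_ppp, pkt, sizeof(pkt), 0);

    zmq_close(to_ppp);
    zmq_ctx_term(ctx);
    return 0;
}
```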

Figure 3 (state machine)

The explanation above is summarized in the Figure 3 state machine diagram.

  5. Logging

TBD

Communication handshakes:

Ladder Diagram: TODO
