
Architecture


Packet routing and recorder

Table of Contents

  1. Introduction
  2. Conventions/Requirements
  3. Abbreviations
  4. Summary
  5. Component Overview
  6. Components
  7. Component descriptions
  8. Communication channels
  9. Future work
  10. References

Introduction

This documentation is intended to be brief and to serve only as a high-level architecture description. Please refer to each component's design document for detailed descriptions such as communication message formats, ladder diagrams and state machines. This is a work-in-progress, living document.

Conventions/Requirements

  • Every component should be decoupled from the others. There is no required order for starting/stopping the services; everything should work regardless of start order
  • Micro-service ready: each component must be deployable in a container environment such as Docker, LXC, etc.
  • Each component should use systemd and remain backward compatible with SysVinit
  • Use cases: (passive) packet recorder using Ceph and (active) transparent OTT video caching
  • The Management Server (MS) is out of scope for this project and will not be implemented

Abbreviations

TBD

Summary

Madeline is an out-of-band/in-band inline (real-time) packet router/recorder. Packets are decoded and dissected from layer 2 to layer 4 while the payload is left untouched and pushed from one service to another in a pipelined manner. Madeline also performs intelligent per-flow routing/load balancing and filtering of packets taken from the Linux socket kernel buffer (SKB), bypassing the Linux network stack or the physical NIC. The architecture allows each of the packet processing components (IPPS, PPP and PHS) to act as a standalone process, with the user free to chain them together as needed in "active" and "passive" modes of operation; a minimal sketch of the pipelining model follows the list below.

  • In active mode, it can perform OTT video caching (MPEG-DASH, Apple HLS, etc.) based on regular-expression or IP address/port tuple rulesets by injecting data into the packet payload.
  • In passive mode, it can perform traffic recording using an object store as the storage backend.
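
To make the pipelined, chainable design concrete, the following is a minimal sketch (assuming ZeroMQ PUSH/PULL sockets as the transport; the endpoint address and stage logic are illustrative, not Madeline's actual wiring) of how two standalone stages can be chained so that payloads flow from one service to the next untouched.

    # Two pipeline stages chained over ZeroMQ PUSH/PULL.
    # The endpoint "tcp://127.0.0.1:5555" and the stage logic are assumptions
    # for illustration only, not Madeline's actual wiring.
    import zmq

    def upstream_stage(packets):
        ctx = zmq.Context.instance()
        push = ctx.socket(zmq.PUSH)
        push.bind("tcp://127.0.0.1:5555")
        for pkt in packets:
            push.send(pkt)                      # payload forwarded untouched

    def downstream_stage():
        ctx = zmq.Context.instance()
        pull = ctx.socket(zmq.PULL)
        pull.connect("tcp://127.0.0.1:5555")
        while True:
            pkt = pull.recv()
            # the next stage would record, filter or inject here
            print(len(pkt), "bytes received")

Because each stage only owns a socket endpoint, additional IPPS/PPP/PHS instances can be inserted into or removed from the chain without the other stages needing to know about them.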

Component Overview

Figure 1: Component overview

Components

  1. RXTX abstraction library (RXTXAL)
  2. Interface Packet Processing Service (IPPS)
  3. Packet Payload Processing (PPP)
  4. Packet Handling Service (PHS)
  5. Management Server (MS) (example only, not to be implemented)

Component descriptions:

1. RXTX abstraction library (RXTXAL)

RXTXAL is a DPI translation and abstraction layer between the physical device and the application layer (IPPS, PPP, PHS, etc.). PF_RING, DPDK and ZeroMQ are a few of the libraries being abstracted. The purpose of RXTXAL is to allow the application layer to pull packets from the SKB and/or send packets to the appropriate logical interface. In the diagram above, the RXTXAL layer exists as the input and output layer for sending and receiving packets for all components.
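
As an illustration of what such an abstraction might look like, below is a hedged sketch of a backend-neutral receive/send interface; the class and method names (RxTxBackend, recv_pkt, send_pkt) are assumptions, not the real RXTXAL symbols, and only a plain AF_PACKET backend is shown where PF_RING or DPDK backends would otherwise plug in.

    # Rough sketch of a backend-neutral RX/TX interface; all names here are
    # hypothetical, not the actual RXTXAL API.
    import socket
    from abc import ABC, abstractmethod

    class RxTxBackend(ABC):
        @abstractmethod
        def recv_pkt(self) -> bytes:
            """Pull the next raw packet from the underlying source (SKB, NIC, ring)."""

        @abstractmethod
        def send_pkt(self, pkt: bytes) -> None:
            """Push a raw packet out through the appropriate logical interface."""

    class AfPacketBackend(RxTxBackend):
        """Plain AF_PACKET backend; PF_RING/DPDK/ZeroMQ backends would plug in the same way."""

        def __init__(self, ifname: str):
            # ETH_P_ALL = 0x0003 captures every protocol on the interface (requires root)
            self.sock = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(0x0003))
            self.sock.bind((ifname, 0))

        def recv_pkt(self) -> bytes:
            return self.sock.recv(65535)

        def send_pkt(self, pkt: bytes) -> None:
            self.sock.send(pkt)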

2. Interface Packet Processing Service (IPPS):

The main purpose of IPPS is to process L2-L3 packets, with filtering performed through RXTXAL. The first stage of filtering is done at the packet layer by specifying, for example, libpcap syntax (or another filtering syntax). Load balancing of packets is based on a 4-tuple (src/dst MAC address, src/dst IP address) and is also performed through RXTXAL; a small sketch follows.
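
For the first filtering stage, a libpcap filter string such as "ip and tcp port 80" could be handed down through RXTXAL. The load-balancing step could look roughly like the sketch below; the hashing scheme and worker count are illustrative assumptions, not the actual IPPS logic.

    # Hash-based load balancing over the (src MAC, dst MAC, src IP, dst IP) tuple,
    # so that every packet of a flow lands on the same worker. Untagged IPv4
    # Ethernet frames are assumed (no VLAN tag); the scheme is illustrative only.
    import struct
    import zlib

    NUM_WORKERS = 4

    def pick_worker(frame: bytes) -> int:
        dst_mac, src_mac = frame[0:6], frame[6:12]
        ethertype = struct.unpack("!H", frame[12:14])[0]
        if ethertype != 0x0800:                # non-IPv4 traffic falls back to worker 0
            return 0
        src_ip = frame[26:30]                  # IPv4 header starts at offset 14, src IP at +12
        dst_ip = frame[30:34]
        key = src_mac + dst_mac + src_ip + dst_ip
        return zlib.crc32(key) % NUM_WORKERS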

3. Packet Payload Processing (PPP):

PPP filters packets by parsing, dissecting and applying regex operations to the packet payload (L4 payload) in user space. It performs regex matching based on the given configuration and determines whether the packet needs to be passed to the next pipeline stage. It is extensible through plugins used for parsing and filtering operations such as HTTP, SIP, etc. Examples of PPP packet filtering plugin operations (a minimal sketch follows this list):

  • above L4 decoding
  • above L4 packet filtering
  • session correlation
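
The sketch below shows the kind of regex-based payload filtering such a plugin performs; the plugin interface (a single filter function with a pass_to_next_stage callback) and the HTTP pattern are assumptions for illustration, not PPP's actual plugin API.

    # Regex filter over the L4 payload; the packet is forwarded only on a match.
    # The pattern and the pass_to_next_stage callback are illustrative assumptions.
    import re

    HTTP_GET_RE = re.compile(rb"^GET\s+\S+\.(mpd|m3u8|ts|m4s)\s+HTTP/1\.[01]", re.IGNORECASE)

    def payload_filter(l4_payload: bytes, pass_to_next_stage) -> bool:
        """Return True (and forward the packet) if the payload matches the ruleset."""
        if HTTP_GET_RE.search(l4_payload):
            pass_to_next_stage(l4_payload)
            return True
        return False

    # Example: an HTTP GET for an MPEG-DASH manifest is forwarded to the next stage.
    payload_filter(b"GET /video/stream.mpd HTTP/1.1\r\nHost: example.com\r\n\r\n",
                   lambda p: print("forwarded", len(p), "bytes"))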

4. Packet Handling Service (PHS):

PHS injects and routes packets as specified. The modes of operation illustrate that PHS can serve as a packet router with a storage backend consuming the packets, rather than only as part of a transparent caching solution. In the caching case, when packets of interest are received from PPP, packet injection informs the video server to route the traffic to the caching server while at the same time stopping the current traffic flow. PHS is extensible through plugins used for operations such as HTTP packet injection, a Ceph object storage interface, etc. Examples of PHS packet injection plugin operations (a hedged sketch follows this list):

  • TCP RST packet generation
  • Redirection packet generation
  • Raw packet vector injection
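
As a hedged example of the first operation above, a TCP RST segment for an observed flow could be generated and sent roughly as follows; the addresses, ports and sequence number are placeholders, and this is a sketch rather than PHS's actual plugin code.

    # Build and send a TCP RST segment over a raw socket (requires CAP_NET_RAW/root).
    # All flow parameters below are placeholders for illustration only.
    import socket
    import struct

    def checksum(data: bytes) -> int:
        # Standard 16-bit one's-complement checksum used by IP/TCP.
        if len(data) % 2:
            data += b"\x00"
        total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
        total = (total >> 16) + (total & 0xFFFF)
        total += total >> 16
        return ~total & 0xFFFF

    def build_tcp_rst(src_ip, dst_ip, src_port, dst_port, seq):
        # 20-byte TCP header with only the RST flag set, no payload.
        offset_flags = (5 << 12) | 0x04        # data offset 5 words, RST bit
        tcp = struct.pack("!HHLLHHHH",
                          src_port, dst_port, seq, 0,
                          offset_flags, 0, 0, 0)
        # Pseudo-header (src, dst, zero, protocol, TCP length) for the checksum.
        pseudo = struct.pack("!4s4sBBH",
                             socket.inet_aton(src_ip), socket.inet_aton(dst_ip),
                             0, socket.IPPROTO_TCP, len(tcp))
        csum = checksum(pseudo + tcp)
        return tcp[:16] + struct.pack("!H", csum) + tcp[18:]

    # Reset a hypothetical flow 10.0.0.1:80 -> 10.0.0.2:12345; the kernel adds the IP header.
    segment = build_tcp_rst("10.0.0.1", "10.0.0.2", 80, 12345, seq=1000)
    raw = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_TCP)
    raw.sendto(segment, ("10.0.0.2", 0))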

In the transparent caching case, the caching server downloads the content the first time it sees the video traffic by requesting the content from the origin server. After that, the caching server simply replays the traffic from its own cache.

5. Management Server (MS)

MS manages configuration, registration and other management-related operations. It serves as the central REST API entry point, performing the following operations (a hypothetical registration call is sketched after this list):

  • Configuration management
  • Endpoint registration and discovery
  • Health-wellness
  • Resource statistics
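
Since MS itself is only an example, the following is a purely hypothetical sketch of what an endpoint-registration call against such a REST API could look like; the host, port, path and JSON fields are invented for illustration.

    # Hypothetical endpoint registration against an MS REST API.
    # Host, port, path and JSON schema are illustrative assumptions only.
    import http.client
    import json

    body = json.dumps({"component": "ipps-01", "role": "IPPS", "status": "up"})
    conn = http.client.HTTPConnection("ms.example.local", 8080)
    conn.request("POST", "/api/v1/register", body,
                 headers={"Content-Type": "application/json"})
    resp = conn.getresponse()
    print(resp.status, resp.read())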

Communication channels:

1. Low-performance communication through AMQP (RabbitMQ or Kafka)

  • Configuration
  • Registration/discovery
  • Health-wellness
  • Resource statistics

Inter-Process Communication (IPC) channels:

  • IPPS communicates with the MS through AMQP (RabbitMQ or Kafka). The communication channel is bi-directional.
  • PPP communicates with the MS through AMQP (RabbitMQ or Kafka). The communication channel is bi-directional.
  • PHS communicates with the MS through AMQP (RabbitMQ or Kafka). The communication channel is bi-directional (a minimal publishing sketch follows this list).
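
As a hedged sketch, publishing a health-wellness message to RabbitMQ with the pika client could look like the following; the queue name and message body are assumptions, not the project's actual message format.

    # Publish a health-wellness message to RabbitMQ.
    # Queue name and payload are illustrative assumptions only.
    import json
    import pika

    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = conn.channel()
    channel.queue_declare(queue="madeline.health")
    channel.basic_publish(exchange="",
                          routing_key="madeline.health",
                          body=json.dumps({"component": "ipps-01", "state": "healthy"}))
    conn.close()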

2. High-performance communication through RXTXAL

  • PF_RING
  • DPDK
  • ZeroMQ

3. Management communication through REST API

Users issue REST commands to the management server.

Future work

  • In-band solution
  • Web-caching server adapters (nginx, httpd, Apache Traffic Server, etc.)
  • HTTPS (TLS) packet capture with a datastore/OpenStack backend
  • Network analytics

References