
Signalogic streaming resource functions (SRF) software description, reference apps, demos, source codes, and SDK for telecom media, lawful intercept, speech recognition, and HPC applications


Table of Contents

SigSRF Overview
    Platforms Supported
    Telecom Mode
        Telecom Mode Data Flow Diagram
    Analytics Mode
        Analytics Mode Data Flow Diagram
    Stream Groups
    Multithreaded for High Performance
    Deployment Grade
    Software and I/O Architecture Diagram
    Packet and Media Processing Data Flow Diagram
    When "Software Only" is Not Enough
Demo and SDK Download
    Test File Notes
    Install Notes
    Run Notes
        mediaMin and mediaTest (streaming media, transcoding, speech recognition, waveform file and USB audio processing, and more)
        iaTest (image analytics)
        paTest (predictive analytics from log data)
Documentation, Support, and Contact

SigSRF Overview

The SigSRF (Streaming Resource Functions) SDK introduces a scalable approach to telecom, media, HPC, and AI servers. The basic concept is to scale between cloud, private cloud, and edge/IoT servers, while maintaining a cloud programming model.

The primary objectives of SigSRF software are:

  • provide high performance software modules for telecom, media, AI (deep learning), and analytics streaming applications
  • provide both telecom and analytics modes for (i) telecom and CDN applications and (ii) data analytics and web IT applications
  • maintain a deployment grade solution. All modules and sources have been through customer acceptance testing
  • scale up without GPU if needed, and provide high capacity, "real-time at scale" streaming and processing
  • scale down without ARM if needed, and provide IoT and Edge solutions for SWaP² constrained applications
  • maintain full program compatibility with cloud servers, including open source software support, server architectures, latest programming languages, etc.


SigSRF software is currently deployed in the following application areas:

  • Session Border Controller (SBC)
  • Media Gateway
  • Lawful Intercept (LI)
  • RTP Decoder (EVS, AMR, G729, MELPe, etc)
  • Network Analyzers
  • Satcom and HF Radio Speech Compression
  • R&D Labs and Workstations

Platforms Supported

SigSRF software is designed to run on (i) private, public, or hybrid cloud servers and (ii) embedded system servers. Demos available on this page and the mediaTest/mediaMin demo pages are intended to run on any Linux server based on x86, ARM, and PowerPC, and on form-factors as small as mini- and micro-ITX.

SigSRF supports media delivery, transcoding, deep learning¹, OpenCV, speech recognition¹, and other calculation- and data-intensive applications. High capacity operation exceeding 2000 concurrent sessions is possible on multicore x86 servers. The High Capacity Operation section in SigSRF Documentation has information on thread affinity, htop verification, Linux guidelines, etc.

For applications facing SWaP², latency, or bandwidth constraints, SigSRF software supports a wide range of coCPU™ and SoC embedded device targets while maintaining a cloud compatible software architecture (see When "Software Only" is Not Enough below).

¹ In progress
² SWaP = size, weight, and power consumption

Telecom Mode

Telecom mode is defined as direct handling of IP/UDP/RTP traffic. This mode is sometimes also referred to as “clocked” mode, as a wall clock reference is required for correct jitter buffer operation. Examples of telecom mode applications include network midpoints such as SBC (Session Border Controller) and media gateway, and endpoints such as handsets and softphones. Typically telecom applications have hard requirements for real-time performance and latency.

Telecom Mode Data Flow Diagram

SigSRF software telecom mode data flow diagram

Analytics Mode

Analytics mode is defined as indirect handling of IP/UDP/RTP traffic, where traffic is encapsulated or "one step removed", having been captured, copied, or relayed from direct traffic for additional processing. This mode is sometimes also referred to as data driven or “clockless” mode, the latter description referring to jitter buffer packet processing either wholly or partially without a wall clock reference. In general, analytics mode applications operate after real-time traffic has already occurred, although it may be incorrect to say "non-real-time" as they may need to reproduce or emulate the original real-time behavior. Examples of analytics mode include Lawful Intercept (LI) and web IT data analytics such as speaker identification and automatic speech recognition (ASR).

Analytics Mode Data Flow Diagram

SigSRF software analytics mode data flow diagram

Stream Groups

SigSRF supports the concept of "stream groups", allowing multiple streams to be grouped together for additional processing. Examples include merging conversations for Lawful Intercept applications, conferencing, and identifying and tagging different individuals in a conversation (sometimes referred to as "diarization").

Multithreaded for High Performance

SigSRF library modules support multiple concurrent packet + media processing threads. Session-to-thread allocation modes include linear, round-robin, and "whole group" in the case of stream groups. Thread stats include profiling, performance, and session allocation. Threads support an optional "energy saver" mode, entered after a specified period of inactivity. The SigSRF packet/media thread data flow diagram below shows per-thread data flow.

High capacity operation exceeding 2000 concurrent sessions is possible on multicore x86 servers. The High Capacity Operation section in SigSRF Documentation has information on thread affinity, htop verification, Linux guidelines, etc.

Deployment Grade

SigSRF software is currently deployed by major carriers, LEAs, research organizations, and B2B enterprises. Under NDA, and with end customer permission, it may be possible to provide more information on deployment locations.

SigSRF software, unlike many open source repositories, is not experimental or prototype, and is constantly going through rigorous customer production testing. Some of the signal processing modules have deployment histories dating back to 2005, including telecom, communications, and aviation systems. Packet processing modules include some components dating back to 2010, such as jitter buffer and some voice codecs. The origins of SigSRF software are in telecom system deployment, with emphasis in the last few years on deep learning.

For calculation-intensive shared library components, such as codecs, signal processing, and inference, SigSRF implements the XDAIS standard made popular by Texas Instruments. XDAIS was designed to manage shared resources and conflict between calculation- and memory-intensive algorithms. Originally XDAIS was intended by TI to help produce robust, reliable software on highly resource-constrained embedded platforms. It continues to help achieve that on today's modern Linux servers.

In addition to customer production testing, stress tests are always ongoing in Signalogic lab servers. New releases must pass 672 hours (4 weeks) of continuous stress test at full capacity, running on HP DL380 series servers. For more information on these tests, and Linux configuration used for high capacity operation, see SigSRF Documentation below.

SigSRF Software and I/O Architecture Diagram

Below is a SigSRF software and I/O architecture diagram.

SigSRF software and streaming I/O architecture diagram

SigSRF Packet and Media Processing Data Flow Diagram

Below is a SigSRF software streaming packet and media processing data flow diagram. This is an expansion of the telecom mode and analytics mode data flow diagrams above, including shared library APIs used within a packet/media thread.

In addition to the APIs referenced below, SigSRF offers a simplified set of APIs that minimize user applications to session create/delete and packet push/pull. mediaMin and mediaTest are the reference applications for the minimum API level and more detailed level, respectively. Source code is published for both.

SigSRF streaming packet and media processing data flow diagram

Some notes about the above data flow diagram:

  1. Data flow matches mediaTest application C source code (packet_flow_media_proc.c). Subroutine symbols are labeled with pktlib, voplib, and alglib API names.

  2. A few areas of the flow diagram are somewhat approximated, to simplify and make them easier to read. For example, loops do not have "for" or "while" flow symbols, and some APIs, such as DSCodecEncode() and DSFormatPacket(), appear in the flow once but may actually be called multiple times, depending on which signal processing algorithms are in effect.

  3. Multisession. The "Input and Packet Buffering", "Packet Processing", and "Media Processing and Output" stages are per-session, and repeat for multiple sessions. See Session Config for more info.

  4. Multichannel. For each session, the "Input and Packet Buffering", "Packet Processing", and "Media Processing and Output" stages of data flow are multichannel and optimized for high capacity channel processing.

  5. Multithreaded. A copy of the above diagram runs per thread, with each thread typically consuming one CPU core in high performance applications.

  6. Media signal processing and inference. The second orange vertical line divides the "packet domain" and "media domain". DSStoreStreamData() and DSGetStreamData() decouple these domains in the case of unequal ptimes. The media domain contains raw audio or video data, which allows signal processing operations such as sample rate conversion, conferencing, filtering, echo cancellation, and convolutional neural network (CNN) classification to be performed. This is also where image and voice analytics take place, for instance by handing video and audio data off to another process.

When "Software Only" is Not Enough

Cloud solutions are sometimes referred to as "software only", but that's an Intel marketing term. In reality there is no software without hardware. With the recent surge in deep learning / neural net chips attempting to emulate human intelligence -- and the ultra energy efficiency of the human brain -- hardware limitations have never been more apparent. In addition to AI, a wide range of HPC applications face hardware constraints. For 30 years people have tried and failed to solve this with generic x86 processors, and it isn't likely to happen any time soon.

One promising solution is heterogeneous (mixed) cores that "front" incoming data and perform calculation intensive processing, combined with x86 cores that perform general processing. The basic concepts are (i) move calculation intensive processing closer to the data, and (ii) use cores that are extremely energy efficient for data calculation purposes. To enable mixed core processing, SigSRF supports coCPU™ technology, which adds NICs and up to 100s of coCPU cores to scale per-box streaming and performance density. Examples of coCPU cores include GPU, neural net chips, and Texas Instruments multicore CPUs. coCPUs can turn conventional 1U, 2U, and mini-ITX servers into high capacity, energy efficient edge servers for media, HPC, and AI applications, solving SWaP¹, latency, and bandwidth constraints. For example, an embedded AI server can operate independently of the cloud, acquiring new data and learning on the fly.

Available media processing and image analytics demos can make use of optional coCPU cards containing Texas Instruments c66x multicore CPUs (the demo programs will auto-discover coCPU hardware if installed -- coCPU hardware is not required for any of the demos). Besides TI, the expectation is there will soon be additional, suitable multicore CPU cards due to the explosion in deep learning applications, which is driving new chip and card development. For the time being, c66x series CPUs, although implemented in 45 nm, still provide a huge per-box energy efficiency advantage for applications with high amounts of convolution, FFT, and matrix operations.

When used with coCPUs, SigSRF supports concurrent multiuser operation in a bare-metal environment. In a KVM + QEMU virtualized environment, cores and network I/O interfaces appear as resources that can be allocated between VMs. VM and host users can also share resources, as the available pool of cores is handled by a physical layer back-end driver. This flexibility allows media, HPC, and AI applications to scale between cloud, enterprise, and remote vehicle/location servers.

¹ SWaP = size, weight, and power consumption

Demo and SDK Download

There are two (2) options for SigSRF SDK and demo download: (i) RAR package and install script, or (ii) Docker container. The SDK contains:

  1. A limited eval / demo version of SigSRF libraries and executables, including media packet streaming and decoding, media transcoding, image analytics, and H.264 video streaming (ffmpeg acceleration). For notes on demo limits, see Demos below.

  2. Makefiles and C/C++ source code for:

    • media/packet real-time threads, including API usage for packet queue receive/send, jitter buffer add/retrieve, codec decode/encode, stream group processing, and packet diagnostics
    • reference applications, including API usage for session create/modify/delete, packet push/pull, and event and packet logging. Also includes static and dynamic session creation, RTP stream auto-detect, packet pcap and UDP input
    • stream group output audio processing, user-defined signal processing
  3. Concurrency examples, including stream, instance, and multiple user

All demos run on x86 Linux servers.

For servers augmented with a coCPU card, the mediaTest and iaTest demos will utilize coCPU cards if found at run-time (coCPU drivers and libs are included in the demo .rar files). Example coCPU cards are shown here, and can be obtained from TI, Advantech, or Signalogic.

Test File Notes

Several pcap and wav files are included in the default install, providing input for example command lines. After these are verified to work, user-supplied pcaps, UDP input, and wav files can be used.

Additional advanced pcap examples are also available, including:

  • multiple streams with different LTE codecs and DTX configurations
  • multiple RFC8108 streams (SSRC transitions)
  • sustained packet loss (up to 10%), both media and SID
  • call gaps
  • media server playout packet rate variation (up to +/-10%)
  • sustained packet rate mismatches between streams
  • dormant SSRC ("stream takeover")
  • RFC7198 (temporal packet duplication)

For these pcaps, the "advanced pcap" .rar file must also be downloaded. This rar is password protected; to get the password please register with Signalogic (either from the website homepage or through e-mail). Depending on the business case, a short NDA covering only the advanced pcaps may be required. These restrictions are in place as these pcaps were painstakingly compiled over several years of deployment and field work; they provide an advanced test suite our competitors don't have. If you already have multistream pcaps, the demo will process them without limitation. Depending on your results, you may want the Signalogic pcap examples for comparison.

Example command lines for both the default set of pcaps and wav files and advanced pcaps are given on the mediaMin and mediaTest demo page.

Install Notes

Separate RAR packages are provided for different Linux distributions. Please choose the appropriate one or closest match. For some files, the install script will auto-check for kernel version and Linux distro version to decide which file version to install.

To download the install script and one or more rar files directly from Github (i.e. without checking out a clone repository), use the following commands:

> wget -O- | tr -d '\r' >
> wget

where "distroNN" is the Linux distro and version and "date" is the package date.

All .rar files and the install script should be downloaded to the same folder.

Note that the install script checks for the presence of the unrar package, and if not found attempts to install it. There may be some additional prompts depending on the Linux version.

Sudo Privilege

The install script requires sudo root privilege. In Ubuntu, allowing a user sudo root privilege can be done by adding the user to the “administrators” group (as shown here). In CentOS a user can be added to the “/etc/sudoers” file (as shown here). Please make sure this is the case before running the script.

Running the Install Script

To run the install script enter:

> source

The script will then prompt as follows:

1) Host
2) VM
Please select target for coCPU software install [1-2]:

Host is the default. Selecting VM causes additional resource management to be installed that's needed if host and guest share DirectCore resources. If you are running in a container, either case still applies. After choosing either Host or VM, the script will next prompt for an install option:

1) Install SigSRF Software
2) Install SigSRF Software with coCPU Option
3) Uninstall SigSRF Software
4) Check / Verify SigSRF Software Install
5) Exit
Please select install operation to perform [1-4]:

If install operation 1) is selected, the script will prompt for an install path:

Enter the path for SigSRF software installation:

If no path is entered the default path is /usr/local. Do not enter a path such as "Signalogic" or "/home/Signalogic" as during the install a "Signalogic" symlink is created for the base install folder, which would conflict. Here are a few possible install path examples:


If needed, the Check / Verify option can be selected to generate a log for troubleshooting and tech support purposes.

Building Demo Applications

Demo application examples are provided as executables, C/C++ source code, and Makefiles. Executables should run as-is, but if not (due to Linux distribution or kernel differences), they can be rebuilt using gcc and/or g++. To allow this, the install script checks for the presence of the following run-time and build related packages: gcc, ncurses, lib-explain, and redhat-lsb-core (RedHat and CentOS) or lsb-core (Ubuntu). These are prompted for and installed if not found.

Run Notes

Available demos are listed below. The iaTest and paTest demos do not have a functionality limit. mediaMin and mediaTest demo functionality is limited as follows:

  1. Data limit. Processing is limited to 100,000 frames / payloads of data. There is no limit on data sources, which include various file types (audio, encoded, pcap), network sockets, and USB audio.

  2. Concurrency limit. The maximum number of concurrent instances is 2 and the maximum number of channels per instance is 2 (total of 4 concurrent channels).

If you would prefer an evaluation demo with increased concurrency limits for a trial period, contact us. This requires a business case and possibly an NDA.

mediaMin and mediaTest Demos

The mediaMin and mediaTest demo page gives example command lines for streaming media, transcoding, speech recognition, waveform file and USB audio processing, and more.

mediaMin is a production reference application that uses a minimum set of APIs (session create/delete, packet push/pull) and can handle a wide range of RTP audio packet inputs. mediaTest targets test and measurement functionality and accepts USB and wav file audio input. Some things you can do with mediaMin and mediaTest demo command lines:

  • transcode between pcaps, for example EVS to AMR-WB, AMR-NB to G711, etc.
  • "AMR Player", play an AMR pcap (either AMR-WB or AMR-NB)
  • "EVS Player", play an EVS pcap
  • transcode multistream pcaps and merge all streams together into one output audio (for voice pcaps, this generates a "unified conversation")
  • Kaldi speech recognition on pcaps or audio files (ASR, 200k word vocabulary)
  • test codecs and compare output vs. 3GPP or ITU reference files¹
  • insert user-defined signal processing or inference into the real-time data flow
  • input and output .wav file and other audio format files
  • input and output USB audio
  • test and measure packet RFCs, jitter buffer, packet loss and other stats, and more

For both mediaMin and mediaTest, reference application C/C++ source code is included. The demos are based on deployed production code used in high capacity, real-time applications. Performance measurements can be made that are accurate and competitive with other commercially available software.

¹ Includes non-3GPP and non-ITU codecs such as MELPe

iaTest Demo

The iaTest demo page gives example command lines for image analytics and OpenCV testing. The iaTest demo performs image analytics operations vs. example surveillance video files and allows per-core performance measurement and comparison for x86 and coCPU cores. .yuv and .h264 file formats are supported. Application C/C++ source code is included.

paTest Demo

The paTest demo page gives example command lines for a predictive analytics application that applies algorithms and deep learning to continuous log data in order to predict failure anomalies. Application Java and C/C++ source code is included.

Documentation, Support, and Contact

SigSRF Software Documentation

SigSRF documentation, including Quick Start command line examples, High Capacity Operation, API Usage, and other sections is located at:

    SigSRF Documentation

coCPU Users Guide

The coCPU User Guide provides information about coCPU hardware and software installation, test and demo applications, build instructions, etc.

Technical Support / Questions

Limited tech support for the SigSRF SDK and coCPU option is available from Signalogic via e-mail and Skype. You can ask for group skype engineer support using Skype Id "signalogic underscore inc" (replace with _ and no spaces). For e-mail questions, send to "info at signalogic dot com".

