Josh Blum edited this page Aug 27, 2013 · 15 revisions

This page contains a brief summary of each of the various features in GRAS. Each summary section contains a link to greater detail and code examples.


One of the big goals of GRAS was to be intelligent about memory. This means minimizing runtime allocation overhead, providing zero-copy access whenever possible, and allowing the scheduler's buffers to be customized. The backbone of GRAS's buffering is a smart buffer: reference counted, reusable, and highly configurable. A smart buffer can represent a slab of memory from malloc, a chunk of memory from a doubly mapped buffer, or even memory from a DMA device.
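The idea can be sketched in a few lines of Python. This is a conceptual illustration only, not the GRAS implementation: the names `SmartBuffer` and `BufferPool` are hypothetical, and the real smart buffer is a C++ object. The sketch shows the two properties described above: reference counting and recycling back to a pool instead of freeing.

```python
# Illustrative sketch (not GRAS's actual API): a reference-counted,
# reusable buffer in the spirit of GRAS's smart buffers.

class SmartBuffer:
    """Wraps a chunk of memory; when the last reference is released,
    the memory is recycled to its pool instead of being freed."""
    def __init__(self, mem, on_release):
        self._mem = memoryview(mem)    # zero-copy view of the underlying slab
        self._refs = 1
        self._on_release = on_release  # hypothetical recycle hook

    def get(self):
        self._refs += 1
        return self

    def release(self):
        self._refs -= 1
        if self._refs == 0:
            self._on_release(self._mem.obj)  # recycle, don't free

class BufferPool:
    """Hands out SmartBuffers backed by pre-allocated slabs."""
    def __init__(self, num_buffs, size):
        self._free = [bytearray(size) for _ in range(num_buffs)]

    def acquire(self):
        slab = self._free.pop()  # no runtime allocation on the hot path
        return SmartBuffer(slab, self._free.append)

pool = BufferPool(num_buffs=2, size=4096)
buf = pool.acquire()
ref = buf.get()    # a downstream consumer holds a second reference
buf.release()
ref.release()      # the last release returns the slab to the pool
```

Because the slab outlives any single use and is only recycled on the final release, the same mechanism can wrap malloc'd memory, doubly mapped memory, or DMA memory, with only the recycle hook changing.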

DMA device support

The new memory model allows devices that supply their own memory buffers to be easily integrated into the scheduler. There is no need to copy between DMA and scheduler memory.

Passive workflows

A user will often decompose a task into several sub-tasks, each implemented as its own block, either to parallelize the task or to separate it out conceptually. Some of these blocks, however, do not mutate samples: they passively forward the samples, modifying only metadata. Others may need to move samples between a stream and a packet domain. GRAS allows users to implement these passive workflows without incurring unnecessary memory copies.
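The zero-copy forwarding idea can be illustrated with a standard-library `memoryview` (a sketch, not the GRAS mechanism; `tag_block` is a hypothetical name): the output refers to the same underlying buffer as the input, so only the metadata is new.

```python
# Sketch of a passive block: samples are forwarded by reference,
# so only the metadata is touched -- no sample copy is made.

def tag_block(samples, metadata):
    """Forward samples untouched; annotate metadata only."""
    view = memoryview(samples)            # zero-copy reference to the input
    metadata = dict(metadata, tagged=True)
    return view, metadata

data = bytearray(b"\x01\x02\x03\x04")
out, meta = tag_block(data, {"rate": 1e6})
assert out.obj is data                    # same underlying buffer, no copy
```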


Polymorphic container

The PMC library is the keystone of the messaging and callable APIs. PMC allows users to pass any data type between blocks. When coding blocks in Python, users can use native Python types; in C++, users can use native C++ and STL types. Any custom data type can also be used and bridged transparently into Python using PMC's registry feature. In addition, PMC is minimally invasive to the user's code and flows naturally with the language in use.
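The registry idea can be sketched as follows. This is a conceptual model only: the function names (`pmc_register`, `pmc_make`, `pmc_to_native`) and the `Complex16` type are hypothetical, and the real PMC registry bridges C++ types into Python rather than operating purely in Python.

```python
# Conceptual sketch of a PMC-style registry (hypothetical names):
# arbitrary types travel in one polymorphic container and are
# converted back to native types on the other side.

_registry = {}

def pmc_register(cls, to_pmc, from_pmc):
    """Teach the container how to pack/unpack a custom type."""
    _registry[cls] = (to_pmc, from_pmc)

class PMC:
    """The one polymorphic container every port and call accepts."""
    def __init__(self, kind, payload):
        self.kind, self.payload = kind, payload

def pmc_make(obj):
    to_pmc, _ = _registry[type(obj)]
    return PMC(type(obj), to_pmc(obj))

def pmc_to_native(p):
    _, from_pmc = _registry[p.kind]
    return from_pmc(p.payload)

# A custom type becomes transportable once registered:
class Complex16:
    def __init__(self, re, im):
        self.re, self.im = re, im

pmc_register(Complex16, lambda c: (c.re, c.im), lambda t: Complex16(*t))

p = pmc_make(Complex16(3, 4))   # wrap for transport
c = pmc_to_native(p)            # recover the native type downstream
```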

Message passing

Any input port can receive asynchronous messages from an upstream port, and any output port can pass asynchronous messages to a downstream port. This is a sort of duck typing for a block's IO ports: ports can carry streams, packets, or really any type of message. Users decide how ports are used in whatever way makes sense for the topology.
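A minimal sketch of the port model (hypothetical names, not the GRAS API): the same posting path carries any Python object, so whether a port speaks "packets" or "streams" is a convention between the connected blocks, not a property of the port.

```python
# Sketch: an output port delivers any message type to the connected
# input port's queue -- dicts, bytes, or any other object.
import queue

class InputPort:
    def __init__(self):
        self._q = queue.Queue()
    def pop(self):
        return self._q.get()

class OutputPort:
    def __init__(self):
        self._subscribers = []
    def connect(self, in_port):
        self._subscribers.append(in_port)
    def post(self, msg):
        for port in self._subscribers:
            port._q.put(msg)    # any object rides the same path

tx, rx = OutputPort(), InputPort()
tx.connect(rx)
tx.post({"type": "packet", "len": 128})  # a "packet" message...
tx.post(b"\x00\x01")                     # ...or raw stream bytes
```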


Theron concurrency library

GRAS is implemented on top of the Theron concurrency library, which implements the Actor model. Theron is the real scheduler, handling all task dispatching and inter-thread communication. All complications of spawning threads, locking, and synchronization are taken care of by Theron.

Theron is under ongoing development as well. Theron version 6 will offer configurable schedulers for hand-tuned backoff and notification mechanisms.

Inherent thread safety

When used properly, the Actor model means that thread safety is inherent to the design. All topology access is implemented over thread-safe messaging to a Block’s internal actor.
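The actor pattern behind this guarantee can be sketched in Python (Theron is C++; this is the pattern, not Theron's API): all mutable state is owned by one thread and changed only in response to messages, so no locks guard the block's state even when many threads send to it.

```python
# Minimal actor sketch: state is touched only by the actor's own
# thread, one message at a time -- thread safety without locks.
import queue
import threading

class Actor:
    def __init__(self):
        self._mailbox = queue.Queue()
        self.count = 0    # private state, mutated only inside _run
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def _run(self):
        while True:
            msg = self._mailbox.get()
            if msg is None:
                return
            self.count += msg    # serialized: no race is possible

    def send(self, msg):         # safe to call from any thread
        self._mailbox.put(msg)

    def stop(self):
        self._mailbox.put(None)
        self._thread.join()

actor = Actor()
for _ in range(1000):
    actor.send(1)
actor.stop()
```

Topology access in GRAS works the same way: a configuration request is just another message in the block actor's mailbox, handled in sequence with the work it interleaves with.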

NUMA and CPU affinity

Hooks from the Theron threading framework are exposed to affinitize thread pools to a group of processors or a group of NUMA nodes. Users can also enforce the affinity of memory allocations to a particular NUMA node.
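For a feel of what CPU affinity means at the OS level, the Python standard library exposes the underlying Linux calls directly (this illustrates the concept only; GRAS's hooks go through Theron thread pools, and the calls below are Linux-specific `os`-module functions):

```python
# Sketch: pin the current process to a single CPU (Linux only).
import os

if hasattr(os, "sched_getaffinity"):               # Linux-only calls
    allowed = os.sched_getaffinity(0)              # CPUs we may run on
    try:
        os.sched_setaffinity(0, {min(allowed)})    # pin to one CPU
        pinned = os.sched_getaffinity(0)
        os.sched_setaffinity(0, allowed)           # restore original set
    except OSError:
        pinned = None
else:
    pinned = None    # platform without the Linux affinity syscalls
```

Pinning a pool of worker threads this way keeps them close to the caches (and, with NUMA-aware allocation, the memory) they operate on.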

Topology advances

Actor topology

With the actor topology library we get thread-safe, dynamic reconfiguration of the flowgraph. Connections can literally be created and destroyed while the flowgraph executes.

Fixed issues

The actor topology library cleanly separates the topology from the design of the scheduler. This enabled the library to address many of the topological issues found in GNU Radio.

Enhanced API

Redesigned interface

The current GNU Radio scheduler has seen a lot of organic growth over the years. The GRAS API is an attempt to consider every feature, both old and new, and to implement a minimalist interface that best exposes each feature so that it is syntactically beautiful and natural to use. Check out the coding guide and see for yourself!

Block factory

The block factory allows users to register custom IP into the framework. The interface gives us automatic thread safety -- no need to "mutex" the code. Python integration can be done without the hassle of SWIG. Blocks can find each other in the topology for configuration and control purposes. Check out the block factory wiki page for more details.

Backwards compatible

GRAS supports all existing GNU Radio IP through a simple backwards compatibility wrapper. All existing code will continue to work under GRAS. Users can choose to code to the new simple API or use the stock GNU Radio API. Whatever the case, all user IP will be able to fully interoperate.

GREX block package

GRAS is only a scheduler and does not carry with it any block IP. However, the GREX project stands as a working example, using GRAS and many of its features: zero copy, the callable interface, messaging, and full Python support. See the GREX blocks page for more.



GRAS contains a small benchmark suite that compares the schedulers and produces bar plots with matplotlib. With the benchmark suite we intend to demonstrate the following:

  • The stock GR scheduler and GRAS are on-par for basic performance.
  • The new features like zero-copy can offer a positive benefit.

Full Python support

All GRAS features are fully exposed through Python bindings. We have taken special care to use the native syntax of the language whenever applicable. Further, IP written in either language can be effortlessly mixed and matched.

Web-based status monitor

While your flowgraph executes, GRAS collects usage statistics. Usage statistics can be viewed and analysed in real-time with the web-based status monitor and GUI builder. To use, simply drop a server block into your flowgraph, and open a web browser to the correct address.