
Release v0.3.0

@mormj released this 29 Apr 18:04

It is time yet again to tag the newsched code and highlight the recent work. We are marching toward migrating newsched into the dev-4.0 branch of gnuradio, and it seems closer (maybe close enough??). Along the way, the documentation has been updated significantly, so please take a look:

Summary of proposed features for GR 4.0: https://wiki.gnuradio.org/index.php?title=GNU_Radio_4.0_Summary_of_Proposed_Features
User Tutorial: https://gnuradio.github.io/newsched/user_tutorial/01_Intro
Developer Tutorial: https://gnuradio.github.io/newsched/dev_tutorial/01_Intro

Much of the recent work in the project has centered on:

  • Improving stability and usability
  • Separating out the Kernel Library from the block modules
  • RPC Support and mechanisms for distributed operation
  • New (ported) blocks

Distributed Node Support

At its simplest, a distributed flowgraph is just a collection of individual flowgraphs connected by some serialized interface. We already support this in GNU Radio with ZMQ blocks, but coordinating the whole thing requires an extra level of configuration and management.
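For example, splitting a flowgraph across two processes by hand with gr-zeromq looks roughly like the following minimal sketch (the addresses and the split point are illustrative):

# Minimal sketch of the manual gr-zeromq approach (GNU Radio 3.x style).
# Both halves are shown together here, but each would normally live in
# its own process or on its own host.
from gnuradio import gr, blocks, zeromq

# Half A: source side, pushing float samples out over TCP
tx = gr.top_block()
src = blocks.vector_source_f(list(range(1000)), False)
push = zeromq.push_sink(gr.sizeof_float, 1, 'tcp://*:5555')
tx.connect(src, push)

# Half B: sink side, pulling samples back in
rx = gr.top_block()
pull = zeromq.pull_source(gr.sizeof_float, 1, 'tcp://127.0.0.1:5555')
snk = blocks.vector_sink_f()
rx.connect(pull, snk)

# Each half must be started, stopped, and configured separately,
# which is exactly the coordination burden described above.
rx.start()
tx.run()
rx.stop()
rx.wait()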

Wouldn't it be nice to create one flowgraph, then tell GNU Radio that I want this part of the flowgraph to run here and that part to run over on those resources, and have it seamlessly configure all the pieces?

The main components in this release that enable distributed node support are:

  • A separable runtime component that can operate differently from the "default" runtime
  • Serialization of stream and message data (built into every block by default)
  • A runtime proxy to support signaling between flowgraph partitions
  • A generalized RPC interface - by building on block parameters, many things come for "free"

The first step in enabling a distributed runtime was separating the "runtime" from the "flowgraph". The flowgraph object is now really just a connection of blocks and doesn't control the flow of execution. In keeping with the theme of not trying to solve everyone's problem, swapping in a different runtime component for different purposes lets distributed operation leave the common use case - GNU Radio running entirely on one compute node - unaffected.

Serialization is handled easily by ZMQ blocks, and runtime proxy objects tell the other flowgraph partitions to start, stop, etc.

Here is an example flowgraph that would run under the "distributed" runtime:

# Imports assumed from the newsched Python bindings; the exact package
# layout may differ between newsched builds
import os
from gnuradio import gr, blocks, math, distributed

nsamples = 1000
# input_data = [x%256 for x in list(range(nsamples))]
input_data = list(range(nsamples))

# Blocks are created locally, but will be replicated on the remote host
src = blocks.vector_source_f(input_data, False)
cp1 = blocks.copy()
mc = math.multiply_const_ff(1.0)
cp2 = blocks.copy()
snk = blocks.vector_sink_f()

fg1 = gr.flowgraph()
fg1.connect([src, cp1, mc, cp2, snk])

with distributed.runtime(os.path.join(os.path.dirname(__file__), 'test_config.yml')) as rt1:
    # There are 2 remote hosts defined in the config yml
    #  We assign groups of blocks where we want them to go
    rt1.assign_blocks("newsched1", [src, cp1, mc])
    rt1.assign_blocks("newsched2", [cp2, snk])
    rt1.initialize(fg1)

    # These calls on the local block are serialized to the remote block
    # This in effect means the local blocks are acting as proxy blocks
    mc.set_k(2.0)
    print(mc.k())

    # Run the flowgraph
    rt1.start()
    rt1.wait()
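The contents of test_config.yml aren't shown in this release, but it maps names like "newsched1" to reachable endpoints. As a purely hypothetical sketch (the field names are assumptions, not the actual newsched schema), it might look something like:

# Hypothetical sketch of test_config.yml; field names are assumed,
# not the actual newsched schema
nodes:
  - name: newsched1
    ipaddr: 127.0.0.1
    port: 8000
  - name: newsched2
    ipaddr: 127.0.0.1
    port: 8001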

Kernel Library

Another major restructuring in this release is the "kernel library". Scattered throughout the GNU Radio codebase, though very inconsistently, are "kernel" namespaces that hold the non-block-work type of code. This includes things like an FFT filter object that isn't necessarily tied to GNU Radio flowgraph operation, but provides more general DSP and math operations.

Separating this out into a standalone library accomplishes a few things (see the sketch after this list):

  1. Reduces inter-module dependencies
  2. Keeps block work functions clean
  3. The library itself can be useful outside of GNU Radio
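As a hypothetical illustration of point 3 (the module path and class name below are invented for the example, not the confirmed newsched API), kernel code could be called directly with no flowgraph or scheduler involved:

# Hypothetical sketch only: 'gnuradio.kernel.filter' and 'fft_filter_fff'
# are assumed names, not confirmed newsched bindings
import numpy as np
from gnuradio.kernel.filter import fft_filter_fff  # assumed import path

taps = np.array([0.25, 0.5, 0.25], dtype=np.float32)
filt = fft_filter_fff(1, taps)  # decimation of 1, real float taps
out = filt.filter(np.arange(64, dtype=np.float32))  # plain DSP call, no flowgraph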

Usability

The YAML file structure has evolved to hopefully be simpler and to get the block developer to the work() function more quickly.

Templated blocks now have simplified options that mimic the SigMF data type naming. Also, multi-way templating is made easier with the type_inst field, which limits and labels the combinations of types that get instantiated. Here is an example from the iir_filter block:

typekeys:
    - id: T_IN
      type: class
      options:
          - cf32
          - rf32
    - id: T_OUT
      type: class
      options:
          - cf32
          - rf32
    - id: TAP_T
      type: class
      options:
          - cf64
          - cf32
          - rf64
          - rf32
type_inst:
  - value: [rf32, rf32, rf64]
    label: Float->Float (Double Taps)
  - value: [cf32, cf32, rf32]
    label: Complex->Complex (Float Taps)
  - value: [cf32, cf32, rf64]
    label: Complex->Complex (Double Taps)
  - value: [cf32, cf32, cf32]
    label: Complex->Complex (Complex Taps)
  - value: [cf32, cf32, cf64]
    label: Complex->Complex (Complex Double Taps)

We can then rely on the magic of code generation to get all of this nicely displayed in GRC.

Known issues

  • Hier blocks are not entirely working - possibly related to how the pybind11 bindings are set up

What's Next

The process up to this point has been:

  • Try to port a block from GNU Radio
  • Find and fix limitations in the design and various APIs that arise as a result

Some of the main areas I'd like to target next:

  • PDU-based flowgraphs
    • The PMTF library and the newsched message port implementation offer some huge speedups with PDUs
  • Broader distributed flowgraph runtimes and examples
  • Porting of more blocks
  • More heterogeneous processing examples
  • More benchmark performance examples
  • Generalized mechanism for callbacks similar to parameter_query and parameter_change