Software

daviderovell0 edited this page Apr 21, 2020 · 17 revisions

This section gives a high-level description of the software layer of bzzzbz. The code itself is documented with Doxygen for details on specific functions. You can find the generated HTML documentation in the docs/ folder, or run

doxygen Doxyfile

in the main folder to generate it from the code.

Overview

To meet the real-time requirements of the system, the core functionalities of the software were implemented using C/C++.

Below you can find a conceptual software diagram that outlines single components and their interactions.

Software diagram

The diagram will be used as a reference for the following sections that will outline the implementation of the main functionalities:

  • Video Processing
  • Audio Processing
  • Control Processing

Video Processing

bzzzbz uses OpenGL ES 2.0 to generate visuals. OpenGL is the standard cross-platform API for rendering graphics, used in the majority of interactive visual applications such as games. OpenGL ES (for Embedded Systems) is a subset optimised for microcontrollers and systems with limited hardware resources, and is therefore well suited to the Raspberry Pi environment.

The main.cpp program and all the ex_.cpp programs in the examples/ folder use OpenGL functions to:

  • Initialise the display window and the graphic algorithms: init_resources().
  • Set up an "update screen" function in between frames: onIdle().
  • Render the frame: onDisplay().
  • Start the rendering loop.

The graphical algorithms are called shaders and are defined externally in .glsl files, written in GLSL, a language specifically designed for GPU-accelerated graphics. During the init_resources() phase the shaders are read in by OpenGL and compiled for use during rendering. Dynamic variables from the main program or the external inputs (audio and control surface) can be bound to a shader so that video parameters can be modified in real time. The binding of these dynamic variables (called uniforms in GLSL) is performed in the onIdle() phase of the rendering loop.
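As an illustration, the per-frame uniform update in onIdle() might look like the following sketch. The helper and uniform names here are hypothetical, and the glUniform1f call (shown commented out) assumes a live OpenGL context and a linked program exposing a uniform named u_level:

```cpp
// Hypothetical helper: map a raw 10-bit ADC reading (0..1023) from the
// control surface to the [0.0, 1.0] range expected by the shader uniform.
float normalize_adc(int raw) {
    if (raw < 0) raw = 0;
    if (raw > 1023) raw = 1023;
    return static_cast<float>(raw) / 1023.0f;
}

// Sketch of the binding step inside onIdle() (requires a GL context):
//
//   GLint loc = glGetUniformLocation(program, "u_level");
//   glUniform1f(loc, normalize_adc(latest_adc_value));
```

Because the uniform is rebound every frame, turning a knob on the control surface changes the shader's behaviour immediately, with no recompilation.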

This design choice enables modularity and flexibility by allowing the use of different shaders with the same OpenGL rendering program.

This level of abstraction allows external users to add their own graphics without having to edit the main program, making bzzzbz adaptable to users' needs while maintaining a dynamic response to the external inputs.
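For instance, a user-supplied fragment shader only needs to declare the uniforms it wants driven by the audio or control inputs. A minimal sketch (the uniform name u_level is an assumption, not part of the project's shader interface):

```glsl
precision mediump float;

// Hypothetical uniform, rebound each frame from the main program in onIdle().
uniform float u_level;

void main() {
    // Simple colour ramp controlled by the external input.
    gl_FragColor = vec4(u_level, 0.0, 1.0 - u_level, 1.0);
}
```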

Audio Processing

The incoming audio is processed using JACK2, a low-latency audio server that interacts with the hardware through the ALSA API, which is natively integrated in the Linux kernel. More information on JACK2 can be found in the official GitHub repository.

The bzzzbz application runs the JACK server with the ALSA backend at a 48 kHz sampling rate and a frame size of 1024.
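Assuming jackd is installed, a server with the same settings could be started manually for debugging with something along these lines (the device name hw:0 is a placeholder and depends on the audio interface attached to the Pi):

```shell
# Start a JACK server on the ALSA backend at 48 kHz with 1024 frames/period.
# hw:0 is a placeholder; list your interfaces with `aplay -l`.
jackd -d alsa -d hw:0 -r 48000 -p 1024
```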

The AudioProcessing class defines the methods needed to start a new JACK client, connect it to the JACK server, and register the audio input and output ports on the audio interface connected to the Raspberry Pi. If the client-server initialisation and the port registration succeed, a jack_process_callback is set on a separate thread with real-time priority. This thread is maintained by the JACK server and is activated only when there is incoming data. The data processing procedure is supplied via an AudioProcessingCallback that can be set by the code that uses the class. This pushes the implementation of the audio data processing to the end program, making the class flexible enough to support different processing routines.

The AudioProcessing class provides an additional method runFFT(..) that can be used to convert the real audio samples into the magnitudes of their corresponding complex frequencies.
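The FFT backend itself is not shown here, but conceptually runFFT(..) maps N real samples to N/2 + 1 magnitude values (DC up to Nyquist). A self-contained and deliberately naive O(N^2) sketch of that real-to-magnitude step, using a plain DFT rather than whatever library the class actually wraps:

```cpp
#include <cmath>
#include <complex>
#include <vector>

// Naive DFT: real samples in, one magnitude per frequency bin out.
// Bins 0..N/2 cover DC up to the Nyquist frequency.
std::vector<double> dft_magnitudes(const std::vector<double>& x) {
    const double kPi = std::acos(-1.0);
    const std::size_t n = x.size();
    std::vector<double> mags(n / 2 + 1);
    for (std::size_t k = 0; k <= n / 2; ++k) {
        std::complex<double> acc(0.0, 0.0);
        for (std::size_t t = 0; t < n; ++t) {
            double angle = -2.0 * kPi * static_cast<double>(k * t) / n;
            acc += x[t] * std::complex<double>(std::cos(angle), std::sin(angle));
        }
        mags[k] = std::abs(acc);
    }
    return mags;
}
```

A pure tone occupying exactly one bin of an 8-sample frame produces magnitude N/2 = 4 at that bin and roughly zero elsewhere, which is the shape of the data later bound to the shader.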

bzzzbz maps the processed frequency information to the video output as follows:

AudioProcessing thread --> sample0    fft_input_buffer[n] --> runFFT() --> fft_output -(bind)-> shader:
                           sample1            ^ (deep copy)                                     update display! 
                             ..               ^
                           samplen --> buffer[n x sample]
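The deep copy in the diagram matters because the JACK callback runs on a real-time thread and its buffer is only valid for the duration of the callback. A sketch of that hand-off (the names and the mutex-based locking are assumptions for illustration; a strictly real-time-safe implementation would prefer a lock-free ring buffer to avoid blocking the audio thread):

```cpp
#include <cstring>
#include <mutex>
#include <vector>

constexpr std::size_t kFrames = 1024;  // matches the JACK frame size above

std::mutex fft_mutex;
std::vector<float> fft_input_buffer(kFrames);

// Called from the JACK process callback: copy the samples out of JACK's
// buffer so the FFT stage can work on a stable snapshot.
void copy_for_fft(const float* jack_buffer, std::size_t nframes) {
    std::lock_guard<std::mutex> lock(fft_mutex);
    std::memcpy(fft_input_buffer.data(), jack_buffer,
                nframes * sizeof(float));
}
```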

How and what frequency information is used as a video parameter is entirely dependent on the shader.

Control Processing

The MCP3008Comm class implements all the methods required to communicate with the MCP3008 ADC via SPI. When the start() command is executed, it creates a new data acquisition thread, and the run() command performs the readouts by multiplexing between the channels.
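For reference, a single-ended MCP3008 read is a three-byte SPI transaction, as described in the MCP3008 datasheet. The helper names below are ours, not the class's API; they only illustrate the framing and decoding:

```cpp
#include <cstdint>

// Build the 3-byte SPI transmit frame for a single-ended read of `channel`
// (0-7): start bit, then SGL/DIFF=1 and the channel number in the top nibble.
void mcp3008_tx_frame(int channel, uint8_t tx[3]) {
    tx[0] = 0x01;                                         // start bit
    tx[1] = static_cast<uint8_t>(0x80 | (channel << 4));  // single-ended + channel
    tx[2] = 0x00;                                         // clocks out the result
}

// Extract the 10-bit conversion result (0-1023) from the received bytes.
int mcp3008_decode(const uint8_t rx[3]) {
    return ((rx[1] & 0x03) << 8) | rx[2];
}
```

Multiplexing between channels then amounts to repeating this transaction with a different channel number in the second byte.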

In main.cpp, global variables are initialised to hold the values from the readouts so that OpenGL can access them.
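Since the readouts arrive on the acquisition thread while OpenGL reads them on the render thread, those shared globals should be safe to read concurrently. One way to sketch this (variable and function names are hypothetical, not taken from main.cpp):

```cpp
#include <atomic>

// Written by the MCP3008 acquisition thread, read by the OpenGL render loop.
// std::atomic avoids a data race without needing a lock in either thread.
std::atomic<int> pot_values[8];

void on_adc_sample(int channel, int value) {   // acquisition thread
    pot_values[channel].store(value, std::memory_order_relaxed);
}

int read_pot(int channel) {                    // render thread (onIdle)
    return pot_values[channel].load(std::memory_order_relaxed);
}
```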

Testing

The test/ folder in the main repository contains the unit tests for the AudioProcessing and MCP3008Comm classes. These test suites use the Boost framework to sanity-check the classes' methods before integration into the main code. The tests are compiled through the main CMake file. From the main repository folder, run:

cmake .
make
ctest -V

to execute the unit tests (-V shows the output of each individual test).

These tests will be added to a continuous integration pipeline running automatically when creating a new pull request in future releases.

Deployment

This repository contains a simple shell script to quickly deploy the entire repository to a Raspberry Pi connected to your computer.

Ensure that you can ssh into the Pi and add the Raspberry Pi's IP address to the /etc/hosts file on your local machine. Then run:

sh deploy.sh

to copy the repository into the ~/test folder on the Raspberry Pi.
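A minimal script along these lines would do the same job (the host alias raspberrypi, the pi user, and the target path are assumptions; adjust them to match the entry you added to your hosts file):

```shell
#!/bin/sh
# Copy the current repository to ~/test on the Raspberry Pi over SSH.
# "raspberrypi" must resolve via /etc/hosts (or use pi@<ip-address> directly).
scp -r . pi@raspberrypi:~/test
```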