
Evaluation of Real-time Responsiveness


This section reports the procedures and results of the evaluation of the bzzzbz system in terms of:

  • Stability: It is important that the output renders at a consistent frame rate and that the shader implementation does not degrade the desired quality of the video. To assess this, the timestamp of each render is recorded and the resulting frame rate is calculated (a sketch of this calculation follows the list).

  • Real-time audio responsiveness: Measuring the latency between the audio routed to the Raspberry Pi and the rendered output allows the real-time capabilities of the synth to be assessed. It also enables a quantitative evaluation of the software's performance, shedding light on possible bottlenecks (complex shaders increase GPU usage and slow down the render).
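
As a rough illustration of the stability measurement described in the first bullet, a per-frame timestamp can be logged and converted into an instantaneous frame rate. This is only a sketch under stated assumptions: the helper name and output format are hypothetical and not taken from the bzzzbz source.

```cpp
#include <chrono>
#include <cstdio>

// Hypothetical helper: call once per rendered frame to print the
// instantaneous frame rate derived from consecutive timestamps.
void log_frame_rate() {
    using clock = std::chrono::steady_clock;
    static clock::time_point last = clock::now();
    const clock::time_point now = clock::now();
    const double dt = std::chrono::duration<double>(now - last).count();
    last = now;
    if (dt > 0.0) {
        std::printf("frame time: %.3f ms (~%.1f FPS)\n", dt * 1000.0, 1.0 / dt);
    }
}
```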

To evaluate these criteria, an oscilloscope was used together with the TestLatency class, which builds on the gpio-sysfs repository by Dr. Bernd Porr. This allows low-level communication with the GPIO pins on the Pi while introducing negligible latency; with minor modifications to the code, the performance can also be tested over the serial bus.
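
For reference, sysfs GPIO access of the kind the gpio-sysfs code wraps boils down to writing to files under /sys/class/gpio. The snippet below is a minimal sketch of that mechanism, not the actual TestLatency interface; the function name is an assumption.

```cpp
#include <fstream>
#include <string>

// Minimal sketch of a sysfs GPIO write, the mechanism the gpio-sysfs code
// wraps. Assumes the pin has already been exported and set as an output.
void gpio_write(int pin, bool high) {
    const std::string path =
        "/sys/class/gpio/gpio" + std::to_string(pin) + "/value";
    std::ofstream value(path);
    value << (high ? '1' : '0');
}
```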

Stability

To gain insight into the rendering speed, a GPIO test pin was set up to change state after the buffers were swapped in main.cpp by the GLUT library (the glutSwapBuffers() function). The frame rate can then be read off the oscilloscope: each cycle of the square wave (the blue signal) contains two state changes, and therefore two rendered frames, so the rate of state changes matches the frame rate. As shown in the figure below, an average frame rate of ~30 FPS was measured. This frame rate is an industry standard for digital video and yields a smooth output, meaning good performance with little stutter as long as the frame rate variation is small.
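
A hypothetical GLUT display callback illustrating this measurement is shown below: the test pin is toggled immediately after the buffer swap, so every edge on the oscilloscope marks one rendered frame. The pin number and the gpio_write() helper (from the sketch above) are assumptions, not the project's actual code.

```cpp
#include <GL/glut.h>

void gpio_write(int pin, bool high);   // see the GPIO sketch above

static const int TEST_PIN = 17;        // arbitrary example pin
static bool pin_state = false;

// Sketch of a GLUT display callback: one pin toggle per rendered frame,
// giving a square wave whose edges mark individual frames.
void display() {
    // ... draw the fragment-shader output here ...
    glutSwapBuffers();                 // frame is presented
    pin_state = !pin_state;
    gpio_write(TEST_PIN, pin_state);   // one edge on the scope per frame
}
```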

Real-time response

A similar approach was used to test the latency between the audio and the video; however, the pin only changed state after the buffer swap if a dedicated FFT bin was above a certain threshold. This allowed the latency between an audio signal with sharp transients (yellow signal, point 'a') and the corresponding rendered frame (blue signal, point 'b') to be investigated. The minimum latency reported was 17 ms (roughly one frame period at 60 FPS), which demonstrates that the implemented FFT, audio-processing and SPI threads do not add significant latency. The most computationally intensive process is therefore running the fragment shader programs.
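
This latency test can be pictured as the same post-swap toggle guarded by a threshold on a single FFT bin. In the sketch below, only that idea is taken from the text; the bin index, threshold value, function name and the gpio_write() helper are illustrative assumptions.

```cpp
#include <cstddef>

void gpio_write(int pin, bool high);        // see the GPIO sketch above

static const int TEST_PIN = 17;             // arbitrary example pin
static const std::size_t TRIGGER_BIN = 8;   // hypothetical FFT bin index
static const float THRESHOLD = 0.5f;        // hypothetical magnitude threshold

// Sketch: called immediately after glutSwapBuffers(). The test pin only
// changes state when the chosen FFT bin is above the threshold, so the
// oscilloscope shows the delay between the audio transient and the frame
// that reacts to it.
void toggle_if_triggered(const float* fft_magnitudes) {
    static bool pin_state = false;
    if (fft_magnitudes[TRIGGER_BIN] > THRESHOLD) {
        pin_state = !pin_state;
        gpio_write(TEST_PIN, pin_state);
    }
}
```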