Igor Zinken edited this page Aug 1, 2022 · 7 revisions

Chances are you think the contents of your AudioChannel's buffer aren't... that exciting. This is where processors come in.

A processor is basically a class that provides an operation to be performed on incoming audio. A processor can be part of a ProcessingChain which is nothing more than a collection of processors belonging to an AudioChannel.

The basic infrastructure to keep in mind when working with processors is that when the audio engine is running, it will render / query the contents of a given AudioChannel, and then apply the active processors in series to the signal. For instance, let's say you have an AudioChannel whose ProcessingChain holds both a BitCrusher and a Filter. What happens is that first the bit crushing process is applied, after which the bit crushed signal is fed into the Filter process, which will then apply its effect onto the outgoing buffer.
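The serial application described above can be sketched as follows. Note this is a standalone illustration using simplified stand-in types (a `std::function` per processor and a plain `std::vector` as buffer), not the actual MWEngine `BaseProcessor` and `AudioBuffer` classes:

```cpp
#include <functional>
#include <vector>

// simplified stand-ins, for illustration only
using Buffer    = std::vector<float>;
using Processor = std::function<void( Buffer& )>;

// apply every processor in the chain, in order, to the same buffer:
// the output of one stage becomes the input of the next
void applyChain( const std::vector<Processor>& chain, Buffer& buffer ) {
    for ( const auto& process : chain )
        process( buffer );
}
```

Because each stage operates on the output of the previous one, the order in which processors are added to the chain audibly matters (a filtered bit crush sounds different from a bit crushed filter).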

MWEngine comes with a range of processors such as BitCrusher, Filter, Formant Filter, Phaser, Delay, Limiter, etc. But the engine is written in such a manner that creating your own custom DSP process is very simple. As a matter of fact, you only have to focus on the logic of the DSP process, which is the hard bit.

BaseProcessor

If you wish to create a custom effect / DSP you can quickly integrate it within the MWEngine by having it extend the BaseProcessor-class.

The only method you need to override / implement is

void process( AudioBuffer* sampleBuffer, bool isMonoSource );

This is the method the engine invokes when applying the ProcessingChain onto an AudioChannel. The given sampleBuffer is the source AudioBuffer to operate on, while the boolean isMonoSource indicates whether the source of the signal was/is mono (the sampleBuffer can be multi-channel, but all its channels can hold identical content and thus be mono-aural).

All the superhuman mathematics you can come up with to add excitement to the signal should occur inside this method.

Keeping in mind that the incoming buffer can be multi-channel, the body of your process function will likely look like this:

int bufferSize = sampleBuffer->bufferSize; // size of a single buffer

// loop through all available channels in the sampleBuffer (likely to be 2 for stereo, 1 for mono output)
for ( int c = 0, ca = sampleBuffer->amountOfChannels; c < ca; ++c )
{
    // grab a pointer to the sample buffer of the current channel

    SAMPLE_TYPE* channelBuffer = sampleBuffer->getBufferForChannel( c );

    // loop through all samples inside the current channel's buffer
    for ( int i = 0; i < bufferSize; ++i ) {
        // custom operation on channelBuffer[ i ]...
    }
}
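Putting the above together, a complete custom processor could look like the hypothetical gain processor below. To keep the sketch self-contained it defines its own minimal stand-ins for SAMPLE_TYPE and AudioBuffer; in an actual MWEngine project you would include the engine's headers and extend BaseProcessor instead:

```cpp
#include <vector>

// simplified stand-ins for MWEngine's SAMPLE_TYPE and AudioBuffer,
// just enough to make this sketch compile on its own
using SAMPLE_TYPE = float;

struct AudioBuffer {
    int bufferSize;
    int amountOfChannels;
    std::vector<std::vector<SAMPLE_TYPE>> channels;

    AudioBuffer( int ch, int size )
        : bufferSize( size ), amountOfChannels( ch ),
          channels( ch, std::vector<SAMPLE_TYPE>( size, 0 )) {}

    SAMPLE_TYPE* getBufferForChannel( int c ) { return channels[ c ].data(); }
};

// hypothetical processor following the pattern described above:
// it scales every sample by a fixed gain factor
class GainProcessor /* : public BaseProcessor in an actual project */ {
    public:
        explicit GainProcessor( SAMPLE_TYPE gain ) : _gain( gain ) {}

        void process( AudioBuffer* sampleBuffer, bool isMonoSource ) {
            ( void ) isMonoSource; // mono shortcut omitted for brevity

            int bufferSize = sampleBuffer->bufferSize;

            for ( int c = 0, ca = sampleBuffer->amountOfChannels; c < ca; ++c ) {
                SAMPLE_TYPE* channelBuffer = sampleBuffer->getBufferForChannel( c );

                for ( int i = 0; i < bufferSize; ++i )
                    channelBuffer[ i ] *= _gain;
            }
        }

    private:
        SAMPLE_TYPE _gain;
};
```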

When the processor is applied from a ProcessingChain, the sampleBuffer is a temporary buffer (of the corresponding AudioChannel) with a length of BUFFER_SIZE (defined in global.h), which contains the mixed-in contents of the channel's AudioEvents. As such the processor is not operating directly on the source content (e.g. SampleEvents, SynthEvents, etc.) but merely applying its effect to the current output buffer only.

Making a process light on CPU resources

It is recommended to cache all properties of your DSP outside of the process method instead of recalculating them on each iteration.
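As an illustration of this caching, the hypothetical one-pole lowpass sketch below calculates its coefficients once inside the setter (which runs only when the cutoff changes) so the per-sample loop in process only performs cheap multiply-adds:

```cpp
#include <cmath>

// hypothetical one-pole lowpass: the coefficients depend on cutoff and
// sample rate, so they are computed once in setCutoff() rather than
// re-evaluating std::exp() for every sample inside process()
class OnePoleLowPass {
    public:
        explicit OnePoleLowPass( float sampleRate ) : _sampleRate( sampleRate ) {
            setCutoff( 1000.f );
        }

        void setCutoff( float frequency ) {
            // cached: this expensive expression runs only on parameter change
            _a = std::exp( -2.f * 3.14159265f * frequency / _sampleRate );
            _b = 1.f - _a;
        }

        void process( float* buffer, int bufferSize ) {
            for ( int i = 0; i < bufferSize; ++i ) {
                _z = _b * buffer[ i ] + _a * _z; // only cheap math per sample
                buffer[ i ] = _z;
            }
        }

    private:
        float _sampleRate;
        float _a = 0.f, _b = 1.f, _z = 0.f;
};
```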

If your operation doesn't alter the stereo spread of the incoming sampleBuffer and the source is mono, you can skip processing the remaining channels (their contents should be identical to the first channel) and simply clone the processed first channel onto the remaining channels, like so:

// omit unnecessary cycles by copying the mono content
// at the end of the first iteration of the outer/channel loop

if ( isMonoSource ) {
    sampleBuffer->applyMonoSource();
    break;
}

Additionally, you can consider caching the output of a Processor by storing the result as a separate AudioBuffer or even a SampleEvent (see below). This is especially worthwhile for heavy DSP operations that tax the CPU, or for processors whose output is identical on every invocation (provided the incoming buffer is identical too!).
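One way to sketch such output caching, using plain floats rather than MWEngine's AudioBuffer, is a small wrapper that runs the expensive render once, stores the result, and copies the cached samples on subsequent calls until it is explicitly invalidated (e.g. when a parameter changes):

```cpp
#include <cstring>
#include <vector>

// hypothetical caching wrapper: run the (expensive) render once, store the
// result, and copy the cached samples on subsequent calls until invalidated
class CachedRender {
    public:
        template <typename RenderFn>
        void process( float* buffer, int bufferSize, RenderFn render ) {
            if ( !_valid ) {
                render( buffer, bufferSize ); // expensive DSP, runs once
                _cache.assign( buffer, buffer + bufferSize );
                _valid = true;
            } else {
                // cheap: reuse the previously rendered result
                std::memcpy( buffer, _cache.data(), bufferSize * sizeof( float ));
            }
        }

        void invalidate() { _valid = false; } // call when parameters change

    private:
        std::vector<float> _cache;
        bool _valid = false;
};
```

This only pays off when the incoming buffer is guaranteed to be the same on each call, as stated above; otherwise the cache must be invalidated every cycle and you gain nothing.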

Committing the result of a DSP process to a SampleEvent

If you intend to apply a DSP process only once and store the affected output as a SampleEvent, you'll be glad to know that this is possible! ;)

A BaseProcessor will operate directly on the contents of the given sampleBuffer and overwrite the source. As stated above, when the processor is applied in the ProcessingChain of an AudioChannel the incoming sampleBuffer is a temporary buffer. You can, however, pass in any kind of AudioBuffer instance to overwrite its contents:

processorInstance->process( sampleEventInstance->getBuffer(), sampleEventInstance->amountOfChannels == 1 );

In the code above, the result of processorInstance has been applied directly to the AudioBuffer of the given sampleEventInstance. If you wish to keep a copy of the original data of sampleEventInstance, you can create a clone and process that instead:

SampleEvent* sampleEventClone = sampleEventInstance->clone();
processorInstance->process( sampleEventClone->getBuffer(), sampleEventClone->amountOfChannels == 1 );

In the code above, we have cloned sampleEventInstance to sampleEventClone and applied the result of processorInstance directly to the clone's AudioBuffer.
