Description
Pull request #2032 suggests adding a pre-allocated buffer to WaveFile. However, maybe there's a different route that would be better. Let's see if we can flesh it out.
It appears that, in general, while actually DMA'ing the audio data to the DAC, PWM, or I2S destination, there's an unavoidable need for a chunk of audio data arranged to suit the specific hardware implementation. For example, in nrf's PWMAudioOut, the data is a series of 32-bit blocks, each of which may hold data for 1 or 2 channels; the values are rescaled from the original 8- or 16-bit samples according to a sample-rate-dependent maximum value. It just so happens that this is always at least as much data as the WaveFile itself will use, but often more, by a factor of 2x or 4x.
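For concreteness, here's a minimal sketch of that rescaling, assuming a 16 MHz PWM base clock (the nRF peripheral's clock) and that the maximum is derived by dividing the clock by the sample rate; `pwm_value` and `top` are illustrative names, not identifiers from the actual implementation:

```c
#include <stdint.h>

// Sketch of the per-sample rescaling, assuming a 16 MHz PWM base clock.
// A signed 16-bit sample is shifted to unsigned, then scaled so the full
// range maps onto 0..top, the sample-rate dependent maximum.
static uint32_t pwm_value(int16_t sample, uint32_t sample_rate) {
    uint32_t top = 16000000 / sample_rate;  // PWM period = one sample period
    return ((uint32_t)(sample + 0x8000) * top) >> 16;
}
```

At 22050 Hz this gives top = 725, about 9.5 bits of PWM resolution. Presumably the 2x and 4x factors above come from the block layout: 16-bit stereo fills a 32-bit block exactly, 16-bit mono doubles, and 8-bit mono quadruples.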
What if, by careful processing of the audio data, WaveFile could be given its buffer by the AudioOut instance using it, so that it didn't need any excess allocations? This might require allocating a buffer sized to the larger of the original-format sample size and the hardware sample format, and processing carefully so that sample data is overwritten only after it no longer needs to be used, as sketched below.
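Here's a minimal sketch of that ordering constraint, assuming the shared buffer holds the 16-bit source samples packed at its start and must end up holding one 32-bit hardware value per sample; `expand_in_place` is a hypothetical helper, not anything in the codebase. Iterating from the last sample backward guarantees every source value is read before its wider destination slot overwrites it:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

// Hypothetical helper: expand n 16-bit samples, packed at the start of buf,
// in place into n 32-bit PWM values filling all of buf.
static void expand_in_place(uint32_t *buf, size_t n, uint32_t top) {
    // Walk backward: the write for sample i covers bytes [4*i, 4*i+4),
    // while every still-unread sample j < i lives below byte 2*i, so no
    // unread source data is ever overwritten.
    for (size_t i = n; i-- > 0;) {
        int16_t sample;
        // memcpy sidesteps aliasing issues when reading the packed data
        memcpy(&sample, (const uint8_t *)buf + 2 * i, sizeof(sample));
        // shift the signed sample to unsigned, then rescale into 0..top
        buf[i] = ((uint32_t)(sample + 0x8000) * top) >> 16;
    }
}
```

The same backward iteration would handle an 8-bit source (a 4x expansion); a conversion to a narrower hardware format would iterate forward instead.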
Would this work with the actual DAC and I2S implementations? I haven't studied the samd implementations in detail, but it looks like their stereo audio (2 distinct DMA channels, which run independently?) could pose a problem.