
Write directly into audio output buffer - discussion #102

Open · trackme518 opened this issue Feb 27, 2024 · 1 comment

Labels
question Question about setting up or using the library

@trackme518 commented Feb 27, 2024

Hi,
I am wondering what the best way is to write directly into the output buffer. I receive audio from a stream as a float or byte array and need to write it to the speakers continuously. The only public-facing object that seems suitable for this is AudioSample with its write() method, but I am not sure how to time the writes into it. Any tips are welcome.

What is the expected range of the audio data written (the source code assumes float...)?

Should I always create a new AudioSample object when new data arrives, or should I overwrite the data in the existing sample object?

// for example
AudioSample sample = new AudioSample(this, 1024, 44100);
sample.loop();
// when should I write into it?
// at what index should I write?
for (int i = 0; i < floatArray.length; i++) {
  sample.write(i, floatArray[i]);
}
@kevinstadler (Collaborator)

Hello,

Re-using one AudioSample (acting as a fixed-length buffer) that loops forever, like in your code above, seems like a good idea. If you are continuously writing new data into it, the sample is basically acting as a circular buffer, so you can just copy the data over into the AudioSample in the same consecutive order in which it arrives. The target index you are writing to simply has to wrap around at the length of the AudioSample.
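
For instance, a minimal sketch of that wrapping write, assuming AudioSample.frames() reports the sample length in frames and that your stream delivers chunks as a float[] (incoming and writeChunk() are placeholder names, not part of your code):

import processing.sound.*;

AudioSample sample;
int writeFrame = 0;  // next frame index to write into

void setup() {
  sample = new AudioSample(this, 1024, 44100);
  sample.loop();
}

void draw() { }  // keep the sketch running

// call this whenever a new chunk of stream data arrives
void writeChunk(float[] incoming) {
  for (int i = 0; i < incoming.length; i++) {
    sample.write(writeFrame, incoming[i]);
    // wrap around at the end of the AudioSample, circular-buffer style
    writeFrame = (writeFrame + 1) % sample.frames();
  }
}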

If your incoming data is perfectly in sync with the framerate of the synthesis engine, that should be it, but I suspect that there might be pacing issues. If you want to make sure that you are not copying data over too fast or too slow, or want to delay a write that would affect an area that is currently being played back, you can use AudioSample's positionFrame() method, which tells you where in the buffer the playhead currently is. A simple strategy I could imagine: with a 2048-frame buffer, you wait for the playhead to enter the second half of the buffer, then copy 1024 new frames into the first half; once the playhead reaches the end of the buffer and jumps back to the very beginning, you copy 1024 new frames into the second half, and so on ad infinitum (see the sketch below).
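
A hedged sketch of that half-buffer strategy, driven from draw(); nextChunk() stands in for however you actually fetch 1024 new frames from your stream and is purely hypothetical:

import processing.sound.*;

final int HALF = 1024;
AudioSample sample;
boolean fillFirstHalfNext = true;  // which half receives the next write

void setup() {
  sample = new AudioSample(this, 2 * HALF, 44100);
  sample.loop();
}

void draw() {
  int playhead = sample.positionFrame();
  if (fillFirstHalfNext && playhead >= HALF) {
    // playhead has entered the second half: safe to refill the first half
    copyChunk(nextChunk(), 0);
    fillFirstHalfNext = false;
  } else if (!fillFirstHalfNext && playhead < HALF) {
    // playhead wrapped back to the start: refill the second half
    copyChunk(nextChunk(), HALF);
    fillFirstHalfNext = true;
  }
}

void copyChunk(float[] chunk, int startFrame) {
  for (int i = 0; i < HALF; i++) {
    sample.write(startFrame + i, chunk[i]);
  }
}

// placeholder: replace with a read of 1024 new frames from your stream
float[] nextChunk() {
  return new float[HALF];  // silence
}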

I don't know where in your Processing sketch you are writing to the AudioSample, but since the execution frequency of any piece of sketch code is not very high, I can imagine that you might want to use a buffer much larger than 1024 samples.

As for the range of values, it depends on your desired amplitude, but the web reference example for AudioSample's write() method uses values in the [-100, 100] range.

A potentially much faster and cleaner way to write directly to an output buffer would be to bypass the Sound library classes and implement your own JSyn UnitGenerator whose generate() method consumes the data from your stream. There is a simple example of a custom unit generator in the JSyn docs, which warns against doing any complex I/O in the synthesis thread, so whether this is a feasible route for you depends on where your data is coming from.
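
To illustrate, a rough sketch of such a unit generator along the lines of the JSyn documentation's custom-unit example; feeding it from a queue filled by your stream thread is an assumption on my part, and wiring it into the Sound library's synthesis engine (or a plain JSyn Synthesizer) is not shown here:

import com.jsyn.ports.UnitOutputPort;
import com.jsyn.unitgen.UnitGenerator;
import java.util.concurrent.ArrayBlockingQueue;

public class StreamPlayer extends UnitGenerator {
  public UnitOutputPort output;
  // hypothetical buffer, filled from your network/stream thread
  private final ArrayBlockingQueue<Float> queue = new ArrayBlockingQueue<Float>(8192);

  public StreamPlayer() {
    addPort(output = new UnitOutputPort("Output"));
  }

  // call from the stream thread; non-blocking, drops samples if the queue is full
  public void push(float value) {
    queue.offer(value);
  }

  @Override
  public void generate(int start, int limit) {
    double[] outputs = output.getValues();
    for (int i = start; i < limit; i++) {
      // poll() never blocks the synthesis thread; output silence if we run dry
      Float value = queue.poll();
      outputs[i] = (value == null) ? 0.0 : value;
    }
  }
}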

Hope that helps!

@kevinstadler added the question label on Mar 22, 2024