
Inserted a 1-buffer playback delay to the end of playback to sync external events #210

Merged · 1 commit into master from bug/sound-emoji-timing · Aug 9, 2022

Conversation

JohnVidler (Collaborator)

Inserted a small delay into playback to account for the last buffer leaving the synth before emitting a complete event.

This should fix #189, whereby external events (like flipping a pin as playback ends) now correctly sync up.

…eaving the synth before emitting a complete event
@@ -207,6 +207,10 @@ ManagedBuffer SoundEmojiSynthesizer::pull()
done = true;
if (renderComplete || status & EMOJI_SYNTHESIZER_STATUS_STOPPING)
{
// Wait for a handful of milliseconds while we finish playing the last buffer we sent out ...
fiber_sleep( buffer.length() / sampleRate );
microbit-carlos (Collaborator)

What kind of values should we normally expect here?
Have you tested with an oscilloscope to check the output?

JohnVidler (Collaborator, Author)

At worst the buffer length could be 1 byte short of a full buffer, but as we generally sample at comparatively high rates, this should result in an error of only a few milliseconds at most. I can instrument this up and record it, but I'm not concerned, as the buffers are usually quite small (<1k).

I haven't thrown this on a scope yet, as running this with a MakeCode program requires a custom stack, which I do not have access to right now.

microbit-carlos (Collaborator) · Jul 29, 2022

The main reason I ask is because I'm not familiar with what kind of numbers we are dealing with.
What are the smallest, average, and largest buffer sizes and sample rates? Based on those, what would be the min, average, and max values we would expect here?
Does the buffer size usually match the sample rate? Double it? Halve it?

Is the sampleRate value clamped somewhere? Are we protected from a division by zero?

What happens if the calculation results in a fiber_sleep(0)?

> I haven't thrown this on a scope yet, as running this with a MakeCode program requires a custom stack, which I do not have access to right now.

Might be worth seeing how easy it is to replicate the original report with a CODAL-only programme and then see how this PR affects it.
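A CODAL-only repro could be quite small. Something along these lines might do it (a sketch only: the sound expression, the marker pin, and the overall shape are illustrative assumptions, not code from this PR):

#include "MicroBit.h"

MicroBit uBit;

int main()
{
    uBit.init();

    // Blocking playback: this call should not return until the audio
    // output has genuinely finished.
    uBit.audio.soundExpressions.play(ManagedString("giggle"));

    // Flip a pin the moment play() returns. On a scope this edge should
    // land at (or just after) the end of the audio waveform; if sound is
    // still audible past the edge, the original bug (#189) is present.
    uBit.io.P0.setDigitalValue(1);

    release_fiber();
}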

JohnVidler (Collaborator, Author)

> What happens if the calculation results in a fiber_sleep(0)?

That's an easy one - if you attempt to fiber_sleep() with a zero or negative number, you'll effectively just get yourself descheduled for one scheduler loop and then you'll be back on the ready queue.
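Roughly (illustrating the behaviour just described, not testing the scheduler):

// A non-positive argument still costs one pass through the scheduler:
// the fiber is descheduled for one scheduler loop, then placed straight
// back on the ready queue - effectively a yield.
fiber_sleep(0);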

JohnVidler (Collaborator, Author) · Aug 10, 2022

But your point about sample rate is valid - it is certainly possible to hand-craft a stream with a sample rate of zero, although quite why that would be useful I don't know right now.

We also do this later:

float skip = ((EMOJI_SYNTHESIZER_TONE_WIDTH_F * frequency) / sampleRate);

which would have the same issue...

The only place we set the sample rate is here:

int SoundEmojiSynthesizer::setSampleRate(int sampleRate)

and we can introduce a check there for zero or negative values. (I'll patch that in this branch and re-PR)
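As a sketch, that check could look something like this (hypothetical, not the patch that actually landed; the member name and return codes are assumptions following the usual CODAL conventions):

int SoundEmojiSynthesizer::setSampleRate(int sampleRate)
{
    // Reject zero and negative rates, so the later divisions by
    // sampleRate (the end-of-playback delay and the tone skip
    // calculation) can never divide by zero.
    if (sampleRate <= 0)
        return DEVICE_INVALID_PARAMETER;

    this->sampleRate = sampleRate;
    return DEVICE_OK;
}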

Collaborator

Looking at this, I have two questions.

  1. Carlos and I have just tracked through the numbers and it looks from visual inspection like we're sleeping for 512/44100 milliseconds, which I don't think fiber_sleep can handle well. Do we have a unit mismatch between sample rate and fiber_sleep?

  2. If (as we guess) this should be (512/(44100/1000)) then we're still going to be calling fiber_sleep(11.6099773243), which will end up as a 12 ms sleep request. With the jitter involved from the scheduler, isn't there a risk here that we end up sleeping both longer and shorter than that, which will stop us smoothly chaining sounds together?

It feels like it would be better to wait here until we get an event from the audio subsystem indicating that the transfer has completed. @finneyj I know you were specifically optimising this so that buffers could be smoothly chained, so perhaps I've not understood the special casing here? WDYT?
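For reference, a unit-corrected version of the sleep might look like this (a sketch of the suggestion in point 2, assuming buffer.length() is a sample count and sampleRate is in samples per second):

// Scale by 1000 so the result is in milliseconds, which is what
// fiber_sleep() expects; integer maths truncates, e.g.
// (512 * 1000) / 44100 == 11.
int ms = (buffer.length() * 1000) / sampleRate;
fiber_sleep(ms);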

JohnVidler modified the milestones: v0.2.41, v0.2.42 · Aug 8, 2022
JohnVidler merged commit 31095d5 into master · Aug 9, 2022
JohnVidler deleted the bug/sound-emoji-timing branch · Aug 9, 2022 14:09
JohnVidler restored the bug/sound-emoji-timing branch · Aug 10, 2022 18:37
Labels: bug (Something isn't working)
Projects: None yet
Development: Successfully merging this pull request may close this issue:
uBit.audio.soundExpressions.play() (blocking version) continues to output sound after the function returns

3 participants