
get realtime soundcard audio as input for shadertoy shaders #27

Closed

cyberic99 opened this issue Apr 12, 2021 · 6 comments

Comments

@cyberic99

Hi,

I have tried some of the shadertoy examples.

It seems the audio input can be the 'output' from the sounds playing in jupylet, but is there a way to use realtime audio input from the soundcard?

Thank you

@nir
Owner

nir commented Apr 13, 2021

If you can read the sound card audio as an array of samples, you should be able to feed it to the shadertoy.

The get_shadertoy_audio() function accepts an optional data parameter that may contain arbitrary audio samples (I believe in the range [-1,1]):

def get_shadertoy_audio(amp=1., length=512, buffer=500, data=None, channel_time=None):

You can see how it is used in the piano example:

st0.set_channel(0, *get_shadertoy_audio(amp=5))

In the piano example the data parameter is not used, so the function reads audio data from the jupylet audio buffer instead.
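For example, a minimal sketch of passing your own samples through the data parameter (the (length, 2) shape and the [-1, 1] range are assumptions based on the description above; st0 is the shadertoy object from the piano example):

import numpy as np

# A 440 Hz test tone as a stand-in for real samples: shape (512, 2), values in [-1, 1].
t = np.arange(512) / 48000
samples = np.stack([np.sin(2 * np.pi * 440 * t)] * 2, axis=-1)

# Feed these samples to the shadertoy instead of the internal jupylet audio buffer.
st0.set_channel(0, *get_shadertoy_audio(amp=5, data=samples, length=512))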

@cyberic99
Author

I ended up with a semi-working solution.

I added this to the piano example:

import _thread

import numpy as np
import soundcard as sd

# Shared buffer: written by the recording thread, read by the render loop.
sound_input = np.zeros((512, 2))

def _rec():
    global sound_input
    # Prefer a loopback device (the soundcard's own output); otherwise
    # fall back to the default microphone.
    m = next((d for d in sd.all_microphones(include_loopback=True) if d.isloopback), None)
    if m is None:
        m = sd.default_microphone()
    with m.recorder(samplerate=48000) as mic:
        while True:
            sound_input = mic.record(numframes=512)

_rec_tid = _thread.start_new_thread(_rec, ())

And I call get_shadertoy_audio like this:

si = sound_input.copy()
st0.set_channel(0, *get_shadertoy_audio(amp=5, data=si, length=512))

It is kind of working, but the waveform is a bit shaky... I guess it is due to the lack of synchronization between the soundcard capture thread and the render loop...

Do you think I should call render() inside the audio capture loop?

Thanks for your hints.

@nir
Owner

nir commented Apr 14, 2021

The shakiness may be due to slight mistimings in the operation of the various moving parts (e.g. your recording thread). In the get_shadertoy_audio() function I fix it by calling get_correlation():

ix = get_correlation(a0.mean(-1), buffer)

It finds a subset of the input buffer that minimizes the shakiness, by maximizing correlation with a previous buffer. You can apply it to your own buffer as well; let me know if it works.
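For example, applying it to the captured buffer before handing it off might look roughly like the sketch below. This is only a sketch: the import path, and the assumption that get_correlation(mono, buffer) returns a start index into the buffer, are inferred from its use inside get_shadertoy_audio() above.

# NOTE: the import path below is an assumption; adjust it to wherever your
# jupylet version exposes these helpers.
from jupylet.audio import get_correlation, get_shadertoy_audio

# Capture a window longer than 512 frames so there is room to shift.
si = sound_input.copy()                  # e.g. shape (1024, 2)
ix = get_correlation(si.mean(-1), 500)   # assumed to return the best-aligned start index
st0.set_channel(0, *get_shadertoy_audio(amp=5, data=si[ix:ix + 512], length=512))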

In the next release I will make it an option to apply it to the user-supplied buffer.

@cyberic99
Author

Hi,

First of all, thank you for your detailed answer and for taking some time to look at this issue.

Yeah, you're right, it is much better when using get_correlation(). Maybe you could add it as an option in get_shadertoy_audio() too?

But regarding capturing the audio, I think the best way would be to call render() in the audio callback.

I have looked at the code to see how I could do it, but I'm not sure there is an easy way to call render() outside of run().

Am I correct?

@nir
Owner

nir commented Apr 14, 2021

I don't think you can do that. render() is a callback function called by the async loop.

@nir
Owner

nir commented Apr 27, 2021

Added a new parameter to auto-correlate user-supplied audio buffers:

correlate=True,
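With that, the capture snippet above should be able to pass its buffer straight through and let the library handle the alignment, something like the following (a sketch, assuming correlate is a keyword argument of get_shadertoy_audio()):

si = sound_input.copy()
st0.set_channel(0, *get_shadertoy_audio(amp=5, data=si, length=512, correlate=True))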

Thanks for this bug report!

@nir nir closed this as completed Apr 27, 2021