What changes are needed to support streaming/real-time operation? #17

Hello! I am looking for a library to perform analysis on a live stream from a microphone. Based on the example scripts, Acoular seems to do generally what I need, but it appears to be designed only for prerecorded data. In fact, in #12 you mentioned that it isn't designed for real-time use.

What changes would need to be made to do live processing? I am a programmer and can make modifications, but I would like to get a general idea of the scale of the work involved.

Comments
Are you only using a single microphone? In that case, maybe start with PortAudio and some Python bindings, e.g. PyAudio.
My impression was that modern PCs should be more than capable of handling 4 or 6 mics at 44.1 kHz, including the signal processing of those streams. (Now, Python may not be up to the task, but that's a different story...) I was wondering what changes need to be made to Acoular to support streaming from something like PyAudio, performing the computation in real time. For example, Acoular currently has the cache hard-wired in at several places; obviously there is little use for that when the audio data is never written to disk in the first place.
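For reference, a minimal sketch of what the acquisition side could look like with PyAudio. The channel count, block size, and scaling below are illustrative assumptions, not anything Acoular prescribes.

```python
# Minimal multichannel capture sketch with PyAudio (channel count and
# block size are assumptions for illustration).
import numpy as np
import pyaudio

RATE = 44100
CHANNELS = 6        # hypothetical 6-mic array
BLOCK = 1024        # frames per read

pa = pyaudio.PyAudio()
stream = pa.open(format=pyaudio.paInt16, channels=CHANNELS,
                 rate=RATE, input=True, frames_per_buffer=BLOCK)

try:
    for _ in range(100):  # read roughly 2.3 s of audio
        raw = stream.read(BLOCK)
        # interleaved int16 bytes -> (BLOCK, CHANNELS) float array
        samples = np.frombuffer(raw, dtype=np.int16)
        samples = samples.reshape(-1, CHANNELS).astype(np.float64) / 32768.0
        # ... hand 'samples' to the processing chain here ...
finally:
    stream.stop_stream()
    stream.close()
    pa.terminate()
```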
That is quite possibly true. The systems I have seen usually have a dedicated DSP chip or FPGA to handle the I/O and signal processing, but that may stem from a time when processors were less powerful and more expensive. On the other hand, most microphone arrays have at least 32 channels, and that might still be a bit too much for a PC that is not dedicated to that kind of I/O. It's an interesting idea nonetheless, and I think it would be nice to see how much is possible with just a modern PC. Unfortunately, I cannot suggest any other paths to follow.
While it strongly depends on what exactly you're planning to do, real-time applications in Acoular are possible in principle. For our measurements, we actually do use classes with a TimeSamples-like API (a "result" generator etc.) whose task is to deliver measurement data from our hardware in real time. At the moment, we do not use this to do beamforming, just to write the data directly to a hard drive. For time-domain beamforming, you could tunnel the data through a BeamformerTime object (a sketch of the data-delivery side follows below). For this, you would have to solve two problems.
We are in fact working on both at the moment, but I don't want to give a prediction as to when working solutions will be implemented in Acoular.
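As an illustration of that pattern, here is a duck-typed sketch of a streaming source exposing the TimeSamples-like result() generator protocol, fed from a sounddevice callback via a queue. The class name StreamSamplesGenerator is hypothetical, and a real implementation would need to subclass acoular.SamplesGenerator (a Traits class) to interoperate fully with BeamformerTime; this sketch only shows the generator protocol itself.

```python
# Sketch of a TimeSamples-like streaming source: exposes the same
# block-generator result() protocol that Acoular's time-domain classes
# consume. Fed from a sounddevice input callback through a queue.
import queue
import numpy as np
import sounddevice as sd  # assumption: sounddevice is available

class StreamSamplesGenerator:          # hypothetical class name
    def __init__(self, device=None, numchannels=16, sample_freq=44100.0):
        self.device = device
        self.numchannels = numchannels
        self.sample_freq = sample_freq
        self._q = queue.Queue()

    def _callback(self, indata, frames, time, status):
        # called by sounddevice for every incoming audio block
        self._q.put(indata.copy())

    def result(self, num):
        """Yield blocks of shape (num, numchannels), like TimeSamples.result."""
        buf = np.empty((0, self.numchannels))
        with sd.InputStream(device=self.device,
                            channels=self.numchannels,
                            samplerate=self.sample_freq,
                            callback=self._callback):
            while True:
                # accumulate callback blocks, re-slice into fixed-size blocks
                buf = np.vstack((buf, self._q.get()))
                while buf.shape[0] >= num:
                    yield buf[:num]
                    buf = buf[num:]
```

Note that, as discussed further down in this thread, such a generator never terminates on its own; the consumer has to decide when to stop iterating.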
I tried to use SoundDeviceSamplesGenerator, but when I called the result method, nothing is printed.
OK, I've debugged the code and discovered:
We get an infinite loop because t.result(bs) returns a generator (which also receives data from the mics on every audio stream callback).
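In other words, calling result() only constructs the generator; nothing runs (and nothing is printed) until it is iterated, and with a live source the iteration never ends on its own. A minimal consumption sketch, reusing t and bs from the comment above:

```python
# result() only builds a generator; it executes lazily during iteration.
# With a live source it never ends, so stop it explicitly.
bs = 256                      # block size in samples
gen = t.result(bs)            # t: the SoundDeviceSamplesGenerator-like source

for i, block in enumerate(gen):
    print(i, block.shape)     # each block has shape (bs, numchannels)
    if i >= 10:               # process only 11 blocks, then stop
        break
```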
The frequency-domain methods are currently designed to deliver only one result, i.e. the average over all time. If you have an endless audio stream, this would indeed mean that the PowerSpectra object keeps collecting data forever. For frequency-domain beamforming using only part of a stream, you could try putting a MaskedTimeInOut object between the SoundDeviceSamplesGenerator and the PowerSpectra objects (with appropriately set "start" and "stop" sample values), as in the sketch below.
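A sketch of what that chain might look like. The geometry file, grid, and parameter values are assumptions, and the steering setup follows newer Acoular versions (older versions attach grid and mpos to the beamformer directly):

```python
# Sketch: frequency-domain beamforming on a finite slice of a live
# stream, bounded by MaskedTimeInOut (all parameter values below are
# illustrative assumptions).
import acoular

mics = acoular.MicGeom(from_file='array_16.xml')   # hypothetical geometry file
source = acoular.SoundDeviceSamplesGenerator(device=0, numchannels=16)
sliced = acoular.MaskedTimeInOut(source=source, start=0, stop=16384)
ps = acoular.PowerSpectra(time_data=sliced, block_size=1024, window='Hanning')
grid = acoular.RectGrid(x_min=-0.5, x_max=0.5, y_min=-0.5, y_max=0.5,
                        z=0.5, increment=0.05)
st = acoular.SteeringVector(grid=grid, mics=mics)
bb = acoular.BeamformerBase(freq_data=ps, steer=st)
result = bb.synthetic(4000, 3)   # third-octave band map at 4 kHz
```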
If you want to do real-time beamforming, you should only use classes that are derived from TimeInOut; the PowerSpectra class is not meant for this. You could check out the examples for the time-domain beamformers for a start.
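Along the lines of those examples, a sketch of a purely time-domain chain (BeamformerTime, TimePower, TimeAverage) pulled block by block; geometry and parameter values are again illustrative assumptions, and the class names reflect newer Acoular versions:

```python
# Sketch of a time-domain chain driven block-wise, following the
# time-domain beamforming examples (parameter values are assumptions).
import acoular

mics = acoular.MicGeom(from_file='array_16.xml')       # hypothetical file
source = acoular.SoundDeviceSamplesGenerator(device=0, numchannels=16)
grid = acoular.RectGrid(x_min=-0.5, x_max=0.5, y_min=-0.5, y_max=0.5,
                        z=0.5, increment=0.05)
st = acoular.SteeringVector(grid=grid, mics=mics)
bt = acoular.BeamformerTime(source=source, steer=st)   # TimeInOut-derived
pw = acoular.TimePower(source=bt)                      # squared output signal
avg = acoular.TimeAverage(source=pw, naverage=1024)    # block-wise average

# pull results block by block; each row of a block is one map per time step
for block in avg.result(1):
    level_map = block[0].reshape(grid.shape)
    # ... update a live display of the map here; break when done ...
    break
```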
from acuma16 import UMA16SamplesGenerator
ModuleNotFoundError: No module named 'acuma16'