Currently, the `AudioSource` is only available after the recorder's `StopAsync`. That doesn't work for the case where I want to handle the recording data in real time.
I'm wondering if it's possible to add an API like the following to the `AudioRecorder` to support tasks like sending the recording to a backend in real time.
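To make the idea concrete, here is a rough sketch of the kind of streaming hook I mean — a callback that fires for each captured PCM buffer instead of waiting for the recorder to stop. It's written in Python for brevity, and the names (`StreamingRecorder`, `on_data`, `_emit`) are hypothetical, not an existing API:

```python
from typing import Callable, List


class StreamingRecorder:
    """Toy sketch of a recorder that pushes raw PCM chunks to subscribers."""

    def __init__(self) -> None:
        self._handlers: List[Callable[[bytes], None]] = []

    def on_data(self, handler: Callable[[bytes], None]) -> None:
        # Subscribe to live audio buffers (e.g. to POST each one to a backend).
        self._handlers.append(handler)

    def _emit(self, chunk: bytes) -> None:
        # In a real recorder this would be driven by the audio capture thread
        # each time the platform hands us a filled buffer.
        for handler in self._handlers:
            handler(chunk)


# Usage: collect buffers as they arrive, without waiting for stop.
received: List[bytes] = []
rec = StreamingRecorder()
rec.on_data(received.append)
rec._emit(b"\x00\x01" * 160)  # simulate one captured buffer
```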
If we use Linear PCM (LPCM), which has no headers and allows any segment of the audio to be played independently, we can send the audio chunks to the backend. With a method for silence detection, we could split the real-time audio into chunks and send these to OpenAI's Whisper for transcription. This approach could enable near-real-time transcription display on the frontend, storage of the transcription, and summarization or other AI tasks.
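A simple energy-based splitter illustrates the silence-detection idea (a sketch, not a production VAD — `frame_samples` and `threshold` are made-up values you would tune, and real code would likely use a proper voice-activity detector):

```python
import struct


def split_on_silence(pcm: bytes, frame_samples: int = 160, threshold: int = 500):
    """Split 16-bit little-endian mono LPCM into chunks at silent frames.

    A frame is "silent" when its mean absolute amplitude is below `threshold`.
    Returns a list of sample lists, one per non-silent chunk.
    """
    samples = struct.unpack("<%dh" % (len(pcm) // 2), pcm)
    chunks, current = [], []
    for i in range(0, len(samples), frame_samples):
        frame = samples[i:i + frame_samples]
        energy = sum(abs(s) for s in frame) / len(frame)
        if energy < threshold:
            # Silence: close off the current chunk, ready to ship to the backend.
            if current:
                chunks.append(current)
                current = []
        else:
            current.extend(frame)
    if current:
        chunks.append(current)
    return chunks
```

Each returned chunk can then be wrapped (or sent raw, since LPCM needs no header) and posted to the transcription backend as soon as the silence boundary is seen.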
I'm interested in finding out how to send the minimum number of bytes with sufficient quality to a cloud-based Whisper deployment for transcription. The transcription could be saved as metadata for the audio recording, and the audio itself backed up to a cloud location. Additionally, generating .srt files with timestamps would allow users to jump to the audio segment corresponding to each subtitle.
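Given timestamped segments from the transcription step (Whisper-style APIs can return per-segment start/end times), assembling the .srt is straightforward. A minimal sketch, assuming segments arrive as `(start_seconds, end_seconds, text)` tuples:

```python
def to_srt(segments):
    """Render [(start_sec, end_sec, text), ...] as an SRT subtitle string."""

    def stamp(t: float) -> str:
        # SRT timestamps look like HH:MM:SS,mmm (comma before milliseconds).
        ms = int(round(t * 1000))
        h, ms = divmod(ms, 3600_000)
        m, ms = divmod(ms, 60_000)
        s, ms = divmod(ms, 1000)
        return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

    blocks = []
    for i, (start, end, text) in enumerate(segments, 1):
        blocks.append(f"{i}\n{stamp(start)} --> {stamp(end)}\n{text}\n")
    return "\n".join(blocks)
```

With the chunk boundaries from silence detection mapped back to recording time, each subtitle index doubles as a seek target into the backed-up audio.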