
How to get single value for xi_hat? #15

Closed
olafthiele opened this issue Dec 17, 2019 · 12 comments
@olafthiele

Thanks for your work, we would like to test whether your approach works better than what we are currently using to detect "good" audio. We are inferring with
deepxi.py --infer 1 --out_type xi_hat --gain mmse-lsa

and get the mat files containing the output arrays. How do we interpret this data or do you see an easy function to boil it down to a single value?

@anicolson
Owner

anicolson commented Dec 17, 2019 via email

@olafthiele
Author

Thanks, will look in that direction. But we are also interested in "eliminating" the noise and have tried your tool with some success. We are considering transferring/retraining it with our own data, as we already use DeepSpeech and know which chunks are of good quality. But first, we would like to see how good the current model is. We haven't looked too deeply into your code yet and were therefore wondering what to do with the .mat files.

@anicolson
Owner

Did I reply to this?

@olafthiele
Author

Not yet :-) It would be great to know whether you think a simple standard deviation over the included 257-element vectors would yield something useful.

@anicolson
Owner

You could simply use deepxi.py --infer 1 --out_type y --gain srwf

to save the enhanced speech .wav files, and then give them to DeepSpeech. This would be very easy to do.

A more complex alternative would be to include the enhanced speech magnitude spectrum produced by Deep Xi as part of the front-end of Deep Speech. Deep Speech utilises MFCCs as features, which are computed from the magnitude spectrum of the given wav file.
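As a rough illustration of that front-end idea, here is a compact sketch of computing MFCC-like features directly from a magnitude spectrum (such as the enhanced one Deep Xi produces). It assumes 257-point spectra (n_fft = 512) at 16 kHz; the filter count, coefficient count, and DCT settings are illustrative, not DeepSpeech's exact front-end:

```python
# Sketch: MFCC-like features from a magnitude spectrum. NOT the exact
# DeepSpeech front-end; n_filters=26 and n_coeffs=13 are assumptions.
import numpy as np
from scipy.fft import dct

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters=26, n_fft=512, sr=16000):
    # Triangular filters spaced evenly on the mel scale.
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(n_filters):
        lo, centre, hi = bins[i], bins[i + 1], bins[i + 2]
        for k in range(lo, centre):
            fb[i, k] = (k - lo) / max(centre - lo, 1)
        for k in range(centre, hi):
            fb[i, k] = (hi - k) / max(hi - centre, 1)
    return fb

def mfcc_from_magnitude(mag, n_coeffs=13, sr=16000):
    # mag: (n_frames, 257) magnitude spectrum, i.e. n_fft = 512.
    power = mag ** 2
    fb = mel_filterbank(n_fft=512, sr=sr)
    mel_energy = np.log(power @ fb.T + 1e-10)  # log mel-filterbank energies
    return dct(mel_energy, type=2, axis=1, norm="ortho")[:, :n_coeffs]

feats = mfcc_from_magnitude(np.abs(np.random.randn(100, 257)))
print(feats.shape)  # (100, 13)
```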

@olafthiele
Author

Thanks, we already tried that, with mixed results. We would therefore like to find out which types of background noise your algorithm detects better. For that, it would be great to have some sort of measurement showing how noisy your algorithm rates a certain chunk. Do you see a way to do that?

@anicolson
Owner

With the audio that you are using, do you have a reference version? i.e. an ideal version, or a version without noise?

@olafthiele
Author

No, we have around 100 000 chunks, and around a third are manually labelled as noisy, with heavy or light noise labels. It would be great to see whether your algorithm would label them the same way, or where it differs. We could then label them automatically, or clean them before feeding them to DeepSpeech to get better results.

@anicolson
Owner

anicolson commented Dec 21, 2019

You could use the a priori SNR in dB averaged over each frame to understand how much noise is in each time-region of a chunk, or averaged over the whole chunk if you just want the chunk's overall SNR.

The overall SNR of the chunk could then be used as the label.

@olafthiele
Author

Great, so if I understand you correctly, I could average the vector output in the .mat files, as each 257-element vector represents a 16 ms window. And the .mat values are the normalized dB values? Is there any indication of what values are noisy or clean?

@anicolson
Owner

So the window size is 32 ms, where the windows overlap by 16 ms. So there is a 32 ms window every 16 ms. The .mat file has the a priori SNR values. 10*log10( ) would give the a priori SNR values in dB. Averaging the 257 point vectors would give the average a priori SNR in dB for each of the frames. A value of 30 dB would indicate that the frame would be largely dominated by speech. A value of -10 dB would indicate that the frame is largely dominated by noise.

Hope this helps.
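Putting the above together, a small sketch of turning a saved xi_hat matrix into per-frame and per-chunk SNR estimates. The array shape follows the thread (one 257-point vector per 32 ms frame); the labelling cutoffs are hypothetical values loosely based on the 30 dB / -10 dB guideposts above, not calibrated thresholds:

```python
# Sketch: average a priori SNR in dB per frame and per chunk.
# Assumptions: xi_hat has shape (n_frames, 257); the heavy/light noise
# cutoffs below are illustrative, not calibrated values.
import numpy as np

def snr_db(xi_hat, eps=1e-10):
    return 10.0 * np.log10(xi_hat + eps)   # a priori SNR in dB, per bin

def frame_snr_db(xi_hat):
    return snr_db(xi_hat).mean(axis=1)     # one value per 32 ms frame

def chunk_snr_db(xi_hat):
    return float(snr_db(xi_hat).mean())    # one value for the whole chunk

def label_chunk(xi_hat, heavy_below=-5.0, light_below=10.0):
    # Hypothetical rule mapping overall SNR to the dataset's noise labels.
    snr = chunk_snr_db(xi_hat)
    if snr < heavy_below:
        return "heavy noise"
    if snr < light_below:
        return "light noise"
    return "clean"

xi_hat = np.full((50, 257), 1000.0)        # 10*log10(1000) = 30 dB everywhere
print(chunk_snr_db(xi_hat), label_chunk(xi_hat))  # ~30.0, "clean"
```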

@olafthiele
Author

Perfect, thanks a lot mate and happy holidays
