Currently, we just use the raw acoustic energy of the input signal and equalize the channels based on that, which does not match human perception. The main problem is that lower notes carry a lot of energy but humans don't perceive them as particularly loud, so people with low voices end up much quieter in the mix than people with high voices.
If we used A-weighting when evaluating the volume of the input signal, everyone would sound more like they were singing at the same volume.
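For concreteness, here's a rough sketch of what "A-weighted volume" could look like: weight each FFT bin by the standard IEC 61672 A-curve and take the RMS of the weighted spectrum instead of the raw energy. This is illustrative Python/NumPy only, not code from this repo, and `a_weighted_rms` is just a hypothetical name:

```python
import numpy as np

def a_weighting_gain(freqs_hz):
    """Linear gain of the IEC 61672 A-weighting curve at the given frequencies."""
    f2 = np.asarray(freqs_hz, dtype=float) ** 2
    ra = (12194.0**2 * f2**2) / (
        (f2 + 20.6**2)
        * np.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194.0**2)
    )
    # Offset by +2.0 dB so the gain is ~1.0 (0 dB) at 1 kHz.
    return ra * 10 ** (2.0 / 20.0)

def a_weighted_rms(buffer, sample_rate):
    """A-weighted RMS of a mono float buffer, computed in the frequency domain."""
    n = len(buffer)
    spectrum = np.abs(np.fft.rfft(buffer))
    freqs = np.fft.rfftfreq(n, d=1.0 / sample_rate)
    weighted = spectrum * a_weighting_gain(freqs)
    # One-sided spectrum: double every bin except DC (and Nyquist when n is even),
    # then use Parseval's theorem to get back to a time-domain RMS.
    scale = np.full(len(weighted), 2.0)
    scale[0] = 1.0
    if n % 2 == 0:
        scale[-1] = 1.0
    return np.sqrt(np.sum(scale * weighted**2)) / n
```

Using this value instead of the plain RMS when normalizing channels would boost high voices less and low voices more, closer to how loud they actually sound.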
After poking around a bit, I'm not convinced this is worth doing. I didn't find any ready-to-use library or invocation that can apply A-weighting to a buffer so we can compute its A-weighted volume. We're also still going to have lots of differences due to people's sense of how loud they should sing during calibration.
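For future reference if this gets revisited: rather than a ready-made library, the filter itself can be derived from the published analog A-weighting transfer function and discretized with SciPy's bilinear transform. This is only a sketch under that assumption; `a_weighting_coeffs` and `a_weighted_rms` are hypothetical names, not anything in this project.

```python
import numpy as np
from scipy.signal import bilinear, lfilter

def a_weighting_coeffs(sample_rate):
    """Digital IIR coefficients for the analog A-weighting transfer function."""
    # Published pole frequencies (Hz) of the analog A-weighting filter,
    # plus the gain that puts 0 dB at 1 kHz.
    f1, f2, f3, f4 = 20.598997, 107.65265, 737.86223, 12194.217
    a1000 = 1.9997

    num = [(2 * np.pi * f4) ** 2 * 10 ** (a1000 / 20.0), 0, 0, 0, 0]
    den = np.polymul([1, 4 * np.pi * f4, (2 * np.pi * f4) ** 2],
                     [1, 4 * np.pi * f1, (2 * np.pi * f1) ** 2])
    den = np.polymul(np.polymul(den, [1, 2 * np.pi * f3]),
                     [1, 2 * np.pi * f2])
    # Bilinear transform maps the analog prototype to a digital filter at this rate.
    return bilinear(num, den, sample_rate)

def a_weighted_rms(buffer, sample_rate):
    """Filter the buffer with the A-weighting curve, then take the plain RMS."""
    b, a = a_weighting_coeffs(sample_rate)
    weighted = lfilter(b, a, buffer)
    return np.sqrt(np.mean(weighted ** 2))
```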
I'm thinking we should focus more on making it easier to manually adjust until it sounds good: #15