Could you please let me know how the paper deals with the sound source location?
Do you assume it is co-located with the microphone?
Since the RIR depends on the source-microphone path, I didn't find this information in the paper.
Or do you place the source randomly?
Hi, apologies for the late response; missed this earlier!
We only have scene-level correspondence, so we focus on late reverberation (which is less sensitive to within-room position and source-mic distance) and don't use other metrics (DRR, EDT) that reflect these other kinds of variation. This is also part of the motivation for using a stochastic mapping. However, you may be interested in this very interesting recent work.
Quotes from our paper describing this, in case they're of interest:
From Data Aggregation (3.1): "Although this dataset contains high variability in several reverberant parameters, e.g. early reflections and source-microphone distance, it allows us to learn characteristics of late-field reverberation."
From Limitations and Future Work (4.4): "Our dataset also contains much variation in other relevant parameters (e.g. DRR and EDT) in a way we cannot semantically connect to paired images, given the sources of our data."
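For context on why late reverberation is comparatively robust: the late-field decay rate (e.g. RT60) is commonly estimated from an RIR via Schroeder backward integration, and the fitted decay slope depends far less on source-mic geometry than DRR or EDT do. Below is a minimal generic sketch of that standard estimate; it is not the paper's implementation, and the function name, fitting range, and synthetic RIR are all illustrative.

```python
import numpy as np

def estimate_rt60(rir, fs, decay_range=(-5.0, -25.0)):
    """Estimate RT60 from an RIR via Schroeder backward integration.

    Fits a line to the energy decay curve (EDC) between the dB levels in
    decay_range (a T20-style fit here), then extrapolates the slope to
    -60 dB. Generic reference code, not the paper's method.
    """
    # Schroeder EDC: backward-integrated squared impulse response, in dB
    edc = np.cumsum(rir[::-1] ** 2)[::-1]
    edc_db = 10.0 * np.log10(edc / edc[0] + 1e-12)

    # Fit a line to the EDC within the chosen decay range
    t = np.arange(len(rir)) / fs
    mask = (edc_db <= decay_range[0]) & (edc_db >= decay_range[1])
    slope, _ = np.polyfit(t[mask], edc_db[mask], 1)

    # Time for the fitted decay to fall by 60 dB
    return -60.0 / slope

# Synthetic exponentially decaying noise RIR with RT60 = 0.5 s
fs = 16000
t = np.arange(fs) / fs  # 1 second
rt60_true = 0.5
rng = np.random.default_rng(0)
rir = rng.standard_normal(len(t)) * np.exp(-6.9078 * t / rt60_true)
print(estimate_rt60(rir, fs))  # should land near 0.5
```

The fitting range and exact fit window would shift the estimate slightly on real RIRs, which is exactly the kind of within-room variation the scene-level correspondence cannot resolve.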