ear-render cannot handle ADM with included "binaural" (AP_00050001) #21

Open
WernerBleisteiner opened this issue Mar 23, 2021 · 2 comments
Labels
enhancement New feature or request

Comments

@WernerBleisteiner

WernerBleisteiner commented Mar 23, 2021

We appreciate ADM not just as a universal format for creating genuine object-based and interactive experiences, but also as a unique format for storing and archiving various audio formats/mixes in one asset (we like to call this "stacked legacy").
However, ear-render up to now cannot handle correctly described 2-channel "binaural" audio (AP/AT_00050001) and reports:
"Don't know how to produce rendering items for type Binaural".
See the attached screenshot.
[Screenshot: ss-ADMTest-SEQUENCE-v02-EBU-EAR-renderer]
For us (BR), this interferes with the implementation of ADM in a proposed pilot workflow.
This issue is related to the corresponding one for ear-production-suite.

@tomjnixon
Member

Hi,

If I remember correctly there was some discussion about this issue a long time ago in the EBU group.

To resolve this, I think these are the relevant questions, for standardisation at least:

  • What should the renderer do with binaural signals when rendering to loudspeakers? Two options:

    • Discard the signal. This is not really acceptable, because this is clearly not the intended behaviour if the input just contains a binaural signal; but sometimes this is what you want, so it should be indicated in the metadata.
    • Reproduce it using the left and right loudspeakers. This is probably not what was intended either. I know this is sometimes done in broadcast, but it's generally considered to be a last resort, so should probably also be indicated in the metadata.
  • What should the renderer do with directspeakers/objects/HOA when rendering to headphones? Two options:

    • Render these channels binaurally. If we're going to do this, the output needs to be good quality. There were some experiments using plain virtual loudspeaker rendering, which were not great. IRT implemented something better (https://github.com/IRT-Open-Source/binaural_nga_renderer), and we're working on another iteration.
    • Discard the signal. Sometimes this is what you want, but should be indicated in the metadata.
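The first decision point above could be sketched as a dispatch on a metadata flag. To be clear, the flag name and its values below are assumptions for illustration, not anything defined in BS.2076 or implemented in the EAR; the channel labels follow the BS.2051 convention (`M+030`/`M-030` are the front left/right loudspeakers):

```python
from enum import Enum

class BinauralFallback(Enum):
    """Hypothetical metadata flag: what to do with Binaural content
    when rendering to loudspeakers. Not part of BS.2076."""
    ERROR = "error"      # current ear-render behaviour
    DISCARD = "discard"  # drop the binaural channels
    STEREO = "stereo"    # reproduce over the front L/R pair

def loudspeaker_channels_for_binaural(fallback):
    """Return the loudspeaker channels the two binaural channels
    should feed, or raise, depending on the flag."""
    if fallback is BinauralFallback.ERROR:
        raise ValueError(
            "Don't know how to produce rendering items for type Binaural")
    if fallback is BinauralFallback.DISCARD:
        return []
    # STEREO: left/right binaural channels to the front pair
    return ["M+030", "M-030"]
```

With this kind of flag, the producer rather than the renderer decides which of the two compromises applies.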

For the first question, it seems clear that we need more metadata, because neither option is acceptable in all circumstances.

For the second, it's clearer what to do, but it's not really within the scope of the EAR. Hopefully this will be resolved soon.

We could make some changes in the EAR to help in the meantime:

  • Add a flag which controls the binaural to loudspeaker rendering (first question, with the default being to produce an error).
  • Implement headphone output, which just knows how to render Binaural signals (so, if you try to render Objects to headphones you still get an error).

Would either of these help? Other suggestions would be welcome!

@WernerBleisteiner
Author

Hi Tom,

thanks for looking into this.
From a practical (broadcasting operations) point of view, I argue that a binaural-labelled ADM signal should be passed straight through as a two-channel signal to M+30°/M-30°, regardless of the chosen loudspeaker layout, i.e. bypassing any processing in a binaural renderer.
Any additional binaural rendering of a genuinely binaural signal is in any case absolutely to be avoided.
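The passthrough proposed above amounts to a fixed routing matrix rather than any rendering. A minimal sketch, assuming BS.2051-style channel labels (`M+030` = front left, `M-030` = front right); this is an illustration of the proposed behaviour, not EAR code:

```python
import numpy as np

def binaural_passthrough_gains(layout_channels):
    """Build an (n_speakers x 2) gain matrix routing a 2-channel
    binaural signal straight to the M+030/M-030 pair of any layout,
    bypassing all binaural processing. Speakers other than the
    front pair receive nothing."""
    gains = np.zeros((len(layout_channels), 2))
    for i, name in enumerate(layout_channels):
        if name == "M+030":    # left binaural channel -> front left
            gains[i, 0] = 1.0
        elif name == "M-030":  # right binaural channel -> front right
            gains[i, 1] = 1.0
    return gains

# Example: a 5.1 layout only excites its front pair
layout_510 = ["M+030", "M-030", "M+000", "LFE1", "M+110", "M-110"]
g = binaural_passthrough_gains(layout_510)
```

Since the matrix is layout-independent apart from locating the front pair, the same two channels would reach the listener's ears unprocessed on every target layout.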

We do have quite a few legacy and recently created binaural "dummy head stereo" assets in our archive that should be described accordingly in ADM. Their applicability within an EPS- and EAR-based workflow would really be an advantage.

There's one technically similar/related but perceptually absolutely different use case (important in audio-only/radio as a dramaturgical element):
"head-locked stereo" (as FB360 calls it) or "in-head mono". Both kinds of signal require bypassing any binaural rendering, but that's rather another issue.
As far as I've tested IRT's nga-binaural-renderer, in-head localisation is produced (bypassing binaural rendering) when an object is located at 0/0/0.

The processing of HOA for static binaural rendering is also not my concern here.

@tomjnixon tomjnixon added the enhancement New feature or request label Aug 19, 2021