
Embedder API for gstreamer audio setup #24068

Open · asajeffrey opened this issue Aug 27, 2019 · 12 comments

@asajeffrey (Member) commented Aug 27, 2019

This came up in the context of magicleap, but it applies to any device that needs to be able to customize the audio backend.

Currently we pass in the (E)GLContext from the embedder to Servo, and then to gstreamer for use as the video target, but we don't have a similar setup for audio. This makes it tricky to support devices like magicleap's 3D audio, even though gstreamer can handle it, because Servo is sitting between gstreamer and the embedder.
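A minimal sketch of what an analogous audio hook might look like, threaded through the embedder's init options alongside the existing GL context. All names here are invented for illustration; this is not Servo's actual API.

```rust
// Hypothetical sketch: the embedder passes an audio setup hook into the
// media backend init, next to the GL context it already provides for video.

// Opaque stand-in for the (E)GLContext handle the embedder passes today.
type GlContextHandle = usize;

// Hypothetical hook the embedder could provide for audio: given a sink
// description, perform platform-specific setup (e.g. anchor a 3D audio node).
type AudioSetupHook = Box<dyn Fn(&str) + Send + Sync>;

struct EmbedderMediaInit {
    gl_context: GlContextHandle,
    // Embedders without special audio needs would pass None.
    audio_setup: Option<AudioSetupHook>,
}

fn init_media_backend(init: EmbedderMediaInit) -> Vec<String> {
    let mut log = vec![format!("video: using GL context {:#x}", init.gl_context)];
    if let Some(hook) = &init.audio_setup {
        // The backend would call this when it creates its audio sink.
        hook("spatial-audio-sink");
        log.push("audio: embedder hook invoked".to_string());
    }
    log
}
```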

@jdm (Member) commented Aug 27, 2019

Can you be more specific about the needs of magicleap here? What sort of callbacks does the embedder require from the audio playback?

@asajeffrey (Member, Author) commented Aug 27, 2019

@jdm: When gstreamer creates an audio node, it needs the embedding app to create it and anchor it in 3D space. @xclasse's code does this by having an event that the embedder is meant to handle, but we could also do it by passing in a callback that creates the audio node.
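The two shapes discussed here could look roughly like the following (a sketch with invented names, not actual Servo or lumin APIs):

```rust
// Shape 1: an event the embedder is expected to handle in its event loop.
#[allow(dead_code)]
enum EmbedderEvent {
    CreateAudioNode { node_id: u64 },
}

// Opaque stand-in for a platform audio node (e.g. a lumin::AudioNode).
struct AudioNodeHandle(u64);

// Shape 2: a callback the embedder passes in up front; the backend calls it
// when it needs a node created and anchored in 3D space.
type CreateAudioNodeCallback = Box<dyn FnMut(u64) -> AudioNodeHandle + Send>;

struct MediaBackend {
    create_node: CreateAudioNodeCallback,
}

impl MediaBackend {
    fn new_spatial_source(&mut self, id: u64) -> AudioNodeHandle {
        // The backend asks the embedder to create and anchor the node.
        (self.create_node)(id)
    }
}
```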

@asajeffrey (Member, Author) commented Aug 27, 2019

(The magicleap case is particularly complex as the node has to be created on the audio thread apparently, which the application doesn't know about at startup.)

@asajeffrey (Member, Author) commented Aug 27, 2019

cc @xclaesse, whose github handle I got wrong.

@jdm (Member) commented Aug 27, 2019

So this would be the first time we need to have embedder-specified callbacks that execute in the media backend. Currently we smuggle embedder data to the gstreamer backend as usize values since they're all pointers, and that might work for function pointers as well but it's also kind of terrifying.
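The smuggling pattern described here looks roughly like this sketch (invented names): the cast to `usize` erases the type, so nothing stops the receiving side from casting the integer back to the wrong pointer type, which is what makes it terrifying.

```rust
// Sketch of the "smuggle a pointer as usize" pattern: embedder data crosses
// an API boundary that only carries integers.

struct EmbedderData {
    sample_rate: u32,
}

// Sending side: erase the pointer to a usize.
fn smuggle(data: &'static EmbedderData) -> usize {
    data as *const EmbedderData as usize
}

// Receiving side: must trust that the usize really is a live
// *const EmbedderData; the compiler cannot check this.
unsafe fn unsmuggle(raw: usize) -> &'static EmbedderData {
    &*(raw as *const EmbedderData)
}

static DATA: EmbedderData = EmbedderData { sample_rate: 44_100 };
```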

@jdm (Member) commented Aug 27, 2019

Actually, a bunch of that smuggling might not be necessary any more - now that we initialize the GStreamer backend in particular in this code, we should be able to pass embedder callbacks there as well.

@asajeffrey (Member, Author) commented Aug 27, 2019

Yes, it looks like we might be able to get configuration data from the embedder to gstreamer without casting to/from usize.

@gterzian (Member) commented Aug 29, 2019

> So this would be the first time we need to have embedder-specified callbacks that execute in the media backend.

> The magicleap case is particularly complex as the node has to be created on the audio thread apparently, which the application doesn't know about at startup.

I think it's worth highlighting that the "audio rendering thread" and the "backend" are currently part of the same process, and that they will likely be separated in the future in light of the requirements of implementing AudioWorklet; see the discussion started at #23807 (comment)

The ideal setup is for the backend to live in its own process, and for a rendering thread to run in the content process (with an audio worklet running there as well).

Also, from what I understand it's the "audio rendering thread" that creates audio nodes (although I think the terminology differs: a lumin::AudioNode is not the same thing as an "audio node" in the rendering graph).

So I think this will require some coordination between script (the rendering thread), the backend (in its own process), and the embedder. And it might be a good idea to do this in a way that takes the potential future changes into account, meaning not assuming the backend and rendering thread are in the same process. This seems mostly relevant when calling directly into lower-level gstreamer functions and using callbacks, since I assume that cannot be done across processes.


I think it would also be interesting to know if a lumin::AudioNode could be made serializable, which would enable creating it on the embedder thread and then sending it over IPC to the media backend. In such a case, the lower-level gstreamer stuff could be hidden from the embedder, which could basically just handle another message, create a lumin::AudioNode, and send it in an IPC reply.

If not, I guess you'd have to run the code that creates a lumin::AudioNode from somewhere that can invoke low-level gstreamer functions directly (which I assume cannot be done across processes).
If the lumin::AudioNode needs to be created "on the main thread", that almost seems to imply you'd have to run gstreamer on the main thread too, rather than in a separate process.

@xclaesse commented Aug 29, 2019

As the one who wrote the GStreamer audiosink that uses lumin::AudioNode, I could provide some details here:

  • The lumin::AudioNode object must be created on the application's main thread, the one that runs the lumin::LandscapeApp object. Most of the AudioNode APIs must be called from that thread.
  • GStreamer will send a message on the bus, from that app thread, when the AudioNode object must be created.
  • Due to MagicLeap platform bugs (or at least very weird behaviour), the AudioNode object creation and some follow-up setup calls (that gstreamer does) must be done without returning to the app's main loop between them. That means it must all be done within the gstreamer message handler call.
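The constraint in the last bullet can be modeled in a few lines of pure Rust (no real GStreamer calls; `AudioNode` and the message type are invented): the bus message handler, running on the app's main thread, must both create the node and complete its follow-up setup before it returns to the main loop.

```rust
// Pure-Rust model of the MagicLeap constraint: node creation and its
// follow-up setup happen atomically inside the bus message handler.

struct AudioNode {
    created: bool,
    spatially_anchored: bool,
}

enum BusMessage {
    NeedAudioNode,
}

fn handle_bus_message(msg: BusMessage) -> Option<AudioNode> {
    match msg {
        BusMessage::NeedAudioNode => {
            // 1. Create the node on this (main) thread...
            let mut node = AudioNode { created: true, spatially_anchored: false };
            // 2. ...and do the follow-up setup calls immediately, without
            //    returning to the app's main loop in between.
            node.spatially_anchored = true;
            Some(node)
        }
    }
}
```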

@gterzian (Member) commented Aug 29, 2019

@xclaesse Thanks!

So that means we cannot pass a callback from the main thread to the audio backend upon initialization, since that callback would not be executing on the main thread.

Instead, it sounds like we need to send a message from the audio backend (which deals internally with the gstreamer message bus) to the main thread (really the "embedder"), asking it to create the lumin::AudioNode and send it back to the backend. The main thread would then probably block on a reply from the backend, as a way to park its event loop while the backend is busy hooking up the received lumin::AudioNode (and maybe the backend needs to send further messages back so that some additional AudioNode APIs are called on the main thread).
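The request/reply shape described above can be sketched with `std::sync::mpsc` as a stand-in for whatever channel Servo would actually use (all names invented): the backend thread asks the main thread to create the node and blocks until it arrives.

```rust
// Sketch: backend asks the main thread to create the audio node and waits
// for the reply before continuing.
use std::sync::mpsc;
use std::thread;

struct AudioNodeHandle(u32);

enum ToMain {
    // Carries a channel on which the main thread sends the created node.
    CreateAudioNode(mpsc::Sender<AudioNodeHandle>),
}

fn run_backend(to_main: mpsc::Sender<ToMain>) -> thread::JoinHandle<u32> {
    thread::spawn(move || {
        let (reply_tx, reply_rx) = mpsc::channel();
        to_main.send(ToMain::CreateAudioNode(reply_tx)).unwrap();
        // Block until the main thread has created the node.
        let node = reply_rx.recv().unwrap();
        node.0
    })
}

fn main_thread_demo() -> u32 {
    let (to_main_tx, to_main_rx) = mpsc::channel();
    let backend = run_backend(to_main_tx);
    // Main-thread event loop: create the node when asked.
    match to_main_rx.recv().unwrap() {
        ToMain::CreateAudioNode(reply) => {
            reply.send(AudioNodeHandle(42)).unwrap();
        }
    }
    backend.join().unwrap()
}
```

Note that this sketch only works within one process; going over real IPC would additionally require the node (or a proxy for it) to be serializable, which is exactly the open question below.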

For me the only question is whether the message sending the lumin::AudioNode to the backend could go over IPC or not. Do you know whether lumin::AudioNode is something that could be serialized and sent across processes?

If not, that means we know we have to run the media backend in the "main process", alongside the embedder...

@gterzian (Member) commented Aug 30, 2019

On second thoughts, it probably doesn't make sense to send the lumin::AudioNode to the backend, whether that's in another process or not, since all usage of lumin::AudioNode will have to occur on the main thread.

So the embedder can just store it as audio_node or something, and use it in response to receiving a message from the audio backend.

One thing we need to figure out is how to do the equivalent of g_object_set(G_OBJECT(msg->src), "audio-node", self->mAudioNode, NULL); see https://gitlab.freedesktop.org/xclaesse/gstreamer_demo/commit/3a394883f9804b524c01fee0823e8a27ba4b60ec

Perhaps the backend could send a closure in the message requesting the creation of an audio node, and the node would then be passed to the closure, which internally would hook it up to gstreamer. This does imply the audio backend runs in the same process as the main thread, I think.
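The closure-in-message idea could be sketched like this (invented names; the closure stands in for the `g_object_set(..., "audio-node", ...)` hook-up step). As noted, it only works if both sides share an address space, since a boxed closure cannot cross a process boundary.

```rust
// Sketch: the backend's request carries a boxed closure; the main thread
// creates the node and hands it to the closure, which does the hook-up.

struct AudioNode {
    id: u32,
}

// FnOnce because the hook-up consumes the node.
type HookUp = Box<dyn FnOnce(AudioNode) -> String + Send>;

enum BackendRequest {
    CreateAudioNode { hook_up: HookUp },
}

// Main thread: create the node, then run the backend-supplied closure,
// which would internally do the gstreamer "audio-node" property set.
fn handle_request(req: BackendRequest) -> String {
    match req {
        BackendRequest::CreateAudioNode { hook_up } => {
            let node = AudioNode { id: 7 };
            hook_up(node)
        }
    }
}
```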
