2.0 roadmap #142
Comments
This is great! I actually have a use case for processing webcam footage through a face detection/processing phase (e.g. OpenCV) before sending it over the WebRTC wire, so the external video sources/tracks are a welcome feature. A suggestion for the signaling portion: maybe an Azure server could be stood up specifically as a demo signaling server. To prevent abuse of the server, the signal sending could be purposefully slowed down so that sending the SDP from local to remote is delayed by a few seconds (which would hopefully discourage anyone from using it in production). I've been testing webrtc-uwp-sdk and I like that it includes a simple signaling server implementation you can run from the command line. Other than that, keep up the great work! I'm taking a stab at the MixedReality-WebRTC SDK and so far I like what I see :) Cheers,
Thanks! We are actually building upon webrtc-uwp-sdk and using it to support UWP, but not their signaling server. We know people are asking for some Azure solution for signaling (see #45), but making a production-ready one is a large task and we don't have the resources for now, unfortunately. But yes, I agree this makes a lot of sense as a service that one can deploy to their own subscription. For simple testing we rely for now on node-dss.
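To make the signaling discussion above concrete, here is a minimal sketch of a dead-simple relay in the spirit of node-dss: each peer has a mailbox keyed by its ID, senders append messages, and the recipient polls them off in order. It also folds in the artificial delivery delay suggested in the previous comment as a way to discourage production use of a demo server. This is an illustrative toy, not node-dss's actual implementation or HTTP API; the class and method names are invented for this sketch.

```python
import time
from collections import defaultdict, deque


class DemoSignalingRelay:
    """Toy signaling relay: per-peer mailboxes with an optional
    artificial delivery delay (abuse deterrent for a demo server)."""

    def __init__(self, delay_seconds=0.0, clock=None):
        self._delay = delay_seconds
        self._clock = clock or time.monotonic  # injectable for testing
        self._queues = defaultdict(deque)      # peer_id -> (ready_at, message)

    def post(self, peer_id, message):
        """A sender posts an SDP offer/answer or ICE candidate to a peer;
        it becomes visible only after the configured delay."""
        self._queues[peer_id].append((self._clock() + self._delay, message))

    def poll(self, peer_id):
        """The peer polls its mailbox; returns the oldest ready message,
        or None if nothing has cleared the delay yet."""
        queue = self._queues[peer_id]
        if queue and queue[0][0] <= self._clock():
            return queue.popleft()[1]
        return None
```

In a real deployment the `post`/`poll` pair would sit behind HTTP endpoints; the delay only needs to apply on the delivery side, since SDP exchange is latency-tolerant during connection setup.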
Is there any progress on Android support? I'm looking forward to it on Android.
@Anberm : @eanders-ms is making some good progress on his fork, and I started catching up with him to merge back his changes. There has been no commit yet on this repository though, but we both have some local builds set up on our dev machines. But we still have some technical issues and are not quite there yet. We are still actively working on this though. |
Is there any progress on Android support?
@qazqaz12378 there is an open PR (#193); please feel free to comment on it and sync with @eanders-ms, who is leading that dev work.
Hi, |
@parad74: 2.0.0 was released yesterday.
Closing this after release. |
Now that 1.0 is out with decent H.264 hardware encoding support, providing a WebRTC solution for HoloLens 1/2 and other MR devices as well as PC/Laptop, it is time to plan ahead for the next 2.0 milestone and the features we want to put in it. The list below is based on the strongest signals we got from various sources including GitHub issues, Slack messages, and some direct contacts with partners and internal Microsoft teams using this project.
Checked items are already committed in the repository on the `master` branch, although they are not available in any NuGet package yet, as most of them include breaking changes. In order to balance the waiting time vs. the availability of NuGet packages, the idea here is to commit to `master` the various changes required for those items, making them available to developers who are willing to rebuild the project by themselves, and eventually wrap all of those up together into a NuGet v2.0.0 package once they are all done, limiting the number of NuGet packages with breaking changes.

This is a best-effort plan considering the amount of work and the limited resources available to work on those items. We are tentatively aiming for some time around Q1 2020 for the overall milestone, although getting each item stable and polished to a state we feel is production-ready will dictate the final release date of the NuGet packages. Some items, like Android support, also have a fair amount of uncertainty associated with them.
Already started
- External video tracks (can Support customize local Video source in Unity like renderTexture? #35): ability to create a local video track based on a user callback providing the video frames for that track. This enables many use cases, from device-less testing to custom video processing. This covers a `LocalVideoTrack` object separate from `PeerConnection`, an `ExternalVideoTrackSource` based on a custom callback delivering some I420A or ARGB32 raw video frames, and some methods to create a `LocalVideoTrack` based on that source.
- Audio source surfacing into C# and Unity (spatial audio support) (Support for connecting a WebRTC audio track to a Unity AudioSource #92): ability to disable the default audio output directly to the sound card, and instead receive audio frame callbacks, much like the video ones, for audio output or any other use. This enables in particular injecting the remote audio into the Unity DSP pipeline, to be rendered alongside any other Unity audio source, allowing effects like spatial audio to be applied.
- Support for Android deployment in Unity (Support iOS and Android SDK #28): we had strong signals from users asking for Android/iOS support. In a first step, and in partnership with another team, we have started working on Android support via Unity deployment. We are not yet able to commit to iOS in that same time frame, but it is still considered for a next milestone.
- Expose some statistics (Get network data amount (Mbps) during WebRTC usage #128): some users asked for statistics about the connection, and this also helps us diagnose issues with concrete numbers.
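The external video track item above centers on a callback that fills raw I420A or ARGB32 frames. The exact delegate signatures live in the library's API, but the buffer layout is standard; as a sketch (the helper names here are illustrative, not the library's API), the memory such a callback needs to provide for a given resolution is:

```python
def i420a_plane_sizes(width, height):
    """I420A layout: full-resolution Y (luma) and A (alpha) planes, plus
    U and V chroma planes subsampled 2x2 (4:2:0). Odd dimensions round
    the chroma plane dimensions up."""
    chroma_w = (width + 1) // 2
    chroma_h = (height + 1) // 2
    return {
        "y": width * height,
        "u": chroma_w * chroma_h,
        "v": chroma_w * chroma_h,
        "a": width * height,
    }


def argb32_size(width, height):
    """ARGB32 layout: 4 bytes per pixel (one each for A, R, G, B)."""
    return width * height * 4
```

Note the trade-off this implies: an I420A frame is 2.5 bytes per pixel versus 4 for ARGB32, which is one reason WebRTC pipelines favor planar 4:2:0 formats internally.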
Tentative
- C++ API refactor (Missing headers in C++ API #123): the current NuGet packages take a dependency on WebRTC core headers, which are not shipped. More generally, the use of inline/template functions across DLL boundaries is dangerous. A refactor is needed to ensure the NuGet packages are self-contained, and easy and safe to use in a shared-module (DLL) context.
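One common way to make such a native package self-contained is to expose a flat C ABI across the DLL boundary instead of inline/template C++: a C ABI can be bound from the compiled binary alone, with no vendor headers and no name mangling to match. A quick demonstration of that property, using the system C math library as a stand-in for the WebRTC DLL (this is not MixedReality-WebRTC's actual API, just an illustration of why a C surface is safe to consume):

```python
import ctypes
import ctypes.util

# Bind a flat C ABI at runtime from the shared library alone: no headers,
# no templates, no C++ mangled names. libm's sqrt stands in for any
# exported extern "C" function a refactored native package would expose.
libm = ctypes.CDLL(ctypes.util.find_library("m"))
libm.sqrt.argtypes = [ctypes.c_double]
libm.sqrt.restype = ctypes.c_double


def c_sqrt(x):
    """Call across the shared-library boundary through the C ABI."""
    return libm.sqrt(x)
```

Inline or template functions, by contrast, are compiled into the *caller*, so they silently bake the vendor's headers (and any ABI-sensitive details like allocator or STL layout) into every consumer — exactly the hazard the refactor aims to remove.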
- Signaling improvements (NodeDssSignaler Replacement #45): the current debugging-level, non-production-ready signaling solution shipping with MixedReality-WebRTC (`node-dss`) is brittle, confusing for new users, and limited. This task integrates a user-friendly solution for local signaling which does not require any configuration or external server.
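For serverless local signaling, one option is to exchange the SDP blobs directly between machines on the LAN. The sketch below shows the bare sockets involved, exercised over loopback; the port number and JSON framing are illustrative choices for this example, not a protocol the library defines:

```python
import json
import socket


def send_sdp(message, host="127.0.0.1", port=56789):
    """Send one SDP message (offer/answer/ICE candidate) to a peer as a
    single UDP datagram. Framing is a hypothetical JSON encoding."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(json.dumps(message).encode("utf-8"), (host, port))


def recv_sdp(sock):
    """Receive one SDP message from an already-bound UDP socket."""
    data, _addr = sock.recvfrom(65535)
    return json.loads(data.decode("utf-8"))
```

A real zero-configuration solution would also need peer discovery (e.g. multicast announcements) and reliability on top of UDP, but the point is that local signaling needs nothing more exotic than this: no cloud service, no standing server process.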