Support iOS and Android SDK #28
Comments
Hi @anujb, thanks for opening that task. Indeed, the underlying Google implementation has some form of support for iOS and Android. However this has never been tested with the current project, so it would require some work to make sure this can be leveraged, and to set up some infrastructure for testing and CI for those platforms to ensure proper support. At the moment this looks like a time investment, and we have other areas of focus with clear immediate customer needs, so this is not a priority. I am happy to reprioritize if there is a strong signal that this can be helpful to customers. Note however that we do plan to support iOS and Android deployment via Unity at some point. Would that work for you, or are you looking for iOS/Android support for non-Unity native apps (C++ and/or C#)?
Maybe it is obvious, but I want to know if it will be possible to use it in native Xamarin.Android or even Xamarin.Forms?
Thanks @valentasm1! I don't think anyone ever tried Xamarin, and I don't have any experience with it, so I cannot tell you if it works. I can only tell that this is not an officially supported scenario/platform.
Add experimental Android ARM64 support via Unity.

This change adds support for compiling MixedReality-WebRTC for Android ARM64 for use inside a Unity project, deployed as a Unity build to an Android device. Support for non-Unity Android apps is out of scope.

The Android build produces 2 archives:
- libwebrtc.aar: the core Google WebRTC implementation for Android ARM64.
- mrwebrtc.aar: the MixedReality-WebRTC API wrapping the Google implementation.

Both archives are copied to the `Assets/Plugins/arm64-v8a` folder of the Unity sample project, and deployed to the device.

The current change builds over the main WebRTC Google repository instead of WebRTC UWP, and therefore represents a temporary divergence in MixedReality-WebRTC. For this reason, it is strongly recommended to use the `branch-heads/71` Google branch to keep the code as close as possible to the one used for the other platforms (Windows Desktop and UWP). Building from Google's `master` branch may work but is not supported.

This change is experimental. The `tools/build/libwebrtc` and `tools/build/android` folders provide some utility scripts to help build the Google repository for Android ARM64 and the MixedReality-WebRTC project, respectively (see `tools/README.md`). However this process is involved, and there are currently no precompiled binaries provided to avoid it, nor any CI in place to validate changes. This will come in time.

Known limitations and issues:
- Video capture on an Android device does not currently work. This requires specific interop code to open the device camera from Java, which is not yet available. (Bug: #246)
- The two Android archives are huge, much too big for production. Current testing shows a 326 MB deployment on device, including Unity's project and without any optimization or stripping (dev build). (Bug: #247)
- The build should produce a single archive, not two. (Bug: #248)
- Because Android and Windows use different Google repository commits, there is a small chance of discrepancies in behavior and/or incompatibility bugs between them. No such bug has been observed so far, but please keep this in mind.

Android users are encouraged to try this change and provide feedback via GitHub issues, be it on the build process itself, on missing features not listed above, or anything else related to Android support.

The change has been tested manually on a Google Pixel 3A and an Oculus Quest. Other Android devices should work too, but have not been tested explicitly.

A huge thanks to Eric Anderson from AltspaceVR for contributing this change.

Bug: #28
It looks like the Android support was added with the linked PR. From the readmes it looks to me like Android support only works in Android Studio? Is there any information on Xamarin.Android available yet?
Android support is limited to Unity deployments. There is no plan to add generic Android support (native apps or Xamarin) at this time.
Hello! I would like to dive into using the MixedReality SDK for Unity mobile, but I'm curious about how Android and iOS support is faring. I see that work for Android support has already been merged to master (woot!), but from poking around it appears as though it's not functional yet? For example there are some crashes/deadlocks (#335, #329) and video capture is not implemented (#246). The only documentation I found so far is how to build the Android MixedReality .aar from sources: https://github.com/microsoft/MixedReality-WebRTC/tree/master/tools/build/android

If I'm not mistaken, to get started with MixedReality Android I need to build Google's Android libwebrtc from sources? Is it not possible to use the pre-built Android libs from Google here: Or does it need to be built from the Google source in order to link and produce mrwebrtc.aar? Apologies if some of the questions are newbish, but I'm just starting to dip my toes in the water and don't know if I should dive right in!

Regarding iOS, I understand that it is on the roadmap. Is there an approximate timeline for when work on it will begin? Thanks,
It's not in a great state, as it's missing video capture (#246), which directly triggers #335, so they're essentially the same unit of work (kind of). #329 is easily worked around; I just have to find some time slot to investigate how to make the change permanent, since it's in a Unity-generated file. Other than that, if you are not capturing video on Android but only receiving, and/or using audio or data, everything works. So this is not a great experience, but it shouldn't be an immediate blocker unless you need video capture. Note that remote video rendering (displaying video received from a non-Android remote peer) works.
Yes, the README files checked in should describe the process. There are some bash scripts to run which will do all the work for you. It requires a Linux environment (or WSL2, but I'd recommend a proper Linux environment; I use a VM in Hyper-V locally); this is a constraint from Google. Checkout is large and slow, but only needs to be done once. Building is reasonably fast after that. Start with
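For readers unfamiliar with the Google side of this, the checkout and build broadly follow the standard WebRTC-for-Android workflow. A rough sketch is below; the repository's own bash scripts automate something like this, so treat the exact output directory and `gn` arguments as illustrative assumptions rather than what the scripts actually pass:

```shell
# One-time: get depot_tools and put it on PATH.
git clone https://chromium.googlesource.com/chromium/tools/depot_tools.git
export PATH="$PWD/depot_tools:$PATH"

# One-time: large checkout (tens of GB, slow).
mkdir webrtc_android && cd webrtc_android
fetch --nohooks webrtc_android
gclient sync

# Pin the branch recommended for this project.
cd src
git checkout branch-heads/71
gclient sync

# Configure and build for Android ARM64.
gn gen out/Release --args='target_os="android" target_cpu="arm64" is_debug=false'
ninja -C out/Release
```

After this, the scripts under `tools/build/libwebrtc` and `tools/build/android` handle producing and consuming the `.aar` archives.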
As far as I know they deprecated them at the same time we were starting to look at this.
Yes. Currently we build
I don't have any more info at this time. We are actively working on the 2.0 release, we are roughly feature complete and need to fix and polish the last few items, so this is our work for the next few weeks or so.
Hi @djee-ms, Thank you for taking the time to respond to my questions.
Understood. In my case we do need local/outgoing video in the short term, but as you say it's not a showstopper. As long as local/outgoing video will be available in the next few weeks :) I was also wondering if the following features are available with the Android version:
Regarding iOS, although it's not on a specified timeline anywhere, can you say if it's planned to be on the next roadmap? It's something key for planning mobile development since Android and iOS are like peas in a pod 😄 Cheers,
@djee-ms Sending out a friendly poke if you could take a look at my follow-up questions above. 🙏 Thanks,
Hi @drejx, sorry I missed it!
It will.
Yes, this is independent of the platform. Actually this should already work; did you try?
What is the feature needed here exactly? I would expect the volume to be controlled by the rendering/output code on the receiver side, no? I think a priori (didn't check) that by default 1) there's no volume associated with an audio track other than the intrinsic amplitude of the raw audio data itself (so there's no gain variable), and 2) automated gain control (AGC) is active on the source when using a microphone. When using a custom audio source (not available yet; external audio track coming soon too) then you're in control of what you produce. Can you describe what you are looking for?
ACK on the request and the criticality for cross-platform dev, which I completely understand and agree with; unfortunately I cannot share anything about our roadmap beyond 2.0 at this time, sorry.
Not yet, but was curious since it's a key feature for my case at least. I've been setting up the Android build and now starting to fidget with code. But I imagine you're trying to keep the mrwebrtc API surface the same across all platforms.
My question was based on my previous experience with the Windows (WebRTC) version, where there is no control on the client/user side over the voice volume of remote audio streams. As a use case example, say there is a 2-player game/app and you want to increase the voice volume of the remote (incoming) stream because you can't hear the other player speak.
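For what it's worth, on the Unity side this kind of per-peer volume control can typically be applied at the rendering end rather than inside WebRTC itself. A minimal sketch, assuming the remote audio track is played back through a standard Unity AudioSource component (the class and field names here are hypothetical, not part of the MixedReality-WebRTC API):

```csharp
using UnityEngine;

// Hypothetical helper: adjusts the playback volume of a remote peer's
// voice by scaling the AudioSource that renders the incoming audio track.
public class RemoteVoiceVolume : MonoBehaviour
{
    // Assumption: the WebRTC audio renderer feeds a regular Unity
    // AudioSource attached to the remote peer's GameObject.
    [SerializeField] private AudioSource remoteAudioSource;

    // Called e.g. from a UI slider (0.0 = mute, 1.0 = full volume).
    public void SetRemoteVolume(float volume)
    {
        remoteAudioSource.volume = Mathf.Clamp01(volume);
    }
}
```

This sidesteps the lack of a gain variable on the track itself: the raw audio amplitude is unchanged, but the local playback level is scaled per source.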
Oh, that's too bad. Would you be able to say when you think there will be an update/announcement related to "beyond 2.0" so I can be on the lookout? I'm sure I'm not the only one ;) Cheers, |
Summary:
Support iOS and Android devices
Value proposition:
Creates a standard communications model for 1st-party and 3rd-party devices (iOS, Android, Magic Leap) and introduces network effects that can be leveraged with existing investments in cloud/edge infrastructure across providers (Azure, AWS, Verizon, etc.).
Background:
Existing native SDKs:
iOS WebRTC SDK: https://webrtc.org/native-code/ios/
Android WebRTC SDK: https://webrtc.org/native-code/android/
Cross Platform support: TBD