WIP: Magic Leap 2 Integration #840
Conversation
Hey! Nice to see this pop up, that looks pretty decent in the video! :) I believe I've seen someone running SK on ML2 before, but only briefly, so this is a really nice run-down. Do you have any info/logs on why Debug mode isn't working?

Their comments on "buffers" are phrased really weirdly. I think it's unrelated to any stuttering; I believe they're just trying to say that UI shouldn't move on its own when the user is trying to interact with it? Which is fine, StereoKit's core stuff doesn't move anything unless directed to by the user. The stuttering does sound concerning though, does it happen throughout the device, or is it specific to SK apps?

If eye gaze isn't working, is it possible the device needs to explicitly request permission with a pop-up? This is how eye tracking currently works on Quest Pro.

Does StereoKit already recognize the ML2 controllers at all? In theory, if their input system is well built, the runtime should map the controllers to one of SK's existing bindings. You would be able to see a controller rendered in the Controller demo, and it would log the name of the input profile it uses. Also very nice to see ML already using …
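(For reference, checking whether the runtime mapped the controllers at all is only a couple of lines with StereoKit's C# input API; this is just an illustrative probe, not part of the PR:)

```csharp
using StereoKit;

if (!SK.Initialize(new SKSettings { appName = "ControllerCheck" }))
    return;

SK.Run(() =>
{
    // If the runtime mapped ML2's controllers to a known interaction
    // profile, this should report a tracked controller while one is held.
    // (Logging every frame is spammy; this is only a quick probe.)
    Controller right = Input.Controller(Handed.Right);
    Log.Info($"Right controller tracked: {right.IsTracked}");
});
```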
Oh thank god, somebody is seeing the same issues I am! We've had our ML2 replaced twice by Magic Leap because of these issues, and I could find nobody else with an ML2 who was experiencing this kind of ghosting.
Just updated the failed log for Debug mode! I've also tried disabling "Fast Deployment", with no luck.
You're right, it's the same here too! The eye tracking & spatial mapping permissions are considered dangerous, so they must be granted at runtime: see App Permissions. I put in a fix for granting those on the main thread once SK is running.
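A minimal sketch of that kind of runtime grant, assuming a .NET Android `Activity` like StereoKitTest_NetAndroid's; the permission strings are the ones from ML2's App Permissions docs, while the helper name and request code are just illustrative:

```csharp
using Android.App;
using Android.Content.PM;
using System.Linq;

static class MLPermissions
{
    // ML2 marks these as "dangerous", so a manifest entry alone isn't
    // enough; they need a runtime grant, which shows a pop-up.
    const string EyeTracking    = "com.magicleap.permission.EYE_TRACKING";
    const string SpatialMapping = "com.magicleap.permission.SPATIAL_MAPPING";

    public static void Request(Activity activity)
    {
        // Only prompt for permissions that aren't granted yet.
        string[] needed = new[] { EyeTracking, SpatialMapping }
            .Where(p => activity.CheckSelfPermission(p) != Permission.Granted)
            .ToArray();

        // Request on the main/UI thread once the app is up and running.
        if (needed.Length > 0)
            activity.RunOnUiThread(() => activity.RequestPermissions(needed, 1));
    }
}
```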
Turns out it's not specific to SK apps! Though it's more noticeable in SK than in their samples (C API & OpenXR). Even in the main menu, they seem to account for the effect by lerping the menu's orientation and translating it with a slight delay, so it appears "smoother" (video below). So I don't think there is much we can do here, except put in a request to ML about it.
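For what it's worth, that "delayed follow" trick is easy to approximate in plain StereoKit C#; this is just a sketch of the technique being described, not ML's actual implementation:

```csharp
using StereoKit;

if (!SK.Initialize(new SKSettings { appName = "SmoothFollow" }))
    return;

// Each frame, ease the menu pose toward its target instead of snapping to
// it. Tracking jitter then reads as smooth drift rather than hard stutter.
Pose menu = new Pose(new Vec3(0, 0, -0.5f), Quat.Identity);

SK.Run(() =>
{
    Pose  target = Input.Head;          // whatever the menu should follow
    float blend  = 8 * Time.Elapsedf;   // roughly frame-rate independent
    menu.position    = Vec3.Lerp (menu.position,    target.position,    blend);
    menu.orientation = Quat.Slerp(menu.orientation, target.orientation, blend);

    Text.Add("Menu", menu.ToMatrix());
});
```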
I'm not seeing it from the log 😬 I'll look into trying to enable it, but Debug mode would be helpful here.
Support for spatial perception is still experimental, but their sample for … I updated the list of what this PR could contain. I guess only controller support is needed at the moment? I thought about also supporting the occlusion filter and spatial anchors, but they seem too experimental in their OpenXR code. Global dimmer would be fun to implement in SK too, if you'd want it.
Thanks for the logs! These two lines stand out:
It looks like the OpenXR Loader is trying to load … Not sure if it's the same sort of issue, or if this is even the correct fix for it, but it might be worth trying. I added this line to the manifest; I'd try the same thing, but with …
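For anyone following along: manifest additions of this sort are `uses-native-library` entries inside the manifest's `application` element, which Android 12+ requires before an app may load a vendor's native library at runtime. The library name below is only a placeholder, since the actual names referenced above didn't survive in this thread:

```xml
<!-- Hypothetical sketch: declare the runtime's native OpenXR library so
     the loader is allowed to open it on Android 12+. The android:name
     value here is a placeholder, not the one referenced above. -->
<application>
    <uses-native-library
        android:name="libopenxr.runtime.so"
        android:required="false" />
</application>
```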
There's also the in-app log in StereoKitTest's hand menu :) But yeah, if you need to add the profile, check something like this one for reference. Someday I should make those customizable from top-level, but it's not too tricky to add there.
Which were the occlusion filter and spatial anchors? I didn't quite spot those. I see … Global dimmer is pretty intriguing! I have no idea where that would go though :D I'm tempted to say it might be better implemented at top level, but if you can find a good spot for it in the core, I don't think I'd object.
This reminds me a bit of the HoloLens 2 scanning artifacts! I wonder if maybe it's a similar sort of thing? IIRC, the R, G, and B channels of the HL2 display are actually drawn separately, at different times! So you could see colored ghosts if you were moving your head quickly. ML2 similarly uses a waveguide display, but I don't really know the details of their implementation. It's possible this might be a similar issue with a more artifact-ey implementation, or it could be off in a different direction and be some display persistence thing? I'm not really sure!

Edit: an interesting experiment might be to turn the text color to primary R, G, or B, and see if that improves the smearing. White would basically be the worst case, since it contains all 3 colors.
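That experiment is only a few lines in StereoKit's C# API; a sketch that renders the same string in pure red, green, and blue next to a white worst case:

```csharp
using StereoKit;

if (!SK.Initialize(new SKSettings { appName = "SmearTest" }))
    return;

// One text style per primary color, plus white (all three fields at once).
TextStyle red   = Text.MakeStyle(Font.Default, 0.02f, new Color(1, 0, 0));
TextStyle green = Text.MakeStyle(Font.Default, 0.02f, new Color(0, 1, 0));
TextStyle blue  = Text.MakeStyle(Font.Default, 0.02f, new Color(0, 0, 1));
TextStyle white = Text.MakeStyle(Font.Default, 0.02f, Color.White);

SK.Run(() =>
{
    // Move your head quickly while reading; on a color-sequential display,
    // the single-primary rows should smear noticeably less than white.
    Quat facing = Quat.LookDir(0, 0, 1);
    Text.Add("Red",   Matrix.TR(new Vec3(0,  0.09f, -0.5f), facing), red);
    Text.Add("Green", Matrix.TR(new Vec3(0,  0.03f, -0.5f), facing), green);
    Text.Add("Blue",  Matrix.TR(new Vec3(0, -0.03f, -0.5f), facing), blue);
    Text.Add("White", Matrix.TR(new Vec3(0, -0.09f, -0.5f), facing), white);
});
```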
You can indeed easily see the color separation with head movement, and with GUI text as Austin mentioned. But the weird thing I've found is that the ghosting doesn't happen in all situations (and it doesn't appear to be frame-rate related). For example, just loading the Google home page in the browser app easily shows color separation in the Google logo, with nothing particularly graphics-heavy loaded.

But take another graphics-light situation, such as the LeapBrush app, which allows you to draw lines/ribbons in 3D using the controller. When you draw a few lines and move your head around these lines, there's hardly any ghosting in them. But the app also shows help text on the controller buttons, and moving the controller in your field of view, while keeping your head still, very much shows ghosting of that text. Yet, put the controller stationary in a place (e.g. on a table) and make the same relative movements with your head, and no ghosting is visible on the text.
I talked to some representatives from Magic Leap, and this is what they mentioned:
The Graphics team is constantly trying to improve this effect, with the goal that the developer doesn't have to manually set a focus distance by default. For this PR, I'm waiting on their recommended spatial anchor and controller interaction OpenXR samples.

Another tidbit I learned is about their future release of aligning holograms to the camera image. For their version of mixed reality capture, they'll be offering a 3rd-party Unity plugin for this alignment. It always sounds like there are some extra hurdles keeping them from fully focusing on OpenXR, since their customers are primarily using Unity. However, they are sharing StereoKit around internally, and even had some more requests for it! :)
@austinbhale Nice feedback! Some questions though:
I guess this is the main reason I see the different color separation behaviour with LeapBrush and the controller overlay, as described above. Even with very erratic head movement and the controller lying stationary, I see no color separation. If it really were in the display tech itself, it should be noticeable even in this situation. Makes me wonder if they do position prediction during the RGB cycle, but it simply isn't precise enough, hence the color separation.
Is this specific to the Unity integration for ML2? I mean, how would you even do this in SK?
Have you managed to get above 60 FPS rendering using StereoKit? So far, I only manage 60 FPS, even with simple scenes (based on a simple custom FPS counter).
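For context, the counter in question is nothing fancy; roughly this kind of thing, in plain StereoKit C# (names here are illustrative):

```csharp
using StereoKit;

if (!SK.Initialize(new SKSettings { appName = "FpsCounter" }))
    return;

int   frames  = 0;
float elapsed = 0;

SK.Run(() =>
{
    // Accumulate frames and wall time, then report once per second.
    frames  += 1;
    elapsed += Time.Elapsedf;
    if (elapsed >= 1)
    {
        Log.Info($"FPS: {frames / elapsed:0.0}");
        frames  = 0;
        elapsed = 0;
    }
});
```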
On HoloLens, we called this Late Stage Reprojection, and there were two methods of doing it. One was to submit a depth buffer to the compositor, and let the compositor reproject based on the depth at each pixel. The other (older) one was to just pick a point, or plane, and let the compositor reproject as if everything was on that plane. When we did have to specify a plane, it was a UWP API call; I don't believe the point/plane method survived to make it into OpenXR. The depth method seems to have pretty much superseded the plane/point method by this point in time! I guess I'd probably be looking for an ML API for this, or an unpublished OpenXR extension?
HoloLens did this one through …
If you're locked at 60, I might hunt around for some ML-specific Android Manifest items?
Hmmm, from https://forum.magicleap.cloud/t/how-do-i-run-at-120-hz/939/2:
Ohhh, I see. Render at 60 Hz, reproject to 120 Hz. Might even be a side effect or "number magic" related to the way they draw the R, G, and B channels separately.
Hey Nick, the Magic Leap team is loaning us a device so I'll be working on its integration in my spare time. I'll be keeping this PR updated in the meantime, so that it's easier to show you updates and ask questions.
Progress
Supported Extensions
🔴 not supported by this PR, unless requested in the comments
🟡 work in progress
🟢 active
Known issues
This may be a limitation of the on-device SLAM, which causes perceived motion of stationary content. For example, reading UI text while moving around becomes illegible as it seemingly "repeats" multiple times until the tracking restabilizes. This is probably why ML2 offers advice on how to mitigate user micro-movements for UI. If StereoKit is in immediate mode, is it correct to assume there's no buffer in place?
Please note that this visual artifact won't be seen in the mixed reality video.
StereoKitTest_NetAndroid
Initialization Log for Debug