vNext: Refactor pose update events into passive transforms #2608

Closed · Ecnassianer opened this issue Aug 17, 2018 · 9 comments

Labels: Architecture, Feature Request (feature request from the community)

@Ecnassianer (Contributor)

Overview

Right now, tracked objects fire pose events every time they move. I understand the expectation to be that every object that wants to know the position of a controller, tracker, headset, etc. implements those pose events and updates its transform accordingly. An event-based movement model makes a lot of sense for mice, touch screens, and other traditional devices, which spend the vast majority of their time static. However, because of the nature of tracking devices, they fire pose updates every frame; even if you hold your head as perfectly still as possible, it will still be in a new position every time it is polled.

Each class that cares about tracked positions handles an increasing number of pose events every frame. A class might find itself updating a pointer pose for each controller, a grip pose for each, plus a headset pose. @keveleigh and I easily imagined a world where future controllers add additional tracked points. Analog sticks tilt independently of tracking. Finger tracking like Knuckles, Leap Motion, or ManuVR adds an arbitrary number of additional position events (over 20 per hand in the case of Leap Motion!). A flip-up WMR headset may offer the position of the display in addition to the position of the user's head. A Unity class concerned with these things now implements and maintains a massive list of event handlers, all firing every frame.
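
For illustration, a minimal sketch of the pattern this describes; the handler names are hypothetical, not an actual MRTK interface:

```csharp
using UnityEngine;

// Illustrative only: every tracked point adds another handler,
// and every one of these fires every frame.
public class MyInteractionLogic
{
    public void OnLeftPointerPose(Pose pose)  { /* update cached pointer pose */ }
    public void OnLeftGripPose(Pose pose)     { /* update cached grip pose */ }
    public void OnRightPointerPose(Pose pose) { /* ... */ }
    public void OnRightGripPose(Pose pose)    { /* ... */ }
    public void OnHeadPose(Pose pose)         { /* ... */ }
    // ...plus one handler per finger joint for hand tracking,
    // one for a flip-up display, and so on.
}
```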

There is nothing explicitly wrong with this approach, but it is not very "Unityish". Unity convention presents constantly updating positions as Transforms, which offer a suite of developer conveniences like parenting and local offsets, in addition to a host of invisible optimizations such as network quantization.

Proposal

Adding position Transforms for all tracked objects

I'm proposing that we expose tracked positions as Transforms. A user's code can request a transform for the Left Motion Controller, parent itself to it with an offset, and trust that it will always follow the tracked device. Additionally, that transform can be cached in Start() and read from later, without having to worry about keeping it updated for some future calculation.
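
A minimal sketch of what that could look like from user code. `TrackedTransforms.Get` and `TrackedObjectType` are hypothetical names for the proposed accessor, not an existing MRTK API:

```csharp
using UnityEngine;

// Hypothetical accessor for the proposed API.
public enum TrackedObjectType { Head, LeftMotionController, RightMotionController }

public static class TrackedTransforms
{
    // Stub: a real implementation would be provided by the input system,
    // which keeps the returned Transform updated passively.
    public static Transform Get(TrackedObjectType type) => null;
}

public class FollowLeftController : MonoBehaviour
{
    private Transform leftController; // cached once, read any time later

    private void Start()
    {
        leftController = TrackedTransforms.Get(TrackedObjectType.LeftMotionController);

        // Parent with a local offset and trust it to always follow the device.
        transform.SetParent(leftController, worldPositionStays: false);
        transform.localPosition = new Vector3(0f, 0.02f, 0.08f);
    }
}
```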

Promote Transforms as the primary method of accessing position data

I believe this is the most intuitive method for Unity users, and we should encourage the use of Transforms as the primary method of accessing tracked position data. I do not believe there is any case where an event-based position update is superior for continuously tracked objects. This means updating example maps and documentation to use this method.

Optional: Removing pose events

I think we would be best served by not offering pose events at all, since they offer only a less ideal option and demand an additional maintenance burden.

However, those costs are relatively low weighed against the familiarity that event-driven positions offer developers who are used to traditional input devices like mice. This change would also have been easier to make before Alpha, but in the bigger picture, vNext is still in its infancy.

Do you know of a time when pose events are a better choice than transforms?

What it's not: Removing position data from IInputEvents

@johnppella pointed out to me that it does 100% make sense to include various position data in other input events. A user will definitely want to know things like what the controller's rotation was when a pointing ray activates a button. We might even want to include some additional information, like the velocity of both the palm and the finger when a grab event happens. This data should still exist in the input event itself, and should be the position when the event fired, not the current position of that object (consider especially an overlap event triggered by a fast-moving object that covered a large distance in the span of a single frame).
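
As a sketch of those snapshot semantics, assuming a hypothetical GrabEventData type (not an existing MRTK type):

```csharp
using UnityEngine;

// Snapshot semantics: the pose is captured when the event fires rather than
// read live from the device, so a fast-moving controller reports where the
// grab actually happened.
public readonly struct GrabEventData
{
    public readonly Pose PalmPose;          // pose at the moment of the grab
    public readonly Vector3 PalmVelocity;   // velocity at the moment of the grab
    public readonly Vector3 FingerVelocity;

    public GrabEventData(Pose palmPose, Vector3 palmVelocity, Vector3 fingerVelocity)
    {
        PalmPose = palmPose;
        PalmVelocity = palmVelocity;
        FingerVelocity = fingerVelocity;
    }
}
```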

@StephenHodgson (Contributor)

We should def keep the pose events.

Although I do see the appeal of having an easy way to get the transform reference.

@keveleigh (Contributor)

We should def keep the pose events.

As @Ecnassianer asked above:

Do you know of a time when pose events are a better choice than transforms?

@StephenHodgson (Contributor) commented Aug 17, 2018

Do you know of a time when pose events are a better choice than transforms?

When you don't have a transform to reference and your data comes directly from the input source.

@Ecnassianer (Contributor, Author)

@StephenHodgson Can you elaborate on when that happens? My proposal is that our API would return transforms for all devices, regardless of what the hardware-specific API offers. The goal is to remove any case where you don't have a transform.

@StephenHodgson (Contributor)

The main thing I think we'll run into here is that our main systems are all C#-based and deliberately don't inherit from MonoBehaviour, so that we have more control over the life-cycle of our objects. But if you think you can do it, I'd much rather see this as well.
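
One possible way to reconcile plain C# services with Transform-based output, sketched under the assumption that a service may own a hidden proxy GameObject; TrackedTransformProvider is an illustrative name, not an existing class:

```csharp
using UnityEngine;

// A plain C# (non-MonoBehaviour) service can still expose a Transform by
// owning a hidden proxy GameObject and writing incoming pose data into it.
public class TrackedTransformProvider
{
    private readonly Transform proxy;

    public TrackedTransformProvider(string deviceName)
    {
        var go = new GameObject($"{deviceName} (tracked)");
        go.hideFlags = HideFlags.HideInHierarchy; // keep the scene view tidy
        proxy = go.transform;
    }

    // Called by the service whenever the driver reports a new pose.
    public void OnPoseUpdated(Vector3 position, Quaternion rotation)
    {
        proxy.SetPositionAndRotation(position, rotation);
    }

    // Consumers cache this once and parent themselves to it.
    public Transform TrackedTransform => proxy;
}
```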

@StephenHodgson (Contributor) commented Aug 17, 2018

The only reason I'm in favor of keeping the event system is that it really decouples the systems in a way that makes communicating across them very simple, and you know exactly what you're going to get.

I'm also curious to see how much of a perf impact there is from the number of objects and the number of events raised. The events aren't always tied to the Unity update either (in fact, we're not tied to Unity for some inputs, so the data comes in at whatever rate the driver gives it to us).

@david-c-kline

With MRTK v3 moving toward building upon XR Interaction Toolkit, is this something that needs to be owned by MRTK?

@david-c-kline

The root question comes down to how to access the "left hand", for example, at runtime.

XRI and the Tracked Pose Driver look to handle much of this.
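
For reference, a minimal sketch using the Input System variant of Unity's Tracked Pose Driver, which applies a device's pose to the Transform it lives on every frame; exact property names and binding setup may vary between package versions:

```csharp
using UnityEngine;
using UnityEngine.InputSystem.XR; // TrackedPoseDriver from com.unity.inputsystem

public static class HandRigSetup
{
    // Creates a GameObject whose Transform passively follows the left hand:
    // the transform-based pattern this issue asked for.
    public static GameObject CreateLeftHand()
    {
        var leftHand = new GameObject("Left Hand");
        var driver = leftHand.AddComponent<TrackedPoseDriver>();
        driver.trackingType = TrackedPoseDriver.TrackingType.RotationAndPosition;
        // The position/rotation input actions are usually bound in the
        // inspector (e.g. to <XRController>{LeftHand}/devicePosition);
        // omitted here for brevity.
        return leftHand;
    }
}
```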

@Zee2 (Contributor) commented Aug 4, 2021

Moving forward in future MRTK versions, we will be moving towards a transform-based pattern, rather than pose events.

Zee2 closed this as completed on Aug 4, 2021.