
Solvers need to track controller transforms #2785

Closed
2 tasks done
keveleigh opened this issue Sep 13, 2018 · 17 comments
Comments

@keveleigh
Contributor

keveleigh commented Sep 13, 2018

Overview

The solvers in vNext are currently head-only. They also need to support tracking motion controllers.

Review (SJ)
Solvers are components added to GameObjects in a scene. These Solvers then need a "target" to bind to, which may be the user's head or a physical controller.

Question
Can solvers bind ONLY to controller entities (head / controller / hand / finger), or could they in theory be bound to ANY other GameObject?

Requirements

  • Solvers can have a Transform reference to the tracked controller to "solve" against.
    • This means there must be a way to ask something for the currently tracked controllers (which exists, but shouldn't be used?)
    • This does not mean listening for source pose events, as that breaks the paradigm of solver independence and adds yet another set of global listeners.
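
The requirement above can be sketched minimally. This is an illustrative example only, assuming a hypothetical solver-style component that is handed a Transform once and solves against it in its own update cycle (the class and method names are placeholders, not the actual MRTK API):

```csharp
using UnityEngine;

// Illustrative sketch only: a solver that tracks a Transform reference
// set once, rather than subscribing to global source pose events.
public class TrackedTransformSolver : MonoBehaviour
{
    // The tracked controller (or head) transform to solve against.
    // Assigned once, e.g. in the inspector or on source detection.
    [SerializeField] private Transform trackedTransform;

    public void SetTrackedTransform(Transform target)
    {
        trackedTransform = target;
    }

    private void LateUpdate()
    {
        if (trackedTransform == null) { return; }

        // "Solve" toward the tracked transform in the solver's own
        // update cycle -- no global event listeners involved.
        transform.position = Vector3.Lerp(transform.position, trackedTransform.position, 0.1f);
        transform.rotation = Quaternion.Slerp(transform.rotation, trackedTransform.rotation, 0.1f);
    }
}
```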

Acceptance Criteria

A list of the specific things that must be done for approval / acceptance of the task in checklist form.
This may correspond with the above requirements.

  • Solvers can be configured to obtain and solve towards motion controller transforms.
  • Doesn't add another global listener to the scene.
@keveleigh keveleigh added this to the MRTK 2018 milestone Sep 13, 2018
@keveleigh keveleigh self-assigned this Sep 13, 2018
@keveleigh keveleigh added this to To do in Mixed Reality Toolkit via automation Sep 13, 2018
@keveleigh keveleigh added this to To do in Version Next Beta via automation Sep 13, 2018
@keveleigh
Contributor Author

Proposed solution:

MRControllers add a Transform reference to their own transform, so a script can access it either in SourceDetected or by asking the controller manager for the controller of a specific handedness.
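
The two access paths in this proposal could look roughly like the following. All type and method names here are hypothetical placeholders, sketched only to make the proposal concrete:

```csharp
using UnityEngine;

// Illustrative sketch of the two proposed access paths
// (all names below are hypothetical placeholders).
public class ControllerTransformBinder : MonoBehaviour
{
    private Transform controllerTransform;

    // Path 1: cache the controller's transform when the source is detected.
    public void OnSourceDetected(Transform detectedControllerTransform)
    {
        controllerTransform = detectedControllerTransform;
    }

    // Path 2: ask a controller manager for the controller of a given handedness.
    public void BindByHandedness(IControllerLookup controllerManager, bool leftHanded)
    {
        controllerTransform = controllerManager.GetControllerTransform(leftHanded);
    }
}

// Hypothetical lookup interface, standing in for the controller manager.
public interface IControllerLookup
{
    Transform GetControllerTransform(bool leftHanded);
}
```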

@keveleigh
Contributor Author

Additionally, asking the controller manager for its controllers was made more difficult as more "device managers" came online. Previously, the OpenVRDeviceManager or the WindowsMixedRealityDeviceManager would be the only active "device manager" in the scene. Now, there's a whole list of them, but only one of them will have tracked motion controllers.
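
The lookup problem described here amounts to scanning every registered device manager, since most will report no tracked controllers. A minimal sketch, assuming hypothetical interface and property names:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical device manager interface, standing in for the real one.
public interface IDeviceManager
{
    IReadOnlyList<Transform> TrackedControllerTransforms { get; }
}

public static class ControllerFinder
{
    // Scan all registered device managers; only one actually owns
    // tracked motion controllers, the rest report an empty list.
    public static Transform FindFirstTrackedController(IEnumerable<IDeviceManager> deviceManagers)
    {
        foreach (var manager in deviceManagers)
        {
            foreach (var controllerTransform in manager.TrackedControllerTransforms)
            {
                return controllerTransform;
            }
        }
        return null;
    }
}
```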

@StephenHodgson
Contributor

The solvers just need to handle events for the controller they're assigned to.

Look at the pointer implementation for details.

You should be able to fulfill the requirements with SourcePoseSynchronizer.

@keveleigh
Contributor Author

Solvers have their own update cycle and track towards a specific transform that gets set once. This is important, because a benefit of solvers is that they can interact with each other in a known moment, state, and order.

Additionally, once attachment points are added (again, as passive transforms that get set once), the pose synchronizer is incompatible.

@StephenHodgson
Contributor

Solvers have their own update cycle and track towards a specific transform that gets set once. This is important, because a benefit of solvers is that they can interact with each other in a known moment, state, and order.

Totally understand and agree

Additionally, once attachment points are added (again, as passive transforms that get set once), the pose synchronizer is incompatible.

Totally disagree

@Ecnassianer
Contributor

Totally disagree

Can you elaborate?

@StephenHodgson
Contributor

I disagree that it is incompatible.

@Ecnassianer
Contributor

When a passive transform updates, there are no events to update the SourcePoseSynchronizer. That means we won't be able to update the Transform that all the solver handlers are coordinating around. I believe this is Kurtis's original point.

In the same vein, we're going to hit problems with tracking APIs like Leap Motion which provide transforms instead of events. There won't be any update events there either.

Unless you know some secret juju, I think Kurtis is right about it being incompatible. If you see some way to connect these disparate models of operation, please share! :)

As I described in proposal #2608, using transforms is a much more Unity-esque way of doing business. If we want to avoid people reaching into the device layer to get transforms, we should have the device layer provide transforms in a way that we're OK with. It's also true that if we want to continue using pose events to control position, at some point in the future we'll find ourselves building a class that checks whether a passive transform has moved and fires an event to simulate a pose update.
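
The class described here, one that polls a passive transform and raises a simulated pose event when it moves, could look roughly like this. It is a hypothetical sketch, not an existing MRTK type:

```csharp
using System;
using UnityEngine;

// Illustrative sketch: watch a passive transform and raise an event
// when it moves, simulating a pose update for event-driven consumers.
public class TransformChangeWatcher : MonoBehaviour
{
    [SerializeField] private Transform watched;

    public event Action<Pose> PoseChanged;

    private Vector3 lastPosition;
    private Quaternion lastRotation;

    private void Update()
    {
        if (watched == null) { return; }

        if (watched.position != lastPosition || watched.rotation != lastRotation)
        {
            lastPosition = watched.position;
            lastRotation = watched.rotation;
            PoseChanged?.Invoke(new Pose(lastPosition, lastRotation));
        }
    }
}
```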

I don't think it's a crazy idea to have the SourcePoseSynchronizer (or something like it) update a transform for WMR motion controllers, but that should be happening in the MRTK Constructs layer, not the User Abstraction Layer where Solvers live.

@StephenHodgson
Contributor

StephenHodgson commented Sep 14, 2018

When a passive transform updates, there are no events to update the SourcePoseSynchronizer.

They're not "events" in the traditional sense; they're not that much different from the "passive" updates you're mentioning. (Under the hood, it's calling the method directly.)

As I described in proposal #2608, using transforms is a much more unityesque way of doing business.

We are doing things the "Unity" way, as the entire input system is based off of Unity's event system... soooo... I'm not really sure what the difference here is.

I don't think it's a crazy idea to have the SourcePoseSynchronizer (or something like it) update a transform

I don't think so either, in fact, that's exactly what's already happening. 😅

I encourage you guys to go back and look everything over. I'll work on a PR with some proposed changes to adhere to the current architecture.

@SimonDarksideJ do you have anything else to add that might help them understand what we're trying to convey?

@StephenHodgson
Contributor

StephenHodgson commented Sep 14, 2018

The "passive" transform you want to listen to is whatever game object's transform has the pose synchronizer. That is the scene's only knowledge of the controller, and it should stay that way, or we break the architecture we've laid out that satisfies the requirements.

@Ecnassianer
Contributor

Ecnassianer commented Sep 14, 2018

I'll work on a PR with some proposed changes to adhere to the current architecture.

Thank you. I can't speak for Kurtis, but I know for me an answer like this is much more helpful than "Totally disagree" because it follows directly to an action item. "Totally disagree" leaves the ball up in the air with no one as the responsible party and no clear next steps. There's no way to figure out what to do in response. On the other hand, it's easy and stress free to wait for a PR. :)

@StephenHodgson
Contributor

Do you guys want to schedule a call today where we can discuss this much more effectively?

@StephenHodgson
Contributor

StephenHodgson commented Sep 14, 2018

Imo this task is moot, because you can already do everything you need by attaching the solver handler to whatever prefab/game object has an IMixedRealityControllerPoseSynchronizer. From there, you just need to reference that component's transform (usually the same game object the handler is on) and do whatever you need.
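
The approach described above could be sketched as a solver that simply reads the transform of the object carrying the pose synchronizer. The class below is a hypothetical illustration of that idea, not actual MRTK code:

```csharp
using UnityEngine;

// Illustrative sketch: the solver reads the transform of the object
// carrying the pose synchronizer -- the scene's only knowledge of the
// controller -- instead of reaching into the device layer.
public class PoseSynchronizerFollower : MonoBehaviour
{
    // Assigned to the game object that has the pose synchronizer
    // (e.g. the controller model prefab's root).
    [SerializeField] private Transform poseSynchronizerTransform;

    private void LateUpdate()
    {
        if (poseSynchronizerTransform == null) { return; }

        transform.SetPositionAndRotation(
            poseSynchronizerTransform.position,
            poseSynchronizerTransform.rotation);
    }
}
```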

@StephenHodgson
Contributor

StephenHodgson commented Sep 14, 2018

In the BaseController class, we create our "scene" representation of the controller (again, it's a one-way thing). That's the IMixedRealityControllerPoseSynchronizer you need to be "passively" tracking. (I believe that's on the model prefabs we set in the controller mapping profile.)

We can even do the same for the pointers, considering they also implement IMixedRealityControllerPoseSynchronizer.

@SimonDarksideJ
Contributor

Added a review comment in the Task description to nail down the actual requirement, as I still don't see it.

Review (SJ)

Solvers are components added to GameObjects in a scene. These Solvers then need a "target" to bind to, which may be the user's head or a physical controller.

Question

Can solvers bind ONLY to controller entities (head / controller / hand / finger), or could they in theory be bound to ANY other GameObject?

@Ecnassianer
Contributor

Any game object, both in editor and at runtime.

I've used them for a menu that tracks a controller by default, but can be attached to a docking object when the user wants to put it down. It has the same solving behavior whether it's following the controller or the docking object.
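
The menu behaviour described above amounts to one solver whose target transform is swapped at runtime. A hypothetical sketch (all names are illustrative):

```csharp
using UnityEngine;

// Illustrative sketch: one follower with identical solving behaviour,
// retargeted between a controller transform and a dock transform.
public class RetargetableFollower : MonoBehaviour
{
    [SerializeField] private Transform controllerTarget;
    [SerializeField] private Transform dockTarget;

    private Transform currentTarget;

    private void Awake()
    {
        // Follow the controller by default.
        currentTarget = controllerTarget;
    }

    // Called when the user docks the menu or picks it back up.
    public void SetDocked(bool docked)
    {
        currentTarget = docked ? dockTarget : controllerTarget;
    }

    private void LateUpdate()
    {
        if (currentTarget == null) { return; }

        // Same solving step regardless of which target is bound.
        transform.position = Vector3.Lerp(transform.position, currentTarget.position, 0.2f);
    }
}
```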

@SimonDarksideJ
Contributor

That's what I worked with for #2795, which seems to meet all the requirements set out here without altering the device layer.

Also raised Task #2798 to look at updating the SolverHandler to use the updated MRTK configurable metaphor. The Solvers are solid; however, I feel the scene implementation and config could benefit from the pattern used for other MRTK features (and it would also allow the core solvers to move into the core as pure C# scripts).

Mixed Reality Toolkit automation moved this from To do to Done Oct 5, 2018
Version Next Beta automation moved this from To do to Done Oct 5, 2018