
Add support for 'palmspace' #1310

Open
cabanier opened this issue Jan 26, 2023 · 7 comments
@cabanier
Member

OpenXR added support for a palm pose. This allows an experience to figure out where the user's hand/palm would be when they use the standard OS controller.

https://youtu.be/g-zupOaJ6dI?t=392 has some great visualizations that show the issues that arise when this is missing.

It seems that the explainer basically describes this behavior for the grip pose, while the spec has the correct language for grip.

Should we add this as a new optional space?

/agenda Add support for 'palmspace'

@probot-label probot-label bot added the agenda Request discussion in the next telecon/FTF label Jan 26, 2023
@AdaRoseCannon
Member

AdaRoseCannon commented Jan 26, 2023

I guess I have been using grip space as a fallback for this. It would be handy to have it for both hand and gamepad input to make picking up objects more reliable.
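For reference, this fallback can be sketched outside the WebXR API as plain pose math: offset the grip-space origin by a fixed vector expressed in the grip's local frame. Everything below is illustrative (the helper names and the offset vector are my own guesses, not spec-defined values), and in a real session the position and orientation would come from `frame.getPose(inputSource.gripSpace, refSpace).transform`.

```javascript
// Rotate a vector v = [x, y, z] by a unit quaternion q = {x, y, z, w},
// using v' = v + w*t + q.xyz × t where t = 2 * (q.xyz × v).
function rotateByQuaternion(v, q) {
  const { x, y, z, w } = q;
  const tx = 2 * (y * v[2] - z * v[1]);
  const ty = 2 * (z * v[0] - x * v[2]);
  const tz = 2 * (x * v[1] - y * v[0]);
  return [
    v[0] + w * tx + (y * tz - z * ty),
    v[1] + w * ty + (z * tx - x * tz),
    v[2] + w * tz + (x * ty - y * tx),
  ];
}

// Approximate a palm position from a grip pose by applying an offset in
// the grip's local coordinate frame. The offset (in metres) is a guess
// for illustration only; a real palm space would be device-calibrated.
function approximatePalmPosition(gripPosition, gripOrientation) {
  const localOffset = [0, -0.02, 0.05]; // hypothetical: toward the palm
  const worldOffset = rotateByQuaternion(localOffset, gripOrientation);
  return [
    gripPosition[0] + worldOffset[0],
    gripPosition[1] + worldOffset[1],
    gripPosition[2] + worldOffset[2],
  ];
}
```

The point of a real 'palm' space is precisely that apps would no longer need to hand-tune such an offset per controller model.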

@cabanier
Member Author

> I guess I have been using grip space as a fallback for this

What is group space?

> it would be handy to have it for both hand and gamepad to make it more reliable to pick up objects

A potential issue is that this space is designed for the default controller model, i.e. I'm unsure if it's the same for every Quest controller (but I can check).

@AdaRoseCannon
Member

I meant 'grip' space (I edited after)

As in the controller model can still use grip space, but for items of potentially variable size this would be better.

The way I have been working is to make all models with handles match the controller sizes so they get held nicely, but if there were a space that sits flush with where the skin would be, there could be some more flexibility.

@AdaRoseCannon AdaRoseCannon removed the agenda Request discussion in the next telecon/FTF label Feb 21, 2023
@AdaRoseCannon
Member

Removing the label because this was discussed at the last meeting. If this is in error or more discussion is needed, please reuse '/agenda'.

@bialpio
Contributor

bialpio commented Feb 28, 2023

Potentially related: https://twitter.com/Mitsuownes/status/1630355532652376065

I wonder if we should plan for the day when the controllers are going to approach full hand-tracking in capabilities. I could see exposing a grip space via hand-tracking even now w/o needing a spec change - it would be confusing for existing sites that probably expect a finer-grained level of tracking, but there might be a point at which we decide that a controller is better-suited to expose this data via an XRHand.

@cabanier
Member Author

The Quest Touch controllers already sense finger touches (and positions, in the case of the Pro controllers).
Palm space is for a different use case though; it's specifically for determining where the controller is held.

@bialpio
Contributor

bialpio commented Feb 28, 2023

My point is more that "where the controller is held" (or rather, where we expect the controller to be held, unless we actually have sensors on the controller that can report the exact palm pose) can also be surfaced as an XRHand, and I can see that becoming more accurate in the future (and more appropriate, e.g. if controllers start exposing the state of all the fingers of a hand on the controller).

To me the noteworthy thing from the video is that it seems that the virtual palm gets slightly adjusted based on finger touches, even though the real hand's palm doesn't seem to have this adjustment. I'm assuming hand rendering in WebXR has to be driven entirely by the site based on the palm space and the input source state?

I'm approaching it from "the use case is rendering a hand" (is that the actual use case here?) and trying to figure out if we have other options here. Naively, I think it would be easier to render a hand given an XRHand instance, since this could be made to work both for apps using hand-tracking and controllers (assuming we expose XRHand for controllers too), and seems to be future-proof. It does place more burden on browser implementations though.
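To make the trade-off concrete, the selection logic an app would need under the XRHand-based approach can be sketched as a small helper. This is a hypothetical function, not part of the WebXR spec: it prefers hand-tracking joints when `inputSource.hand` is present (XRHand is map-like, so a plain `Map` stands in here to keep the sketch runnable outside a browser) and falls back to `gripSpace` otherwise.

```javascript
// Hypothetical helper: pick the best available space to anchor a held
// object or rendered hand. `inputSource` mimics the XRInputSource shape
// ({ hand, gripSpace }); XRHand exposes get(jointName) like a Map.
function selectPalmSource(inputSource) {
  if (inputSource.hand) {
    // The 'wrist' joint is the closest spec-defined joint to the palm.
    return { kind: 'hand-joint', space: inputSource.hand.get('wrist') };
  }
  if (inputSource.gripSpace) {
    // No hand data: fall back to grip space, as discussed above.
    return { kind: 'grip-fallback', space: inputSource.gripSpace };
  }
  return null; // e.g. a gaze-only input source with no gripSpace
}
```

If controllers ever exposed an XRHand too, the first branch would cover both input types and the fallback would become dead code, which is the future-proofing argument above.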
