---
-api-id: T:Windows.UI.Input.Spatial.SpatialGestureRecognizer
-api-type: winrt class
-api-device-family-note: xbox
---

# Windows.UI.Input.Spatial.SpatialGestureRecognizer

## -description

Interprets user interactions from hands, motion controllers, and system voice commands to surface spatial gesture events, which users target using their gaze or a motion controller's pointing ray.

## -remarks

Spatial gestures are a key form of input for Mixed Reality headsets such as HoloLens. By routing interactions from the SpatialInteractionManager to a hologram's SpatialGestureRecognizer, apps can detect Tap, Hold, Manipulation, and Navigation events uniformly across hands, voice, and motion controllers.
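As a minimal C++/WinRT sketch of the recognizer side (member and function names such as `m_gestureRecognizer` and `SubscribeToGestures` are illustrative, not part of the API), a hologram might create a recognizer for Tap and Hold and subscribe to its events like this:

```cpp
#include <winrt/Windows.UI.Input.Spatial.h>

using namespace winrt;
using namespace winrt::Windows::UI::Input::Spatial;

// Create a recognizer that listens for Tap and Hold on this hologram.
SpatialGestureRecognizer m_gestureRecognizer{
    SpatialGestureSettings::Tap | SpatialGestureSettings::Hold };

void SubscribeToGestures()
{
    // Tapped fires for an air tap, a controller select press, or the
    // "select" voice command, uniformly across those sources.
    m_gestureRecognizer.Tapped(
        [](SpatialGestureRecognizer const&, SpatialTappedEventArgs const& args)
        {
            // args.TapCount() distinguishes single taps from double taps.
        });

    // HoldCompleted fires when the user keeps the press held long enough
    // for the interaction to be promoted to a Hold.
    m_gestureRecognizer.HoldCompleted(
        [](SpatialGestureRecognizer const&, SpatialHoldCompletedEventArgs const&)
        {
            // React to the completed hold here.
        });
}
```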

Note that spatial gestures are not detected for input from gamepads, keyboards, or mice.

SpatialGestureRecognizer performs only the minimal disambiguation between the set of gestures that you request. For example, if you request just Tap, the user may hold their finger down as long as they like and a Tap will still occur. If you request both Tap and Hold, after about a second of holding down their finger, the gesture will promote to a Hold and a Tap will no longer occur.
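As a short sketch of that behavior, narrowing the illustrative recognizer above to Tap only means a long press once again resolves to a Tap on release. TrySetGestureSettings returns false if the new settings cannot be applied (for example, while an interaction is in progress):

```cpp
// Request Tap only: holds are no longer promoted, so the user can press
// for any length of time and still produce a Tap on release.
if (!m_gestureRecognizer.TrySetGestureSettings(SpatialGestureSettings::Tap))
{
    // The new settings could not be applied; the recognizer keeps its
    // previous SpatialGestureSettings.
}
```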

To use SpatialGestureRecognizer, handle the SpatialInteractionManager's InteractionDetected event and grab the SpatialPointerPose exposed there. Use the user's gaze ray from this pose to intersect with the holograms and surface meshes in the user's surroundings, in order to determine what the user is intending to interact with. Then, route the SpatialInteraction in the event arguments to the target hologram's SpatialGestureRecognizer, using its CaptureInteraction method. This starts interpreting that interaction according to the SpatialGestureSettings set on that recognizer at creation time or by TrySetGestureSettings.
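A sketch of that routing in C++/WinRT follows. The `Hologram` type and `FindHologramByRay` helper are hypothetical app-defined placeholders for your own scene and ray-casting code; only the Windows.UI.Input.Spatial calls are real API:

```cpp
#include <winrt/Windows.Foundation.Numerics.h>
#include <winrt/Windows.Perception.Spatial.h>
#include <winrt/Windows.UI.Input.Spatial.h>

using namespace winrt;
using namespace winrt::Windows::Foundation::Numerics;
using namespace winrt::Windows::Perception::Spatial;
using namespace winrt::Windows::UI::Input::Spatial;

// Hypothetical app type: a hologram that owns its own recognizer.
struct Hologram
{
    SpatialGestureRecognizer& GestureRecognizer();
};

// Hypothetical app helper: intersect a ray with the scene and return
// the targeted hologram, or nullptr if the ray hits nothing.
Hologram* FindHologramByRay(float3 origin, float3 direction);

void StartRoutingInteractions(SpatialCoordinateSystem const& coordinateSystem)
{
    auto interactionManager = SpatialInteractionManager::GetForCurrentView();

    interactionManager.InteractionDetected(
        [coordinateSystem](SpatialInteractionManager const&,
                           SpatialInteractionDetectedEventArgs const& args)
        {
            // Express the pointer pose in the app's coordinate system.
            SpatialPointerPose pointerPose = args.TryGetPointerPose(coordinateSystem);
            if (!pointerPose)
            {
                return;
            }

            // Use the gaze ray to decide which hologram is being targeted.
            Hologram* hologram = FindHologramByRay(
                pointerPose.Head().Position(),
                pointerPose.Head().ForwardDirection());

            if (hologram)
            {
                // Route the interaction to the target's recognizer, which then
                // interprets it according to its SpatialGestureSettings.
                hologram->GestureRecognizer().CaptureInteraction(args.Interaction());
            }
        });
}
```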

When targeting a spatial interaction, such as a hand gesture, motion controller press or voice interaction, apps should choose a pointing ray available from the interaction's SpatialPointerPose, based on the nature of the interaction's SpatialInteractionSource:

- If the interaction source does not support pointing (IsPointingSupported is false), the app should target based on the user's gaze, available through the Head property.
- If the interaction source does support pointing (IsPointingSupported is true), the app may instead target based on the source's pointer pose, available through the TryGetInteractionSourcePose method.

The app should then intersect the chosen pointing ray with its own holograms or with the spatial mapping mesh to render cursors and determine what the user is intending to interact with.
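A sketch of that selection follows, with a hypothetical `Ray` struct standing in for the app's own math types:

```cpp
#include <winrt/Windows.Foundation.Numerics.h>
#include <winrt/Windows.UI.Input.Spatial.h>

using namespace winrt::Windows::Foundation::Numerics;
using namespace winrt::Windows::UI::Input::Spatial;

// Hypothetical app type for a pointing ray.
struct Ray
{
    float3 origin;
    float3 direction;
};

Ray ChoosePointingRay(SpatialPointerPose const& pointerPose,
                      SpatialInteractionSource const& source)
{
    if (source.IsPointingSupported())
    {
        // The source (for example, a motion controller) exposes its own
        // pointer pose; prefer it when available.
        if (auto sourcePose = pointerPose.TryGetInteractionSourcePose(source))
        {
            return { sourcePose.Position(), sourcePose.ForwardDirection() };
        }
    }

    // Otherwise fall back to the user's gaze from the head pose.
    auto head = pointerPose.Head();
    return { head.Position(), head.ForwardDirection() };
}
```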

For applications using the gaze-and-commit input model, particularly on HoloLens (first gen), SpatialGestureRecognizer can be used to enable composite gestures built on top of the 'select' event. By routing interactions from the SpatialInteractionManager to a hologram's SpatialGestureRecognizer, apps can detect Tap, Hold, Manipulation, and Navigation events uniformly across hands, voice, and spatial input devices, without having to handle presses and releases manually.

## -examples

## -see-also

Spatial interaction source sample